2022-05-24 14:00:37

by Viktor Barna

Subject: [RFC v2 00/96] wireless: cl8k driver for Celeno IEEE 802.11ax devices

From: Viktor Barna <[email protected]>

Celeno Communications is publishing a new open-source wireless driver
for its own 802.11 chipset family - 80xx. The main chip supports
simultaneous operation on multiple bands (2.4G/5.2G or 5.2G/6G) over a
dual-lane PCIe 3.0 interface. The chip is dual-band concurrent, up to
8x8 in total and up to 6x6 per band, with 802.11ax 160MHz support and
AP/STA/MESH operation. The driver architecture is fully SoftMAC
(mac80211 based).

This is the second iteration of the patchset, published in the form of
an RFC (Request for Comments, version 2). If there are any suggestions
or propositions, we will be glad to address them and eventually share
the driver with the community in the form of an official patch
(including the firmware binaries).

The RFC is divided into separate patches on a per-file basis to simplify
the review process.

Known issues:
- the driver may be configured via config files, which is discouraged
upstream and may be changed in the future (an illustrative alternative
is sketched below).
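
As an illustration only (not part of this series), such a config-file
knob could instead be exposed as a standard module parameter; the
parameter name below is hypothetical:

  #include <linux/module.h>

  /* Hypothetical replacement for a config-file knob */
  static bool ps_ctrl_enabled = true;
  module_param(ps_ctrl_enabled, bool, 0644);
  MODULE_PARM_DESC(ps_ctrl_enabled, "Enable power-save control");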

Signed-off-by: Aviad Brikman <[email protected]>
Signed-off-by: Eliav Farber <[email protected]>
Signed-off-by: Maksym Kokhan <[email protected]>
Signed-off-by: Oleksandr Savchenko <[email protected]>
Signed-off-by: Shay Bar <[email protected]>
Signed-off-by: Viktor Barna <[email protected]>
---
v2:
- Reduce the number of files from 256 to 98 (including 43 source files).
- Fix Kconfig vs code inconsistencies.
- Remove Celeno-specific wrappers like cl_snprintf, cl_timer, string
processors.
- Namespace more functions (with cl_<something>).
- Remove DEV_COREDUMP support (temporarily, to minimize the size of the RFC).
- Remove CLI handling from the driver (permanently; some of the features are
reimplemented in debugfs).
- Remove netlink vendor-specific commands.
- Remove debugfs code.
- Fix sparse warnings.
- Fix more checkpatch errors/warnings/checks.
- Update to the most recent internal codebase (as of 2022-05-20).
- Adjust the patchset to support kernel 5.18-rc7.

v1:
- https://lore.kernel.org/linux-wireless/[email protected]/

Viktor Barna (96):
celeno: add Kconfig
celeno: add Makefile
cl8k: add Kconfig
cl8k: add Makefile
cl8k: add ampdu.c
cl8k: add ampdu.h
cl8k: add bf.c
cl8k: add bf.h
cl8k: add calib.c
cl8k: add calib.h
cl8k: add channel.c
cl8k: add channel.h
cl8k: add chip.c
cl8k: add chip.h
cl8k: add config.c
cl8k: add config.h
cl8k: add debug.c
cl8k: add debug.h
cl8k: add def.h
cl8k: add dfs.c
cl8k: add dfs.h
cl8k: add dsp.c
cl8k: add dsp.h
cl8k: add e2p.c
cl8k: add e2p.h
cl8k: add eeprom.h
cl8k: add ela.c
cl8k: add ela.h
cl8k: add enhanced_tim.c
cl8k: add enhanced_tim.h
cl8k: add fw.c
cl8k: add fw.h
cl8k: add hw.c
cl8k: add hw.h
cl8k: add ipc_shared.h
cl8k: add key.c
cl8k: add key.h
cl8k: add mac80211.c
cl8k: add mac80211.h
cl8k: add mac_addr.c
cl8k: add mac_addr.h
cl8k: add main.c
cl8k: add main.h
cl8k: add maintenance.c
cl8k: add maintenance.h
cl8k: add motion_sense.c
cl8k: add motion_sense.h
cl8k: add pci.c
cl8k: add pci.h
cl8k: add phy.c
cl8k: add phy.h
cl8k: add platform.c
cl8k: add platform.h
cl8k: add power.c
cl8k: add power.h
cl8k: add radio.c
cl8k: add radio.h
cl8k: add rates.c
cl8k: add rates.h
cl8k: add recovery.c
cl8k: add recovery.h
cl8k: add regdom.c
cl8k: add regdom.h
cl8k: add reg/reg_access.h
cl8k: add reg/reg_defs.h
cl8k: add rfic.c
cl8k: add rfic.h
cl8k: add rx.c
cl8k: add rx.h
cl8k: add scan.c
cl8k: add scan.h
cl8k: add sounding.c
cl8k: add sounding.h
cl8k: add sta.c
cl8k: add sta.h
cl8k: add stats.c
cl8k: add stats.h
cl8k: add tcv.c
cl8k: add tcv.h
cl8k: add temperature.c
cl8k: add temperature.h
cl8k: add traffic.c
cl8k: add traffic.h
cl8k: add tx.c
cl8k: add tx.h
cl8k: add utils.c
cl8k: add utils.h
cl8k: add version.c
cl8k: add version.h
cl8k: add vif.c
cl8k: add vif.h
cl8k: add vns.c
cl8k: add vns.h
cl8k: add wrs.c
cl8k: add wrs.h
wireless: add Celeno vendor

drivers/net/wireless/Kconfig | 1 +
drivers/net/wireless/Makefile | 1 +
drivers/net/wireless/celeno/Kconfig | 17 +
drivers/net/wireless/celeno/Makefile | 2 +
drivers/net/wireless/celeno/cl8k/Kconfig | 41 +
drivers/net/wireless/celeno/cl8k/Makefile | 66 +
drivers/net/wireless/celeno/cl8k/ampdu.c | 331 +
drivers/net/wireless/celeno/cl8k/ampdu.h | 39 +
drivers/net/wireless/celeno/cl8k/bf.c | 346 +
drivers/net/wireless/celeno/cl8k/bf.h | 52 +
drivers/net/wireless/celeno/cl8k/calib.c | 2266 ++++
drivers/net/wireless/celeno/cl8k/calib.h | 390 +
drivers/net/wireless/celeno/cl8k/channel.c | 1656 +++
drivers/net/wireless/celeno/cl8k/channel.h | 401 +
drivers/net/wireless/celeno/cl8k/chip.c | 580 +
drivers/net/wireless/celeno/cl8k/chip.h | 182 +
drivers/net/wireless/celeno/cl8k/config.c | 46 +
drivers/net/wireless/celeno/cl8k/config.h | 405 +
drivers/net/wireless/celeno/cl8k/debug.c | 442 +
drivers/net/wireless/celeno/cl8k/debug.h | 160 +
drivers/net/wireless/celeno/cl8k/def.h | 235 +
drivers/net/wireless/celeno/cl8k/dfs.c | 768 ++
drivers/net/wireless/celeno/cl8k/dfs.h | 146 +
drivers/net/wireless/celeno/cl8k/dsp.c | 627 ++
drivers/net/wireless/celeno/cl8k/dsp.h | 27 +
drivers/net/wireless/celeno/cl8k/e2p.c | 771 ++
drivers/net/wireless/celeno/cl8k/e2p.h | 25 +
drivers/net/wireless/celeno/cl8k/eeprom.h | 283 +
drivers/net/wireless/celeno/cl8k/ela.c | 230 +
drivers/net/wireless/celeno/cl8k/ela.h | 48 +
.../net/wireless/celeno/cl8k/enhanced_tim.c | 173 +
.../net/wireless/celeno/cl8k/enhanced_tim.h | 19 +
drivers/net/wireless/celeno/cl8k/fw.c | 3167 ++++++
drivers/net/wireless/celeno/cl8k/fw.h | 1462 +++
drivers/net/wireless/celeno/cl8k/hw.c | 432 +
drivers/net/wireless/celeno/cl8k/hw.h | 280 +
drivers/net/wireless/celeno/cl8k/ipc_shared.h | 1386 +++
drivers/net/wireless/celeno/cl8k/key.c | 382 +
drivers/net/wireless/celeno/cl8k/key.h | 37 +
drivers/net/wireless/celeno/cl8k/mac80211.c | 2392 ++++
drivers/net/wireless/celeno/cl8k/mac80211.h | 197 +
drivers/net/wireless/celeno/cl8k/mac_addr.c | 418 +
drivers/net/wireless/celeno/cl8k/mac_addr.h | 61 +
drivers/net/wireless/celeno/cl8k/main.c | 603 ++
drivers/net/wireless/celeno/cl8k/main.h | 16 +
.../net/wireless/celeno/cl8k/maintenance.c | 81 +
.../net/wireless/celeno/cl8k/maintenance.h | 17 +
.../net/wireless/celeno/cl8k/motion_sense.c | 244 +
.../net/wireless/celeno/cl8k/motion_sense.h | 46 +
drivers/net/wireless/celeno/cl8k/pci.c | 2468 +++++
drivers/net/wireless/celeno/cl8k/pci.h | 194 +
drivers/net/wireless/celeno/cl8k/phy.c | 9648 +++++++++++++++++
drivers/net/wireless/celeno/cl8k/phy.h | 3680 +++++++
drivers/net/wireless/celeno/cl8k/platform.c | 392 +
drivers/net/wireless/celeno/cl8k/platform.h | 196 +
drivers/net/wireless/celeno/cl8k/power.c | 1123 ++
drivers/net/wireless/celeno/cl8k/power.h | 90 +
drivers/net/wireless/celeno/cl8k/radio.c | 1113 ++
drivers/net/wireless/celeno/cl8k/radio.h | 130 +
drivers/net/wireless/celeno/cl8k/rates.c | 1570 +++
drivers/net/wireless/celeno/cl8k/rates.h | 154 +
drivers/net/wireless/celeno/cl8k/recovery.c | 280 +
drivers/net/wireless/celeno/cl8k/recovery.h | 39 +
.../net/wireless/celeno/cl8k/reg/reg_access.h | 199 +
.../net/wireless/celeno/cl8k/reg/reg_defs.h | 5494 ++++++++++
drivers/net/wireless/celeno/cl8k/regdom.c | 301 +
drivers/net/wireless/celeno/cl8k/regdom.h | 11 +
drivers/net/wireless/celeno/cl8k/rfic.c | 232 +
drivers/net/wireless/celeno/cl8k/rfic.h | 29 +
drivers/net/wireless/celeno/cl8k/rx.c | 1845 ++++
drivers/net/wireless/celeno/cl8k/rx.h | 505 +
drivers/net/wireless/celeno/cl8k/scan.c | 392 +
drivers/net/wireless/celeno/cl8k/scan.h | 53 +
drivers/net/wireless/celeno/cl8k/sounding.c | 1121 ++
drivers/net/wireless/celeno/cl8k/sounding.h | 151 +
drivers/net/wireless/celeno/cl8k/sta.c | 507 +
drivers/net/wireless/celeno/cl8k/sta.h | 99 +
drivers/net/wireless/celeno/cl8k/stats.c | 438 +
drivers/net/wireless/celeno/cl8k/stats.h | 108 +
drivers/net/wireless/celeno/cl8k/tcv.c | 1259 +++
drivers/net/wireless/celeno/cl8k/tcv.h | 283 +
.../net/wireless/celeno/cl8k/temperature.c | 634 ++
.../net/wireless/celeno/cl8k/temperature.h | 71 +
drivers/net/wireless/celeno/cl8k/traffic.c | 254 +
drivers/net/wireless/celeno/cl8k/traffic.h | 77 +
drivers/net/wireless/celeno/cl8k/tx.c | 3397 ++++++
drivers/net/wireless/celeno/cl8k/tx.h | 467 +
drivers/net/wireless/celeno/cl8k/utils.c | 642 ++
drivers/net/wireless/celeno/cl8k/utils.h | 185 +
drivers/net/wireless/celeno/cl8k/version.c | 147 +
drivers/net/wireless/celeno/cl8k/version.h | 23 +
drivers/net/wireless/celeno/cl8k/vif.c | 162 +
drivers/net/wireless/celeno/cl8k/vif.h | 81 +
drivers/net/wireless/celeno/cl8k/vns.c | 354 +
drivers/net/wireless/celeno/cl8k/vns.h | 65 +
drivers/net/wireless/celeno/cl8k/wrs.c | 3323 ++++++
drivers/net/wireless/celeno/cl8k/wrs.h | 565 +
97 files changed, 66548 insertions(+)
create mode 100755 drivers/net/wireless/celeno/Kconfig
create mode 100755 drivers/net/wireless/celeno/Makefile
create mode 100644 drivers/net/wireless/celeno/cl8k/Kconfig
create mode 100644 drivers/net/wireless/celeno/cl8k/Makefile
create mode 100644 drivers/net/wireless/celeno/cl8k/ampdu.c
create mode 100644 drivers/net/wireless/celeno/cl8k/ampdu.h
create mode 100644 drivers/net/wireless/celeno/cl8k/bf.c
create mode 100644 drivers/net/wireless/celeno/cl8k/bf.h
create mode 100644 drivers/net/wireless/celeno/cl8k/calib.c
create mode 100644 drivers/net/wireless/celeno/cl8k/calib.h
create mode 100644 drivers/net/wireless/celeno/cl8k/channel.c
create mode 100644 drivers/net/wireless/celeno/cl8k/channel.h
create mode 100644 drivers/net/wireless/celeno/cl8k/chip.c
create mode 100644 drivers/net/wireless/celeno/cl8k/chip.h
create mode 100644 drivers/net/wireless/celeno/cl8k/config.c
create mode 100644 drivers/net/wireless/celeno/cl8k/config.h
create mode 100644 drivers/net/wireless/celeno/cl8k/debug.c
create mode 100644 drivers/net/wireless/celeno/cl8k/debug.h
create mode 100644 drivers/net/wireless/celeno/cl8k/def.h
create mode 100644 drivers/net/wireless/celeno/cl8k/dfs.c
create mode 100644 drivers/net/wireless/celeno/cl8k/dfs.h
create mode 100644 drivers/net/wireless/celeno/cl8k/dsp.c
create mode 100644 drivers/net/wireless/celeno/cl8k/dsp.h
create mode 100644 drivers/net/wireless/celeno/cl8k/e2p.c
create mode 100644 drivers/net/wireless/celeno/cl8k/e2p.h
create mode 100644 drivers/net/wireless/celeno/cl8k/eeprom.h
create mode 100644 drivers/net/wireless/celeno/cl8k/ela.c
create mode 100644 drivers/net/wireless/celeno/cl8k/ela.h
create mode 100644 drivers/net/wireless/celeno/cl8k/enhanced_tim.c
create mode 100644 drivers/net/wireless/celeno/cl8k/enhanced_tim.h
create mode 100644 drivers/net/wireless/celeno/cl8k/fw.c
create mode 100644 drivers/net/wireless/celeno/cl8k/fw.h
create mode 100644 drivers/net/wireless/celeno/cl8k/hw.c
create mode 100644 drivers/net/wireless/celeno/cl8k/hw.h
create mode 100644 drivers/net/wireless/celeno/cl8k/ipc_shared.h
create mode 100644 drivers/net/wireless/celeno/cl8k/key.c
create mode 100644 drivers/net/wireless/celeno/cl8k/key.h
create mode 100644 drivers/net/wireless/celeno/cl8k/mac80211.c
create mode 100644 drivers/net/wireless/celeno/cl8k/mac80211.h
create mode 100644 drivers/net/wireless/celeno/cl8k/mac_addr.c
create mode 100644 drivers/net/wireless/celeno/cl8k/mac_addr.h
create mode 100644 drivers/net/wireless/celeno/cl8k/main.c
create mode 100644 drivers/net/wireless/celeno/cl8k/main.h
create mode 100644 drivers/net/wireless/celeno/cl8k/maintenance.c
create mode 100644 drivers/net/wireless/celeno/cl8k/maintenance.h
create mode 100644 drivers/net/wireless/celeno/cl8k/motion_sense.c
create mode 100644 drivers/net/wireless/celeno/cl8k/motion_sense.h
create mode 100644 drivers/net/wireless/celeno/cl8k/pci.c
create mode 100644 drivers/net/wireless/celeno/cl8k/pci.h
create mode 100644 drivers/net/wireless/celeno/cl8k/phy.c
create mode 100644 drivers/net/wireless/celeno/cl8k/phy.h
create mode 100644 drivers/net/wireless/celeno/cl8k/platform.c
create mode 100644 drivers/net/wireless/celeno/cl8k/platform.h
create mode 100644 drivers/net/wireless/celeno/cl8k/power.c
create mode 100644 drivers/net/wireless/celeno/cl8k/power.h
create mode 100644 drivers/net/wireless/celeno/cl8k/radio.c
create mode 100644 drivers/net/wireless/celeno/cl8k/radio.h
create mode 100644 drivers/net/wireless/celeno/cl8k/rates.c
create mode 100644 drivers/net/wireless/celeno/cl8k/rates.h
create mode 100644 drivers/net/wireless/celeno/cl8k/recovery.c
create mode 100644 drivers/net/wireless/celeno/cl8k/recovery.h
create mode 100644 drivers/net/wireless/celeno/cl8k/reg/reg_access.h
create mode 100644 drivers/net/wireless/celeno/cl8k/reg/reg_defs.h
create mode 100644 drivers/net/wireless/celeno/cl8k/regdom.c
create mode 100644 drivers/net/wireless/celeno/cl8k/regdom.h
create mode 100644 drivers/net/wireless/celeno/cl8k/rfic.c
create mode 100644 drivers/net/wireless/celeno/cl8k/rfic.h
create mode 100644 drivers/net/wireless/celeno/cl8k/rx.c
create mode 100644 drivers/net/wireless/celeno/cl8k/rx.h
create mode 100644 drivers/net/wireless/celeno/cl8k/scan.c
create mode 100644 drivers/net/wireless/celeno/cl8k/scan.h
create mode 100644 drivers/net/wireless/celeno/cl8k/sounding.c
create mode 100644 drivers/net/wireless/celeno/cl8k/sounding.h
create mode 100644 drivers/net/wireless/celeno/cl8k/sta.c
create mode 100644 drivers/net/wireless/celeno/cl8k/sta.h
create mode 100644 drivers/net/wireless/celeno/cl8k/stats.c
create mode 100644 drivers/net/wireless/celeno/cl8k/stats.h
create mode 100644 drivers/net/wireless/celeno/cl8k/tcv.c
create mode 100644 drivers/net/wireless/celeno/cl8k/tcv.h
create mode 100644 drivers/net/wireless/celeno/cl8k/temperature.c
create mode 100644 drivers/net/wireless/celeno/cl8k/temperature.h
create mode 100644 drivers/net/wireless/celeno/cl8k/traffic.c
create mode 100644 drivers/net/wireless/celeno/cl8k/traffic.h
create mode 100644 drivers/net/wireless/celeno/cl8k/tx.c
create mode 100644 drivers/net/wireless/celeno/cl8k/tx.h
create mode 100644 drivers/net/wireless/celeno/cl8k/utils.c
create mode 100644 drivers/net/wireless/celeno/cl8k/utils.h
create mode 100644 drivers/net/wireless/celeno/cl8k/version.c
create mode 100644 drivers/net/wireless/celeno/cl8k/version.h
create mode 100644 drivers/net/wireless/celeno/cl8k/vif.c
create mode 100644 drivers/net/wireless/celeno/cl8k/vif.h
create mode 100644 drivers/net/wireless/celeno/cl8k/vns.c
create mode 100644 drivers/net/wireless/celeno/cl8k/vns.h
create mode 100644 drivers/net/wireless/celeno/cl8k/wrs.c
create mode 100644 drivers/net/wireless/celeno/cl8k/wrs.h

--
2.36.1



2022-05-24 14:09:56

by Viktor Barna

Subject: [RFC v2 79/96] cl8k: add tcv.h

From: Viktor Barna <[email protected]>

(Part of the split. Please take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/tcv.h | 283 +++++++++++++++++++++++++
1 file changed, 283 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/tcv.h

diff --git a/drivers/net/wireless/celeno/cl8k/tcv.h b/drivers/net/wireless/celeno/cl8k/tcv.h
new file mode 100644
index 000000000000..99cf0938c5c4
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/tcv.h
@@ -0,0 +1,283 @@
+/* SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause */
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#ifndef CL_TCV_CONFIG_H
+#define CL_TCV_CONFIG_H
+
+#include "def.h"
+#include "ipc_shared.h"
+#include "radio.h"
+#include "sounding.h"
+#include "eeprom.h"
+
+/*
+ * TCV (transceiver) configuration; it is related to a specific band on top
+ * of a specific chipset.
+ */
+#define CL_DEFAULT_HAL_IDLE_TIMEOUT 16000 /* Idle request - 16ms */
+#define CL_TX_DEFAULT_AC0_TIMEOUT 500000 /* Background - 500ms */
+#define CL_TX_DEFAULT_AC1_TIMEOUT 300000 /* Best effort - 300ms */
+#define CL_TX_DEFAULT_AC2_TIMEOUT 200000 /* Video - 200ms */
+#define CL_TX_DEFAULT_AC3_TIMEOUT 200000 /* Voice - 200ms */
+#define CL_TX_DEFAULT_BCN_TIMEOUT 150000 /* Beacon - 150ms */
+
+/* Minimal MPDU spacing we support in TX - corresponds to FW NX_TX_MPDU_SPACING */
+#define CL_TX_MPDU_SPACING_INVALID 0xFF
+
+enum {
+ CL_RATE_FALLBACK_COUNT_SU,
+ CL_RATE_FALLBACK_COUNT_MU,
+ CL_RATE_FALLBACK_RETRY_COUNT_THR,
+ CL_RATE_FALLBACK_BA_PER_THR,
+ CL_RATE_FALLBACK_BA_NOT_RECEIVED_THR,
+ CL_RATE_FALLBACK_DISABLE_MCS,
+
+ CL_RATE_FALLBACK_MAX,
+};
+
+struct cl_tcv_conf {
+ s8 ce_debug_level;
+ bool ce_radio_on;
+ bool ce_ps_ctrl_enabled;
+ bool ci_ieee80211h;
+ u8 ci_max_bss_num;
+ u8 ci_short_guard_interval;
+ u8 ci_max_mpdu_len;
+ u8 ci_max_ampdu_len_exp;
+ s8 ce_dsp_code[STR_LEN_32B];
+ s8 ce_dsp_data[STR_LEN_32B];
+ s8 ce_dsp_external_data[STR_LEN_32B];
+ bool ci_uapsd_en;
+ bool ce_eirp_regulatory_op_en;
+ bool ce_eirp_regulatory_prod_en;
+ bool ci_agg_tx;
+ bool ci_agg_rx;
+ bool ce_txldpc_en;
+ bool ci_ht_rxldpc_en;
+ bool ci_vht_rxldpc_en;
+ bool ci_he_rxldpc_en;
+ bool ci_cs_required;
+ s8 ci_rx_sensitivity_prod[MAX_ANTENNAS];
+ s8 ci_rx_sensitivity_op[MAX_ANTENNAS];
+ bool ci_min_he_en;
+ u8 ce_cck_tx_ant_mask;
+ u8 ce_cck_rx_ant_mask;
+ u8 ce_rx_nss;
+ u8 ce_tx_nss;
+ u8 ce_num_antennas;
+ u16 ce_max_agg_size_tx;
+ u16 ce_max_agg_size_rx;
+ bool ce_rxamsdu_en;
+ u8 ce_txamsdu_en;
+ u16 ci_tx_amsdu_min_data_rate;
+ u8 ci_tx_sw_amsdu_max_packets;
+ u16 ci_tx_packet_limit;
+ u16 ci_sw_txhdr_pool;
+ u16 ci_amsdu_txhdr_pool;
+ u16 ci_tx_queue_size_agg;
+ u16 ci_tx_queue_size_single;
+ bool ci_tx_push_cntrs_stat_en;
+ bool ci_traffic_mon_en;
+ u16 ci_ipc_rxbuf_size[CL_RX_BUF_MAX];
+ u16 ce_max_retry;
+ u8 ce_short_retry_limit;
+ u8 ce_long_retry_limit;
+ u8 ci_assoc_auth_retry_limit;
+ u8 ci_cap_bandwidth;
+ u32 ci_chandef_channel;
+ u8 ci_chandef_bandwidth;
+ bool ci_cck_in_hw_mode;
+ s8 ce_temp_comp_slope;
+ u32 ci_fw_dbg_severity;
+ u32 ci_fw_dbg_module;
+ u8 ci_lcu_dbg_cfg_inx;
+ u8 ci_dsp_lcu_mode;
+ u32 ci_hal_idle_to;
+ u32 ci_tx_ac0_to;
+ u32 ci_tx_ac1_to;
+ u32 ci_tx_ac2_to;
+ u32 ci_tx_ac3_to;
+ u32 ci_tx_bcn_to;
+ s8 ce_hardware_power_table[STR_LEN_256B];
+ s8 ce_arr_gain[STR_LEN_32B];
+ s8 ce_bf_gain_2_ant[STR_LEN_32B];
+ s8 ce_bf_gain_3_ant[STR_LEN_32B];
+ s8 ce_bf_gain_4_ant[STR_LEN_32B];
+ s8 ce_bf_gain_5_ant[STR_LEN_32B];
+ s8 ce_bf_gain_6_ant[STR_LEN_32B];
+ s8 ce_ant_gain[STR_LEN_32B];
+ s8 ce_ant_gain_36_64[STR_LEN_32B];
+ s8 ce_ant_gain_100_140[STR_LEN_32B];
+ s8 ce_ant_gain_149_165[STR_LEN_32B];
+ s8 ci_min_ant_pwr[STR_LEN_32B];
+ s8 ci_bw_factor[STR_LEN_32B];
+ u8 ce_mcast_rate;
+ bool ce_dyn_mcast_rate_en;
+ bool ce_dyn_bcast_rate_en;
+ u8 ce_default_mcs_ofdm;
+ u8 ce_default_mcs_cck;
+ bool ce_prot_log_nav_en;
+ u8 ce_prot_mode;
+ u8 ce_prot_rate_format;
+ u8 ce_prot_rate_mcs;
+ u8 ce_prot_rate_pre_type;
+ u8 ce_bw_signaling_mode;
+ u8 ci_dyn_cts_sta_thr;
+ s8 ci_vns_pwr_limit;
+ u8 ci_vns_pwr_mode;
+ s8 ci_vns_rssi_auto_resp_thr;
+ s8 ci_vns_rssi_thr;
+ s8 ci_vns_rssi_hys;
+ u16 ci_vns_maintenance_time;
+ u16 ce_bcn_tx_path_min_time;
+ bool ci_backup_bcn_en;
+ bool ce_tx_txop_cut_en;
+ u8 ci_bcns_flushed_cnt_thr;
+ bool ci_phy_err_prevents_phy_dump;
+ u8 ci_tx_rx_delay;
+ u8 ci_fw_assert_time_diff_sec;
+ u8 ci_fw_assert_storm_detect_thd;
+ u32 ce_hw_assert_time_max;
+ u8 ce_bg_assert_print;
+ u8 ce_fw_watchdog_mode;
+ u8 ce_fw_watchdog_limit_count;
+ u32 ce_fw_watchdog_limit_time;
+ s8 ci_rx_remote_cpu_drv;
+ s8 ci_rx_remote_cpu_mac;
+ u16 ci_pending_queue_size;
+ u8 ce_tx_power_control;
+ bool ce_acs_coex_en;
+ u8 ci_dfs_initial_gain;
+ u8 ci_dfs_agc_cd_th;
+ u16 ci_dfs_long_pulse_min;
+ u16 ci_dfs_long_pulse_max;
+ s8 ce_dfs_tbl_overwrite[STR_LEN_64B];
+ /* Power Per MCS values - 6g */
+ s8 ce_ppmcs_offset_he_6g[WRS_MCS_MAX_HE];
+ /* Power Per MCS values - 5g */
+ s8 ce_ppmcs_offset_he_36_64[WRS_MCS_MAX_HE];
+ s8 ce_ppmcs_offset_he_100_140[WRS_MCS_MAX_HE];
+ s8 ce_ppmcs_offset_he_149_165[WRS_MCS_MAX_HE];
+ s8 ce_ppmcs_offset_ht_vht_36_64[WRS_MCS_MAX_VHT];
+ s8 ce_ppmcs_offset_ht_vht_100_140[WRS_MCS_MAX_VHT];
+ s8 ce_ppmcs_offset_ht_vht_149_165[WRS_MCS_MAX_VHT];
+ s8 ce_ppmcs_offset_ofdm_36_64[WRS_MCS_MAX_OFDM];
+ s8 ce_ppmcs_offset_ofdm_100_140[WRS_MCS_MAX_OFDM];
+ s8 ce_ppmcs_offset_ofdm_149_165[WRS_MCS_MAX_OFDM];
+ /* Power Per MCS values - 24g */
+ s8 ce_ppmcs_offset_he[WRS_MCS_MAX_HE];
+ s8 ce_ppmcs_offset_ht[WRS_MCS_MAX_HT];
+ s8 ce_ppmcs_offset_ofdm[WRS_MCS_MAX_OFDM];
+ s8 ce_ppmcs_offset_cck[WRS_MCS_MAX_CCK];
+ /* Power Per BW values - all bands */
+ s8 ce_ppbw_offset[CHNL_BW_MAX];
+ bool ce_power_offset_prod_en;
+ bool ci_bf_en;
+ u8 ci_bf_max_nss;
+ u16 ce_sounding_interval_coefs[SOUNDING_INTERVAL_COEF_MAX];
+ u8 ci_rate_fallback[CL_RATE_FALLBACK_MAX];
+ u16 ce_rx_pkts_budget;
+ u8 ci_band_num;
+ bool ci_mult_ampdu_in_txop_en;
+ u8 ce_wmm_aifsn[AC_MAX];
+ u8 ce_wmm_cwmin[AC_MAX];
+ u8 ce_wmm_cwmax[AC_MAX];
+ u16 ce_wmm_txop[AC_MAX];
+ u8 ci_su_force_min_spacing;
+ u8 ci_mu_force_min_spacing;
+ u8 ci_tf_mac_pad_dur;
+ u32 ci_cca_timeout;
+ u16 ce_tx_ba_session_timeout;
+ bool ci_motion_sense_en;
+ s8 ci_motion_sense_rssi_thr;
+ u8 ci_wrs_max_bw;
+ u8 ci_wrs_min_bw;
+ s8 ci_wrs_fixed_rate[WRS_FIXED_PARAM_MAX];
+ u8 ce_he_mcs_nss_supp_tx[WRS_SS_MAX];
+ u8 ce_he_mcs_nss_supp_rx[WRS_SS_MAX];
+ u8 ce_vht_mcs_nss_supp_tx[WRS_SS_MAX];
+ u8 ce_vht_mcs_nss_supp_rx[WRS_SS_MAX];
+ u8 ci_pe_duration;
+ u8 ci_pe_duration_bcast;
+ u8 ci_gain_update_enable;
+ u8 ci_mcs_sig_b;
+ u8 ci_spp_ksr_value;
+ bool ci_rx_padding_en;
+ bool ci_stats_en;
+ bool ci_bar_disable;
+ bool ci_ofdm_only;
+ bool ci_hw_bsr;
+ bool ci_drop_to_lower_bw;
+ bool ci_force_icmp_single;
+ bool ce_wrs_rx_en;
+ u8 ci_hr_factor[CHNL_BW_MAX];
+ bool ci_csd_en;
+ bool ci_signal_extension_en;
+ bool ci_vht_cap_24g;
+ u32 ci_tx_digital_gain;
+ u32 ci_tx_digital_gain_cck;
+ s8 ci_ofdm_cck_power_offset;
+ bool ci_mac_clk_gating_en;
+ bool ci_phy_clk_gating_en;
+ bool ci_imaging_blocker;
+ u8 ci_sensing_ndp_tx_chain_mask;
+ u8 ci_sensing_ndp_tx_bw;
+ u8 ci_sensing_ndp_tx_format;
+ u8 ci_sensing_ndp_tx_num_ltf;
+ u8 ci_calib_ant_tx[MAX_ANTENNAS];
+ u8 ci_calib_ant_rx[MAX_ANTENNAS];
+ s8 ci_cca_ed_rise_thr_dbm;
+ s8 ci_cca_ed_fall_thr_dbm;
+ u8 ci_cca_cs_en;
+ u8 ci_cca_modem_en;
+ u8 ci_cca_main_ant;
+ u8 ci_cca_second_ant;
+ u8 ci_cca_flag0_ctrl;
+ u8 ci_cca_flag1_ctrl;
+ u8 ci_cca_flag2_ctrl;
+ u8 ci_cca_flag3_ctrl;
+ s8 ci_cca_gi_rise_thr_dbm;
+ s8 ci_cca_gi_fall_thr_dbm;
+ s8 ci_cca_gi_pow_lim_dbm;
+ u16 ci_cca_ed_en;
+ u8 ci_cca_gi_en;
+ bool ci_rx_he_mu_ppdu;
+ bool ci_fast_rx_en;
+ u8 ci_distance_auto_resp_all;
+ u8 ci_distance_auto_resp_msta;
+ bool ci_fw_disable_recovery;
+ bool ce_listener_en;
+ bool ci_tx_delay_tstamp_en;
+ u8 ci_calib_tx_init_tx_gain[MAX_ANTENNAS];
+ u8 ci_calib_tx_init_rx_gain[MAX_ANTENNAS];
+ u8 ci_calib_rx_init_tx_gain[MAX_ANTENNAS];
+ u8 ci_calib_rx_init_rx_gain[MAX_ANTENNAS];
+ u8 ci_calib_conf_rx_gain_upper_limit;
+ u8 ci_calib_conf_rx_gain_lower_limit;
+ u8 ci_calib_conf_tone_vector_20bw[IQ_NUM_TONES_REQ];
+ u8 ci_calib_conf_tone_vector_40bw[IQ_NUM_TONES_REQ];
+ u8 ci_calib_conf_tone_vector_80bw[IQ_NUM_TONES_REQ];
+ u8 ci_calib_conf_tone_vector_160bw[IQ_NUM_TONES_REQ];
+ u32 ci_calib_conf_gp_rad_trshld;
+ u32 ci_calib_conf_ga_lin_upper_trshld;
+ u32 ci_calib_conf_ga_lin_lower_trshld;
+ u8 ci_calib_conf_singletons_num;
+ u16 ci_calib_conf_rampup_time;
+ u16 ci_calib_conf_lo_coarse_step;
+ u16 ci_calib_conf_lo_fine_step;
+#ifdef CONFIG_CL8K_EEPROM_STM24256
+ u16 ci_calib_eeprom_channels_20mhz[EEPROM_CALIB_DATA_ELEM_NUM_20MHZ_TCV0];
+ u16 ci_calib_eeprom_channels_40mhz[EEPROM_CALIB_DATA_ELEM_NUM_40MHZ_TCV0];
+ u16 ci_calib_eeprom_channels_80mhz[EEPROM_CALIB_DATA_ELEM_NUM_80MHZ_TCV0];
+ u16 ci_calib_eeprom_channels_160mhz[EEPROM_CALIB_DATA_ELEM_NUM_160MHZ_TCV0];
+#endif
+ u16 ci_mesh_basic_rates[MESH_BASIC_RATE_MAX];
+};
+
+int cl_tcv_config_read(struct cl_hw *cl_hw);
+u8 cl_tcv_config_get_num_ap(struct cl_hw *cl_hw);
+int cl_tcv_config_alloc(struct cl_hw *cl_hw);
+void cl_tcv_config_free(struct cl_hw *cl_hw);
+void cl_tcv_config_validate_calib_params(struct cl_hw *cl_hw);
+
+#endif /* CL_TCV_CONFIG_H */
--
2.36.1


2022-05-24 14:10:15

by Viktor Barna

Subject: [RFC v2 36/96] cl8k: add key.c

From: Viktor Barna <[email protected]>

(Part of the split. Please take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/key.c | 382 +++++++++++++++++++++++++
1 file changed, 382 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/key.c

diff --git a/drivers/net/wireless/celeno/cl8k/key.c b/drivers/net/wireless/celeno/cl8k/key.c
new file mode 100644
index 000000000000..99821b86a795
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/key.c
@@ -0,0 +1,382 @@
+// SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#include "chip.h"
+#include "phy.h"
+#include "fw.h"
+#include "sta.h"
+#include "enhanced_tim.h"
+#include "key.h"
+
+#define DECRYPT_CCMPSUCCESS_FLAGS (RX_FLAG_DECRYPTED | RX_FLAG_MIC_STRIPPED)
+
+void cl_vif_key_init(struct cl_vif *cl_vif)
+{
+ INIT_LIST_HEAD(&cl_vif->key_list_head);
+}
+
+static struct cl_key_conf *cl_vif_key_find(struct cl_vif *cl_vif,
+ struct ieee80211_key_conf *key_conf_in)
+{
+ struct cl_key_conf *key, *tmp, *out = NULL;
+
+ if (!key_conf_in)
+ return NULL;
+
+ list_for_each_entry_safe(key, tmp, &cl_vif->key_list_head, list) {
+ struct ieee80211_key_conf *key_conf = key->key_conf;
+
+ if (key_conf_in->keyidx != key_conf->keyidx)
+ continue;
+
+ out = key;
+ break;
+ }
+
+ return out;
+}
+
+static struct ieee80211_key_conf *cl_vif_key_conf_default(struct cl_vif *cl_vif)
+{
+ struct cl_key_conf *key, *tmp;
+ struct ieee80211_key_conf *out = NULL;
+
+ list_for_each_entry_safe(key, tmp, &cl_vif->key_list_head, list) {
+ if (key->key_conf->keyidx != cl_vif->key_idx_default)
+ continue;
+
+ out = key->key_conf;
+ break;
+ }
+
+ return out;
+}
+
+static int cl_vif_key_add(struct cl_vif *cl_vif, struct ieee80211_key_conf *key_conf)
+{
+ struct cl_key_conf *key = NULL, *old_key = NULL;
+
+ key = kzalloc(sizeof(*key), GFP_KERNEL);
+ if (!key)
+ return -ENOMEM;
+
+ if (!list_empty(&cl_vif->key_list_head))
+ old_key = list_first_entry(&cl_vif->key_list_head, struct cl_key_conf, list);
+
+ cl_vif->key_idx_default = old_key ? old_key->key_conf->keyidx : key_conf->keyidx;
+ key->key_conf = key_conf;
+ list_add_tail(&key->list, &cl_vif->key_list_head);
+
+ return 0;
+}
+
+static void cl_vif_key_del(struct cl_vif *cl_vif, struct ieee80211_key_conf *key_conf_in)
+{
+ struct cl_key_conf *key, *tmp;
+
+ if (!key_conf_in)
+ return;
+
+ list_for_each_entry_safe(key, tmp, &cl_vif->key_list_head, list) {
+ struct ieee80211_key_conf *key_conf = key->key_conf;
+
+ if (key_conf_in->keyidx != key_conf->keyidx)
+ continue;
+
+ list_del(&key->list);
+ kfree(key);
+ }
+
+ if (!list_empty(&cl_vif->key_list_head)) {
+ struct cl_key_conf *new_key = list_first_entry(&cl_vif->key_list_head,
+ struct cl_key_conf, list);
+
+ cl_vif->key_idx_default = new_key->key_conf->keyidx;
+ }
+}
+
+static int cl_vif_key_check_and_add(struct cl_vif *cl_vif,
+ struct ieee80211_key_conf *key_conf)
+{
+ struct cl_key_conf *key = cl_vif_key_find(cl_vif, key_conf);
+
+ if (key) {
+ cl_dbg_warn(cl_vif->cl_hw,
+ "[%s] error: previous key found. delete old key and add new key\n",
+ __func__);
+ cl_vif_key_del(cl_vif, key->key_conf);
+ }
+
+ return cl_vif_key_add(cl_vif, key_conf);
+}
+
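+/*
+ * Extract the packet number from a CCMP/GCMP header into big-endian byte
+ * order (PN5 first) so that memcmp() yields a numerical comparison.
+ * Header layout: PN0, PN1, reserved, key id / Ext IV, PN2, PN3, PN4, PN5.
+ */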
+static inline void cl_ccmp_hdr2pn(u8 *pn, u8 *hdr)
+{
+ pn[0] = hdr[7];
+ pn[1] = hdr[6];
+ pn[2] = hdr[5];
+ pn[3] = hdr[4];
+ pn[4] = hdr[1];
+ pn[5] = hdr[0];
+}
+
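+/*
+ * Replay protection: drop the frame if its PN is lower than the last PN
+ * recorded for this STA/TID, otherwise record the new PN.
+ */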
+static int cl_key_validate_pn(struct cl_hw *cl_hw, struct cl_sta *cl_sta, struct sk_buff *skb)
+{
+ struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+ int hdrlen = 0, res = 0;
+ u8 pn[IEEE80211_CCMP_PN_LEN];
+ u8 tid = 0;
+
+ hdrlen = ieee80211_hdrlen(hdr->frame_control);
+ tid = ieee80211_get_tid(hdr);
+
+ cl_ccmp_hdr2pn(pn, skb->data + hdrlen);
+ res = memcmp(pn, cl_sta->rx_pn[tid], IEEE80211_CCMP_PN_LEN);
+ if (res < 0) {
+ cl_hw->rx_info.pkt_drop_invalid_pn++;
+ return -1;
+ }
+
+ memcpy(cl_sta->rx_pn[tid], pn, IEEE80211_CCMP_PN_LEN);
+
+ return 0;
+}
+
+void cl_vif_key_deinit(struct cl_vif *cl_vif)
+{
+ struct cl_key_conf *key, *tmp;
+
+ list_for_each_entry_safe(key, tmp, &cl_vif->key_list_head, list) {
+ list_del(&key->list);
+ kfree(key);
+ }
+}
+
+static int cl_cmd_set_key(struct cl_hw *cl_hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ struct ieee80211_key_conf *key)
+{
+ int error = 0;
+ struct mm_key_add_cfm *key_add_cfm;
+ u8 cipher_suite = 0;
+
+ /* Retrieve the cipher suite selector */
+ switch (key->cipher) {
+ case WLAN_CIPHER_SUITE_WEP40:
+ cipher_suite = MAC_CIPHER_SUITE_WEP40;
+ break;
+ case WLAN_CIPHER_SUITE_WEP104:
+ cipher_suite = MAC_CIPHER_SUITE_WEP104;
+ break;
+ case WLAN_CIPHER_SUITE_TKIP:
+ cipher_suite = MAC_CIPHER_SUITE_TKIP;
+ break;
+ case WLAN_CIPHER_SUITE_CCMP:
+ cipher_suite = MAC_CIPHER_SUITE_CCMP;
+ break;
+ case WLAN_CIPHER_SUITE_GCMP:
+ case WLAN_CIPHER_SUITE_GCMP_256:
+ cipher_suite = MAC_CIPHER_SUITE_GCMP;
+ break;
+ case WLAN_CIPHER_SUITE_AES_CMAC:
+ return -EOPNOTSUPP;
+ default:
+ return -EINVAL;
+ }
+
+ error = cl_msg_tx_key_add(cl_hw, vif, sta, key, cipher_suite);
+ if (error)
+ return error;
+
+ key_add_cfm = (struct mm_key_add_cfm *)(cl_hw->msg_cfm_params[MM_KEY_ADD_CFM]);
+ if (!key_add_cfm)
+ return -ENOMSG;
+
+ if (key_add_cfm->status != 0) {
+ cl_dbg_verbose(cl_hw, "Status Error (%u)\n", key_add_cfm->status);
+ cl_msg_tx_free_cfm_params(cl_hw, MM_KEY_ADD_CFM);
+ return -EIO;
+ }
+
+ /* Save the index retrieved from firmware */
+ key->hw_key_idx = key_add_cfm->hw_key_idx;
+
+ cl_msg_tx_free_cfm_params(cl_hw, MM_KEY_ADD_CFM);
+
+ /*
+ * Now inform mac80211 about our choices regarding header field generation:
+ * we let mac80211 take care of all of it
+ */
+ key->flags |= IEEE80211_KEY_FLAG_GENERATE_IV;
+ if (key->cipher == WLAN_CIPHER_SUITE_TKIP)
+ key->flags |= IEEE80211_KEY_FLAG_GENERATE_MMIC;
+
+ if (sta) {
+ struct cl_sta *cl_sta = (struct cl_sta *)sta->drv_priv;
+
+ cl_sta->key_conf = key;
+ } else {
+ error = cl_vif_key_check_and_add((struct cl_vif *)vif->drv_priv, key);
+ }
+
+ return error;
+}
+
+static int cl_cmd_disable_key(struct cl_hw *cl_hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ struct ieee80211_key_conf *key)
+{
+ int ret = 0;
+ struct cl_sta *cl_sta = NULL;
+ struct cl_tx_queue *tx_queue = &cl_hw->tx_queues->single[HIGH_PRIORITY_QUEUE];
+
+ if (sta) {
+ cl_sta = (struct cl_sta *)sta->drv_priv;
+
+ cl_sta->key_conf = NULL;
+ cl_sta->key_disable = true;
+
+ /*
+ * Make sure there aren't any packets in firmware before deleting the key,
+ * otherwise they will be transmitted without encryption.
+ */
+ cl_sta->stop_tx = true;
+ cl_single_cfm_clear_tim_bit_sta(cl_hw, cl_sta->sta_idx);
+ cl_agg_cfm_clear_tim_bit_sta(cl_hw, cl_sta);
+ cl_txq_flush_sta(cl_hw, cl_sta);
+ cl_single_cfm_poll_empty_sta(cl_hw, cl_sta->sta_idx);
+ cl_agg_cfm_poll_empty_sta(cl_hw, cl_sta);
+
+ if (!list_empty(&tx_queue->hdrs)) {
+ spin_lock_bh(&cl_hw->tx_lock_single);
+ cl_enhanced_tim_set_tx_single(cl_hw, HIGH_PRIORITY_QUEUE,
+ tx_queue->hw_index,
+ false, tx_queue->cl_sta,
+ tx_queue->tid);
+ spin_unlock_bh(&cl_hw->tx_lock_single);
+ }
+ } else {
+ cl_vif_key_del((struct cl_vif *)vif->drv_priv, key);
+ }
+
+ ret = cl_msg_tx_key_del(cl_hw, key->hw_key_idx);
+
+ if (cl_sta)
+ cl_sta->stop_tx = false;
+
+ return ret;
+}
+
+int cl_key_set(struct cl_hw *cl_hw,
+ enum set_key_cmd cmd,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ struct ieee80211_key_conf *key)
+{
+ int error = 0;
+
+ switch (cmd) {
+ case SET_KEY:
+ error = cl_cmd_set_key(cl_hw, vif, sta, key);
+ break;
+
+ case DISABLE_KEY:
+ error = cl_cmd_disable_key(cl_hw, vif, sta, key);
+ break;
+
+ default:
+ error = -EINVAL;
+ break;
+ }
+
+ return error;
+}
+
+struct ieee80211_key_conf *cl_key_get(struct cl_sta *cl_sta)
+{
+ if (cl_sta->key_conf)
+ return cl_sta->key_conf;
+
+ if (cl_sta->cl_vif)
+ return cl_vif_key_conf_default(cl_sta->cl_vif);
+
+ return NULL;
+}
+
+bool cl_key_is_cipher_ccmp_gcmp(struct ieee80211_key_conf *keyconf)
+{
+ u32 cipher;
+
+ if (!keyconf)
+ return false;
+
+ cipher = keyconf->cipher;
+
+ return ((cipher == WLAN_CIPHER_SUITE_CCMP) ||
+ (cipher == WLAN_CIPHER_SUITE_GCMP) ||
+ (cipher == WLAN_CIPHER_SUITE_GCMP_256));
+}
+
+void cl_key_ccmp_gcmp_pn_to_hdr(u8 *hdr, u64 pn, int key_id)
+{
+ hdr[0] = pn;
+ hdr[1] = pn >> 8;
+ hdr[2] = 0;
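+ /* Ext IV bit (0x20) and key index in bits 6-7 */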
+ hdr[3] = 0x20 | (key_id << 6);
+ hdr[4] = pn >> 16;
+ hdr[5] = pn >> 24;
+ hdr[6] = pn >> 32;
+ hdr[7] = pn >> 40;
+}
+
+u8 cl_key_get_cipher_len(struct sk_buff *skb)
+{
+ struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(skb);
+ struct ieee80211_key_conf *key_conf = tx_info->control.hw_key;
+
+ if (key_conf) {
+ switch (key_conf->cipher) {
+ case WLAN_CIPHER_SUITE_WEP40:
+ case WLAN_CIPHER_SUITE_WEP104:
+ return IEEE80211_WEP_IV_LEN;
+ case WLAN_CIPHER_SUITE_TKIP:
+ return IEEE80211_TKIP_IV_LEN;
+ case WLAN_CIPHER_SUITE_CCMP:
+ return IEEE80211_CCMP_HDR_LEN;
+ case WLAN_CIPHER_SUITE_CCMP_256:
+ return IEEE80211_CCMP_256_HDR_LEN;
+ case WLAN_CIPHER_SUITE_GCMP:
+ case WLAN_CIPHER_SUITE_GCMP_256:
+ return IEEE80211_GCMP_HDR_LEN;
+ }
+ }
+
+ return 0;
+}
+
+int cl_key_handle_pn_validation(struct cl_hw *cl_hw, struct sk_buff *skb,
+ struct cl_sta *cl_sta)
+{
+ struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+ struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);
+
+ if (!ieee80211_is_data(hdr->frame_control) ||
+ ieee80211_is_frag(hdr))
+ return CL_PN_VALID_STATE_NOT_NEEDED;
+
+ if (!(status->flag & DECRYPT_CCMPSUCCESS_FLAGS))
+ return CL_PN_VALID_STATE_NOT_NEEDED;
+
+ if (!cl_sta)
+ return CL_PN_VALID_STATE_NOT_NEEDED;
+
+ if (cl_key_validate_pn(cl_hw, cl_sta, skb))
+ return CL_PN_VALID_STATE_FAILED;
+
+ status = IEEE80211_SKB_RXCB(skb);
+ status->flag |= RX_FLAG_PN_VALIDATED;
+
+ return CL_PN_VALID_STATE_SUCCESS;
+}
--
2.36.1


2022-05-24 14:25:20

by Viktor Barna

Subject: [RFC v2 41/96] cl8k: add mac_addr.h

From: Viktor Barna <[email protected]>

(Part of the split. Please take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/mac_addr.h | 61 +++++++++++++++++++++
1 file changed, 61 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/mac_addr.h

diff --git a/drivers/net/wireless/celeno/cl8k/mac_addr.h b/drivers/net/wireless/celeno/cl8k/mac_addr.h
new file mode 100644
index 000000000000..3f916f2b7f7b
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/mac_addr.h
@@ -0,0 +1,61 @@
+/* SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause */
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#ifndef CL_MAC_ADDR_H
+#define CL_MAC_ADDR_H
+
+#include "hw.h"
+
+int cl_mac_addr_set(struct cl_hw *cl_hw);
+
+static inline void cl_mac_addr_copy(u8 *dest_addr, const u8 *src_addr)
+{
+ memcpy(dest_addr, src_addr, ETH_ALEN);
+}
+
+static inline bool cl_mac_addr_compare(const u8 *addr1, const u8 *addr2)
+{
+ return !memcmp(addr1, addr2, ETH_ALEN);
+}
+
+static inline bool cl_mac_addr_is_zero(const u8 *addr)
+{
+ const u8 addr_zero[ETH_ALEN] = {0};
+
+ return !memcmp(addr, addr_zero, ETH_ALEN);
+}
+
+static inline bool cl_mac_addr_is_broadcast(const u8 *addr)
+{
+ const u8 addr_bcast[ETH_ALEN] = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff};
+
+ return !memcmp(addr, addr_bcast, ETH_ALEN);
+}
+
+static inline void cl_mac_addr_array_to_nxmac(u8 *array, u32 *low, u32 *high)
+{
+ /* Convert a MAC address given as a byte array to nxmac register form.
+ * Input: array - MAC address
+ * Output: low - array[0..3], high - array[4..5]
+ */
+ u8 i;
+
+ for (i = 0; i < 4; i++)
+ *low |= (u32)(((u32)array[i]) << (i * 8));
+
+ for (i = 0; i < 2; i++)
+ *high |= (u32)(((u32)array[i + 4]) << (i * 8));
+}
+
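+/* Return the index of addr in cl_hw->addresses, or BSS_INVALID_IDX if absent */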
+static inline u8 cl_mac_addr_find_idx(struct cl_hw *cl_hw, u8 *addr)
+{
+ u8 i;
+
+ for (i = 0; i < cl_hw->n_addresses; i++)
+ if (cl_mac_addr_compare(cl_hw->addresses[i].addr, addr))
+ return i;
+
+ return BSS_INVALID_IDX;
+}
+
+#endif /* CL_MAC_ADDR_H */
--
2.36.1


2022-05-24 14:31:26

by Viktor Barna

Subject: [RFC v2 26/96] cl8k: add eeprom.h

From: Viktor Barna <[email protected]>

(Part of the split. Please take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/eeprom.h | 283 ++++++++++++++++++++++
1 file changed, 283 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/eeprom.h

diff --git a/drivers/net/wireless/celeno/cl8k/eeprom.h b/drivers/net/wireless/celeno/cl8k/eeprom.h
new file mode 100644
index 000000000000..2680af90484b
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/eeprom.h
@@ -0,0 +1,283 @@
+/* SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause */
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#ifndef CL_EEPROM_H
+#define CL_EEPROM_H
+
+#include <linux/kernel.h>
+
+#include "def.h"
+#include "phy.h"
+#include "calib.h"
+
+#define SERIAL_NUMBER_SIZE 32
+#define BIT_MAP_SIZE 20
+#define EXT_BIT_MAP_SIZE (BIT_MAP_SIZE * 2)
+#define NUM_OF_PIVOTS 20
+#define NUM_PIVOT_PHYS (MAX_ANTENNAS * NUM_OF_PIVOTS)
+
+#ifdef CONFIG_CL8K_EEPROM_STM24256
+#define BIT_MAP_SIZE_20MHZ_TCV0 9
+#define BIT_MAP_SIZE_20MHZ_TCV1 6
+#define BIT_MAP_SIZE_40MHZ_TCV0 4
+#define BIT_MAP_SIZE_40MHZ_TCV1 4
+#define BIT_MAP_SIZE_80MHZ_TCV0 2
+#define BIT_MAP_SIZE_80MHZ_TCV1 2
+#define BIT_MAP_SIZE_160MHZ_TCV0 1
+#define BIT_MAP_SIZE_160MHZ_TCV1 3
+
+#define EEPROM_CALIB_DATA_ELEM_NUM_20MHZ_TCV0 10
+#define EEPROM_CALIB_DATA_ELEM_NUM_20MHZ_TCV1 7
+#define EEPROM_CALIB_DATA_ELEM_NUM_40MHZ_TCV0 9
+#define EEPROM_CALIB_DATA_ELEM_NUM_40MHZ_TCV1 7
+#define EEPROM_CALIB_DATA_ELEM_NUM_80MHZ_TCV0 8
+#define EEPROM_CALIB_DATA_ELEM_NUM_80MHZ_TCV1 6
+#define EEPROM_CALIB_DATA_ELEM_NUM_160MHZ_TCV0 6
+#define EEPROM_CALIB_DATA_ELEM_NUM_160MHZ_TCV1 2
+#endif
+
+struct eeprom_hw {
+ u8 reserved[96];
+} __packed;
+
+struct eeprom_general {
+ u8 version;
+ u8 flavor;
+ u8 mac_address[6];
+ u8 temp_diff; /* Default value TEMP_DIFF_INVALID = 0x7F */
+ u8 serial_number[SERIAL_NUMBER_SIZE];
+ u8 pwr_table_id[2];
+ u8 reserved[53];
+} __packed;
+
+struct eeprom_fem {
+ u8 wiring_id;
+ u16 fem_lut[FEM_TYPE_MAX];
+ u32 platform_id;
+ u8 reserved[19];
+} __packed;
+
+struct eeprom_phy_calib {
+ s8 pow;
+ s8 offset;
+ s8 tmp;
+} __packed;
+
+struct point {
+ u8 chan;
+ u8 phy;
+ u8 idx;
+ u16 addr;
+ struct eeprom_phy_calib calib;
+} __packed;
+
+#ifdef CONFIG_CL8K_EEPROM_STM24256
+struct iq {
+ __le32 coef0;
+ __le32 coef1;
+ __le32 coef2;
+ __le32 gain;
+} __packed;
+
+struct score {
+ s8 iq_tx_score;
+ s8 iq_tx_worst_score;
+ s8 iq_rx_score;
+ s8 iq_rx_worst_score;
+ s16 dcoc_i_mv[DCOC_LNA_GAIN_NUM];
+ s16 dcoc_q_mv[DCOC_LNA_GAIN_NUM];
+ s32 lolc_score;
+} __packed;
+
+struct eeprom_calib_data {
+ u8 valid;
+ u8 temperature;
+ u32 lolc[MAX_ANTENNAS];
+ struct cl_dcoc_calib dcoc[MAX_ANTENNAS][DCOC_LNA_GAIN_NUM];
+ struct iq iq_tx[MAX_ANTENNAS];
+ struct iq iq_rx[MAX_ANTENNAS];
+ struct score score[MAX_ANTENNAS];
+} __packed;
+#endif
+
+struct eeprom_calib_power {
+ u16 freq_offset;
+ u8 chan_bmp[BIT_MAP_SIZE];
+ struct eeprom_phy_calib phy_calib[NUM_PIVOT_PHYS];
+} __packed;
+
+#ifdef CONFIG_CL8K_EEPROM_STM24256
+struct eeprom_calib_iq_dcoc {
+ u8 calib_version;
+ u8 chan_20mhz_bmp_tcv0[BIT_MAP_SIZE_20MHZ_TCV0];
+ u8 chan_20mhz_bmp_tcv1[BIT_MAP_SIZE_20MHZ_TCV1];
+ u8 chan_40mhz_bmp_tcv0[BIT_MAP_SIZE_40MHZ_TCV0];
+ u8 chan_40mhz_bmp_tcv1[BIT_MAP_SIZE_40MHZ_TCV1];
+ u8 chan_80mhz_bmp_tcv0[BIT_MAP_SIZE_80MHZ_TCV0];
+ u8 chan_80mhz_bmp_tcv1[BIT_MAP_SIZE_80MHZ_TCV1];
+ u8 chan_160mhz_bmp_tcv0[BIT_MAP_SIZE_160MHZ_TCV0];
+ u8 chan_160mhz_bmp_tcv1[BIT_MAP_SIZE_160MHZ_TCV1];
+ struct eeprom_calib_data
+ calib_20_data_tcv0[EEPROM_CALIB_DATA_ELEM_NUM_20MHZ_TCV0];
+ struct eeprom_calib_data
+ calib_20_data_tcv1[EEPROM_CALIB_DATA_ELEM_NUM_20MHZ_TCV1];
+ struct eeprom_calib_data
+ calib_40_data_tcv0[EEPROM_CALIB_DATA_ELEM_NUM_40MHZ_TCV0];
+ struct eeprom_calib_data
+ calib_40_data_tcv1[EEPROM_CALIB_DATA_ELEM_NUM_40MHZ_TCV1];
+ struct eeprom_calib_data
+ calib_80_data_tcv0[EEPROM_CALIB_DATA_ELEM_NUM_80MHZ_TCV0];
+ struct eeprom_calib_data
+ calib_80_data_tcv1[EEPROM_CALIB_DATA_ELEM_NUM_80MHZ_TCV1];
+ struct eeprom_calib_data
+ calib_160_data_tcv0[EEPROM_CALIB_DATA_ELEM_NUM_160MHZ_TCV0];
+ struct eeprom_calib_data
+ calib_160_data_tcv1[EEPROM_CALIB_DATA_ELEM_NUM_160MHZ_TCV1];
+} __packed;
+#endif
+
+struct eeprom {
+ struct eeprom_hw hw;
+ struct eeprom_general general;
+ struct eeprom_fem fem;
+ struct eeprom_calib_power calib_power;
+#ifdef CONFIG_CL8K_EEPROM_STM24256
+ struct eeprom_calib_iq_dcoc calib_iq_dcoc;
+#endif
+} __packed;
+
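+/*
+ * EEPROM layout helpers: ADDR_* constants are byte offsets of fields within
+ * struct eeprom, and most SIZE_* constants are derived as the distance to
+ * the address of the field that follows.
+ */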
+enum {
+ ADDR_HW = offsetof(struct eeprom, hw),
+ ADDR_HW_RESERVED = ADDR_HW + offsetof(struct eeprom_hw, reserved),
+
+ ADDR_GEN = offsetof(struct eeprom, general),
+ ADDR_GEN_VERSION = ADDR_GEN + offsetof(struct eeprom_general, version),
+ ADDR_GEN_FLAVOR = ADDR_GEN + offsetof(struct eeprom_general, flavor),
+ ADDR_GEN_MAC_ADDR = ADDR_GEN + offsetof(struct eeprom_general, mac_address),
+ ADDR_GEN_TEMP_DIFF = ADDR_GEN + offsetof(struct eeprom_general, temp_diff),
+ ADDR_GEN_SERIAL_NUMBER = ADDR_GEN + offsetof(struct eeprom_general, serial_number),
+ ADDR_GEN_PWR_TABLE_ID = ADDR_GEN + offsetof(struct eeprom_general, pwr_table_id),
+ ADDR_GEN_RESERVED = ADDR_GEN + offsetof(struct eeprom_general, reserved),
+
+ ADDR_FEM = offsetof(struct eeprom, fem),
+ ADDR_FEM_WIRING_ID = ADDR_FEM + offsetof(struct eeprom_fem, wiring_id),
+ ADDR_FEM_LUT = ADDR_FEM + offsetof(struct eeprom_fem, fem_lut),
+ ADDR_FEM_PLATFORM_ID = ADDR_FEM + offsetof(struct eeprom_fem, platform_id),
+ ADDR_FEM_RESERVED = ADDR_FEM + offsetof(struct eeprom_fem, reserved),
+
+ ADDR_CALIB_POWER = offsetof(struct eeprom, calib_power),
+ ADDR_CALIB_POWER_FREQ_OFFSET = ADDR_CALIB_POWER +
+ offsetof(struct eeprom_calib_power, freq_offset),
+ ADDR_CALIB_POWER_CHAN_BMP = ADDR_CALIB_POWER +
+ offsetof(struct eeprom_calib_power, chan_bmp),
+ ADDR_CALIB_POWER_PHY = ADDR_CALIB_POWER +
+ offsetof(struct eeprom_calib_power, phy_calib),
+
+#ifdef CONFIG_CL8K_EEPROM_STM24256
+ ADDR_CALIB_IQ_DCOC = offsetof(struct eeprom, calib_iq_dcoc),
+ ADDR_CALIB_IQ_DCOC_VERSION = ADDR_CALIB_IQ_DCOC +
+ offsetof(struct eeprom_calib_iq_dcoc, calib_version),
+ ADDR_CALIB_IQ_DCOC_CHANNEL_20MHZ_BMP_TCV0 = ADDR_CALIB_IQ_DCOC +
+ offsetof(struct eeprom_calib_iq_dcoc, chan_20mhz_bmp_tcv0),
+ ADDR_CALIB_IQ_DCOC_CHANNEL_20MHZ_BMP_TCV1 = ADDR_CALIB_IQ_DCOC +
+ offsetof(struct eeprom_calib_iq_dcoc, chan_20mhz_bmp_tcv1),
+ ADDR_CALIB_IQ_DCOC_CHANNEL_40MHZ_BMP_TCV0 = ADDR_CALIB_IQ_DCOC +
+ offsetof(struct eeprom_calib_iq_dcoc, chan_40mhz_bmp_tcv0),
+ ADDR_CALIB_IQ_DCOC_CHANNEL_40MHZ_BMP_TCV1 = ADDR_CALIB_IQ_DCOC +
+ offsetof(struct eeprom_calib_iq_dcoc, chan_40mhz_bmp_tcv1),
+ ADDR_CALIB_IQ_DCOC_CHANNEL_80MHZ_BMP_TCV0 = ADDR_CALIB_IQ_DCOC +
+ offsetof(struct eeprom_calib_iq_dcoc, chan_80mhz_bmp_tcv0),
+ ADDR_CALIB_IQ_DCOC_CHANNEL_80MHZ_BMP_TCV1 = ADDR_CALIB_IQ_DCOC +
+ offsetof(struct eeprom_calib_iq_dcoc, chan_80mhz_bmp_tcv1),
+ ADDR_CALIB_IQ_DCOC_CHANNEL_160MHZ_BMP_TCV0 = ADDR_CALIB_IQ_DCOC +
+ offsetof(struct eeprom_calib_iq_dcoc, chan_160mhz_bmp_tcv0),
+ ADDR_CALIB_IQ_DCOC_CHANNEL_160MHZ_BMP_TCV1 = ADDR_CALIB_IQ_DCOC +
+ offsetof(struct eeprom_calib_iq_dcoc, chan_160mhz_bmp_tcv1),
+ ADDR_CALIB_IQ_DCOC_DATA_20MHZ_TCV0 = ADDR_CALIB_IQ_DCOC +
+ offsetof(struct eeprom_calib_iq_dcoc, calib_20_data_tcv0),
+ ADDR_CALIB_IQ_DCOC_DATA_20MHZ_TCV1 = ADDR_CALIB_IQ_DCOC +
+ offsetof(struct eeprom_calib_iq_dcoc, calib_20_data_tcv1),
+ ADDR_CALIB_IQ_DCOC_DATA_40MHZ_TCV0 = ADDR_CALIB_IQ_DCOC +
+ offsetof(struct eeprom_calib_iq_dcoc, calib_40_data_tcv0),
+ ADDR_CALIB_IQ_DCOC_DATA_40MHZ_TCV1 = ADDR_CALIB_IQ_DCOC +
+ offsetof(struct eeprom_calib_iq_dcoc, calib_40_data_tcv1),
+ ADDR_CALIB_IQ_DCOC_DATA_80MHZ_TCV0 = ADDR_CALIB_IQ_DCOC +
+ offsetof(struct eeprom_calib_iq_dcoc, calib_80_data_tcv0),
+ ADDR_CALIB_IQ_DCOC_DATA_80MHZ_TCV1 = ADDR_CALIB_IQ_DCOC +
+ offsetof(struct eeprom_calib_iq_dcoc, calib_80_data_tcv1),
+ ADDR_CALIB_IQ_DCOC_DATA_160MHZ_TCV0 = ADDR_CALIB_IQ_DCOC +
+ offsetof(struct eeprom_calib_iq_dcoc, calib_160_data_tcv0),
+ ADDR_CALIB_IQ_DCOC_DATA_160MHZ_TCV1 = ADDR_CALIB_IQ_DCOC +
+ offsetof(struct eeprom_calib_iq_dcoc, calib_160_data_tcv1),
+#endif
+ SIZE_HW = sizeof(struct eeprom_hw),
+ SIZE_HW_RESERVED = ADDR_GEN - ADDR_HW_RESERVED,
+
+ SIZE_GEN = sizeof(struct eeprom_general),
+ SIZE_GEN_VERSION = ADDR_GEN_FLAVOR - ADDR_GEN_VERSION,
+ SIZE_GEN_FLAVOR = ADDR_GEN_MAC_ADDR - ADDR_GEN_FLAVOR,
+ SIZE_GEN_MAC_ADDR = ADDR_GEN_TEMP_DIFF - ADDR_GEN_MAC_ADDR,
+ SIZE_GEN_TEMP_DIFF = ADDR_GEN_SERIAL_NUMBER - ADDR_GEN_TEMP_DIFF,
+ SIZE_GEN_SERIAL_NUMBER = ADDR_GEN_PWR_TABLE_ID - ADDR_GEN_SERIAL_NUMBER,
+ SIZE_GEN_PWR_TABLE_ID = ADDR_GEN_RESERVED - ADDR_GEN_PWR_TABLE_ID,
+ SIZE_GEN_RESERVED = ADDR_FEM - ADDR_GEN_RESERVED,
+
+ SIZE_FEM = sizeof(struct eeprom_fem),
+ SIZE_FEM_WIRING_ID = ADDR_FEM_LUT - ADDR_FEM_WIRING_ID,
+ SIZE_FEM_LUT = ADDR_FEM_PLATFORM_ID - ADDR_FEM_LUT,
+ SIZE_FEM_PLATFORM_ID = ADDR_FEM_RESERVED - ADDR_FEM_PLATFORM_ID,
+
+ SIZE_CALIB_POWER = sizeof(struct eeprom_calib_power),
+ SIZE_CALIB_POWER_FREQ_OFFSET = ADDR_CALIB_POWER_CHAN_BMP - ADDR_CALIB_POWER_FREQ_OFFSET,
+ SIZE_CALIB_POWER_CHAN_BMP = ADDR_CALIB_POWER_PHY - ADDR_CALIB_POWER_CHAN_BMP,
+#ifdef CONFIG_CL8K_EEPROM_STM24256
+ SIZE_CALIB_POWER_PHY = ADDR_CALIB_IQ_DCOC_VERSION - ADDR_CALIB_POWER_PHY,
+#else
+ SIZE_CALIB_POWER_PHY = sizeof(struct eeprom_phy_calib) * NUM_PIVOT_PHYS,
+#endif
+
+#ifdef CONFIG_CL8K_EEPROM_STM24256
+ SIZE_CALIB_IQ_DCOC_VERSION = ADDR_CALIB_IQ_DCOC_CHANNEL_20MHZ_BMP_TCV0 -
+ ADDR_CALIB_IQ_DCOC_VERSION,
+ SIZE_CALIB_IQ_DCOC_20MHZ_BMP_TCV0 = ADDR_CALIB_IQ_DCOC_CHANNEL_20MHZ_BMP_TCV1 -
+ ADDR_CALIB_IQ_DCOC_CHANNEL_20MHZ_BMP_TCV0,
+ SIZE_CALIB_IQ_DCOC_20MHZ_BMP_TCV1 = ADDR_CALIB_IQ_DCOC_CHANNEL_40MHZ_BMP_TCV0 -
+ ADDR_CALIB_IQ_DCOC_CHANNEL_20MHZ_BMP_TCV1,
+ SIZE_CALIB_IQ_DCOC_40MHZ_BMP_TCV0 = ADDR_CALIB_IQ_DCOC_CHANNEL_40MHZ_BMP_TCV1 -
+ ADDR_CALIB_IQ_DCOC_CHANNEL_40MHZ_BMP_TCV0,
+ SIZE_CALIB_IQ_DCOC_40MHZ_BMP_TCV1 = ADDR_CALIB_IQ_DCOC_CHANNEL_80MHZ_BMP_TCV0 -
+ ADDR_CALIB_IQ_DCOC_CHANNEL_40MHZ_BMP_TCV1,
+ SIZE_CALIB_IQ_DCOC_80MHZ_BMP_TCV0 = ADDR_CALIB_IQ_DCOC_CHANNEL_80MHZ_BMP_TCV1 -
+ ADDR_CALIB_IQ_DCOC_CHANNEL_80MHZ_BMP_TCV0,
+ SIZE_CALIB_IQ_DCOC_80MHZ_BMP_TCV1 = ADDR_CALIB_IQ_DCOC_CHANNEL_160MHZ_BMP_TCV0 -
+ ADDR_CALIB_IQ_DCOC_CHANNEL_80MHZ_BMP_TCV1,
+ SIZE_CALIB_IQ_DCOC_160MHZ_BMP_TCV0 = ADDR_CALIB_IQ_DCOC_CHANNEL_160MHZ_BMP_TCV1 -
+ ADDR_CALIB_IQ_DCOC_CHANNEL_160MHZ_BMP_TCV0,
+ SIZE_CALIB_IQ_DCOC_160MHZ_BMP_TCV1 = ADDR_CALIB_IQ_DCOC_DATA_20MHZ_TCV0 -
+ ADDR_CALIB_IQ_DCOC_CHANNEL_160MHZ_BMP_TCV1,
+ SIZE_CALIB_IQ_DCOC_DATA_20MHZ_TCV0 = ADDR_CALIB_IQ_DCOC_DATA_20MHZ_TCV1 -
+ ADDR_CALIB_IQ_DCOC_DATA_20MHZ_TCV0,
+ SIZE_CALIB_IQ_DCOC_DATA_20MHZ_TCV1 = ADDR_CALIB_IQ_DCOC_DATA_40MHZ_TCV0 -
+ ADDR_CALIB_IQ_DCOC_DATA_20MHZ_TCV1,
+ SIZE_CALIB_IQ_DCOC_DATA_40MHZ_TCV0 = ADDR_CALIB_IQ_DCOC_DATA_40MHZ_TCV1 -
+ ADDR_CALIB_IQ_DCOC_DATA_40MHZ_TCV0,
+ SIZE_CALIB_IQ_DCOC_DATA_40MHZ_TCV1 = ADDR_CALIB_IQ_DCOC_DATA_80MHZ_TCV0 -
+ ADDR_CALIB_IQ_DCOC_DATA_40MHZ_TCV1,
+ SIZE_CALIB_IQ_DCOC_DATA_80MHZ_TCV0 = ADDR_CALIB_IQ_DCOC_DATA_80MHZ_TCV1 -
+ ADDR_CALIB_IQ_DCOC_DATA_80MHZ_TCV0,
+ SIZE_CALIB_IQ_DCOC_DATA_80MHZ_TCV1 = ADDR_CALIB_IQ_DCOC_DATA_160MHZ_TCV0 -
+ ADDR_CALIB_IQ_DCOC_DATA_80MHZ_TCV1,
+ SIZE_CALIB_IQ_DCOC_DATA_160MHZ_TCV0 = ADDR_CALIB_IQ_DCOC_DATA_160MHZ_TCV1 -
+ ADDR_CALIB_IQ_DCOC_DATA_160MHZ_TCV0,
+ SIZE_CALIB_IQ_DCOC_DATA_160MHZ_TCV1 = sizeof(struct eeprom_calib_data) *
+ EEPROM_CALIB_DATA_ELEM_NUM_160MHZ_TCV1,
+ EEPROM_BASIC_NUM_BYTES = sizeof(struct eeprom) - sizeof(struct eeprom_calib_iq_dcoc),
+#else
+ EEPROM_BASIC_NUM_BYTES = sizeof(struct eeprom),
+#endif
+ EEPROM_NUM_BYTES = sizeof(struct eeprom),
+
+ EEPROM_LAST_BYTE = EEPROM_NUM_BYTES - 1,
+};
+
+#endif /* CL_EEPROM_H */
--
2.36.1


2022-05-24 14:33:22

by Viktor Barna

Subject: [RFC v2 74/96] cl8k: add sta.c

From: Viktor Barna <[email protected]>

(Part of the split. Please take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/sta.c | 507 +++++++++++++++++++++++++
1 file changed, 507 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/sta.c

diff --git a/drivers/net/wireless/celeno/cl8k/sta.c b/drivers/net/wireless/celeno/cl8k/sta.c
new file mode 100644
index 000000000000..26c280b05266
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/sta.c
@@ -0,0 +1,507 @@
+// SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#include "chip.h"
+#include "phy.h"
+#include "bf.h"
+#include "vns.h"
+#include "tx.h"
+#include "radio.h"
+#include "motion_sense.h"
+#include "mac_addr.h"
+#include "recovery.h"
+#include "dfs.h"
+#include "stats.h"
+#include "utils.h"
+#include "sta.h"
+
+void cl_sta_init(struct cl_hw *cl_hw)
+{
+ u32 i;
+
+ rwlock_init(&cl_hw->cl_sta_db.lock);
+ INIT_LIST_HEAD(&cl_hw->cl_sta_db.head);
+
+ for (i = 0; i < CL_STA_HASH_SIZE; i++)
+ INIT_LIST_HEAD(&cl_hw->cl_sta_db.hash[i]);
+}
+
+void cl_sta_init_sta(struct cl_hw *cl_hw, struct ieee80211_sta *sta)
+{
+ if (!cl_recovery_in_progress(cl_hw)) {
+ struct cl_sta *cl_sta = IEEE80211_STA_TO_CL_STA(sta);
+
+ /* Reset the whole cl_sta structure */
+ memset(cl_sta, 0, sizeof(struct cl_sta));
+ cl_sta->sta = sta;
+ /*
+ * Set sta_idx to 0xFF since FW expects this value as long as
+ * the STA is not fully connected
+ */
+ cl_sta->sta_idx = STA_IDX_INVALID;
+ }
+}
+
+static void cl_sta_add_to_lut(struct cl_hw *cl_hw, struct cl_vif *cl_vif, struct cl_sta *cl_sta)
+{
+ write_lock_bh(&cl_hw->cl_sta_db.lock);
+
+ cl_hw->cl_sta_db.num++;
+ cl_vif->num_sta++;
+ cl_hw->cl_sta_db.lut[cl_sta->sta_idx] = cl_sta;
+
+ /* Done here inside the lock because it allocates cl_stats */
+ cl_stats_sta_add(cl_hw, cl_sta);
+
+ write_unlock_bh(&cl_hw->cl_sta_db.lock);
+
+ cl_dbg_verbose(cl_hw, "mac=%pM, sta_idx=%u, vif_index=%u\n",
+ cl_sta->addr, cl_sta->sta_idx, cl_sta->cl_vif->vif_index);
+}
+
+static void cl_sta_add_to_list(struct cl_hw *cl_hw, struct cl_sta *cl_sta)
+{
+ u8 hash_idx = CL_STA_HASH_IDX(cl_sta->addr[5]);
+
+ write_lock_bh(&cl_hw->cl_sta_db.lock);
+
+ /* Add to hash table */
+ list_add(&cl_sta->list_hash, &cl_hw->cl_sta_db.hash[hash_idx]);
+
+ /* Make sure that cl_sta's are stored in the list according to their sta_idx. */
+ if (list_empty(&cl_hw->cl_sta_db.head)) {
+ list_add(&cl_sta->list, &cl_hw->cl_sta_db.head);
+ } else if (list_is_singular(&cl_hw->cl_sta_db.head)) {
+ struct cl_sta *cl_sta_singular =
+ list_first_entry(&cl_hw->cl_sta_db.head, struct cl_sta, list);
+
+ if (cl_sta_singular->sta_idx < cl_sta->sta_idx)
+ list_add_tail(&cl_sta->list, &cl_hw->cl_sta_db.head);
+ else
+ list_add(&cl_sta->list, &cl_hw->cl_sta_db.head);
+ } else {
+ struct cl_sta *cl_sta_last =
+ list_last_entry(&cl_hw->cl_sta_db.head, struct cl_sta, list);
+
+ if (cl_sta->sta_idx > cl_sta_last->sta_idx) {
+ list_add_tail(&cl_sta->list, &cl_hw->cl_sta_db.head);
+ } else {
+ struct cl_sta *cl_sta_next = NULL;
+ struct cl_sta *cl_sta_prev = NULL;
+
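+ /* Insert before the first entry whose sta_idx is larger */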
+ list_for_each_entry(cl_sta_next, &cl_hw->cl_sta_db.head, list) {
+ if (cl_sta_next->sta_idx < cl_sta->sta_idx)
+ continue;
+
+ cl_sta_prev = list_prev_entry(cl_sta_next, list);
+ __list_add(&cl_sta->list, &cl_sta_prev->list, &cl_sta_next->list);
+ break;
+ }
+ }
+ }
+
+ write_unlock_bh(&cl_hw->cl_sta_db.lock);
+
+ cl_sta->add_complete = true;
+}
+
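+/*
+ * Track per-vif connection statistics: the maximum number of associated
+ * clients (with a timestamp) and how many times the configured watermark
+ * threshold was reached.
+ */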
+static void cl_connection_data_add(struct cl_vif *cl_vif)
+{
+ struct cl_connection_data *conn_data = cl_vif->conn_data;
+
+ if (cl_vif->num_sta > conn_data->max_client) {
+ conn_data->max_client = cl_vif->num_sta;
+ conn_data->max_client_timestamp = ktime_get_real_seconds();
+ }
+
+ if (cl_vif->num_sta == conn_data->watermark_threshold)
+ conn_data->watermark_reached_cnt++;
+}
+
+static void cl_connection_data_remove(struct cl_vif *cl_vif)
+{
+ struct cl_connection_data *conn_data = cl_vif->conn_data;
+
+ if (cl_vif->num_sta == conn_data->watermark_threshold)
+ conn_data->watermark_reached_cnt++;
+}
+
+static void _cl_sta_add(struct cl_hw *cl_hw, struct ieee80211_vif *vif, struct ieee80211_sta *sta)
+{
+ struct cl_vif *cl_vif = (struct cl_vif *)vif->drv_priv;
+ struct cl_sta *cl_sta = IEEE80211_STA_TO_CL_STA(sta);
+
+ /* !!! Must be first !!! */
+ cl_sta_add_to_lut(cl_hw, cl_vif, cl_sta);
+
+ cl_baw_init(cl_sta);
+ cl_txq_sta_add(cl_hw, cl_sta);
+ cl_vns_sta_add(cl_hw, cl_sta);
+ cl_connection_data_add(cl_vif);
+
+ /*
+ * Add rssi of association request to rssi pool
+ * Make sure to call it before cl_wrs_api_sta_add()
+ */
+ cl_rssi_assoc_find(cl_hw, cl_sta, cl_hw->cl_sta_db.num);
+
+ cl_motion_sense_sta_add(cl_hw, cl_sta);
+ cl_bf_sta_add(cl_hw, cl_sta, sta);
+ cl_wrs_api_sta_add(cl_hw, sta);
+ cl_wrs_api_set_smps_mode(cl_hw, sta, sta->bandwidth);
+#ifdef CONFIG_CL8K_DYN_MCAST_RATE
+ /* Should be called after cl_wrs_api_sta_add() */
+ cl_dyn_mcast_rate_update_upon_assoc(cl_hw, cl_sta->wrs_sta.mode,
+ cl_hw->cl_sta_db.num);
+#endif /* CONFIG_CL8K_DYN_MCAST_RATE */
+#ifdef CONFIG_CL8K_DYN_BCAST_RATE
+ cl_dyn_bcast_rate_update_upon_assoc(cl_hw,
+ cl_sta->wrs_sta.tx_su_params.rate_params.mcs,
+ cl_hw->cl_sta_db.num);
+#endif /* CONFIG_CL8K_DYN_BCAST_RATE */
+
+ /* !!! Must be last !!! */
+ cl_sta_add_to_list(cl_hw, cl_sta);
+}
+
+/*
+ * Parse the ampdu density to retrieve the value in usec, according to
+ * the values defined in ieee80211.h
+ */
+static u8 cl_sta_density2usec(u8 ampdu_density)
+{
+ switch (ampdu_density) {
+ case IEEE80211_HT_MPDU_DENSITY_NONE:
+ return 0;
+ /* 1 microsecond is our granularity */
+ case IEEE80211_HT_MPDU_DENSITY_0_25:
+ case IEEE80211_HT_MPDU_DENSITY_0_5:
+ case IEEE80211_HT_MPDU_DENSITY_1:
+ return 1;
+ case IEEE80211_HT_MPDU_DENSITY_2:
+ return 2;
+ case IEEE80211_HT_MPDU_DENSITY_4:
+ return 4;
+ case IEEE80211_HT_MPDU_DENSITY_8:
+ return 8;
+ case IEEE80211_HT_MPDU_DENSITY_16:
+ return 16;
+ default:
+ return 0;
+ }
+}
+
+static void cl_sta_set_min_spacing(struct cl_hw *cl_hw,
+ struct ieee80211_sta *sta)
+{
+ bool is_6g = cl_band_is_6g(cl_hw);
+ u8 sta_min_spacing = 0;
+ struct cl_sta *cl_sta = IEEE80211_STA_TO_CL_STA(sta);
+
+ if (is_6g)
+ sta_min_spacing =
+ cl_sta_density2usec(le16_get_bits(sta->he_6ghz_capa.capa,
+ IEEE80211_HE_6GHZ_CAP_MIN_MPDU_START));
+ else if (sta->ht_cap.ht_supported)
+ sta_min_spacing =
+ cl_sta_density2usec(sta->ht_cap.ampdu_density);
+ else
+ cl_dbg_err(cl_hw, "HT is not supported - cannot set sta_min_spacing\n");
+
+ cl_sta->ampdu_min_spacing =
+ max(cl_sta_density2usec(IEEE80211_HT_MPDU_DENSITY_1), sta_min_spacing);
+}
+
+u32 cl_sta_num(struct cl_hw *cl_hw)
+{
+ u32 num = 0;
+
+ read_lock(&cl_hw->cl_sta_db.lock);
+ num = cl_hw->cl_sta_db.num;
+ read_unlock(&cl_hw->cl_sta_db.lock);
+
+ return num;
+}
+
+u32 cl_sta_num_bh(struct cl_hw *cl_hw)
+{
+ u32 num = 0;
+
+ read_lock_bh(&cl_hw->cl_sta_db.lock);
+ num = cl_hw->cl_sta_db.num;
+ read_unlock_bh(&cl_hw->cl_sta_db.lock);
+
+ return num;
+}
+
+struct cl_sta *cl_sta_get(struct cl_hw *cl_hw, u8 sta_idx)
+{
+ if (sta_idx < CL_MAX_NUM_STA)
+ return cl_hw->cl_sta_db.lut[sta_idx];
+
+ return NULL;
+}
+
+struct cl_sta *cl_sta_get_by_addr(struct cl_hw *cl_hw, u8 *addr)
+{
+ struct cl_sta *cl_sta = NULL;
+ u8 hash_idx = CL_STA_HASH_IDX(addr[5]);
+
+ if (is_multicast_ether_addr(addr))
+ return NULL;
+
+ list_for_each_entry(cl_sta, &cl_hw->cl_sta_db.hash[hash_idx], list_hash)
+ if (cl_mac_addr_compare(cl_sta->addr, addr))
+ return cl_sta;
+
+ return NULL;
+}
+
+void cl_sta_loop(struct cl_hw *cl_hw, sta_callback callback)
+{
+ struct cl_sta *cl_sta = NULL;
+
+ /* Go over all stations */
+ read_lock(&cl_hw->cl_sta_db.lock);
+
+ list_for_each_entry(cl_sta, &cl_hw->cl_sta_db.head, list)
+ callback(cl_hw, cl_sta);
+
+ read_unlock(&cl_hw->cl_sta_db.lock);
+}
+
+void cl_sta_loop_bh(struct cl_hw *cl_hw, sta_callback callback)
+{
+ struct cl_sta *cl_sta = NULL;
+
+ /* Go over all stations - use bottom-half lock */
+ read_lock_bh(&cl_hw->cl_sta_db.lock);
+
+ list_for_each_entry(cl_sta, &cl_hw->cl_sta_db.head, list)
+ callback(cl_hw, cl_sta);
+
+ read_unlock_bh(&cl_hw->cl_sta_db.lock);
+}
+
+static int cl_sta_add_to_firmware(struct cl_hw *cl_hw, struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta)
+{
+ struct cl_sta *cl_sta = (struct cl_sta *)sta->drv_priv;
+ struct cl_vif *cl_vif = (struct cl_vif *)vif->drv_priv;
+ struct mm_sta_add_cfm *sta_add_cfm;
+ int error = 0;
+ u8 recovery_sta_idx = 0;
+ u32 rate_ctrl_info = 0;
+
+ if (cl_recovery_in_progress(cl_hw)) {
+ struct cl_wrs_rate_params *rate_params = &cl_sta->wrs_sta.tx_su_params.rate_params;
+
+ /*
+ * If the station is added to firmware during recovery, the driver passes
+ * the station index to be reused instead of letting firmware select a free index
+ */
+ recovery_sta_idx = cl_sta->sta_idx;
+
+ /* Keep current rate value */
+ rate_ctrl_info = cl_rate_ctrl_generate(cl_hw, cl_sta, rate_params->mode,
+ rate_params->bw, rate_params->nss,
+ rate_params->mcs, rate_params->gi,
+ false, false);
+ } else {
+ bool is_cck = cl_band_is_24g(cl_hw) && cl_hw_mode_is_b_or_bg(cl_hw);
+ u8 mode = is_cck ? WRS_MODE_CCK : WRS_MODE_OFDM;
+
+ /*
+ * Not in recovery:
+		 * the firmware will select sta_idx and return it in the confirmation message
+ */
+ recovery_sta_idx = STA_IDX_INVALID;
+
+ /* Default rate value */
+ rate_ctrl_info = cl_rate_ctrl_generate(cl_hw, cl_sta, mode,
+ 0, 0, 0, 0, false, false);
+ }
+
+ /* Must be called before cl_msg_tx_sta_add() */
+ cl_sta_set_min_spacing(cl_hw, sta);
+
+ /* Send message to firmware */
+ error = cl_msg_tx_sta_add(cl_hw, sta, cl_vif, recovery_sta_idx, rate_ctrl_info);
+ if (error)
+ return error;
+
+ sta_add_cfm = (struct mm_sta_add_cfm *)(cl_hw->msg_cfm_params[MM_STA_ADD_CFM]);
+ if (!sta_add_cfm)
+ return -ENOMSG;
+
+ if (sta_add_cfm->status != 0) {
+ cl_dbg_verbose(cl_hw, "Status Error (%u)\n", sta_add_cfm->status);
+ cl_msg_tx_free_cfm_params(cl_hw, MM_STA_ADD_CFM);
+ return -EIO;
+ }
+
+ /* Save the index retrieved from firmware */
+ cl_sta->sta_idx = sta_add_cfm->sta_idx;
+
+ /* Release cfm msg */
+ cl_msg_tx_free_cfm_params(cl_hw, MM_STA_ADD_CFM);
+
+ return 0;
+}
+
+int cl_sta_add(struct cl_hw *cl_hw, struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta)
+{
+ struct cl_sta *cl_sta = (struct cl_sta *)sta->drv_priv;
+ struct cl_vif *cl_vif = (struct cl_vif *)vif->drv_priv;
+ int error = 0;
+
+ if (cl_radio_is_going_down(cl_hw))
+ return -EPERM;
+
+ if (vif->type == NL80211_IFTYPE_MESH_POINT &&
+ cl_vif->num_sta >= CL_MAX_NUM_MESH_POINT)
+ return -EPERM;
+
+ cl_sta->cl_vif = cl_vif;
+ cl_mac_addr_copy(cl_sta->addr, sta->addr);
+
+ error = cl_sta_add_to_firmware(cl_hw, vif, sta);
+ if (error)
+ return error;
+
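+	/*
+	 * In station mode (outside production mode), the internal add is
+	 * deferred to cl_sta_mgd_add().
+	 */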
+ if (!cl_recovery_in_progress(cl_hw))
+ if (vif->type != NL80211_IFTYPE_STATION ||
+ cl_hw->chip->conf->ce_production_mode)
+ _cl_sta_add(cl_hw, vif, sta);
+
+ if (vif->type == NL80211_IFTYPE_MESH_POINT && cl_vif->num_sta == 1)
+ set_bit(CL_DEV_MESH_AP, &cl_hw->drv_flags);
+
+ return 0;
+}
+
+void cl_sta_mgd_add(struct cl_hw *cl_hw, struct cl_vif *cl_vif, struct ieee80211_sta *sta)
+{
+ /* Should be called in station mode */
+ struct cl_sta *cl_sta = (struct cl_sta *)sta->drv_priv;
+
+ /* !!! Must be first !!! */
+ cl_sta_add_to_lut(cl_hw, cl_vif, cl_sta);
+
+ cl_baw_init(cl_sta);
+ cl_txq_sta_add(cl_hw, cl_sta);
+ cl_vns_sta_add(cl_hw, cl_sta);
+
+ /*
+ * Add rssi of association response to rssi pool
+ * Make sure to call it before cl_wrs_api_sta_add()
+ */
+ cl_rssi_assoc_find(cl_hw, cl_sta, cl_hw->cl_sta_db.num);
+
+ cl_connection_data_add(cl_vif);
+
+ /* In station mode we assume that the AP we connect to is static */
+ cl_motion_sense_sta_add(cl_hw, cl_sta);
+ cl_bf_sta_add(cl_hw, cl_sta, sta);
+ cl_wrs_api_sta_add(cl_hw, sta);
+#ifdef CONFIG_CL8K_DYN_MCAST_RATE
+ /* Should be called after cl_wrs_api_sta_add() */
+ cl_dyn_mcast_rate_update_upon_assoc(cl_hw, cl_sta->wrs_sta.mode,
+ cl_hw->cl_sta_db.num);
+#endif /* CONFIG_CL8K_DYN_MCAST_RATE */
+#ifdef CONFIG_CL8K_DYN_BCAST_RATE
+ cl_dyn_bcast_rate_update_upon_assoc(cl_hw,
+ cl_sta->wrs_sta.tx_su_params.rate_params.mcs,
+ cl_hw->cl_sta_db.num);
+#endif /* CONFIG_CL8K_DYN_BCAST_RATE */
+ /* !!! Must be last !!! */
+ cl_sta_add_to_list(cl_hw, cl_sta);
+}
+
+static void _cl_sta_remove(struct cl_hw *cl_hw, struct cl_sta *cl_sta)
+{
+ write_lock_bh(&cl_hw->cl_sta_db.lock);
+
+ list_del(&cl_sta->list);
+ list_del(&cl_sta->list_hash);
+
+ cl_hw->cl_sta_db.lut[cl_sta->sta_idx] = NULL;
+ cl_hw->cl_sta_db.num--;
+ cl_sta->cl_vif->num_sta--;
+
+ cl_dbg_verbose(cl_hw, "mac=%pM, sta_idx=%u, vif_index=%u\n",
+ cl_sta->addr, cl_sta->sta_idx, cl_sta->cl_vif->vif_index);
+
+ write_unlock_bh(&cl_hw->cl_sta_db.lock);
+}
+
+void cl_sta_remove(struct cl_hw *cl_hw, struct ieee80211_vif *vif, struct ieee80211_sta *sta)
+{
+ struct cl_vif *cl_vif = (struct cl_vif *)vif->drv_priv;
+ struct cl_sta *cl_sta = (struct cl_sta *)sta->drv_priv;
+ u8 sta_idx = cl_sta->sta_idx;
+
+ cl_sta->remove_start = true;
+
+ /* !!! Must be first - remove from list and LUT !!! */
+ _cl_sta_remove(cl_hw, cl_sta);
+
+ cl_traffic_sta_remove(cl_hw, cl_sta);
+ cl_bf_sta_remove(cl_hw, cl_sta);
+ cl_connection_data_remove(cl_vif);
+#ifdef CONFIG_CL8K_DYN_MCAST_RATE
+ cl_dyn_mcast_rate_update_upon_disassoc(cl_hw,
+ cl_sta->wrs_sta.mode,
+ cl_hw->cl_sta_db.num);
+#endif /* CONFIG_CL8K_DYN_MCAST_RATE */
+#ifdef CONFIG_CL8K_DYN_BCAST_RATE
+ cl_dyn_bcast_rate_update_upon_disassoc(cl_hw,
+ cl_sta->wrs_sta.tx_su_params.rate_params.mcs,
+ cl_hw->cl_sta_db.num);
+#endif /* CONFIG_CL8K_DYN_BCAST_RATE */
+ cl_wrs_api_sta_remove(cl_hw, cl_sta);
+ cl_stats_sta_remove(cl_hw, cl_sta);
+
+ /*
+ * TX stop flow:
+ * 1) Flush TX queues
+ * 2) Poll confirmation queue and clear enhanced TIM
+ * 3) Send MM_STA_DEL_REQ message to firmware
+ * 4) Flush confirmation queue
+ * 5) Reset write index
+ */
+
+ cl_txq_flush_sta(cl_hw, cl_sta);
+ cl_single_cfm_poll_empty_sta(cl_hw, sta_idx);
+ cl_txq_sta_remove(cl_hw, sta_idx);
+
+ if (cl_vif->vif->type == NL80211_IFTYPE_MESH_POINT &&
+ cl_vif->num_sta == 0) {
+ clear_bit(CL_DEV_MESH_AP, &cl_hw->drv_flags);
+ }
+
+ cl_single_cfm_flush_sta(cl_hw, sta_idx);
+
+ cl_msg_tx_sta_del(cl_hw, sta_idx);
+
+ if (cl_vif->num_sta == 0)
+ cl_radio_off_wake_up(cl_hw);
+}
+
+void cl_sta_ps_notify(struct cl_hw *cl_hw, struct cl_sta *cl_sta, bool is_ps)
+{
+ struct sta_info *stainfo = IEEE80211_STA_TO_STAINFO(cl_sta->sta);
+
+ /*
+	 * PS-Poll and UAPSD are handled by the firmware. Setting WLAN_STA_SP
+	 * ensures that mac80211 does not handle them again.
+	 * The flag is cleared in ieee80211_sta_ps_deliver_wakeup().
+ */
+ if (is_ps)
+ set_sta_flag(stainfo, WLAN_STA_SP);
+
+ cl_stats_update_ps(cl_hw, cl_sta, is_ps);
+}
+
--
2.36.1


2022-05-24 14:44:49

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 77/96] cl8k: add stats.h

From: Viktor Barna <[email protected]>

(Part of the split. Please, take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/stats.h | 108 +++++++++++++++++++++++
1 file changed, 108 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/stats.h

diff --git a/drivers/net/wireless/celeno/cl8k/stats.h b/drivers/net/wireless/celeno/cl8k/stats.h
new file mode 100644
index 000000000000..480c00b395f1
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/stats.h
@@ -0,0 +1,108 @@
+/* SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause */
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#ifndef CL_STATS_H
+#define CL_STATS_H
+
+#include "wrs.h"
+
+enum cl_rx_stats_flag {
+ RX_STATS_CCK = 0x01,
+ RX_STATS_OFDM = 0x02,
+ RX_STATS_HT = 0x04,
+ RX_STATS_VHT = 0x08,
+ RX_STATS_HE_SU = 0x10,
+ RX_STATS_HE_MU = 0x20,
+ RX_STATS_HE_EXT = 0x40,
+ RX_STATS_HE_TRIG = 0x80,
+};
+
+struct cl_rx_stats {
+ u32 he_trig[CHNL_BW_MAX_HE][WRS_SS_MAX][WRS_MCS_MAX_HE][WRS_GI_MAX_HE];
+ u32 he_ext[CHNL_BW_MAX_HE][WRS_SS_MAX][WRS_MCS_MAX_HE][WRS_GI_MAX_HE];
+ u32 he_mu[CHNL_BW_MAX_HE][WRS_SS_MAX][WRS_MCS_MAX_HE][WRS_GI_MAX_HE];
+ u32 he_su[CHNL_BW_MAX_HE][WRS_SS_MAX][WRS_MCS_MAX_HE][WRS_GI_MAX_HE];
+ u32 vht[CHNL_BW_MAX_VHT][WRS_SS_MAX][WRS_MCS_MAX_VHT][WRS_GI_MAX_VHT];
+ u32 ht[CHNL_BW_MAX_HT][WRS_SS_MAX][WRS_MCS_MAX_HT][WRS_GI_MAX_HT];
+ u32 ofdm[WRS_MCS_MAX_OFDM];
+ u32 cck[WRS_MCS_MAX_CCK];
+ u8 flag;
+ u64 packet_success;
+};
+
+#define RSSI_ARR_SIZE 128
+#define BF_IDX_MAX 2
+
+struct cl_tx_cntrs {
+ u32 success;
+ u32 fail;
+};
+
+struct cl_tx_stats {
+ struct cl_tx_cntrs he[CHNL_BW_MAX][WRS_SS_MAX][WRS_MCS_MAX][WRS_GI_MAX][BF_IDX_MAX];
+ struct cl_tx_cntrs
+ vht[CHNL_BW_MAX_VHT][WRS_SS_MAX][WRS_MCS_MAX_VHT][WRS_GI_MAX_VHT][BF_IDX_MAX];
+ struct cl_tx_cntrs ht[CHNL_BW_MAX_HT][WRS_SS_MAX][WRS_MCS_MAX_HT][WRS_GI_MAX_HT];
+ struct cl_tx_cntrs ofdm[WRS_MCS_MAX_OFDM];
+ struct cl_tx_cntrs cck[WRS_MCS_MAX_CCK];
+ u32 agg_cntr;
+ u32 fail_cntr;
+ u64 packet_success;
+ u64 packet_fail;
+};
+
+enum cl_ps_period {
+ PS_PERIOD_50MS,
+ PS_PERIOD_100MS,
+ PS_PERIOD_250MS,
+ PS_PERIOD_500MS,
+ PS_PERIOD_750MS,
+ PS_PERIOD_1000MS,
+ PS_PERIOD_2000MS,
+ PS_PERIOD_5000MS,
+ PS_PERIOD_10000MS,
+ PS_PERIOD_ABOVE,
+
+ PS_PERIOD_MAX
+};
+
+struct cl_ps_stats {
+ bool is_ps;
+ unsigned long timestamp_sleep;
+ u32 period[PS_PERIOD_MAX];
+};
+
+enum cl_fec_coding {
+ CL_FEC_CODING_BCC,
+ CL_FEC_CODING_LDPC,
+ CL_FEC_CODING_MAX
+};
+
+struct cl_stats {
+ struct cl_tx_stats tx;
+ struct cl_rx_stats rx;
+ u32 rssi[RSSI_ARR_SIZE][MAX_ANTENNAS];
+ u32 fec_coding[CL_FEC_CODING_MAX];
+ struct cl_ps_stats ps;
+};
+
+struct cl_vif;
+
+void cl_stats_init(struct cl_hw *cl_hw);
+void cl_stats_deinit(struct cl_hw *cl_hw);
+void cl_stats_sta_add(struct cl_hw *cl_hw, struct cl_sta *cl_sta);
+void cl_stats_sta_remove(struct cl_hw *cl_hw, struct cl_sta *cl_sta);
+void cl_stats_update_tx_agg(struct cl_hw *cl_hw, struct cl_sta *cl_sta,
+ struct cl_agg_tx_report *agg_report);
+void cl_stats_update_tx_single(struct cl_hw *cl_hw, struct cl_sta *cl_sta,
+ struct cl_agg_tx_report *agg_report);
+void cl_stats_update_rx_rssi(struct cl_hw *cl_hw, struct cl_sta *cl_sta, s8 rssi[MAX_ANTENNAS]);
+void cl_stats_update_rx_rate(struct cl_hw *cl_hw, struct cl_sta *cl_sta, struct hw_rxhdr *rxhdr);
+void cl_stats_update_rx_rate_production(struct cl_hw *cl_hw, struct hw_rxhdr *rxhdr);
+void cl_stats_update_ps(struct cl_hw *cl_hw, struct cl_sta *cl_sta, bool is_ps);
+void cl_stats_get_tx(struct cl_hw *cl_hw, struct cl_sta *cl_sta,
+ u64 *total_success, u64 *total_fail);
+u64 cl_stats_get_rx(struct cl_hw *cl_hw, struct cl_sta *cl_sta);
+int cl_stats_get_rssi(struct cl_hw *cl_hw, struct cl_sta *cl_sta);
+
+#endif /* CL_STATS_H */
--
2.36.1


2022-05-24 14:46:23

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 07/96] cl8k: add bf.c

From: Viktor Barna <[email protected]>

(Part of the split. Please, take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/bf.c | 346 ++++++++++++++++++++++++++
1 file changed, 346 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/bf.c

diff --git a/drivers/net/wireless/celeno/cl8k/bf.c b/drivers/net/wireless/celeno/cl8k/bf.c
new file mode 100644
index 000000000000..49d16e13e6e4
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/bf.c
@@ -0,0 +1,346 @@
+// SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#include "hw.h"
+#include "traffic.h"
+#include "sta.h"
+#include "sounding.h"
+#include "debug.h"
+#include "bf.h"
+
+#define CL_BF_MIN_SOUNDING_NR 3
+
+#define bf_pr(cl_hw, level, ...) \
+ do { \
+ if ((level) <= (cl_hw)->bf_db.dbg_level) \
+ pr_debug("[BF]" __VA_ARGS__); \
+ } while (0)
+
+#define bf_pr_verbose(cl_hw, ...) bf_pr((cl_hw), DBG_LVL_VERBOSE, ##__VA_ARGS__)
+#define bf_pr_err(cl_hw, ...) bf_pr((cl_hw), DBG_LVL_ERROR, ##__VA_ARGS__)
+#define bf_pr_warn(cl_hw, ...) bf_pr((cl_hw), DBG_LVL_WARNING, ##__VA_ARGS__)
+#define bf_pr_trace(cl_hw, ...) bf_pr((cl_hw), DBG_LVL_TRACE, ##__VA_ARGS__)
+#define bf_pr_info(cl_hw, ...) bf_pr((cl_hw), DBG_LVL_INFO, ##__VA_ARGS__)
+
+static bool cl_bf_is_beamformee_capable_he(struct ieee80211_sta *sta, bool mu_cap)
+{
+ u8 phy_cap_info4 = sta->he_cap.he_cap_elem.phy_cap_info[4];
+
+ if (mu_cap)
+ return (phy_cap_info4 & IEEE80211_HE_PHY_CAP4_MU_BEAMFORMER);
+ else
+ return (phy_cap_info4 & IEEE80211_HE_PHY_CAP4_SU_BEAMFORMEE);
+}
+
+static bool cl_bf_is_beamformee_capable_vht(struct ieee80211_sta *sta, bool mu_cap)
+{
+ u32 vht_cap = sta->vht_cap.cap;
+
+ if (mu_cap)
+ return (vht_cap & IEEE80211_VHT_CAP_MU_BEAMFORMEE_CAPABLE);
+ else
+ return (vht_cap & IEEE80211_VHT_CAP_SU_BEAMFORMEE_CAPABLE);
+}
+
+static bool cl_bf_is_beamformee_capable(struct cl_sta *cl_sta, bool mu_cap)
+{
+ struct ieee80211_sta *sta = cl_sta->sta;
+
+ if (sta->he_cap.has_he)
+ return cl_bf_is_beamformee_capable_he(sta, mu_cap);
+
+ if (sta->vht_cap.vht_supported)
+ return cl_bf_is_beamformee_capable_vht(sta, mu_cap);
+
+ return false;
+}
+
+void cl_bf_enable(struct cl_hw *cl_hw, bool enable, bool trigger_decision)
+{
+ struct cl_tcv_conf *conf = cl_hw->conf;
+
+ if (cl_hw->bf_db.enable == enable)
+ return;
+
+ if (!conf->ci_bf_en && enable) {
+ bf_pr_err(cl_hw, "Unable to enable - BF is globally disabled\n");
+ return;
+ }
+
+ cl_hw->bf_db.enable = enable;
+ bf_pr_verbose(cl_hw, "%s\n", enable ? "Enable" : "Disable");
+
+ if (trigger_decision)
+ cl_sta_loop_bh(cl_hw, cl_bf_sounding_decision);
+}
+
+static void cl_bf_timer_callback(struct timer_list *t)
+{
+ /*
+	 * If the timer expired, we started sounding but did not get any
+	 * indication for (10 * sounding_interval).
+	 * Disable sounding for this station (even if its traffic starts again).
+ */
+ struct cl_bf_sta_db *bf_db = from_timer(bf_db, t, timer);
+ struct cl_sta *cl_sta = container_of(bf_db, struct cl_sta, bf_db);
+ struct cl_hw *cl_hw = cl_sta->cl_vif->cl_hw;
+
+ bf_pr_trace(cl_hw, "Failed to get reply (%u)\n", cl_sta->sta_idx);
+ bf_db->indication_timeout = true;
+ cl_bf_sounding_decision(cl_hw, cl_sta);
+}
+
+static void cl_bf_reset_sounding_info(struct cl_sta *cl_sta)
+{
+ struct cl_bf_sta_db *bf_db = &cl_sta->bf_db;
+
+ bf_db->synced = false;
+ bf_db->sounding_start = false;
+ bf_db->sounding_indications = 0;
+}
+
+void cl_bf_sounding_start(struct cl_hw *cl_hw, enum sounding_type type, struct cl_sta **cl_sta_arr,
+ u8 sta_num, struct cl_sounding_info *recovery_elem)
+{
+#define STA_INDICES_STR_SIZE 64
+
+ /* Send request to start sounding */
+ u8 i, bw = CHNL_BW_MAX;
+ char sta_indices_str[STA_INDICES_STR_SIZE] = {0};
+ u8 str_len = 0;
+
+ for (i = 0; i < sta_num; i++) {
+ struct cl_sta *cl_sta = cl_sta_arr[i];
+ struct cl_bf_sta_db *bf_db = &cl_sta->bf_db;
+
+ bw = cl_sta->wrs_sta.assoc_bw;
+ bf_db->synced = false;
+ bf_db->sounding_start = true;
+ bf_db->sounding_indications = 0;
+
+		str_len += scnprintf(sta_indices_str + str_len, STA_INDICES_STR_SIZE - str_len,
+				     "%u%s", cl_sta->sta_idx, (i == sta_num - 1 ? "" : ", "));
+ }
+
+ bf_pr_trace(cl_hw, "Start sounding: Sta = %s\n", sta_indices_str);
+ cl_sounding_send_request(cl_hw, cl_sta_arr, sta_num, SOUNDING_ENABLE, type, bw, NULL, 0,
+ recovery_elem);
+
+#undef STA_INDICES_STR_SIZE
+}
+
+void cl_bf_sounding_stop(struct cl_hw *cl_hw, struct cl_sta *cl_sta)
+{
+ struct cl_bf_sta_db *bf_db = &cl_sta->bf_db;
+
+ if (bf_db->sounding_start) {
+ /* Send request to stop sounding */
+ cl_bf_reset_sounding_info(cl_sta);
+ bf_pr_trace(cl_hw, "Sta = %u, Stop sounding\n", cl_sta->sta_idx);
+ cl_sounding_send_request(cl_hw, &cl_sta, 1, SOUNDING_DISABLE, SOUNDING_TYPE_HE_SU,
+ 0, NULL, 0, NULL);
+ bf_pr_trace(cl_hw, "Sta: %u, Beamforming disabled\n", cl_sta->sta_idx);
+ }
+}
+
+void cl_bf_sounding_decision(struct cl_hw *cl_hw, struct cl_sta *cl_sta)
+{
+ struct cl_bf_sta_db *bf_db = &cl_sta->bf_db;
+
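+	/*
+	 * Sound only while BF is enabled, the peer is SU-beamformee capable,
+	 * no indication timeout occurred, the peer advertises enough beamformee
+	 * STS (beamformee_sts + 1 >= CL_BF_MIN_SOUNDING_NR), and traffic is
+	 * active (or force mode is set). Otherwise stop any ongoing sounding.
+	 */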
+ if (cl_bf_is_enabled(cl_hw) &&
+ cl_bf_is_beamformee_capable(cl_sta, false) &&
+ !bf_db->indication_timeout &&
+ ((bf_db->beamformee_sts + 1) >= CL_BF_MIN_SOUNDING_NR) &&
+ (bf_db->traffic_active || cl_hw->bf_db.force)) {
+ if (!bf_db->sounding_start) {
+ if (cl_sta->su_sid == INVALID_SID)
+ cl_bf_sounding_start(cl_hw, SOUNDING_TYPE_HE_SU, &cl_sta, 1, NULL);
+ else
+ bf_pr_verbose(cl_hw, "[%s]: STA %u already belongs to sid %u\n",
+ __func__, cl_sta->sta_idx, cl_sta->su_sid);
+ }
+ } else {
+ del_timer(&bf_db->timer);
+ cl_bf_sounding_stop(cl_hw, cl_sta);
+ }
+}
+
+static u8 cl_bf_get_sts_he(struct ieee80211_sta *sta)
+{
+ u8 *phy_cap_info = sta->he_cap.he_cap_elem.phy_cap_info;
+
+ if (phy_cap_info[0] & IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_160MHZ_IN_5G ||
+ phy_cap_info[0] & IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_80PLUS80_MHZ_IN_5G)
+ return u8_get_bits(phy_cap_info[4],
+ IEEE80211_HE_PHY_CAP4_BEAMFORMEE_MAX_STS_ABOVE_80MHZ_MASK);
+ else
+ return u8_get_bits(phy_cap_info[4],
+ IEEE80211_HE_PHY_CAP4_BEAMFORMEE_MAX_STS_UNDER_80MHZ_MASK);
+}
+
+static u8 cl_bf_get_sts_vht(struct ieee80211_sta *sta)
+{
+ struct ieee80211_sta_vht_cap *vht_cap = &sta->vht_cap;
+
+ return u32_get_bits(vht_cap->cap, IEEE80211_VHT_CAP_BEAMFORMEE_STS_MASK);
+}
+
+static u8 cl_bf_get_sts(struct ieee80211_sta *sta)
+{
+ if (sta->he_cap.has_he)
+ return cl_bf_get_sts_he(sta);
+
+ return cl_bf_get_sts_vht(sta);
+}
+
+void cl_bf_update_rate(struct cl_hw *cl_hw, struct cl_sta *cl_sta)
+{
+ struct cl_bf_sta_db *bf_db = &cl_sta->bf_db;
+
+ /* Old & new BF state for main rate */
+ bool bf_on_old = bf_db->is_on;
+ bool bf_on_new = cl_bf_is_on(cl_hw, cl_sta, bf_db->num_ss);
+
+ /* Old & new BF state for fallback rate */
+ bool bf_on_old_fbk = bf_db->is_on_fallback;
+ bool bf_on_new_fbk = cl_bf_is_on(cl_hw, cl_sta, bf_db->num_ss_fallback);
+
+ if (bf_on_old != bf_on_new || bf_on_old_fbk != bf_on_new_fbk) {
+ /* BF state for main rate or fallback rate changed */
+
+ /* Save the new state */
+ bf_db->is_on = bf_on_new;
+ bf_db->is_on_fallback = bf_on_new_fbk;
+
+ /* Update the firmware */
+ if (cl_msg_tx_set_tx_bf(cl_hw, cl_sta->sta_idx, bf_on_new, bf_on_new_fbk))
+ pr_err("%s: failed to set TX-BF\n", __func__);
+ }
+}
+
+void cl_bf_sta_add(struct cl_hw *cl_hw, struct cl_sta *cl_sta, struct ieee80211_sta *sta)
+{
+ /* Beamformee capabilities */
+ bool su_beamformee_capable = cl_bf_is_beamformee_capable(cl_sta, false);
+ bool mu_beamformee_capable = cl_bf_is_beamformee_capable(cl_sta, true);
+ struct cl_bf_sta_db *bf_db = &cl_sta->bf_db;
+
+ WARN_ON_ONCE(sta->rx_nss == 0);
+ bf_db->beamformee_sts = cl_bf_get_sts(sta);
+ bf_db->nc = min_t(u8, sta->rx_nss, WRS_SS_MAX) - 1;
+ cl_sta->su_sid = INVALID_SID;
+
+ bf_pr_trace(cl_hw,
+ "sta_idx: %u, su_beamformee_capable: %u, mu_beamformee_capable: %u, "
+ "beamformee_sts: %u, nc = %u\n",
+ cl_sta->sta_idx, su_beamformee_capable, mu_beamformee_capable,
+ bf_db->beamformee_sts, bf_db->nc);
+
+ if (bf_db->beamformee_sts == 0)
+ bf_db->beamformee_sts = 3;
+
+ /*
+ * Init the BF timer
+ * Period is set to 0. It will be updated before enabling it.
+ */
+ timer_setup(&bf_db->timer, cl_bf_timer_callback, 0);
+}
+
+void cl_bf_sta_remove(struct cl_hw *cl_hw, struct cl_sta *cl_sta)
+{
+ struct cl_bf_sta_db *bf_db = &cl_sta->bf_db;
+
+ /* Disable timer before removing the station */
+ del_timer_sync(&bf_db->timer);
+
+ /*
+	 * Remove the sounding sequence associated with the STA, and possibly start another
+	 * sequence for the other stations that participate in the same sounding sequence as the STA
+ */
+ if (cl_sta->su_sid != INVALID_SID) {
+ bf_db->sounding_remove_required = true;
+ cl_sounding_stop_by_sid(cl_hw, cl_sta->su_sid, true);
+ }
+}
+
+void cl_bf_sta_active(struct cl_hw *cl_hw, struct cl_sta *cl_sta, bool active)
+{
+ struct cl_bf_sta_db *bf_db = &cl_sta->bf_db;
+
+ if (bf_db->traffic_active != active) {
+ bf_pr_trace(cl_hw, "Sta: %u, Active: %s\n",
+			    cl_sta->sta_idx, active ? "True" : "False");
+
+ bf_db->traffic_active = active;
+ cl_bf_sounding_decision(cl_hw, cl_sta);
+ }
+}
+
+void cl_bf_reset_sounding_ind(struct cl_hw *cl_hw, struct cl_sta *cl_sta)
+{
+ cl_sta->bf_db.sounding_indications = 0;
+}
+
+bool cl_bf_is_enabled(struct cl_hw *cl_hw)
+{
+ return cl_hw->bf_db.enable;
+}
+
+bool cl_bf_is_on(struct cl_hw *cl_hw, struct cl_sta *cl_sta, u8 nss)
+{
+ struct cl_bf_sta_db *bf_db = &cl_sta->bf_db;
+
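+	/*
+	 * Beamform only while a sounding sequence is running, at least one
+	 * indication was received, and the rate NSS does not exceed the
+	 * configured limit or the peer's Nc.
+	 */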
+ return (cl_bf_is_enabled(cl_hw) &&
+ bf_db->sounding_start &&
+ bf_db->sounding_indications &&
+ (nss <= min(cl_hw->conf->ci_bf_max_nss, bf_db->nc)));
+}
+
+void cl_bf_sounding_req_success(struct cl_hw *cl_hw, struct cl_sounding_info *new_elem)
+{
+ /*
+ * Start a timer to check that we are receiving indications from the station.
+ * The period of the timer is set to 10 times the sounding-interval.
+ */
+ u8 i;
+ struct cl_sta *cl_sta;
+ struct cl_bf_sta_db *bf_db;
+ unsigned long period = CL_SOUNDING_FACTOR * cl_sounding_get_interval(cl_hw);
+
+ for (i = 0; i < new_elem->sta_num; i++) {
+ cl_sta = new_elem->su_cl_sta_arr[i];
+
+		if (cl_sta) {
+			bf_db = &cl_sta->bf_db;
+			bf_db->sounding_start = true;
+ cl_sta->su_sid = new_elem->sounding_id;
+
+ /* Don't enable BF timer in case of force mode */
+ if (!cl_hw->bf_db.force)
+ mod_timer(&bf_db->timer, jiffies + msecs_to_jiffies(period));
+ }
+ }
+}
+
+void cl_bf_sounding_req_failure(struct cl_hw *cl_hw, struct cl_sounding_info *new_elem)
+{
+ u8 i;
+ struct cl_sta *cl_sta;
+ struct cl_bf_sta_db *bf_db;
+
+ for (i = 0; i < new_elem->sta_num; i++) {
+ cl_sta = new_elem->su_cl_sta_arr[i];
+
+ if (cl_sta) {
+ bf_db = &cl_sta->bf_db;
+ bf_db->sounding_start = false;
+ bf_db->sounding_indications = 0;
+ }
+ }
+}
+
+void cl_bf_init(struct cl_hw *cl_hw)
+{
+ cl_bf_enable(cl_hw, cl_hw->conf->ci_bf_en, false);
+}
+
--
2.36.1


2022-05-24 14:51:33

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 87/96] cl8k: add utils.h

From: Viktor Barna <[email protected]>

(Part of the split. Please, take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/utils.h | 185 +++++++++++++++++++++++
1 file changed, 185 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/utils.h

diff --git a/drivers/net/wireless/celeno/cl8k/utils.h b/drivers/net/wireless/celeno/cl8k/utils.h
new file mode 100644
index 000000000000..052687183dd3
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/utils.h
@@ -0,0 +1,185 @@
+/* SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause */
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#ifndef CL_UTILS_H
+#define CL_UTILS_H
+
+#include "def.h"
+#include "ieee80211_i.h"
+#include "chip.h"
+
+/*
+ * GI_LTF field for common info
+ * 0 - 1x HE-LTF + 1.6 us GI
+ * 1 - 2x HE-LTF + 1.6 us GI
+ * 2 - 4x HE-LTF + 3.2 us GI
+ */
+enum cl_he_ltf_gi {
+ HE_LTF_X1_GI_16, /* Not supported */
+ HE_LTF_X2_GI_16,
+ HE_LTF_X4_GI_32,
+
+ HE_LTF_MAX
+};
+
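+/* Map the trigger-frame GI/LTF encoding to a WRS GI value and vice versa */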
+#define CL_TF_GI_LTF_TO_GI(gi_ltf) \
+ ((gi_ltf) == HE_LTF_X4_GI_32 ? WRS_GI_LONG : \
+ ((gi_ltf) == HE_LTF_X2_GI_16 ? WRS_GI_SHORT : \
+ ((gi_ltf) == HE_LTF_X1_GI_16 ? WRS_GI_SHORT : WRS_GI_LONG)))
+
+#define CL_TF_GI_TO_GI_LTF(gi) \
+ ((gi) == WRS_GI_LONG ? HE_LTF_X4_GI_32 : \
+ ((gi) == WRS_GI_SHORT ? HE_LTF_X2_GI_16 : \
+ ((gi) == WRS_GI_VSHORT ? HE_LTF_X2_GI_16 : HE_LTF_X4_GI_32)))
+
+#define CL_TF_RU_ALLOC_MAX_TYPE_1 36
+#define CL_TF_RU_ALLOC_MAX_TYPE_2 52
+#define CL_TF_RU_ALLOC_MAX_TYPE_3 60
+#define CL_TF_RU_ALLOC_MAX_TYPE_4 64
+#define CL_TF_RU_ALLOC_MAX_TYPE_5 66
+#define CL_TF_RU_ALLOC_MAX_TYPE_6 67
+#define CL_TF_RU_ALLOC_MAX_TYPE_7 68
+
+/*
+ * IEEE 802.11g rates provided by hostapd in the WLAN_EID_SUPP_RATES EID.
+ * EID rate = ieee80211 rate / 5 (i.e. the rate in 500 kb/s units).
+ */
+#define CL_80211G_RATE_6MB 12
+#define CL_80211G_RATE_9MB 18
+#define CL_80211G_RATE_12MB 24
+#define CL_80211G_RATE_18MB 36
+#define CL_80211G_RATE_24MB 48
+#define CL_80211G_RATE_36MB 72
+#define CL_80211G_RATE_48MB 96
+#define CL_80211G_RATE_54MB 108
+
+#define CL_SUPP_RATE_MASK 0x7F
+
+#define BAND_IS_5G_6G(cl_hw) \
+ (cl_band_is_5g(cl_hw) || cl_band_is_6g(cl_hw))
+
+static const u8 tid_to_ac[] = {
+ AC_BE, AC_BK, AC_BK, AC_BE, AC_VI, AC_VI, AC_VO, AC_VO
+};
+
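+/* Convert a 12-bit ADC reading to millivolts (1.8 V full scale): mv = adc * 1800 / 4096 */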
+static inline u16 cl_adc_to_mv(u16 adc)
+{
+ return (adc * 1800) >> 12;
+}
+
+static inline struct ieee80211_vif *NETDEV_TO_VIF(struct net_device *dev)
+{
+ struct wireless_dev *wdev = dev->ieee80211_ptr;
+
+ if (!wdev)
+ return NULL;
+
+ return wdev_to_ieee80211_vif(wdev);
+}
+
+static inline struct cl_hw *NETDEV_TO_CL_HW(struct net_device *dev)
+{
+ struct ieee80211_hw *hw = wdev_priv(dev->ieee80211_ptr);
+
+ return hw->priv;
+}
+
+static inline struct cl_vif *NETDEV_TO_CL_VIF(struct net_device *dev)
+{
+ struct ieee80211_vif *vif = NETDEV_TO_VIF(dev);
+
+ WARN_ON(!vif);
+
+ if (unlikely(vif->type == NL80211_IFTYPE_AP_VLAN)) {
+ struct cl_hw *cl_hw = NETDEV_TO_CL_HW(dev);
+
+ return cl_vif_get_by_dev(cl_hw, dev);
+ }
+
+ return (struct cl_vif *)vif->drv_priv;
+}
+
+static inline struct cl_vif *TX_INFO_TO_CL_VIF(struct cl_hw *cl_hw,
+ struct ieee80211_tx_info *tx_info)
+{
+ struct ieee80211_vif *vif = tx_info->control.vif;
+
+ WARN_ON(!vif);
+
+ if (unlikely(vif->type == NL80211_IFTYPE_AP_VLAN))
+ return cl_vif_get_by_mac(cl_hw, vif->addr);
+
+ return (struct cl_vif *)(vif->drv_priv);
+}
+
+void cl_hex_dump(char *caption, u8 *buffer, u32 length, u32 offset, bool is_byte);
+u8 cl_convert_gi_format_wrs_to_fw(u8 wrs_mode, u8 gi);
+u8 cl_convert_gi_format_fw_to_wrs(u8 format_mode, u8 gi);
+u8 cl_map_gi_to_ltf(u8 mode, u8 gi);
+s8 cl_calc_noise_floor(struct cl_hw *cl_hw, const s8 *reg_noise_floor);
+u8 cl_convert_signed_to_reg_value(s8 val);
+u8 cl_width_to_bw(enum nl80211_chan_width width);
+u8 cl_center_freq_offset(u8 bw);
+u8 cl_max_bw_idx(u8 wrs_mode, bool is_24g);
+bool cl_hw_mode_is_b_or_bg(struct cl_hw *cl_hw);
+bool cl_is_eapol(struct sk_buff *skb);
+u8 cl_ru_alloc_to_ru_type(u8 ru_alloc);
+bool cl_is_valid_g_rates(const u8 *rate_ie);
+enum cl_wireless_mode cl_recalc_wireless_mode(struct cl_hw *cl_hw,
+ bool ieee80211n,
+ bool ieee80211ac,
+ bool ieee80211ax);
+
+enum cl_mu_ofdma_ru_type {
+ CL_MU_OFDMA_RU_TYPE_NONE = 0,
+ CL_MU_OFDMA_RU_TYPE_26, /* 2.5MHz */
+ CL_MU_OFDMA_RU_TYPE_52, /* 5MHz */
+ CL_MU_OFDMA_RU_TYPE_106, /* 10MHz */
+ CL_MU_OFDMA_RU_TYPE_242, /* 20MHz */
+ CL_MU_OFDMA_RU_TYPE_484, /* 40MHz */
+ CL_MU_OFDMA_RU_TYPE_996, /* 80MHz */
+ CL_MU_OFDMA_RU_TYPE_2x996, /* 160MHz */
+ CL_MU_OFDMA_RU_TYPE_MAX
+};
+
+enum nl80211_he_ru_alloc cl_ru_type_to_nl80211_he_ru_alloc(enum cl_mu_ofdma_ru_type ru_type);
+u8 cl_mu_ofdma_grp_convert_ru_type_to_bw(struct cl_hw *cl_hw, u8 ru_type);
+void cl_ieee802_11_parse_elems(const u8 *ies, size_t ies_len, struct ieee802_11_elems *elems);
+
+/*
+ * cl_file_open_and_read - Read the whole file into an allocated buffer.
+ *
+ * Allocates a buffer large enough to hold the contents of the file at @filename and reads
+ * the file into it. Upon success, the address of the allocated buffer is stored in @buf
+ * (and needs to be freed later) and the number of bytes read is returned.
+ * Upon failure, no buffer is returned in @buf.
+ */
+size_t cl_file_open_and_read(struct cl_chip *chip, const char *filename,
+ char **buf);
+
+/* Traffic analysis */
+/* Check if a packet has specific LLC fields e.g. DSAP, SSAP and Control */
+#define PKT_HAS_LLC_HDR(a) ((a[0] == 0xAA) && (a[1] == 0xAA) && (a[2] == 0x03))
+
+/* Multiply by 4 because IHL is number of 32-bit words */
+#define IPV4_HDR_LEN(ihl) ((ihl) << 2)
+
+bool cl_set_network_header_if_proto(struct sk_buff *skb, u16 protocol);
+bool cl_is_ipv4_packet(struct sk_buff *skb);
+bool cl_is_ipv6_packet(struct sk_buff *skb);
+bool cl_is_tcp_ack(struct sk_buff *skb, bool *syn_rst_push);
+
+/* Band helpers */
+bool cl_band_is_6g(struct cl_hw *cl_hw);
+bool cl_band_is_5g(struct cl_hw *cl_hw);
+bool cl_band_is_24g(struct cl_hw *cl_hw);
+u8 cl_band_to_fw_idx(struct cl_hw *cl_hw);
+u8 cl_band_from_fw_idx(u32 phy_band);
+
+static inline unsigned short cl_get_ether_type(int offset, unsigned char *src_buf)
+{
+ unsigned short type_len = *(unsigned short *)(src_buf + offset);
+
+ return (unsigned short)be16_to_cpu(htons(type_len));
+}
+
+#endif /* CL_UTILS_H */
--
2.36.1


2022-05-24 14:51:55

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 95/96] cl8k: add wrs.h

From: Viktor Barna <[email protected]>

(Part of the split. Please, take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/wrs.h | 565 +++++++++++++++++++++++++
1 file changed, 565 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/wrs.h

diff --git a/drivers/net/wireless/celeno/cl8k/wrs.h b/drivers/net/wireless/celeno/cl8k/wrs.h
new file mode 100644
index 000000000000..158d61b92ffc
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/wrs.h
@@ -0,0 +1,565 @@
+/* SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause */
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#ifndef CL_WRS_H
+#define CL_WRS_H
+
+#include <net/mac80211.h>
+
+#include "def.h"
+#include "debug.h"
+#include "rx.h"
+
+/**
+ * WRS (=Weighted Rate Selection)
+ */
+
+struct cl_hw;
+struct cl_sta;
+struct cl_vif;
+
+/* Rate Table Size */
+#define WRS_HE_RATE_TABLE_SIZE (WRS_MCS_MAX_HE * WRS_SS_MAX * CHNL_BW_MAX * WRS_GI_MAX_HE)
+#define WRS_HT_VHT_RATE_TABLE_SIZE (WRS_MCS_MAX_VHT * WRS_SS_MAX * CHNL_BW_MAX * WRS_GI_MAX_VHT)
+
+/* Initial Thresholds */
+#define WRS_INIT_MSEC_WEIGHT_DOWN (WRS_MAINTENANCE_PERIOD_MS * 3) /* Msec */
+#define WRS_INIT_MSEC_WEIGHT_UP (WRS_MAINTENANCE_PERIOD_MS * 3) /* Msec */
+
+#define WRS_MSEC_WEIGHT_MIN (WRS_MAINTENANCE_PERIOD_MS * 2) /* Msec */
+#define WRS_MSEC_WEIGHT_MAX_UP 30000 /* Msec */
+#define WRS_MSEC_WEIGHT_MAX_DOWN 4000 /* Msec */
+#define WRS_MSEC_STEP_DOWN 5000 /* Msec */
+#define WRS_MSEC_STEP_UP_SAME 1000 /* Msec */
+#define WRS_INVALID_RATE ((u16)(~0))
+
+enum cl_wrs_table_node_up {
+ WRS_TABLE_NODE_UP_MCS,
+ WRS_TABLE_NODE_UP_BW,
+ WRS_TABLE_NODE_UP_NSS,
+ WRS_TABLE_NODE_UP_BF,
+ WRS_TABLE_NODE_UP_GI,
+
+ WRS_TABLE_NODE_UP_MAX
+};
+
+struct cl_wrs_table_validity {
+ bool is_valid;
+ u16 new_rate_idx;
+};
+
+struct cl_wrs_table_node {
+ u16 rate_idx;
+ u16 time_th;
+ bool quick_up_check;
+};
+
+struct cl_wrs_rate {
+ u16 mcs : 4,
+ nss : 3,
+ bw : 2,
+ gi : 2,
+ rsv : 2;
+};
+
+struct cl_wrs_table {
+ struct cl_wrs_rate rate;
+ struct cl_wrs_table_node rate_down;
+ struct cl_wrs_table_node rate_up[WRS_TABLE_NODE_UP_MAX];
+ u32 frames_total;
+ u32 ba_not_rcv_total;
+ u64 epr_acc;
+};
+
+#define WRS_MAINTENANCE_PERIOD_MS 40
+#define WRS_DATA_RATE_FACTOR 10
+#define WRS_RSSI_PROTECT_UP_THR 10
+#define WRS_RSSI_PROTECT_DN_THR 10
+#define WRS_MIN_FRAMES_FOR_DECISION 15
+#define WRS_EPR_FACTOR 105
+#define WRS_CONVERGE_IDLE_PACKET_TH 5
+#define WRS_CONVERGE_IDLE_INTERVAL_RESET 6000 /* 6 sec */
+#define WRS_CONVERGE_IDLE_INTERVAL_RSSI 2000 /* 2 sec */
+#define WRS_CONVERGE_TRFC_INTERVAL_STATIC 30000 /* 30 sec */
+#define WRS_CONVERGE_TRFC_INTERVAL_MOTION 1000 /* 1 sec */
+#define WRS_IMMEDIATE_DROP_EPR_FACTOR 70 /* 70% */
+#define WRS_IMMEDIATE_DROP_MAX_IN_ROW U32_MAX
+#define WRS_SYNC_MIN_ATTEMPTS 4
+#define WRS_SYNC_TIMEOUT 1000 /* 1 sec */
+#define WRS_QUICK_UP_BA_THR 5
+#define WRS_QUICK_UP_INTERVAL_MS 1000
+#define WRS_QUICK_DOWN_EPR_FACTOR 85
+#define WRS_QUICK_DOWN_AGG_THR 3
+#define WRS_QUICK_DOWN_PKT_THR 60
+#define WRS_RSSI_PROTECT_SHIFT 7
+#define WRS_RSSI_PROTECT_BUF_SZ_OLD BIT(WRS_RSSI_PROTECT_SHIFT) /* 2 ^ 7 = 128 */
+#define WRS_RSSI_PROTECT_BUF_SZ_NEW 3
+#define WRS_BA_NOT_RCV_TIME_SINCE_SYNC 1000
+#define WRS_CCA_PERIOD_MS 1000
+#define WRS_CCA_PRIMARY_SHIFT 7
+#define WRS_CCA_PRIMARY_FACTOR 160 /* 160 / 2^7 = 1.25 = 25% */
+
+enum cl_wrs_rssi_prot_mode {
+ WRS_RSSI_PROT_MODE_RSSI, /* Up/down based on rssi */
+ WRS_RSSI_PROT_MODE_NEIGHBOR, /* Up/down based on neighbors */
+
+ WRS_RSSI_PROTECT_MODE_MAX
+};
+
+enum cl_wrs_fixed_rate {
+ WRS_AUTO_RATE,
+ WRS_FIXED_FALLBACK_EN,
+ WRS_FIXED_FALLBACK_DIS,
+
+ WRS_FIXED_RATE_MAX
+};
+
+enum cl_wrs_fixed_param {
+ WRS_FIXED_PARAM_MODE,
+ WRS_FIXED_PARAM_BW,
+ WRS_FIXED_PARAM_NSS,
+ WRS_FIXED_PARAM_MCS,
+ WRS_FIXED_PARAM_GI,
+
+ WRS_FIXED_PARAM_MAX
+};
+
+#define FIXED_RATE_STR(x) \
+ (((x) == WRS_AUTO_RATE) ? "auto rate" : \
+ (((x) == WRS_FIXED_FALLBACK_EN) ? "fixed rate (fallbacks enabled)" : \
+ "fixed rate (fallbacks disabled)"))
+
+enum cl_wrs_decision {
+ WRS_DECISION_NONE,
+ WRS_DECISION_SAME,
+ WRS_DECISION_UP,
+ WRS_DECISION_UP_QUICK,
+ WRS_DECISION_UP_RSSI,
+ WRS_DECISION_UP_MCS1,
+ WRS_DECISION_DOWN,
+ WRS_DECISION_DOWN_RSSI,
+ WRS_DECISION_DOWN_IMMEDIATE,
+ WRS_DECISION_DOWN_QUICK,
+ WRS_DECISION_DOWN_NO_SYNC,
+ WRS_DECISION_RSSI_MGMT,
+ WRS_DECISION_RX_RATE,
+
+ WRS_DECISION_MAX,
+};
+
+enum cl_wrs_mcs {
+ WRS_MCS_0,
+ WRS_MCS_1,
+ WRS_MCS_2,
+ WRS_MCS_3,
+ WRS_MCS_4,
+ WRS_MCS_5,
+ WRS_MCS_6,
+ WRS_MCS_7,
+ WRS_MCS_8,
+ WRS_MCS_9,
+ WRS_MCS_10,
+ WRS_MCS_11,
+ WRS_MCS_MAX,
+};
+
+#define WRS_MCS_MAX_CCK WRS_MCS_4
+#define WRS_MCS_MAX_OFDM WRS_MCS_8
+#define WRS_MCS_MAX_HT WRS_MCS_8
+#define WRS_MCS_MAX_VHT WRS_MCS_10
+#define WRS_MCS_MAX_HE WRS_MCS_MAX
+
+enum cl_wrs_ss {
+ WRS_SS_1,
+ WRS_SS_2,
+ WRS_SS_3,
+ WRS_SS_4,
+
+ WRS_SS_MAX
+};
+
+enum cl_wrs_gi {
+ WRS_GI_LONG,
+ WRS_GI_SHORT,
+ WRS_GI_VSHORT,
+
+ WRS_GI_MAX
+};
+
+#define WRS_GI_MAX_HT WRS_GI_VSHORT
+#define WRS_GI_MAX_VHT WRS_GI_VSHORT
+#define WRS_GI_MAX_HE WRS_GI_MAX
+
+enum cl_wrs_ltf {
+ LTF_X1,
+ LTF_X2,
+ LTF_X4,
+ LTF_MAX
+};
+
+enum cl_wrs_converge_mode {
+ WRS_CONVERGE_MODE_RESET,
+ WRS_CONVERGE_MODE_RSSI,
+
+ WRS_CONVERGE_MODE_MAX,
+};
+
+enum cl_wrs_mode {
+ WRS_MODE_CCK,
+ WRS_MODE_OFDM,
+ WRS_MODE_HT,
+ WRS_MODE_VHT,
+ WRS_MODE_HE,
+
+ WRS_MODE_MAX,
+};
+
+enum cl_wrs_type {
+ WRS_TYPE_TX_SU,
+ WRS_TYPE_TX_MU_MIMO,
+ WRS_TYPE_RX,
+
+ WRS_TYPE_MAX,
+};
+
+#define WRS_TYPE_STR(type) \
+ ((type) == WRS_TYPE_TX_SU ? "TX_SU" : \
+ ((type) == WRS_TYPE_RX ? "RX" : \
+ ((type) == WRS_TYPE_TX_MU_MIMO ? "TX_MU-MIMO" : "")))
+
+#define WRS_TYPE_IS_TX_SU(wrs_params) ((wrs_params)->type == WRS_TYPE_TX_SU)
+#define WRS_TYPE_IS_TX_MU_MIMO(wrs_params) ((wrs_params)->type == WRS_TYPE_TX_MU_MIMO)
+#define WRS_TYPE_IS_RX(wrs_params) ((wrs_params)->type == WRS_TYPE_RX)
+
+/* m MUST be power of 2 ! */
+#define WRS_INC_POW2(c, m) (((c) + 1) & ((m) - 1))
+
+#define WRS_INC(c, m) \
+ do { \
+ (c)++; \
+ if ((c) == (m)) \
+ (c) = 0; \
+ } while (0)
+
+#define WRS_IS_DECISION_UP(decision) \
+ (((decision) >= WRS_DECISION_UP) && ((decision) <= WRS_DECISION_UP_MCS1))
+#define WRS_IS_DECISION_DOWN(decision) \
+ (((decision) >= WRS_DECISION_DOWN) && ((decision) <= WRS_DECISION_DOWN_NO_SYNC))
+
+#define WRS_DECISION_STR(decision) ( \
+ (decision) == WRS_DECISION_NONE ? "NONE" : \
+ (decision) == WRS_DECISION_SAME ? "SAME" : \
+ (decision) == WRS_DECISION_UP ? "UP" : \
+ (decision) == WRS_DECISION_UP_QUICK ? "UP QUICK" : \
+ (decision) == WRS_DECISION_UP_RSSI ? "UP RSSI" : \
+ (decision) == WRS_DECISION_UP_MCS1 ? "UP MCS1" : \
+ (decision) == WRS_DECISION_DOWN ? "DOWN" : \
+ (decision) == WRS_DECISION_DOWN_RSSI ? "DOWN RSSI" : \
+ (decision) == WRS_DECISION_DOWN_IMMEDIATE ? "DOWN IMMEDIATE" : \
+ (decision) == WRS_DECISION_DOWN_QUICK ? "DOWN QUICK" : \
+ (decision) == WRS_DECISION_DOWN_NO_SYNC ? "DOWN NO_SYNC" : \
+ (decision) == WRS_DECISION_RSSI_MGMT ? "RSSI MGMT" : \
+ (decision) == WRS_DECISION_RX_RATE ? "RX_RATE" : \
+ "ERROR")
+
+struct cl_wrs_cntrs {
+ u64 epr_acc;
+ u32 total;
+ u32 fail;
+ u32 ba_not_rcv;
+ u32 ba_not_rcv_consecutive;
+};
+
+struct cl_wrs_rate_params {
+ u16 mode : 3, /* Mode - 0 = CCK, 1 = OFDM, 2 = HT, 3 = VHT, 4 = HE. */
+	    gi : 2, /* GI - 0 = Long, 1 = Short, 2 = Very short. */
+ bw : 2, /* Bandwidth - 0 = 20M, 1 = 40M, 2 = 80M, 3 = 160M. */
+ nss : 3, /* Spatial Streams - 0 = 1SS, 1 = 2SS, .. 7 = 8SS. */
+ mcs : 4, /* MCS - CCK (0 - 3), OFDM/HT (0 - 7), VHT (0 - 9), HE (0 - 11). */
+ fallback_en : 1,
+ is_fixed : 1;
+};
+
+struct cl_wrs_logger {
+ unsigned long timestamp;
+ u16 rate_idx;
+ u32 success;
+ u32 fail;
+ u32 ba_not_rcv;
+ u16 down_rate_idx;
+ u16 up_rate_idx;
+ u16 curr_epr;
+ u16 down_epr;
+ u16 down_epr_factorized;
+ u16 penalty;
+ u16 up_time;
+ enum cl_wrs_decision decision;
+ u16 new_rate_idx;
+};
+
+struct cl_wrs_per_stats {
+ struct list_head list;
+ u8 mcs;
+ u8 bw;
+ u8 nss;
+ u8 gi;
+ u32 frames_total;
+ u32 frames_failed;
+ u64 epr_acc;
+};
+
+struct cl_wrs_rssi_prot_db {
+ s8 samples_old[WRS_RSSI_PROTECT_BUF_SZ_OLD];
+ s8 samples_new[WRS_RSSI_PROTECT_BUF_SZ_NEW];
+ u8 curr_idx_old;
+ u8 curr_idx_new;
+ s32 sum;
+};
+
+struct cl_wrs_params {
+ u8 group_id;
+ u8 is_mu_valid : 1,
+ is_fixed_rate : 2,
+ is_logger_en : 1,
+ quick_up_check : 1,
+ type : 2,
+ rsv : 1;
+ u32 up_same_time_cnt;
+ u32 down_time_cnt;
+ enum cl_wrs_converge_mode converge_mode;
+ u32 converge_time_idle;
+ u32 converge_time_trfc;
+ u16 data_rate;
+ u16 rate_idx;
+ u16 initial_rate_idx;
+ struct cl_wrs_table *table;
+ u16 table_size;
+ u16 penalty_decision_dn;
+ struct cl_wrs_rate_params rate_params;
+ struct cl_wrs_rate rx_rate_idle;
+ enum cl_wrs_decision last_decision;
+ u32 decision_cnt[WRS_DECISION_MAX];
+ struct list_head list_rates;
+ u32 frames_total;
+ u32 fail_total;
+ u32 ba_not_rcv_total;
+ u64 epr_acc;
+ bool calc_ba_not_rcv;
+ bool sync;
+ unsigned long sync_timestamp;
+ unsigned long no_sync_timestamp;
+ struct cl_wrs_logger *logger;
+ u16 logger_idx;
+ u16 logger_size;
+ u32 immediate_drop_cntr;
+ u32 immediate_drop_ignore;
+};
+
+struct cl_wrs_sta {
+ u8 sta_idx;
+ bool smps_enable;
+ u8 assoc_bw;
+ u8 he_minrate;
+ u8 gi_cap[CHNL_BW_MAX];
+ u64 supported_rates[CHNL_BW_MAX];
+ enum cl_wrs_mode mode;
+ struct cl_wrs_rate max_rate_cap;
+ struct cl_wrs_rssi_prot_db rssi_prot_db;
+ struct cl_wrs_params tx_su_params;
+ struct cl_wrs_params *rx_params;
+};
+
+struct cl_wrs_db {
+ /* General */
+ spinlock_t lock;
+ enum cl_dbg_level debug_level;
+ /* Timer */
+ struct timer_list timer_maintenance;
+ u32 interval;
+ /* Fixed rate */
+ u8 is_fixed_rate;
+ /* Conservative initial rate */
+ bool conservative_mcs_noisy_env;
+ bool conservative_nss_noisy_env;
+ /* Immediate drop */
+ bool immediate_drop_en;
+ u8 immediate_drop_epr_factor;
+ u32 immediate_drop_max_in_row;
+ /* Converge idle */
+ bool converge_idle_en;
+ u32 converge_idle_interval_reset;
+ u32 converge_idle_interval_rssi;
+ u32 converge_idle_packet_th;
+ /* Converge traffic */
+ bool converge_trfc_en;
+ u32 converge_trfc_interval_static;
+ u32 converge_trfc_interval_motion;
+ /* Supported rates */
+ u8 mode;
+ u64 ap_supported_rates[CHNL_BW_MAX]; /* Bit array for each bw */
+ struct cl_wrs_rate max_cap;
+ u8 coex_bw;
+ /* RSSI protect */
+ bool rssi_protect_en;
+ u8 rssi_protect_mode;
+ s8 rssi_protect_up_thr;
+ s8 rssi_protect_dn_thr;
+ /* Time + step thresholds */
+ u16 time_th_min;
+ u16 time_th_max_up;
+ u16 time_th_max_down;
+ u16 step_down;
+ u16 step_up_same;
+ /* Quick up */
+ bool quick_up_en;
+ u8 quick_up_ba_thr;
+ u16 quick_up_interval;
+ /* Quick down */
+ bool quick_down_en;
+ u8 quick_down_epr_factor;
+ u8 quick_down_agg_thr;
+ u16 quick_down_pkt_thr;
+ /* BA not received */
+ bool ba_not_rcv_collision_filter;
+ bool ba_not_rcv_force;
+ u32 ba_not_rcv_time_since_sync;
+ /* Sync */
+ u16 sync_timeout;
+ u8 sync_min_attempts;
+ /* CCA counters */
+ unsigned long cca_timestamp;
+ u32 cca_primary;
+ u32 cca_sec80;
+ u32 cca_sec40;
+ u32 cca_sec20;
+ bool adjacent_interference20;
+ bool adjacent_interference40;
+ bool adjacent_interference80;
+ /* All the rest */
+ u32 min_frames_for_decision;
+ u8 epr_factor;
+};
+
+struct cl_wrs_info {
+ u64 epr_acc;
+ u32 success;
+ u32 fail;
+ u32 fail_prev;
+ u32 ba_not_rcv;
+ u8 ba_not_rcv_consecutive;
+ u8 ba_not_rcv_consecutive_max;
+ bool synced;
+ u32 sync_attempts;
+ u8 quick_rate_agg_cntr;
+ u16 quick_rate_pkt_cntr;
+ bool quick_rate_check;
+};
+
+struct cl_wrs_rssi {
+ s32 sum[MAX_ANTENNAS];
+ s32 cnt;
+};
+
+bool cl_wrs_rssi_set_rate(struct cl_hw *cl_hw, struct cl_wrs_db *wrs_db,
+ struct cl_wrs_sta *wrs_sta);
+void cl_wrs_rssi_prot_start(struct cl_hw *cl_hw, struct cl_sta *cl_sta);
+bool cl_wrs_rssi_prot_decision(struct cl_hw *cl_hw, struct cl_wrs_db *wrs_db,
+ struct cl_wrs_sta *wrs_sta,
+ struct cl_wrs_params *wrs_params,
+ bool up_rate_valid,
+ u8 up_rate_idx, u8 down_rate_idx);
+u16 cl_wrs_rssi_find_rate(struct cl_hw *cl_hw, struct cl_wrs_db *wrs_db,
+ struct cl_wrs_sta *wrs_sta, struct cl_wrs_params *wrs_params,
+ s8 *rssi_sort);
+void cl_wrs_sta_add(struct cl_hw *cl_hw, struct ieee80211_sta *sta);
+bool cl_wrs_sta_add_mu(struct cl_hw *cl_hw, struct cl_wrs_sta *wrs_sta, u8 group_id);
+void cl_wrs_sta_add_rx(struct cl_hw *cl_hw, struct ieee80211_sta *sta);
+void cl_wrs_sta_remove(struct cl_hw *cl_hw, struct cl_wrs_db *wrs_db, struct cl_sta *cl_sta);
+bool cl_wrs_sta_remove_mu(struct cl_wrs_db *wrs_db, struct cl_wrs_sta *wrs_sta);
+struct cl_wrs_sta *cl_wrs_sta_get(struct cl_hw *cl_hw, u8 sta_idx);
+void cl_wrs_sta_select_first_rate(struct cl_hw *cl_hw, struct cl_wrs_db *wrs_db,
+ struct cl_wrs_sta *wrs_sta, struct cl_wrs_params *wrs_params);
+void cl_wrs_sta_capabilities_set(struct cl_wrs_db *wrs_db, struct ieee80211_sta *sta);
+void cl_wrs_sta_set_supported_rate(struct cl_wrs_sta *wrs_sta, u8 bw, u8 nss, u8 mcs);
+void cl_wrs_stats_per_update(struct cl_wrs_db *wrs_db,
+ struct cl_wrs_sta *wrs_sta,
+ struct cl_wrs_params *wrs_params,
+ struct cl_wrs_cntrs *cntrs);
+void cl_wrs_stats_per_init(struct cl_wrs_params *wrs_params);
+void cl_wrs_stats_per_remove(struct cl_wrs_params *wrs_params);
+void cl_wrs_tables_global_build(void);
+void cl_wrs_tables_reset(struct cl_wrs_db *wrs_db, struct cl_wrs_sta *wrs_sta,
+ struct cl_wrs_params *wrs_params);
+void cl_wrs_tables_build(struct cl_hw *cl_hw, struct cl_wrs_sta *wrs_sta,
+ struct cl_wrs_params *wrs_params);
+u16 cl_wrs_tables_find_rate_idx(struct cl_wrs_params *wrs_params,
+ u8 bw, u8 nss, u8 mcs, u8 gi);
+void cl_wrs_init(struct cl_hw *cl_hw);
+void cl_wrs_lock_bh(struct cl_wrs_db *wrs_db);
+void cl_wrs_unlock_bh(struct cl_wrs_db *wrs_db);
+void cl_wrs_lock(struct cl_wrs_db *wrs_db);
+void cl_wrs_unlock(struct cl_wrs_db *wrs_db);
+void cl_wrs_fixed_rate_set(struct cl_hw *cl_hw, struct cl_wrs_db *wrs_db,
+ struct cl_wrs_sta *wrs_sta, struct cl_wrs_params *wrs_params,
+ u8 is_fixed_rate, u8 mode, u8 bw, u8 nss, u8 mcs, u8 gi, bool mu_valid);
+void cl_wrs_rate_param_sync(struct cl_wrs_db *wrs_db, struct cl_wrs_sta *wrs_sta,
+ struct cl_wrs_params *wrs_params);
+void cl_wrs_rate_params_update(struct cl_hw *cl_hw, struct cl_wrs_db *wrs_db,
+ struct cl_wrs_sta *wrs_sta, struct cl_wrs_params *wrs_params,
+ u16 new_rate_idx, bool is_sync_required, bool mu_valid);
+void cl_wrs_decision_make(struct cl_hw *cl_hw, struct cl_wrs_db *wrs_db,
+ struct cl_wrs_sta *wrs_sta, struct cl_wrs_params *wrs_params,
+ enum cl_wrs_decision decision, u16 new_rate_idx);
+void cl_wrs_decision_update(struct cl_wrs_db *wrs_db, struct cl_wrs_sta *wrs_sta,
+ struct cl_wrs_params *wrs_params, enum cl_wrs_decision decision,
+ u16 new_rate_idx);
+void cl_wrs_quick_down_check(struct cl_hw *cl_hw, struct cl_wrs_db *wrs_db,
+ struct cl_wrs_sta *wrs_sta, struct cl_wrs_params *wrs_params);
+bool cl_wrs_up_mcs1(struct cl_hw *cl_hw, struct cl_wrs_db *wrs_db,
+ struct cl_wrs_sta *wrs_sta, struct cl_wrs_params *wrs_params);
+void cl_wrs_rate_param_set(struct cl_hw *cl_hw, struct cl_wrs_sta *wrs_sta,
+ struct cl_wrs_params *wrs_params,
+ struct cl_wrs_rate_params *rate_params,
+ struct cl_wrs_rate *rate_fallback,
+ bool mu_mimo_valid, bool set_su);
+s8 cl_wrs_rssi_eq_calc(struct cl_hw *cl_hw, struct cl_wrs_sta *wrs_sta,
+ bool read_clear, s8 *sorted_rssi);
+void cl_wrs_cntrs_reset(struct cl_wrs_sta *wrs_sta, struct cl_wrs_params *wrs_params);
+struct cl_wrs_info *cl_wrs_info_get(struct cl_sta *cl_sta, u8 type);
+struct cl_wrs_params *cl_wrs_params_get(struct cl_wrs_sta *wrs_sta, u8 type);
+void cl_wrs_update_rx_rate(struct cl_hw *cl_hw, struct cl_sta *cl_sta, struct hw_rxhdr *rxhdr);
+bool cl_wrs_set_rate_idle(struct cl_hw *cl_hw, struct cl_wrs_db *wrs_db,
+ struct cl_wrs_sta *wrs_sta, struct cl_wrs_params *wrs_params);
+struct cl_wrs_rate_params *cl_wrs_rx_rate_get(struct cl_sta *cl_sta);
+void cl_wrs_rx_rate_idle_reset(struct cl_wrs_params *rx_params);
+void cl_wrs_ap_capab_set(struct cl_hw *cl_hw, u8 bw, u8 use_sgi);
+void cl_wrs_ap_capab_modify_bw(struct cl_hw *cl_hw, struct cl_wrs_db *wrs_db, u8 max_bw);
+void cl_wrs_ap_capab_modify_gi(struct cl_hw *cl_hw, struct cl_wrs_db *wrs_db, u8 use_sgi);
+void cl_wrs_ap_capab_update(struct cl_hw *cl_hw, u8 bw, u8 use_sgi);
+
+/* Driver --> WRS */
+void cl_wrs_api_init(struct cl_hw *cl_hw);
+void cl_wrs_api_close(struct cl_hw *cl_hw);
+void cl_wrs_api_sta_add(struct cl_hw *cl_hw, struct ieee80211_sta *sta);
+void cl_wrs_api_sta_remove(struct cl_hw *cl_hw, struct cl_sta *cl_sta);
+void cl_wrs_api_bss_set_bw(struct cl_hw *cl_hw, u8 bw);
+void cl_wrs_api_bss_set_sgi(struct cl_hw *cl_hw, u8 use_sgi);
+bool cl_wrs_api_bss_is_sgi_en(struct cl_hw *cl_hw);
+void cl_wrs_api_nss_or_bw_changed(struct cl_hw *cl_hw, struct ieee80211_sta *sta, u8 nss, u8 bw);
+void cl_wrs_api_he_minrate_changed(struct cl_sta *cl_sta, u8 he_minrate);
+void cl_wrs_api_recovery(struct cl_hw *cl_hw);
+void cl_wrs_api_beamforming_sync(struct cl_hw *cl_hw, struct cl_sta *cl_sta);
+void cl_wrs_api_quick_down_check(struct cl_hw *cl_hw, struct cl_sta *cl_sta,
+ struct cl_wrs_params *wrs_params);
+void cl_wrs_api_rate_sync(struct cl_hw *cl_hw, struct cl_sta *cl_sta,
+ struct cl_wrs_params *wrs_params);
+bool cl_wrs_api_up_mcs1(struct cl_hw *cl_hw, struct cl_sta *cl_sta,
+ struct cl_wrs_params *wrs_params);
+void cl_wrs_api_set_smps_mode(struct cl_hw *cl_hw, struct ieee80211_sta *sta, const u8 bw);
+u16 cl_wrs_api_get_tx_sta_data_rate(struct cl_sta *cl_sta);
+void cl_wrs_api_bss_capab_update(struct cl_hw *cl_hw, u8 bw, u8 use_sgi);
+void cl_wrs_fill_sinfo_rates(struct rate_info *rate_info,
+ const struct cl_wrs_params *wrs_params,
+ const struct cl_sta *sta);
+
+#endif /* CL_WRS_H */
--
2.36.1


2022-05-24 14:52:08

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 66/96] cl8k: add rfic.c

From: Viktor Barna <[email protected]>

(Part of the split. Please, take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/rfic.c | 232 ++++++++++++++++++++++++
1 file changed, 232 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/rfic.c

diff --git a/drivers/net/wireless/celeno/cl8k/rfic.c b/drivers/net/wireless/celeno/cl8k/rfic.c
new file mode 100644
index 000000000000..5a3a595694b5
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/rfic.c
@@ -0,0 +1,232 @@
+// SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#include <linux/namei.h>
+
+#include "reg/reg_defs.h"
+#include "phy.h"
+#include "utils.h"
+#include "rfic.h"
+
+#define PHY_DEV_NON_PHYSICAL 0x0
+#define MAX_LOOPS 15
+#define READ_REQ 1
+#define WRITE_REQ 0
+
+int cl_spi_read(struct cl_hw *cl_hw, u8 page, u8 addr, u8 *val)
+{
+ struct mm_spi_read_cfm *cfm;
+ int ret = cl_msg_tx_spi_read(cl_hw, page, addr);
+
+ if (ret)
+ goto out;
+
+ cfm = (struct mm_spi_read_cfm *)(cl_hw->msg_cfm_params[MM_SPI_READ_CFM]);
+ if (!cfm) {
+ ret = -ENOMSG;
+ goto out;
+ }
+
+ if (cfm->status == 0) {
+ *val = cfm->val;
+ } else {
+ cl_dbg_err(cl_hw, "SPI read failed\n");
+ *val = 0;
+ }
+
+ cl_msg_tx_free_cfm_params(cl_hw, MM_SPI_READ_CFM);
+ return 0;
+
+out:
+ *val = 0;
+ return ret;
+}
+
+static int _cl_spi_driver_read(struct cl_hw *cl_hw, u8 more, u8 addr, u8 *val)
+{
+ u8 prescaler = 4;
+ int loops = MAX_LOOPS;
+
+ riu_rc_sw_ctrl_pack(cl_hw, 1, more, 0, 0, 1, 1, 1, 1, prescaler, READ_REQ, addr, 0xFF);
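+	/* Wait for the SPI transaction to complete, giving up after MAX_LOOPS polls */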
+ while (riu_rc_sw_ctrl_start_done_getf(cl_hw) && --loops)
+ ;
+
+ if (!loops) {
+ cl_dbg_verbose(cl_hw, "Read error - addr [0x%02x]\n", addr);
+ return -EBADE;
+ }
+
+ *val = riu_rc_sw_ctrl_data_getf(cl_hw);
+
+ hwreg_pr(cl_hw, "more=%d, addr=0x%x, *val=0x%x\n", more, addr, *val);
+
+ return 0;
+}
+
+static int _cl_spi_driver_write(struct cl_hw *cl_hw, u8 more, u8 addr, u8 val)
+{
+ u8 prescaler = 4;
+ int loops = MAX_LOOPS;
+
+ hwreg_pr(cl_hw, "more=%d, addr=0x%x, val=0x%x\n", more, addr, val);
+
+ riu_rc_sw_ctrl_pack(cl_hw, 1, more, 0, 0, 1, 1, 1, 1, prescaler, WRITE_REQ, addr, val);
+
+ while (riu_rc_sw_ctrl_start_done_getf(cl_hw) && --loops)
+ ;
+
+ if (!loops) {
+ cl_dbg_verbose(cl_hw, "Write error - addr [0x%02x] val [0x%02x]\n", addr, val);
+ return -EBADE;
+ }
+
+ return 0;
+}
+
+int cl_spi_driver_read_byte(struct cl_hw *cl_hw, u8 page, u8 addr, u8 *val)
+{
+	/* cl_spi_driver_read_byte() should only be used when the MAC firmware is
+	 * not loaded; otherwise use cl_spi_read()
+ */
+ int ret = 0;
+
+ spin_lock_bh(&cl_hw->chip->isr_lock);
+
+ ret = _cl_spi_driver_write(cl_hw, 1, 0x03, page);
+ if (ret)
+ goto read_exit;
+
+	ret = _cl_spi_driver_read(cl_hw, 0, addr, val);
+
+read_exit:
+ spin_unlock_bh(&cl_hw->chip->isr_lock);
+
+ return ret;
+}
+
+static u8 cl_rfic_str_to_cmd(struct cl_hw *cl_hw, char *str)
+{
+ if (!strcmp(str, "DONE"))
+ return OVERWRITE_DONE;
+ else if (!strcmp(str, "SPI_R"))
+ return SPI_RD_CMD;
+ else if (!strcmp(str, "SPI_W"))
+ return SPI_WR_CMD;
+ else if (!strcmp(str, "GCU_W"))
+ return GCU_WR_CMD;
+ else if (!strcmp(str, "RIU_W"))
+ return RIU_WR_CMD;
+ else if (!strcmp(str, "GEN_W"))
+ return GEN_WR_CMD;
+ else if (!strcmp(str, "DELAY"))
+ return UDELAY_CMD;
+
+ cl_dbg_err(cl_hw, "unknown command %s\n", str);
+ return OVERWRITE_DONE;
+}
+
+static void cl_parse_rf_command(struct cl_hw *cl_hw, char *str,
+ struct cl_rf_reg_overwrite_info *info)
+{
+ int i = 0;
+ char *ptr = NULL;
+ u32 res = 0;
+
+ while ((ptr = strsep(&str, " ")) && (*ptr != '\n')) {
+ if (i == 0) {
+ info->cmd = cl_rfic_str_to_cmd(cl_hw, ptr);
+ } else {
+ if (kstrtou32(ptr, 16, &res) != 0) {
+ pr_err("%s: invalid data - %s\n", __func__, ptr);
+ return;
+ }
+
+ info->data[i - 1] = cpu_to_le32(res);
+ res = 0;
+ }
+ i++;
+ }
+}
+
+#define RF_CMD_MAX_LEN 64
+
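+/*
+ * Each non-comment line of the overwrite file starts with a command keyword
+ * (DONE, SPI_R, SPI_W, GCU_W, RIU_W, GEN_W or DELAY) followed by its
+ * arguments in hexadecimal; lines starting with '#' are skipped.
+ */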
+static void cl_parse_rf_commands_from_buf(struct cl_hw *cl_hw, char *buf, loff_t size,
+ struct cl_rf_reg_overwrite_info *info)
+{
+ int i = 0;
+ char *line = buf;
+ char str[RF_CMD_MAX_LEN];
+ char *end;
+ int line_length = 0;
+
+ while (line && (line != (buf + size))) {
+ if ((*line == '#') || (*line == '\n')) {
+ /* Skip comment or blank line */
+ line = strstr(line, "\n") + 1;
+ } else if (*line) {
+ end = strstr(line, "\n") + 1;
+ line_length = end - line;
+
+ if (line_length >= RF_CMD_MAX_LEN) {
+ cl_dbg_err(cl_hw, "Command too long (%u)\n", line_length);
+ return;
+ }
+
+ snprintf(str, line_length, "%s", line);
+ cl_parse_rf_command(cl_hw, str, &info[i++]);
+ line += line_length;
+ }
+ }
+}
+
+int cl_rfic_read_overwrite_file(struct cl_hw *cl_hw, struct cl_rf_reg_overwrite_info *info,
+ bool init)
+{
+ char *buf = NULL;
+ size_t size = 0;
+ char filename[CL_FILENAME_MAX] = {0};
+ char path_name[CL_PATH_MAX] = {0};
+ struct path path;
+
+ if (init)
+ snprintf(filename, sizeof(filename), "rf_init_overwrite.txt");
+ else
+ snprintf(filename, sizeof(filename), "rf_tcv%d_overwrite.txt", cl_hw->tcv_idx);
+
+ snprintf(path_name, sizeof(path_name), "/lib/firmware/cl8k/%s", filename);
+	if (kern_path(path_name, LOOKUP_FOLLOW, &path) < 0)
+		return 0;
+
+	/* kern_path() was only used to check that the file exists */
+	path_put(&path);
+
+ size = cl_file_open_and_read(cl_hw->chip, filename, &buf);
+
+ if (!buf)
+ return 0;
+
+ cl_dbg_trace(cl_hw, "parsing %s !!!\n", filename);
+ cl_parse_rf_commands_from_buf(cl_hw, buf, size, info);
+ kfree(buf);
+ return 0;
+}
+
+static u8 cl_rfic_version(struct cl_hw *cl_hw)
+{
+ u8 value = 0xff;
+ int ret = cl_spi_driver_read_byte(cl_hw, 0, 0, &value);
+
+ if (ret < 0)
+		cl_dbg_err(cl_hw, "%s: SPI read failed\n", __func__);
+
+ return value;
+}
+
+void cl_chip_set_rfic_version(struct cl_hw *cl_hw)
+{
+	/* Read the version only on a physical PHY */
+ if (cl_hw->chip->conf->ci_phy_dev == PHY_DEV_ATHOS ||
+ cl_hw->chip->conf->ci_phy_dev == PHY_DEV_OLYMPUS) {
+ cl_hw->chip->rfic_version = cl_rfic_version(cl_hw);
+ } else {
+ cl_hw->chip->rfic_version = PHY_DEV_NON_PHYSICAL;
+ }
+}
--
2.36.1


2022-05-24 14:56:10

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 58/96] cl8k: add rates.c

From: Viktor Barna <[email protected]>

(Part of the split. Please, take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/rates.c | 1570 ++++++++++++++++++++++
1 file changed, 1570 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/rates.c

diff --git a/drivers/net/wireless/celeno/cl8k/rates.c b/drivers/net/wireless/celeno/cl8k/rates.c
new file mode 100644
index 000000000000..8f21f3d6ff84
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/rates.c
@@ -0,0 +1,1570 @@
+// SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#include "tx.h"
+#include "bf.h"
+#include "utils.h"
+#include "hw.h"
+#include "rates.h"
+
+/*
+ * This table of rates was taken from IEEE 802.11ax Draft v3.3, 28.5, Parameters
+ * for HE-MCSs. The units are 0.1 Mb/s (e.g. 73 means 7.3 Mb/s). Note that we don't
+ * support DCM, so it is not taken into account in this table.
+ */
+const u16 data_rate_he_x10[CHNL_BW_MAX][WRS_SS_MAX][WRS_MCS_MAX_HE][WRS_GI_MAX_HE] = {
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_0][WRS_GI_LONG] = 73,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_0][WRS_GI_SHORT] = 81,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_0][WRS_GI_VSHORT] = 86,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_1][WRS_GI_LONG] = 146,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_1][WRS_GI_SHORT] = 163,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_1][WRS_GI_VSHORT] = 172,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_2][WRS_GI_LONG] = 219,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_2][WRS_GI_SHORT] = 244,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_2][WRS_GI_VSHORT] = 258,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_3][WRS_GI_LONG] = 293,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_3][WRS_GI_SHORT] = 325,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_3][WRS_GI_VSHORT] = 344,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_4][WRS_GI_LONG] = 439,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_4][WRS_GI_SHORT] = 488,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_4][WRS_GI_VSHORT] = 516,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_5][WRS_GI_LONG] = 585,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_5][WRS_GI_SHORT] = 650,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_5][WRS_GI_VSHORT] = 688,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_6][WRS_GI_LONG] = 658,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_6][WRS_GI_SHORT] = 731,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_6][WRS_GI_VSHORT] = 774,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_7][WRS_GI_LONG] = 731,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_7][WRS_GI_SHORT] = 813,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_7][WRS_GI_VSHORT] = 860,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_8][WRS_GI_LONG] = 878,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_8][WRS_GI_SHORT] = 975,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_8][WRS_GI_VSHORT] = 1032,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_9][WRS_GI_LONG] = 975,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_9][WRS_GI_SHORT] = 1083,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_9][WRS_GI_VSHORT] = 1147,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_10][WRS_GI_LONG] = 1097,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_10][WRS_GI_SHORT] = 1219,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_10][WRS_GI_VSHORT] = 1290,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_11][WRS_GI_LONG] = 1219,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_11][WRS_GI_SHORT] = 1354,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_11][WRS_GI_VSHORT] = 1434,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_0][WRS_GI_LONG] = 146,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_0][WRS_GI_SHORT] = 163,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_0][WRS_GI_VSHORT] = 172,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_1][WRS_GI_LONG] = 293,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_1][WRS_GI_SHORT] = 325,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_1][WRS_GI_VSHORT] = 344,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_2][WRS_GI_LONG] = 439,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_2][WRS_GI_SHORT] = 488,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_2][WRS_GI_VSHORT] = 516,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_3][WRS_GI_LONG] = 585,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_3][WRS_GI_SHORT] = 650,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_3][WRS_GI_VSHORT] = 688,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_4][WRS_GI_LONG] = 878,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_4][WRS_GI_SHORT] = 975,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_4][WRS_GI_VSHORT] = 1032,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_5][WRS_GI_LONG] = 1170,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_5][WRS_GI_SHORT] = 1300,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_5][WRS_GI_VSHORT] = 1376,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_6][WRS_GI_LONG] = 1316,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_6][WRS_GI_SHORT] = 1463,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_6][WRS_GI_VSHORT] = 1549,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_7][WRS_GI_LONG] = 1463,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_7][WRS_GI_SHORT] = 1625,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_7][WRS_GI_VSHORT] = 1721,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_8][WRS_GI_LONG] = 1755,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_8][WRS_GI_SHORT] = 1950,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_8][WRS_GI_VSHORT] = 2065,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_9][WRS_GI_LONG] = 1950,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_9][WRS_GI_SHORT] = 2167,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_9][WRS_GI_VSHORT] = 2294,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_10][WRS_GI_LONG] = 2194,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_10][WRS_GI_SHORT] = 2438,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_10][WRS_GI_VSHORT] = 2581,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_11][WRS_GI_LONG] = 2438,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_11][WRS_GI_SHORT] = 2708,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_11][WRS_GI_VSHORT] = 2868,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_0][WRS_GI_LONG] = 219,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_0][WRS_GI_SHORT] = 244,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_0][WRS_GI_VSHORT] = 258,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_1][WRS_GI_LONG] = 439,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_1][WRS_GI_SHORT] = 488,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_1][WRS_GI_VSHORT] = 516,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_2][WRS_GI_LONG] = 658,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_2][WRS_GI_SHORT] = 731,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_2][WRS_GI_VSHORT] = 774,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_3][WRS_GI_LONG] = 878,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_3][WRS_GI_SHORT] = 975,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_3][WRS_GI_VSHORT] = 1032,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_4][WRS_GI_LONG] = 1316,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_4][WRS_GI_SHORT] = 1463,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_4][WRS_GI_VSHORT] = 1549,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_5][WRS_GI_LONG] = 1755,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_5][WRS_GI_SHORT] = 1950,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_5][WRS_GI_VSHORT] = 2065,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_6][WRS_GI_LONG] = 1974,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_6][WRS_GI_SHORT] = 2194,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_6][WRS_GI_VSHORT] = 2323,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_7][WRS_GI_LONG] = 2194,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_7][WRS_GI_SHORT] = 2438,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_7][WRS_GI_VSHORT] = 2581,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_8][WRS_GI_LONG] = 2633,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_8][WRS_GI_SHORT] = 2925,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_8][WRS_GI_VSHORT] = 3097,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_9][WRS_GI_LONG] = 2925,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_9][WRS_GI_SHORT] = 3250,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_9][WRS_GI_VSHORT] = 3441,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_10][WRS_GI_LONG] = 3291,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_10][WRS_GI_SHORT] = 3656,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_10][WRS_GI_VSHORT] = 3871,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_11][WRS_GI_LONG] = 3656,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_11][WRS_GI_SHORT] = 4063,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_11][WRS_GI_VSHORT] = 4301,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_0][WRS_GI_LONG] = 293,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_0][WRS_GI_SHORT] = 325,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_0][WRS_GI_VSHORT] = 344,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_1][WRS_GI_LONG] = 585,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_1][WRS_GI_SHORT] = 650,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_1][WRS_GI_VSHORT] = 688,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_2][WRS_GI_LONG] = 878,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_2][WRS_GI_SHORT] = 975,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_2][WRS_GI_VSHORT] = 1032,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_3][WRS_GI_LONG] = 1170,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_3][WRS_GI_SHORT] = 1300,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_3][WRS_GI_VSHORT] = 1376,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_4][WRS_GI_LONG] = 1755,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_4][WRS_GI_SHORT] = 1950,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_4][WRS_GI_VSHORT] = 2065,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_5][WRS_GI_LONG] = 2340,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_5][WRS_GI_SHORT] = 2600,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_5][WRS_GI_VSHORT] = 2753,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_6][WRS_GI_LONG] = 2633,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_6][WRS_GI_SHORT] = 2925,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_6][WRS_GI_VSHORT] = 3097,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_7][WRS_GI_LONG] = 2925,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_7][WRS_GI_SHORT] = 3250,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_7][WRS_GI_VSHORT] = 3441,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_8][WRS_GI_LONG] = 3510,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_8][WRS_GI_SHORT] = 3900,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_8][WRS_GI_VSHORT] = 4129,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_9][WRS_GI_LONG] = 3900,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_9][WRS_GI_SHORT] = 4333,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_9][WRS_GI_VSHORT] = 4588,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_10][WRS_GI_LONG] = 4388,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_10][WRS_GI_SHORT] = 4875,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_10][WRS_GI_VSHORT] = 5162,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_11][WRS_GI_LONG] = 4875,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_11][WRS_GI_SHORT] = 5417,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_11][WRS_GI_VSHORT] = 5735,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_0][WRS_GI_LONG] = 146,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_0][WRS_GI_SHORT] = 163,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_0][WRS_GI_VSHORT] = 172,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_1][WRS_GI_LONG] = 293,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_1][WRS_GI_SHORT] = 325,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_1][WRS_GI_VSHORT] = 344,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_2][WRS_GI_LONG] = 439,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_2][WRS_GI_SHORT] = 488,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_2][WRS_GI_VSHORT] = 516,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_3][WRS_GI_LONG] = 585,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_3][WRS_GI_SHORT] = 650,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_3][WRS_GI_VSHORT] = 688,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_4][WRS_GI_LONG] = 878,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_4][WRS_GI_SHORT] = 975,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_4][WRS_GI_VSHORT] = 1032,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_5][WRS_GI_LONG] = 1170,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_5][WRS_GI_SHORT] = 1300,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_5][WRS_GI_VSHORT] = 1376,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_6][WRS_GI_LONG] = 1316,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_6][WRS_GI_SHORT] = 1463,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_6][WRS_GI_VSHORT] = 1549,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_7][WRS_GI_LONG] = 1463,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_7][WRS_GI_SHORT] = 1625,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_7][WRS_GI_VSHORT] = 1721,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_8][WRS_GI_LONG] = 1755,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_8][WRS_GI_SHORT] = 1950,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_8][WRS_GI_VSHORT] = 2065,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_9][WRS_GI_LONG] = 1950,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_9][WRS_GI_SHORT] = 2167,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_9][WRS_GI_VSHORT] = 2294,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_10][WRS_GI_LONG] = 2194,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_10][WRS_GI_SHORT] = 2438,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_10][WRS_GI_VSHORT] = 2581,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_11][WRS_GI_LONG] = 2438,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_11][WRS_GI_SHORT] = 2708,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_11][WRS_GI_VSHORT] = 2868,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_0][WRS_GI_LONG] = 293,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_0][WRS_GI_SHORT] = 325,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_0][WRS_GI_VSHORT] = 344,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_1][WRS_GI_LONG] = 585,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_1][WRS_GI_SHORT] = 650,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_1][WRS_GI_VSHORT] = 688,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_2][WRS_GI_LONG] = 878,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_2][WRS_GI_SHORT] = 975,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_2][WRS_GI_VSHORT] = 1032,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_3][WRS_GI_LONG] = 1170,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_3][WRS_GI_SHORT] = 1300,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_3][WRS_GI_VSHORT] = 1376,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_4][WRS_GI_LONG] = 1755,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_4][WRS_GI_SHORT] = 1950,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_4][WRS_GI_VSHORT] = 2065,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_5][WRS_GI_LONG] = 2340,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_5][WRS_GI_SHORT] = 2600,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_5][WRS_GI_VSHORT] = 2753,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_6][WRS_GI_LONG] = 2633,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_6][WRS_GI_SHORT] = 2925,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_6][WRS_GI_VSHORT] = 3097,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_7][WRS_GI_LONG] = 2925,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_7][WRS_GI_SHORT] = 3250,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_7][WRS_GI_VSHORT] = 3441,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_8][WRS_GI_LONG] = 3510,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_8][WRS_GI_SHORT] = 3900,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_8][WRS_GI_VSHORT] = 4129,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_9][WRS_GI_LONG] = 3900,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_9][WRS_GI_SHORT] = 4333,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_9][WRS_GI_VSHORT] = 4588,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_10][WRS_GI_LONG] = 4388,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_10][WRS_GI_SHORT] = 4875,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_10][WRS_GI_VSHORT] = 5162,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_11][WRS_GI_LONG] = 4875,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_11][WRS_GI_SHORT] = 5417,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_11][WRS_GI_VSHORT] = 5735,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_0][WRS_GI_LONG] = 439,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_0][WRS_GI_SHORT] = 488,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_0][WRS_GI_VSHORT] = 516,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_1][WRS_GI_LONG] = 878,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_1][WRS_GI_SHORT] = 975,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_1][WRS_GI_VSHORT] = 1032,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_2][WRS_GI_LONG] = 1316,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_2][WRS_GI_SHORT] = 1463,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_2][WRS_GI_VSHORT] = 1549,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_3][WRS_GI_LONG] = 1755,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_3][WRS_GI_SHORT] = 1950,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_3][WRS_GI_VSHORT] = 2065,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_4][WRS_GI_LONG] = 2633,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_4][WRS_GI_SHORT] = 2925,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_4][WRS_GI_VSHORT] = 3097,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_5][WRS_GI_LONG] = 3510,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_5][WRS_GI_SHORT] = 3900,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_5][WRS_GI_VSHORT] = 4129,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_6][WRS_GI_LONG] = 3949,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_6][WRS_GI_SHORT] = 4388,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_6][WRS_GI_VSHORT] = 4646,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_7][WRS_GI_LONG] = 4388,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_7][WRS_GI_SHORT] = 4875,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_7][WRS_GI_VSHORT] = 5162,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_8][WRS_GI_LONG] = 5265,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_8][WRS_GI_SHORT] = 5850,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_8][WRS_GI_VSHORT] = 6194,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_9][WRS_GI_LONG] = 5850,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_9][WRS_GI_SHORT] = 6500,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_9][WRS_GI_VSHORT] = 6882,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_10][WRS_GI_LONG] = 6581,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_10][WRS_GI_SHORT] = 7313,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_10][WRS_GI_VSHORT] = 7743,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_11][WRS_GI_LONG] = 7313,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_11][WRS_GI_SHORT] = 8125,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_11][WRS_GI_VSHORT] = 8603,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_0][WRS_GI_LONG] = 585,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_0][WRS_GI_SHORT] = 650,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_0][WRS_GI_VSHORT] = 688,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_1][WRS_GI_LONG] = 1170,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_1][WRS_GI_SHORT] = 1300,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_1][WRS_GI_VSHORT] = 1376,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_2][WRS_GI_LONG] = 1755,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_2][WRS_GI_SHORT] = 1950,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_2][WRS_GI_VSHORT] = 2065,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_3][WRS_GI_LONG] = 2340,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_3][WRS_GI_SHORT] = 2600,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_3][WRS_GI_VSHORT] = 2753,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_4][WRS_GI_LONG] = 3510,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_4][WRS_GI_SHORT] = 3900,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_4][WRS_GI_VSHORT] = 4129,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_5][WRS_GI_LONG] = 4680,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_5][WRS_GI_SHORT] = 5200,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_5][WRS_GI_VSHORT] = 5506,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_6][WRS_GI_LONG] = 5265,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_6][WRS_GI_SHORT] = 5850,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_6][WRS_GI_VSHORT] = 6194,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_7][WRS_GI_LONG] = 5850,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_7][WRS_GI_SHORT] = 6500,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_7][WRS_GI_VSHORT] = 6882,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_8][WRS_GI_LONG] = 7020,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_8][WRS_GI_SHORT] = 7800,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_8][WRS_GI_VSHORT] = 8259,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_9][WRS_GI_LONG] = 7800,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_9][WRS_GI_SHORT] = 8667,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_9][WRS_GI_VSHORT] = 9176,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_10][WRS_GI_LONG] = 8775,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_10][WRS_GI_SHORT] = 9750,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_10][WRS_GI_VSHORT] = 10324,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_11][WRS_GI_LONG] = 9750,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_11][WRS_GI_SHORT] = 10833,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_11][WRS_GI_VSHORT] = 11471,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_0][WRS_GI_LONG] = 306,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_0][WRS_GI_SHORT] = 340,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_0][WRS_GI_VSHORT] = 360,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_1][WRS_GI_LONG] = 613,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_1][WRS_GI_SHORT] = 681,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_1][WRS_GI_VSHORT] = 721,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_2][WRS_GI_LONG] = 919,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_2][WRS_GI_SHORT] = 1021,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_2][WRS_GI_VSHORT] = 1081,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_3][WRS_GI_LONG] = 1225,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_3][WRS_GI_SHORT] = 1361,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_3][WRS_GI_VSHORT] = 1441,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_4][WRS_GI_LONG] = 1838,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_4][WRS_GI_SHORT] = 2042,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_4][WRS_GI_VSHORT] = 2162,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_5][WRS_GI_LONG] = 2450,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_5][WRS_GI_SHORT] = 2722,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_5][WRS_GI_VSHORT] = 2882,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_6][WRS_GI_LONG] = 2756,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_6][WRS_GI_SHORT] = 3063,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_6][WRS_GI_VSHORT] = 3243,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_7][WRS_GI_LONG] = 3063,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_7][WRS_GI_SHORT] = 3403,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_7][WRS_GI_VSHORT] = 3603,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_8][WRS_GI_LONG] = 3675,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_8][WRS_GI_SHORT] = 4083,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_8][WRS_GI_VSHORT] = 4324,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_9][WRS_GI_LONG] = 4083,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_9][WRS_GI_SHORT] = 4537,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_9][WRS_GI_VSHORT] = 4804,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_10][WRS_GI_LONG] = 4594,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_10][WRS_GI_SHORT] = 5104,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_10][WRS_GI_VSHORT] = 5404,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_11][WRS_GI_LONG] = 5104,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_11][WRS_GI_SHORT] = 5671,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_11][WRS_GI_VSHORT] = 6004,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_0][WRS_GI_LONG] = 613,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_0][WRS_GI_SHORT] = 681,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_0][WRS_GI_VSHORT] = 721,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_1][WRS_GI_LONG] = 1225,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_1][WRS_GI_SHORT] = 1361,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_1][WRS_GI_VSHORT] = 1441,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_2][WRS_GI_LONG] = 1838,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_2][WRS_GI_SHORT] = 2042,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_2][WRS_GI_VSHORT] = 2162,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_3][WRS_GI_LONG] = 2450,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_3][WRS_GI_SHORT] = 2722,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_3][WRS_GI_VSHORT] = 2882,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_4][WRS_GI_LONG] = 3675,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_4][WRS_GI_SHORT] = 4083,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_4][WRS_GI_VSHORT] = 4324,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_5][WRS_GI_LONG] = 4900,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_5][WRS_GI_SHORT] = 5444,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_5][WRS_GI_VSHORT] = 5765,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_6][WRS_GI_LONG] = 5513,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_6][WRS_GI_SHORT] = 6125,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_6][WRS_GI_VSHORT] = 6485,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_7][WRS_GI_LONG] = 6125,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_7][WRS_GI_SHORT] = 6806,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_7][WRS_GI_VSHORT] = 7206,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_8][WRS_GI_LONG] = 7350,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_8][WRS_GI_SHORT] = 8167,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_8][WRS_GI_VSHORT] = 8647,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_9][WRS_GI_LONG] = 8166,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_9][WRS_GI_SHORT] = 9074,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_9][WRS_GI_VSHORT] = 9607,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_10][WRS_GI_LONG] = 9188,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_10][WRS_GI_SHORT] = 10208,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_10][WRS_GI_VSHORT] = 10809,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_11][WRS_GI_LONG] = 10208,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_11][WRS_GI_SHORT] = 11343,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_11][WRS_GI_VSHORT] = 12010,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_0][WRS_GI_LONG] = 919,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_0][WRS_GI_SHORT] = 1021,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_0][WRS_GI_VSHORT] = 1081,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_1][WRS_GI_LONG] = 1838,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_1][WRS_GI_SHORT] = 2042,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_1][WRS_GI_VSHORT] = 2162,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_2][WRS_GI_LONG] = 2756,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_2][WRS_GI_SHORT] = 3063,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_2][WRS_GI_VSHORT] = 3243,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_3][WRS_GI_LONG] = 3675,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_3][WRS_GI_SHORT] = 4083,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_3][WRS_GI_VSHORT] = 4324,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_4][WRS_GI_LONG] = 5513,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_4][WRS_GI_SHORT] = 6125,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_4][WRS_GI_VSHORT] = 6485,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_5][WRS_GI_LONG] = 7350,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_5][WRS_GI_SHORT] = 8167,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_5][WRS_GI_VSHORT] = 8647,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_6][WRS_GI_LONG] = 8269,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_6][WRS_GI_SHORT] = 9188,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_6][WRS_GI_VSHORT] = 9728,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_7][WRS_GI_LONG] = 9188,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_7][WRS_GI_SHORT] = 10208,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_7][WRS_GI_VSHORT] = 10809,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_8][WRS_GI_LONG] = 11025,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_8][WRS_GI_SHORT] = 12250,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_8][WRS_GI_VSHORT] = 12971,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_9][WRS_GI_LONG] = 12250,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_9][WRS_GI_SHORT] = 13611,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_9][WRS_GI_VSHORT] = 14412,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_10][WRS_GI_LONG] = 13781,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_10][WRS_GI_SHORT] = 15313,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_10][WRS_GI_VSHORT] = 16213,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_11][WRS_GI_LONG] = 15313,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_11][WRS_GI_SHORT] = 17014,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_11][WRS_GI_VSHORT] = 18015,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_0][WRS_GI_LONG] = 1225,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_0][WRS_GI_SHORT] = 1361,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_0][WRS_GI_VSHORT] = 1441,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_1][WRS_GI_LONG] = 2450,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_1][WRS_GI_SHORT] = 2722,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_1][WRS_GI_VSHORT] = 2882,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_2][WRS_GI_LONG] = 3675,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_2][WRS_GI_SHORT] = 4083,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_2][WRS_GI_VSHORT] = 4324,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_3][WRS_GI_LONG] = 4900,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_3][WRS_GI_SHORT] = 5444,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_3][WRS_GI_VSHORT] = 5765,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_4][WRS_GI_LONG] = 7350,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_4][WRS_GI_SHORT] = 8167,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_4][WRS_GI_VSHORT] = 8647,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_5][WRS_GI_LONG] = 9800,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_5][WRS_GI_SHORT] = 10889,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_5][WRS_GI_VSHORT] = 11529,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_6][WRS_GI_LONG] = 11025,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_6][WRS_GI_SHORT] = 12250,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_6][WRS_GI_VSHORT] = 12971,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_7][WRS_GI_LONG] = 12250,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_7][WRS_GI_SHORT] = 13611,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_7][WRS_GI_VSHORT] = 14412,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_8][WRS_GI_LONG] = 14700,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_8][WRS_GI_SHORT] = 16333,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_8][WRS_GI_VSHORT] = 17294,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_9][WRS_GI_LONG] = 16333,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_9][WRS_GI_SHORT] = 18148,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_9][WRS_GI_VSHORT] = 19215,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_10][WRS_GI_LONG] = 18375,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_10][WRS_GI_SHORT] = 20417,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_10][WRS_GI_VSHORT] = 21618,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_11][WRS_GI_LONG] = 20416,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_11][WRS_GI_SHORT] = 22685,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_11][WRS_GI_VSHORT] = 24019,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_0][WRS_GI_LONG] = 613,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_0][WRS_GI_SHORT] = 681,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_0][WRS_GI_VSHORT] = 721,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_1][WRS_GI_LONG] = 1225,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_1][WRS_GI_SHORT] = 1361,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_1][WRS_GI_VSHORT] = 1441,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_2][WRS_GI_LONG] = 1838,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_2][WRS_GI_SHORT] = 2042,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_2][WRS_GI_VSHORT] = 2162,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_3][WRS_GI_LONG] = 2450,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_3][WRS_GI_SHORT] = 2722,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_3][WRS_GI_VSHORT] = 2882,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_4][WRS_GI_LONG] = 3675,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_4][WRS_GI_SHORT] = 4083,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_4][WRS_GI_VSHORT] = 4324,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_5][WRS_GI_LONG] = 4900,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_5][WRS_GI_SHORT] = 5444,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_5][WRS_GI_VSHORT] = 5765,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_6][WRS_GI_LONG] = 5513,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_6][WRS_GI_SHORT] = 6125,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_6][WRS_GI_VSHORT] = 6485,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_7][WRS_GI_LONG] = 6125,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_7][WRS_GI_SHORT] = 6806,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_7][WRS_GI_VSHORT] = 7206,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_8][WRS_GI_LONG] = 7350,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_8][WRS_GI_SHORT] = 8167,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_8][WRS_GI_VSHORT] = 8647,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_9][WRS_GI_LONG] = 8166,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_9][WRS_GI_SHORT] = 9074,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_9][WRS_GI_VSHORT] = 9607,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_10][WRS_GI_LONG] = 9188,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_10][WRS_GI_SHORT] = 10208,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_10][WRS_GI_VSHORT] = 10809,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_11][WRS_GI_LONG] = 10208,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_11][WRS_GI_SHORT] = 11342,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_11][WRS_GI_VSHORT] = 12010,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_0][WRS_GI_LONG] = 1225,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_0][WRS_GI_SHORT] = 1361,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_0][WRS_GI_VSHORT] = 1441,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_1][WRS_GI_LONG] = 2450,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_1][WRS_GI_SHORT] = 2722,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_1][WRS_GI_VSHORT] = 2882,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_2][WRS_GI_LONG] = 3675,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_2][WRS_GI_SHORT] = 4083,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_2][WRS_GI_VSHORT] = 4324,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_3][WRS_GI_LONG] = 4900,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_3][WRS_GI_SHORT] = 5444,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_3][WRS_GI_VSHORT] = 5765,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_4][WRS_GI_LONG] = 7350,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_4][WRS_GI_SHORT] = 8167,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_4][WRS_GI_VSHORT] = 8647,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_5][WRS_GI_LONG] = 9800,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_5][WRS_GI_SHORT] = 10889,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_5][WRS_GI_VSHORT] = 11529,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_6][WRS_GI_LONG] = 11025,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_6][WRS_GI_SHORT] = 12250,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_6][WRS_GI_VSHORT] = 12971,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_7][WRS_GI_LONG] = 12250,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_7][WRS_GI_SHORT] = 13611,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_7][WRS_GI_VSHORT] = 14412,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_8][WRS_GI_LONG] = 14700,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_8][WRS_GI_SHORT] = 16333,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_8][WRS_GI_VSHORT] = 17294,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_9][WRS_GI_LONG] = 16333,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_9][WRS_GI_SHORT] = 18148,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_9][WRS_GI_VSHORT] = 19215,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_10][WRS_GI_LONG] = 18375,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_10][WRS_GI_SHORT] = 20417,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_10][WRS_GI_VSHORT] = 21618,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_11][WRS_GI_LONG] = 20416,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_11][WRS_GI_SHORT] = 22685,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_11][WRS_GI_VSHORT] = 24019,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_0][WRS_GI_LONG] = 1838,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_0][WRS_GI_SHORT] = 2042,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_0][WRS_GI_VSHORT] = 2162,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_1][WRS_GI_LONG] = 3675,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_1][WRS_GI_SHORT] = 4083,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_1][WRS_GI_VSHORT] = 4324,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_2][WRS_GI_LONG] = 5513,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_2][WRS_GI_SHORT] = 6125,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_2][WRS_GI_VSHORT] = 6485,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_3][WRS_GI_LONG] = 7350,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_3][WRS_GI_SHORT] = 8167,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_3][WRS_GI_VSHORT] = 8647,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_4][WRS_GI_LONG] = 11025,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_4][WRS_GI_SHORT] = 12250,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_4][WRS_GI_VSHORT] = 12971,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_5][WRS_GI_LONG] = 14700,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_5][WRS_GI_SHORT] = 16333,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_5][WRS_GI_VSHORT] = 17294,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_6][WRS_GI_LONG] = 16538,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_6][WRS_GI_SHORT] = 18375,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_6][WRS_GI_VSHORT] = 19456,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_7][WRS_GI_LONG] = 18375,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_7][WRS_GI_SHORT] = 20417,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_7][WRS_GI_VSHORT] = 21618,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_8][WRS_GI_LONG] = 22050,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_8][WRS_GI_SHORT] = 24500,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_8][WRS_GI_VSHORT] = 25941,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_9][WRS_GI_LONG] = 24500,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_9][WRS_GI_SHORT] = 27222,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_9][WRS_GI_VSHORT] = 28824,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_10][WRS_GI_LONG] = 27563,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_10][WRS_GI_SHORT] = 30625,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_10][WRS_GI_VSHORT] = 32426,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_11][WRS_GI_LONG] = 30625,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_11][WRS_GI_SHORT] = 34028,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_11][WRS_GI_VSHORT] = 36029,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_0][WRS_GI_LONG] = 2450,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_0][WRS_GI_SHORT] = 2722,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_0][WRS_GI_VSHORT] = 2882,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_1][WRS_GI_LONG] = 4900,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_1][WRS_GI_SHORT] = 5444,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_1][WRS_GI_VSHORT] = 5765,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_2][WRS_GI_LONG] = 7350,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_2][WRS_GI_SHORT] = 8167,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_2][WRS_GI_VSHORT] = 8647,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_3][WRS_GI_LONG] = 9800,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_3][WRS_GI_SHORT] = 10889,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_3][WRS_GI_VSHORT] = 11529,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_4][WRS_GI_LONG] = 14700,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_4][WRS_GI_SHORT] = 16333,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_4][WRS_GI_VSHORT] = 17294,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_5][WRS_GI_LONG] = 19600,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_5][WRS_GI_SHORT] = 21778,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_5][WRS_GI_VSHORT] = 23059,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_6][WRS_GI_LONG] = 22050,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_6][WRS_GI_SHORT] = 24500,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_6][WRS_GI_VSHORT] = 25941,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_7][WRS_GI_LONG] = 24500,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_7][WRS_GI_SHORT] = 27222,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_7][WRS_GI_VSHORT] = 28824,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_8][WRS_GI_LONG] = 29400,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_8][WRS_GI_SHORT] = 32667,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_8][WRS_GI_VSHORT] = 34588,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_9][WRS_GI_LONG] = 32666,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_9][WRS_GI_SHORT] = 36296,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_9][WRS_GI_VSHORT] = 38431,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_10][WRS_GI_LONG] = 36750,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_10][WRS_GI_SHORT] = 40833,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_10][WRS_GI_VSHORT] = 43235,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_11][WRS_GI_LONG] = 40833,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_11][WRS_GI_SHORT] = 45370,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_11][WRS_GI_VSHORT] = 48039,
+};
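
As a worked example of the HE numerology behind the table above: for 20 MHz, one spatial stream, MCS 7 and the long guard interval, 234 data subcarriers * 6 bits * 5/6 coding = 1170 data bits per 16 us (12.8 + 3.2 us) symbol, i.e. 73.125 Mbps, which matches the entry of 731 in units of 1/10 Mbps. The WRS_GI_LONG/SHORT/VSHORT indices therefore appear to map to the 3.2/1.6/0.8 us HE guard intervals.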
+
+/*
+ * This table of rates was taken from IEEE Std 802.11-2016, 21.5 Parameters
+ * for VHT-MCSs. The units are 1/10 Mbps. Invalid combinations are set to 0.
+ * Note that HT data rates are a subset of VHT data rates, so a single table
+ * is used for both.
+ */
+const u16 data_rate_ht_vht_x10[CHNL_BW_MAX][WRS_SS_MAX][WRS_MCS_MAX_VHT][WRS_GI_MAX_VHT] = {
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_0][WRS_GI_LONG] = 65,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_0][WRS_GI_SHORT] = 72,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_1][WRS_GI_LONG] = 130,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_1][WRS_GI_SHORT] = 144,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_2][WRS_GI_LONG] = 195,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_2][WRS_GI_SHORT] = 217,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_3][WRS_GI_LONG] = 260,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_3][WRS_GI_SHORT] = 289,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_4][WRS_GI_LONG] = 390,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_4][WRS_GI_SHORT] = 433,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_5][WRS_GI_LONG] = 520,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_5][WRS_GI_SHORT] = 578,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_6][WRS_GI_LONG] = 585,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_6][WRS_GI_SHORT] = 650,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_7][WRS_GI_LONG] = 650,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_7][WRS_GI_SHORT] = 722,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_8][WRS_GI_LONG] = 780,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_8][WRS_GI_SHORT] = 867,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_9][WRS_GI_LONG] = 0,
+ [CHNL_BW_20][WRS_SS_1][WRS_MCS_9][WRS_GI_SHORT] = 0,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_0][WRS_GI_LONG] = 130,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_0][WRS_GI_SHORT] = 144,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_1][WRS_GI_LONG] = 260,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_1][WRS_GI_SHORT] = 289,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_2][WRS_GI_LONG] = 390,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_2][WRS_GI_SHORT] = 433,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_3][WRS_GI_LONG] = 520,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_3][WRS_GI_SHORT] = 578,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_4][WRS_GI_LONG] = 780,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_4][WRS_GI_SHORT] = 867,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_5][WRS_GI_LONG] = 1040,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_5][WRS_GI_SHORT] = 1156,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_6][WRS_GI_LONG] = 1170,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_6][WRS_GI_SHORT] = 1303,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_7][WRS_GI_LONG] = 1300,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_7][WRS_GI_SHORT] = 1444,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_8][WRS_GI_LONG] = 1560,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_8][WRS_GI_SHORT] = 1733,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_9][WRS_GI_LONG] = 0,
+ [CHNL_BW_20][WRS_SS_2][WRS_MCS_9][WRS_GI_SHORT] = 0,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_0][WRS_GI_LONG] = 195,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_0][WRS_GI_SHORT] = 217,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_1][WRS_GI_LONG] = 390,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_1][WRS_GI_SHORT] = 433,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_2][WRS_GI_LONG] = 585,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_2][WRS_GI_SHORT] = 650,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_3][WRS_GI_LONG] = 780,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_3][WRS_GI_SHORT] = 867,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_4][WRS_GI_LONG] = 1170,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_4][WRS_GI_SHORT] = 1300,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_5][WRS_GI_LONG] = 1560,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_5][WRS_GI_SHORT] = 1733,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_6][WRS_GI_LONG] = 1755,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_6][WRS_GI_SHORT] = 1950,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_7][WRS_GI_LONG] = 1950,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_7][WRS_GI_SHORT] = 2167,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_8][WRS_GI_LONG] = 2340,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_8][WRS_GI_SHORT] = 2600,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_9][WRS_GI_LONG] = 2600,
+ [CHNL_BW_20][WRS_SS_3][WRS_MCS_9][WRS_GI_SHORT] = 2889,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_0][WRS_GI_LONG] = 260,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_0][WRS_GI_SHORT] = 288,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_1][WRS_GI_LONG] = 520,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_1][WRS_GI_SHORT] = 576,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_2][WRS_GI_LONG] = 780,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_2][WRS_GI_SHORT] = 868,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_3][WRS_GI_LONG] = 1040,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_3][WRS_GI_SHORT] = 1156,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_4][WRS_GI_LONG] = 1560,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_4][WRS_GI_SHORT] = 1732,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_5][WRS_GI_LONG] = 2080,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_5][WRS_GI_SHORT] = 2312,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_6][WRS_GI_LONG] = 2340,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_6][WRS_GI_SHORT] = 2600,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_7][WRS_GI_LONG] = 2600,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_7][WRS_GI_SHORT] = 2888,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_8][WRS_GI_LONG] = 3120,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_8][WRS_GI_SHORT] = 3468,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_9][WRS_GI_LONG] = 0,
+ [CHNL_BW_20][WRS_SS_4][WRS_MCS_9][WRS_GI_SHORT] = 0,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_0][WRS_GI_LONG] = 135,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_0][WRS_GI_SHORT] = 150,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_1][WRS_GI_LONG] = 270,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_1][WRS_GI_SHORT] = 300,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_2][WRS_GI_LONG] = 405,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_2][WRS_GI_SHORT] = 450,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_3][WRS_GI_LONG] = 540,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_3][WRS_GI_SHORT] = 600,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_4][WRS_GI_LONG] = 810,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_4][WRS_GI_SHORT] = 900,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_5][WRS_GI_LONG] = 1080,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_5][WRS_GI_SHORT] = 1200,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_6][WRS_GI_LONG] = 1215,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_6][WRS_GI_SHORT] = 1350,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_7][WRS_GI_LONG] = 1350,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_7][WRS_GI_SHORT] = 1500,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_8][WRS_GI_LONG] = 1620,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_8][WRS_GI_SHORT] = 1800,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_9][WRS_GI_LONG] = 1800,
+ [CHNL_BW_40][WRS_SS_1][WRS_MCS_9][WRS_GI_SHORT] = 2000,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_0][WRS_GI_LONG] = 270,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_0][WRS_GI_SHORT] = 300,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_1][WRS_GI_LONG] = 540,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_1][WRS_GI_SHORT] = 600,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_2][WRS_GI_LONG] = 810,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_2][WRS_GI_SHORT] = 900,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_3][WRS_GI_LONG] = 1080,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_3][WRS_GI_SHORT] = 1200,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_4][WRS_GI_LONG] = 1620,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_4][WRS_GI_SHORT] = 1800,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_5][WRS_GI_LONG] = 2160,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_5][WRS_GI_SHORT] = 2400,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_6][WRS_GI_LONG] = 2430,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_6][WRS_GI_SHORT] = 2700,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_7][WRS_GI_LONG] = 2700,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_7][WRS_GI_SHORT] = 3000,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_8][WRS_GI_LONG] = 3240,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_8][WRS_GI_SHORT] = 3600,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_9][WRS_GI_LONG] = 3600,
+ [CHNL_BW_40][WRS_SS_2][WRS_MCS_9][WRS_GI_SHORT] = 4000,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_0][WRS_GI_LONG] = 405,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_0][WRS_GI_SHORT] = 450,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_1][WRS_GI_LONG] = 810,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_1][WRS_GI_SHORT] = 900,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_2][WRS_GI_LONG] = 1215,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_2][WRS_GI_SHORT] = 1350,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_3][WRS_GI_LONG] = 1620,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_3][WRS_GI_SHORT] = 1800,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_4][WRS_GI_LONG] = 2430,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_4][WRS_GI_SHORT] = 2700,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_5][WRS_GI_LONG] = 3240,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_5][WRS_GI_SHORT] = 3600,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_6][WRS_GI_LONG] = 3645,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_6][WRS_GI_SHORT] = 4050,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_7][WRS_GI_LONG] = 4050,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_7][WRS_GI_SHORT] = 4500,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_8][WRS_GI_LONG] = 4860,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_8][WRS_GI_SHORT] = 5400,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_9][WRS_GI_LONG] = 5400,
+ [CHNL_BW_40][WRS_SS_3][WRS_MCS_9][WRS_GI_SHORT] = 6000,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_0][WRS_GI_LONG] = 540,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_0][WRS_GI_SHORT] = 600,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_1][WRS_GI_LONG] = 1080,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_1][WRS_GI_SHORT] = 1200,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_2][WRS_GI_LONG] = 1620,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_2][WRS_GI_SHORT] = 1800,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_3][WRS_GI_LONG] = 2160,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_3][WRS_GI_SHORT] = 2400,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_4][WRS_GI_LONG] = 3240,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_4][WRS_GI_SHORT] = 3600,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_5][WRS_GI_LONG] = 4320,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_5][WRS_GI_SHORT] = 4800,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_6][WRS_GI_LONG] = 4860,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_6][WRS_GI_SHORT] = 5400,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_7][WRS_GI_LONG] = 5400,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_7][WRS_GI_SHORT] = 6000,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_8][WRS_GI_LONG] = 6480,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_8][WRS_GI_SHORT] = 7200,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_9][WRS_GI_LONG] = 7200,
+ [CHNL_BW_40][WRS_SS_4][WRS_MCS_9][WRS_GI_SHORT] = 8000,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_0][WRS_GI_LONG] = 293,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_0][WRS_GI_SHORT] = 325,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_1][WRS_GI_LONG] = 585,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_1][WRS_GI_SHORT] = 650,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_2][WRS_GI_LONG] = 878,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_2][WRS_GI_SHORT] = 975,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_3][WRS_GI_LONG] = 1170,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_3][WRS_GI_SHORT] = 1300,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_4][WRS_GI_LONG] = 1755,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_4][WRS_GI_SHORT] = 1950,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_5][WRS_GI_LONG] = 2340,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_5][WRS_GI_SHORT] = 2600,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_6][WRS_GI_LONG] = 2633,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_6][WRS_GI_SHORT] = 2925,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_7][WRS_GI_LONG] = 2925,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_7][WRS_GI_SHORT] = 3250,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_8][WRS_GI_LONG] = 3510,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_8][WRS_GI_SHORT] = 3900,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_9][WRS_GI_LONG] = 3900,
+ [CHNL_BW_80][WRS_SS_1][WRS_MCS_9][WRS_GI_SHORT] = 4333,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_0][WRS_GI_LONG] = 585,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_0][WRS_GI_SHORT] = 650,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_1][WRS_GI_LONG] = 1170,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_1][WRS_GI_SHORT] = 1300,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_2][WRS_GI_LONG] = 1755,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_2][WRS_GI_SHORT] = 1950,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_3][WRS_GI_LONG] = 2340,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_3][WRS_GI_SHORT] = 2600,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_4][WRS_GI_LONG] = 3510,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_4][WRS_GI_SHORT] = 3900,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_5][WRS_GI_LONG] = 4680,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_5][WRS_GI_SHORT] = 5200,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_6][WRS_GI_LONG] = 5265,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_6][WRS_GI_SHORT] = 5850,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_7][WRS_GI_LONG] = 5850,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_7][WRS_GI_SHORT] = 6500,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_8][WRS_GI_LONG] = 7020,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_8][WRS_GI_SHORT] = 7800,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_9][WRS_GI_LONG] = 7800,
+ [CHNL_BW_80][WRS_SS_2][WRS_MCS_9][WRS_GI_SHORT] = 8667,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_0][WRS_GI_LONG] = 878,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_0][WRS_GI_SHORT] = 975,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_1][WRS_GI_LONG] = 1755,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_1][WRS_GI_SHORT] = 1950,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_2][WRS_GI_LONG] = 2633,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_2][WRS_GI_SHORT] = 2925,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_3][WRS_GI_LONG] = 3510,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_3][WRS_GI_SHORT] = 3900,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_4][WRS_GI_LONG] = 5265,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_4][WRS_GI_SHORT] = 5850,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_5][WRS_GI_LONG] = 7020,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_5][WRS_GI_SHORT] = 7800,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_6][WRS_GI_LONG] = 0,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_6][WRS_GI_SHORT] = 0,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_7][WRS_GI_LONG] = 8775,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_7][WRS_GI_SHORT] = 9750,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_8][WRS_GI_LONG] = 10530,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_8][WRS_GI_SHORT] = 11700,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_9][WRS_GI_LONG] = 11700,
+ [CHNL_BW_80][WRS_SS_3][WRS_MCS_9][WRS_GI_SHORT] = 13000,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_0][WRS_GI_LONG] = 1172,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_0][WRS_GI_SHORT] = 1300,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_1][WRS_GI_LONG] = 2340,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_1][WRS_GI_SHORT] = 2600,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_2][WRS_GI_LONG] = 3512,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_2][WRS_GI_SHORT] = 3900,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_3][WRS_GI_LONG] = 4680,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_3][WRS_GI_SHORT] = 5200,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_4][WRS_GI_LONG] = 7020,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_4][WRS_GI_SHORT] = 7800,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_5][WRS_GI_LONG] = 9360,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_5][WRS_GI_SHORT] = 10400,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_6][WRS_GI_LONG] = 10532,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_6][WRS_GI_SHORT] = 11700,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_7][WRS_GI_LONG] = 11700,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_7][WRS_GI_SHORT] = 13000,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_8][WRS_GI_LONG] = 14040,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_8][WRS_GI_SHORT] = 15600,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_9][WRS_GI_LONG] = 15600,
+ [CHNL_BW_80][WRS_SS_4][WRS_MCS_9][WRS_GI_SHORT] = 17332,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_0][WRS_GI_LONG] = 585,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_0][WRS_GI_SHORT] = 650,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_1][WRS_GI_LONG] = 1170,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_1][WRS_GI_SHORT] = 1300,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_2][WRS_GI_LONG] = 1755,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_2][WRS_GI_SHORT] = 1950,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_3][WRS_GI_LONG] = 2340,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_3][WRS_GI_SHORT] = 2600,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_4][WRS_GI_LONG] = 3510,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_4][WRS_GI_SHORT] = 3900,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_5][WRS_GI_LONG] = 4680,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_5][WRS_GI_SHORT] = 5200,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_6][WRS_GI_LONG] = 5265,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_6][WRS_GI_SHORT] = 5850,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_7][WRS_GI_LONG] = 5850,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_7][WRS_GI_SHORT] = 6500,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_8][WRS_GI_LONG] = 7020,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_8][WRS_GI_SHORT] = 7800,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_9][WRS_GI_LONG] = 7800,
+ [CHNL_BW_160][WRS_SS_1][WRS_MCS_9][WRS_GI_SHORT] = 8667,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_0][WRS_GI_LONG] = 1170,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_0][WRS_GI_SHORT] = 1300,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_1][WRS_GI_LONG] = 2340,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_1][WRS_GI_SHORT] = 2600,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_2][WRS_GI_LONG] = 3510,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_2][WRS_GI_SHORT] = 3900,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_3][WRS_GI_LONG] = 4680,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_3][WRS_GI_SHORT] = 5200,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_4][WRS_GI_LONG] = 7020,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_4][WRS_GI_SHORT] = 7800,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_5][WRS_GI_LONG] = 9360,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_5][WRS_GI_SHORT] = 10400,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_6][WRS_GI_LONG] = 10530,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_6][WRS_GI_SHORT] = 11700,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_7][WRS_GI_LONG] = 11700,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_7][WRS_GI_SHORT] = 13000,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_8][WRS_GI_LONG] = 14040,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_8][WRS_GI_SHORT] = 15600,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_9][WRS_GI_LONG] = 15600,
+ [CHNL_BW_160][WRS_SS_2][WRS_MCS_9][WRS_GI_SHORT] = 17333,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_0][WRS_GI_LONG] = 1755,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_0][WRS_GI_SHORT] = 1950,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_1][WRS_GI_LONG] = 3510,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_1][WRS_GI_SHORT] = 3900,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_2][WRS_GI_LONG] = 5265,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_2][WRS_GI_SHORT] = 5850,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_3][WRS_GI_LONG] = 7020,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_3][WRS_GI_SHORT] = 7800,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_4][WRS_GI_LONG] = 10530,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_4][WRS_GI_SHORT] = 11700,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_5][WRS_GI_LONG] = 14040,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_5][WRS_GI_SHORT] = 15600,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_6][WRS_GI_LONG] = 15795,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_6][WRS_GI_SHORT] = 17550,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_7][WRS_GI_LONG] = 17550,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_7][WRS_GI_SHORT] = 19500,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_8][WRS_GI_LONG] = 21060,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_8][WRS_GI_SHORT] = 23400,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_9][WRS_GI_LONG] = 0,
+ [CHNL_BW_160][WRS_SS_3][WRS_MCS_9][WRS_GI_SHORT] = 0,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_0][WRS_GI_LONG] = 2340,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_0][WRS_GI_SHORT] = 2600,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_1][WRS_GI_LONG] = 4680,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_1][WRS_GI_SHORT] = 5200,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_2][WRS_GI_LONG] = 7020,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_2][WRS_GI_SHORT] = 7800,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_3][WRS_GI_LONG] = 9360,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_3][WRS_GI_SHORT] = 10400,
+	[CHNL_BW_160][WRS_SS_4][WRS_MCS_4][WRS_GI_LONG] = 14040,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_4][WRS_GI_SHORT] = 15600,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_5][WRS_GI_LONG] = 18720,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_5][WRS_GI_SHORT] = 20800,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_6][WRS_GI_LONG] = 21060,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_6][WRS_GI_SHORT] = 23400,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_7][WRS_GI_LONG] = 23400,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_7][WRS_GI_SHORT] = 26000,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_8][WRS_GI_LONG] = 28080,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_8][WRS_GI_SHORT] = 31200,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_9][WRS_GI_LONG] = 31200,
+ [CHNL_BW_160][WRS_SS_4][WRS_MCS_9][WRS_GI_SHORT] = 34667,
+};
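
A similar spot check for the HT/VHT table: VHT MCS 7, one spatial stream, 20 MHz, long (0.8 us) GI gives 52 data subcarriers * 6 bits * 5/6 coding = 260 data bits per 4 us symbol = 65.0 Mbps, matching the entry of 650 above; with the 0.4 us short GI the symbol shrinks to 3.6 us and the rate becomes 72.2 Mbps, matching 722.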
+
+/* OFDM data rates (in units of 1/10 Mbps) */
+const u16 data_rate_ofdm_x10[] = {
+ 60,
+ 90,
+ 120,
+ 180,
+ 240,
+ 360,
+ 480,
+ 540,
+};
+
+/* CCK data rates (in units of 1/10 Mbps) */
+const u16 data_rate_cck_x10[] = {
+ 10,
+ 20,
+ 55,
+ 110,
+};
+
+struct cl_inverse_data_rate inverse_data_rate;
+
+static u16 cl_data_rates_inverse_he(u8 bw, u8 nss, u8 mcs, u8 gi)
+{
+ return (80 << DATA_RATE_INVERSE_Q) / data_rate_he_x10[bw][nss][mcs][gi];
+}
+
+static u16 cl_data_rates_inverse_vht(u8 bw, u8 nss, u8 mcs, u8 gi)
+{
+ u16 data_rate = data_rate_ht_vht_x10[bw][nss][mcs][gi];
+
+ if (data_rate)
+ return (80 << DATA_RATE_INVERSE_Q) / data_rate;
+
+ return 0;
+}
+
+static u16 cl_data_rates_inverse_ofdm(u8 mcs)
+{
+ return (80 << DATA_RATE_INVERSE_Q) / data_rate_ofdm_x10[mcs];
+}
+
+static u16 cl_data_rates_inverse_cck(u8 mcs)
+{
+ return (80 << DATA_RATE_INVERSE_Q) / data_rate_cck_x10[mcs];
+}
+
+void cl_data_rates_inverse_build(void)
+{
+	/*
+	 * The calculation is: round((2^15 [Q] * 8 [bits] * 10) / rate), where
+	 * rate is the x10 value from the tables above (hence the extra factor
+	 * of 10). The result is the time per byte in units of us * 2^15.
+	 */
+ u8 bw, nss, mcs, gi;
+
+ for (bw = 0; bw < CHNL_BW_MAX; bw++)
+ for (nss = 0; nss < WRS_SS_MAX; nss++) {
+ /* HE */
+ for (mcs = 0; mcs < WRS_MCS_MAX_HE; mcs++)
+ for (gi = 0; gi < WRS_GI_MAX_HE; gi++)
+ inverse_data_rate.he[bw][nss][mcs][gi] =
+ cl_data_rates_inverse_he(bw, nss, mcs, gi);
+
+ /* VHT */
+ for (mcs = 0; mcs < WRS_MCS_MAX_VHT; mcs++)
+ for (gi = 0; gi < WRS_GI_MAX_VHT; gi++)
+ inverse_data_rate.ht_vht[bw][nss][mcs][gi] =
+ cl_data_rates_inverse_vht(bw, nss, mcs, gi);
+ }
+
+ /* OFDM */
+ for (mcs = 0; mcs < WRS_MCS_MAX_OFDM; mcs++)
+ inverse_data_rate.ofdm[mcs] = cl_data_rates_inverse_ofdm(mcs);
+
+ /* CCK */
+ for (mcs = 0; mcs < WRS_MCS_MAX_CCK; mcs++)
+ inverse_data_rate.cck[mcs] = cl_data_rates_inverse_cck(mcs);
+}
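
Reading the comment above with DATA_RATE_INVERSE_Q assumed to be 15 (matching the 2^15 in the formula), each inverse entry is the time to transmit one byte, in microseconds scaled by 2^15. For example, for a 65.0 Mbps rate the table value is 650, so (80 << 15) / 650 truncates to 4032, which is roughly 8 bits / 65 Mbps, i.e. about 0.123 us per byte in Q15.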
+
+u16 cl_data_rates_get(u8 mode, u8 bw, u8 nss, u8 mcs, u8 gi)
+{
+ return cl_data_rates_get_x10(mode, bw, nss, mcs, gi) / 10;
+}
+
+u16 cl_data_rates_get_x10(u8 mode, u8 bw, u8 nss, u8 mcs, u8 gi)
+{
+ switch (mode) {
+ case WRS_MODE_HE:
+ return data_rate_he_x10[bw][nss][mcs][gi];
+ case WRS_MODE_VHT:
+ case WRS_MODE_HT:
+ return data_rate_ht_vht_x10[bw][nss][mcs][gi];
+ case WRS_MODE_OFDM:
+ return data_rate_ofdm_x10[mcs];
+ case WRS_MODE_CCK:
+ return data_rate_cck_x10[mcs];
+ default:
+ return 0;
+ }
+}
+
+u32 cl_rate_ctrl_generate(struct cl_hw *cl_hw, struct cl_sta *cl_sta,
+ u8 mode, u8 bw, u8 nss, u8 mcs, u8 gi,
+ bool fallback_en, bool mu_valid)
+{
+ union cl_rate_ctrl_info rate_ctrl_info;
+
+ rate_ctrl_info.word = 0;
+
+ /* Format_mod + mcs_index */
+ if (mode == WRS_MODE_HE) {
+ rate_ctrl_info.field.mcs_index = (nss << 4) | mcs;
+ rate_ctrl_info.field.format_mod = FORMATMOD_HE_SU;
+ } else if (mode == WRS_MODE_VHT) {
+ rate_ctrl_info.field.mcs_index = (nss << 4) | mcs;
+ rate_ctrl_info.field.format_mod = FORMATMOD_VHT;
+ } else if (mode == WRS_MODE_HT) {
+ rate_ctrl_info.field.mcs_index = (nss << 3) | mcs;
+ rate_ctrl_info.field.format_mod = FORMATMOD_HT_MF;
+ } else if (mode == WRS_MODE_OFDM) {
+ rate_ctrl_info.field.mcs_index = mcs + RATE_CTRL_OFFSET_OFDM;
+ rate_ctrl_info.field.format_mod =
+ (bw == CHNL_BW_20) ? FORMATMOD_NON_HT : FORMATMOD_NON_HT_DUP_OFDM;
+ } else { /* WRS_MODE_CCK */
+ rate_ctrl_info.field.mcs_index = mcs;
+ rate_ctrl_info.field.format_mod = FORMATMOD_NON_HT;
+ }
+
+ /* Gi */
+ rate_ctrl_info.field.gi = cl_convert_gi_format_wrs_to_fw(mode, gi);
+
+ /* Bw */
+ rate_ctrl_info.field.bw = bw;
+
+ /* Fallback */
+ rate_ctrl_info.field.fallback = fallback_en;
+
+ /* Tx_bf */
+ if (!mu_valid && cl_sta && cl_bf_is_on(cl_hw, cl_sta, nss))
+ rate_ctrl_info.field.tx_bf = true;
+
+ /* Pre_type/stbc */
+ if (rate_ctrl_info.field.format_mod == FORMATMOD_NON_HT)
+ rate_ctrl_info.field.pre_type_or_stbc = 1;
+
+ return rate_ctrl_info.word;
+}
+
+void cl_rate_ctrl_convert(union cl_rate_ctrl_info *rate_ctrl_info)
+{
+ u32 format_mod = rate_ctrl_info->field.format_mod;
+
+	/*
+	 * Convert gi from firmware format to driver format.
+	 * !!! Must be done before converting format_mod below !!!
+	 */
+ rate_ctrl_info->field.gi = cl_convert_gi_format_fw_to_wrs(format_mod,
+ rate_ctrl_info->field.gi);
+
+ /* Convert format_mod from firmware format to WRS format */
+ if (format_mod >= FORMATMOD_HE_SU) {
+ rate_ctrl_info->field.format_mod = WRS_MODE_HE;
+ } else if (format_mod == FORMATMOD_VHT) {
+ rate_ctrl_info->field.format_mod = WRS_MODE_VHT;
+ } else if (format_mod >= FORMATMOD_HT_MF) {
+ rate_ctrl_info->field.format_mod = WRS_MODE_HT;
+ } else if (format_mod == FORMATMOD_NON_HT_DUP_OFDM) {
+ rate_ctrl_info->field.format_mod = WRS_MODE_OFDM;
+ } else {
+ if (rate_ctrl_info->field.mcs_index >= RATE_CTRL_OFFSET_OFDM)
+ rate_ctrl_info->field.format_mod = WRS_MODE_OFDM;
+ else
+ rate_ctrl_info->field.format_mod = WRS_MODE_CCK;
+ }
+}
+
+void cl_rate_ctrl_parse(union cl_rate_ctrl_info *rate_ctrl_info, u8 *nss, u8 *mcs)
+{
+ switch (rate_ctrl_info->field.format_mod) {
+ case WRS_MODE_HE:
+ case WRS_MODE_VHT:
+ *nss = (rate_ctrl_info->field.mcs_index >> 4);
+ *mcs = (rate_ctrl_info->field.mcs_index & 0xF);
+ break;
+ case WRS_MODE_HT:
+ *nss = (rate_ctrl_info->field.mcs_index >> 3);
+ *mcs = (rate_ctrl_info->field.mcs_index & 0x7);
+ break;
+ case WRS_MODE_OFDM:
+ *nss = 0;
+ *mcs = rate_ctrl_info->field.mcs_index - RATE_CTRL_OFFSET_OFDM;
+ break;
+ case WRS_MODE_CCK:
+ *nss = 0;
+ *mcs = rate_ctrl_info->field.mcs_index;
+ break;
+ default:
+ *nss = *mcs = 0;
+ }
+}
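
A round-trip example of the mcs_index packing: for HE/VHT, a spatial-stream index of 2 with MCS 9 is stored by cl_rate_ctrl_generate() as (2 << 4) | 9 = 0x29, and cl_rate_ctrl_parse() recovers nss = 0x29 >> 4 = 2 and mcs = 0x29 & 0xF = 9; HT carries only a 3-bit MCS, hence the << 3 / & 0x7 variants.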
+
+void cl_rate_ctrl_set_default(struct cl_hw *cl_hw)
+{
+ u32 rate_ctrl = 0;
+ union cl_rate_ctrl_info_he rate_ctrl_he;
+
+ /* HE default */
+ rate_ctrl_he.word = 0;
+ rate_ctrl_he.field.spatial_conf = RATE_CNTRL_HE_SPATIAL_CONF_DEF;
+ rate_ctrl = cl_rate_ctrl_generate(cl_hw, NULL, WRS_MODE_HE,
+ 0, 0, 0, 0, false, false);
+
+ cl_msg_tx_update_rate_dl(cl_hw, 0xff, rate_ctrl, 0, 0,
+ RATE_OP_MODE_DEFAULT_HE, 0, 0, LTF_X4, 0, rate_ctrl_he.word);
+
+ /* OFDM default */
+ rate_ctrl = cl_rate_ctrl_generate(cl_hw, NULL, WRS_MODE_OFDM, 0, 0,
+ cl_hw->conf->ce_default_mcs_ofdm, 0, false, false);
+
+ cl_msg_tx_update_rate_dl(cl_hw, 0xff, rate_ctrl, 0, 0,
+ RATE_OP_MODE_DEFAULT_OFDM, 0, 0, 0, 0, 0);
+
+ /* CCK default */
+ if (cl_band_is_24g(cl_hw)) {
+ rate_ctrl = cl_rate_ctrl_generate(cl_hw, NULL, WRS_MODE_CCK, 0, 0,
+ cl_hw->conf->ce_default_mcs_cck, 0, false, false);
+
+ cl_msg_tx_update_rate_dl(cl_hw, 0xff, rate_ctrl, 0, 0,
+ RATE_OP_MODE_DEFAULT_CCK, 0, 0, 0, 0, 0);
+ }
+}
+
+void cl_rate_ctrl_set_default_per_he_minrate(struct cl_hw *cl_hw, u8 bw,
+ u8 nss, u8 mcs, u8 gi)
+{
+ union cl_rate_ctrl_info_he rate_ctrl_he;
+ u32 rate_ctrl = 0;
+ u8 ltf = cl_map_gi_to_ltf(WRS_MODE_HE, gi);
+
+ rate_ctrl_he.word = 0;
+ rate_ctrl_he.field.spatial_conf = RATE_CNTRL_HE_SPATIAL_CONF_DEF;
+ rate_ctrl = cl_rate_ctrl_generate(cl_hw, NULL, WRS_MODE_HE, bw,
+ nss, mcs, gi, false, false);
+
+ cl_msg_tx_update_rate_dl(cl_hw, 0xff, rate_ctrl, 0, 0,
+ RATE_OP_MODE_DEFAULT_HE, 0, 0, ltf,
+ 0, rate_ctrl_he.word);
+
+ cl_msg_tx_update_rate_dl(cl_hw, 0xff, rate_ctrl, 0, 0,
+ RATE_OP_MODE_MCAST, 0, 0, ltf, 0, 0);
+
+ cl_msg_tx_update_rate_dl(cl_hw, 0xff, rate_ctrl, 0, 0,
+ RATE_OP_MODE_BCAST, 0, 0, ltf, 0, 0);
+}
+
+bool cl_rate_ctrl_set_mcast(struct cl_hw *cl_hw, u8 mode, u8 mcs)
+{
+ u32 rate_ctrl_mcast = cl_rate_ctrl_generate(cl_hw, NULL, mode, 0, 0, mcs,
+ WRS_GI_LONG, false, false);
+ u8 ltf = cl_map_gi_to_ltf(mode, WRS_GI_LONG);
+
+ if (cl_msg_tx_update_rate_dl(cl_hw, 0xff, rate_ctrl_mcast, 0, 0,
+ RATE_OP_MODE_MCAST, 0, 0, ltf, 0, 0))
+ return false;
+
+ return true;
+}
+
+static u8 cl_rate_ctrl_get_min(struct cl_hw *cl_hw)
+{
+ if (cl_hw->conf->ci_min_he_en &&
+ cl_hw->wireless_mode == WIRELESS_MODE_HE)
+ return RATE_CTRL_ENTRY_MIN_HE;
+
+ if (cl_hw_mode_is_b_or_bg(cl_hw))
+ return RATE_CTRL_ENTRY_MIN_CCK;
+
+ return RATE_CTRL_ENTRY_MIN_OFDM;
+}
+
+void cl_rate_ctrl_update_desc_single(struct cl_hw *cl_hw, struct tx_host_info *info,
+ struct cl_sw_txhdr *sw_txhdr)
+{
+ struct ieee80211_hdr *mac_hdr = sw_txhdr->hdr80211;
+ struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(sw_txhdr->skb);
+ bool is_data = ieee80211_is_data(sw_txhdr->fc);
+
+ if (sw_txhdr->cl_sta && is_data) {
+ if (cl_tx_ctrl_is_eapol(tx_info)) {
+ info->rate_ctrl_entry = cl_rate_ctrl_get_min(cl_hw);
+ } else {
+ if (cl_hw->entry_fixed_rate)
+ info->rate_ctrl_entry = RATE_CTRL_ENTRY_FIXED_RATE;
+ else
+ info->rate_ctrl_entry = RATE_CTRL_ENTRY_STA;
+ }
+ } else {
+ if (sw_txhdr->is_bcn) {
+ info->rate_ctrl_entry = cl_rate_ctrl_get_min(cl_hw);
+ } else if (is_multicast_ether_addr(mac_hdr->addr1) &&
+ !is_broadcast_ether_addr(mac_hdr->addr1)) {
+ info->rate_ctrl_entry = RATE_CTRL_ENTRY_MCAST;
+ } else if (is_broadcast_ether_addr(mac_hdr->addr1) &&
+ !cl_hw->entry_fixed_rate) {
+ info->rate_ctrl_entry = RATE_CTRL_ENTRY_BCAST;
+ } else {
+ if (cl_hw->entry_fixed_rate && is_data)
+ info->rate_ctrl_entry = RATE_CTRL_ENTRY_FIXED_RATE;
+ else
+ info->rate_ctrl_entry = cl_rate_ctrl_get_min(cl_hw);
+ }
+ }
+}
+
+void cl_rate_ctrl_update_desc_agg(struct cl_hw *cl_hw, struct tx_host_info *info)
+{
+ /* For aggregation there are only two options - STA and FIXED_RATE */
+ if (cl_hw->entry_fixed_rate)
+ info->rate_ctrl_entry = RATE_CTRL_ENTRY_FIXED_RATE;
+ else
+ info->rate_ctrl_entry = RATE_CTRL_ENTRY_STA;
+}
+
+#ifdef CONFIG_CL8K_DYN_BCAST_RATE
+
+/*
+ * MIN_MCS | BCAST_MCS
+ * -------------------
+ * 0 - 1 | 0
+ * 2 - 3 | 1
+ * 4 - 5 | 2
+ * 6 - 7 | 3
+ * 8 - 9 | 4
+ * 10 - 11 | 5
+ */
+
+static u8 conv_min_mcs_to_bcast_mcs[WRS_MCS_MAX] = {
+ 0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5
+};
+
+static void cl_dyn_bcast_rate_update(struct cl_hw *cl_hw, u8 min_mcs)
+{
+ struct cl_dyn_bcast_rate *dyn_bcast_rate = &cl_hw->dyn_bcast_rate;
+ u8 bcast_mcs = conv_min_mcs_to_bcast_mcs[min_mcs];
+
+ dyn_bcast_rate->sta_min_mcs = min_mcs;
+
+ if (bcast_mcs != dyn_bcast_rate->bcast_mcs)
+ cl_dyn_bcast_rate_set(cl_hw, bcast_mcs);
+}
+
+static struct cl_dyn_bcast_rate cl_dyn_bcast_rate_prepare(struct cl_hw *cl_hw)
+{
+ struct cl_dyn_bcast_rate dyn_bcast_rate;
+
+ memset(&dyn_bcast_rate, 0, sizeof(struct cl_dyn_bcast_rate));
+ dyn_bcast_rate.sta_min_mcs = 0;
+ dyn_bcast_rate.bcast_mcs = conv_min_mcs_to_bcast_mcs[0];
+
+ if (cl_band_is_6g(cl_hw)) {
+ dyn_bcast_rate.wrs_mode = cl_hw->conf->ci_min_he_en ?
+ WRS_MODE_HE : WRS_MODE_OFDM;
+ dyn_bcast_rate.ltf = LTF_X4;
+ } else if (cl_band_is_24g(cl_hw) && cl_hw_mode_is_b_or_bg(cl_hw)) {
+ dyn_bcast_rate.wrs_mode = WRS_MODE_CCK;
+ dyn_bcast_rate.ltf = 0;
+ } else {
+ dyn_bcast_rate.wrs_mode = WRS_MODE_OFDM;
+ dyn_bcast_rate.ltf = 0;
+ }
+
+ return dyn_bcast_rate;
+}
+
+void cl_dyn_bcast_rate_init(struct cl_hw *cl_hw)
+{
+ struct cl_dyn_bcast_rate dyn_bcast_rate;
+
+ dyn_bcast_rate = cl_dyn_bcast_rate_prepare(cl_hw);
+ memcpy(&cl_hw->dyn_bcast_rate, &dyn_bcast_rate, sizeof(struct cl_dyn_bcast_rate));
+}
+
+void cl_dyn_bcast_rate_set(struct cl_hw *cl_hw, u8 bcast_mcs)
+{
+ struct cl_dyn_bcast_rate *dyn_bcast_rate = &cl_hw->dyn_bcast_rate;
+ u8 wrs_mode = dyn_bcast_rate->wrs_mode;
+ u8 ltf = dyn_bcast_rate->ltf;
+ u32 rate_ctrl;
+
+ cl_hw->dyn_bcast_rate.bcast_mcs = bcast_mcs;
+
+ rate_ctrl = cl_rate_ctrl_generate(cl_hw, NULL, wrs_mode, 0, 0, bcast_mcs,
+ 0, false, false);
+ cl_msg_tx_update_rate_dl(cl_hw, U8_MAX, rate_ctrl, 0, 0,
+ RATE_OP_MODE_BCAST, 0, 0, ltf, 0, 0);
+
+ cl_dbg_info(cl_hw, "Broadcast MCS set to %u\n", bcast_mcs);
+}
+
+void cl_dyn_bcast_update(struct cl_hw *cl_hw)
+{
+ struct cl_dyn_bcast_rate dyn_bcast_rate;
+
+ dyn_bcast_rate = cl_dyn_bcast_rate_prepare(cl_hw);
+ if (cl_hw->dyn_bcast_rate.wrs_mode == dyn_bcast_rate.wrs_mode)
+ return;
+
+ memcpy(&cl_hw->dyn_bcast_rate, &dyn_bcast_rate, sizeof(struct cl_dyn_bcast_rate));
+ cl_dyn_bcast_rate_set(cl_hw, cl_hw->dyn_bcast_rate.bcast_mcs);
+}
+
+void cl_dyn_bcast_rate_recovery(struct cl_hw *cl_hw)
+{
+ cl_dyn_bcast_rate_set(cl_hw, cl_hw->dyn_bcast_rate.bcast_mcs);
+}
+
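+/* Recompute the broadcast MCS when the TX MCS of a station changes. */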
+void cl_dyn_bcast_rate_change(struct cl_hw *cl_hw, struct cl_sta *cl_sta_change,
+ u8 old_mcs, u8 new_mcs)
+{
+ struct cl_dyn_bcast_rate *dyn_bcast_rate = &cl_hw->dyn_bcast_rate;
+ struct cl_sta *cl_sta = NULL;
+ u8 min_mcs = WRS_MCS_MAX - 1;
+ u8 sta_mcs = 0;
+
+ if (!cl_hw->conf->ce_dyn_bcast_rate_en)
+ return;
+
+ if (!cl_sta_change->add_complete)
+ return;
+
+ /* Single station */
+ if (cl_sta_num_bh(cl_hw) == 1) {
+ cl_dyn_bcast_rate_update(cl_hw, new_mcs);
+ return;
+ }
+
+	/*
+	 * If this station did not have the minimum MCS, and its new MCS
+	 * is still above the minimum, there is nothing to do.
+	 */
+ if (old_mcs > dyn_bcast_rate->sta_min_mcs &&
+ new_mcs > dyn_bcast_rate->sta_min_mcs)
+ return;
+
+ /* Multi station - find new minimum MCS of all stations */
+ cl_sta_lock_bh(cl_hw);
+
+ list_for_each_entry(cl_sta, &cl_hw->cl_sta_db.head, list) {
+ sta_mcs = (cl_sta->sta_idx == cl_sta_change->sta_idx) ?
+ new_mcs : cl_sta->wrs_sta.tx_su_params.rate_params.mcs;
+
+ if (sta_mcs < min_mcs) {
+ min_mcs = sta_mcs;
+
+ if (min_mcs == 0)
+ break;
+ }
+ }
+
+ cl_sta_unlock_bh(cl_hw);
+
+ cl_dyn_bcast_rate_update(cl_hw, min_mcs);
+}
+
+void cl_dyn_bcast_rate_update_upon_assoc(struct cl_hw *cl_hw, u8 mcs, u8 num_sta)
+{
+ struct cl_dyn_bcast_rate *dyn_bcast_rate = &cl_hw->dyn_bcast_rate;
+
+ if (!cl_hw->conf->ce_dyn_bcast_rate_en)
+ return;
+
+ if (num_sta == 1 || mcs < dyn_bcast_rate->sta_min_mcs)
+ cl_dyn_bcast_rate_update(cl_hw, mcs);
+}
+
+void cl_dyn_bcast_rate_update_upon_disassoc(struct cl_hw *cl_hw, u8 mcs, u8 num_sta)
+{
+ struct cl_dyn_bcast_rate *dyn_bcast_rate = &cl_hw->dyn_bcast_rate;
+ struct cl_sta *cl_sta = NULL;
+ u8 min_mcs = WRS_MCS_MAX - 1;
+
+ if (!cl_hw->conf->ce_dyn_bcast_rate_en)
+ return;
+
+ /* When the last station disconnects - set bcast back to 0 */
+ if (num_sta == 0) {
+ cl_dyn_bcast_rate_update(cl_hw, 0);
+ return;
+ }
+
+ /* If this station did not have the minimum rate there is nothing to do */
+ if (mcs > dyn_bcast_rate->sta_min_mcs)
+ return;
+
+	/*
+	 * Find the new minimum MCS of all stations (the disassociating
+	 * station is not in the list at this stage).
+	 */
+ cl_sta_lock_bh(cl_hw);
+
+ list_for_each_entry(cl_sta, &cl_hw->cl_sta_db.head, list) {
+ if (cl_sta->wrs_sta.tx_su_params.rate_params.mcs < min_mcs) {
+ min_mcs = cl_sta->wrs_sta.tx_su_params.rate_params.mcs;
+
+ if (min_mcs == 0)
+ break;
+ }
+ }
+
+ cl_sta_unlock_bh(cl_hw);
+
+ cl_dyn_bcast_rate_update(cl_hw, min_mcs);
+}
+#endif /* CONFIG_CL8K_DYN_BCAST_RATE */
+#ifdef CONFIG_CL8K_DYN_MCAST_RATE
+
+static void _cl_dyn_mcast_rate_send(struct cl_hw *cl_hw, u8 wrs_mode_new)
+{
+ struct cl_dyn_mcast_rate *dyn_mcast_rate = &cl_hw->dyn_mcast_rate;
+
+ if (dyn_mcast_rate->wrs_mode_curr == wrs_mode_new)
+ return;
+
+ if (!cl_rate_ctrl_set_mcast(cl_hw, wrs_mode_new, cl_hw->conf->ce_mcast_rate))
+ return;
+
+ dyn_mcast_rate->wrs_mode_curr = wrs_mode_new;
+ cl_dbg_trace(cl_hw, "New multicast mode = %u\n", wrs_mode_new);
+}
+
+static struct cl_dyn_mcast_rate cl_dyn_mcast_rate_prepare(struct cl_hw *cl_hw)
+{
+ struct cl_dyn_mcast_rate dyn_mcast_rate;
+
+ memset(&dyn_mcast_rate, 0, sizeof(struct cl_dyn_mcast_rate));
+ if (cl_hw->conf->ci_min_he_en &&
+ cl_hw->wireless_mode == WIRELESS_MODE_HE)
+ dyn_mcast_rate.wrs_mode_default = WRS_MODE_HE;
+ else if (cl_band_is_24g(cl_hw) && cl_hw_mode_is_b_or_bg(cl_hw))
+ dyn_mcast_rate.wrs_mode_default = WRS_MODE_CCK;
+ else
+ dyn_mcast_rate.wrs_mode_default = WRS_MODE_OFDM;
+
+ return dyn_mcast_rate;
+}
+
+void cl_dyn_mcast_rate_init(struct cl_hw *cl_hw)
+{
+ struct cl_dyn_mcast_rate dyn_mcast_rate;
+
+ dyn_mcast_rate = cl_dyn_mcast_rate_prepare(cl_hw);
+ cl_hw->dyn_mcast_rate.wrs_mode_default = dyn_mcast_rate.wrs_mode_default;
+
+ cl_dbg_trace(cl_hw, "mode = %u, mcs = %u\n",
+ dyn_mcast_rate.wrs_mode_default, cl_hw->conf->ce_mcast_rate);
+}
+
+void cl_dyn_mcast_rate_set(struct cl_hw *cl_hw)
+{
+ /*
+ * Set wrs_mode_curr to 0xff so that the message will be sent to
+ * firmware when this function is called from cl_ops_start()
+ */
+ struct cl_dyn_mcast_rate *dyn_mcast_rate = &cl_hw->dyn_mcast_rate;
+
+ dyn_mcast_rate->wrs_mode_curr = U8_MAX;
+
+ _cl_dyn_mcast_rate_send(cl_hw, dyn_mcast_rate->wrs_mode_default);
+}
+
+void cl_dyn_mcast_update(struct cl_hw *cl_hw)
+{
+ struct cl_dyn_mcast_rate dyn_mcast_rate;
+
+ dyn_mcast_rate = cl_dyn_mcast_rate_prepare(cl_hw);
+ if (cl_hw->dyn_mcast_rate.wrs_mode_default == dyn_mcast_rate.wrs_mode_default)
+ return;
+
+ cl_hw->dyn_mcast_rate.wrs_mode_default = dyn_mcast_rate.wrs_mode_default;
+ cl_dyn_mcast_rate_set(cl_hw);
+}
+
+void cl_dyn_mcast_rate_recovery(struct cl_hw *cl_hw)
+{
+	/*
+	 * cl_dyn_mcast_rate_recovery() is called during the recovery process.
+	 * Reset wrs_mode_curr so that the message will be sent.
+	 */
+ struct cl_dyn_mcast_rate *dyn_mcast_rate = &cl_hw->dyn_mcast_rate;
+ u8 wrs_mode_curr = dyn_mcast_rate->wrs_mode_curr;
+
+ dyn_mcast_rate->wrs_mode_curr = U8_MAX;
+
+ _cl_dyn_mcast_rate_send(cl_hw, wrs_mode_curr);
+}
+
+void cl_dyn_mcast_rate_update_upon_assoc(struct cl_hw *cl_hw, u8 wrs_mode, u8 num_sta)
+{
+ struct cl_dyn_mcast_rate *dyn_mcast_rate = &cl_hw->dyn_mcast_rate;
+
+ if (!cl_hw->conf->ce_dyn_mcast_rate_en)
+ return;
+
+ /*
+ * If the wrs_mode of the new station is lower than the current multicast
+ * wrs_mode, or if this is the first station to connect - update multicast mode
+ */
+ if (wrs_mode < dyn_mcast_rate->wrs_mode_curr || num_sta == 1)
+ _cl_dyn_mcast_rate_send(cl_hw, wrs_mode);
+}
+
+void cl_dyn_mcast_rate_update_upon_disassoc(struct cl_hw *cl_hw, u8 wrs_mode, u8 num_sta)
+{
+ struct cl_dyn_mcast_rate *dyn_mcast_rate = &cl_hw->dyn_mcast_rate;
+ struct cl_sta *cl_sta = NULL;
+ u8 wrs_mode_min = WRS_MODE_HE;
+
+ if (!cl_hw->conf->ce_dyn_mcast_rate_en)
+ return;
+
+ /* When the last station disconnects - set default mcast rate */
+ if (num_sta == 0) {
+ _cl_dyn_mcast_rate_send(cl_hw, dyn_mcast_rate->wrs_mode_default);
+ return;
+ }
+
+	/*
+	 * If the wrs_mode of the disassociating station is higher
+	 * than the current mode, there is nothing to update.
+	 */
+ if (wrs_mode > dyn_mcast_rate->wrs_mode_curr)
+ return;
+
+	/*
+	 * Find the minimal wrs_mode among the connected stations (the
+	 * disassociating station is not in the list at this stage).
+	 */
+ cl_sta_lock_bh(cl_hw);
+
+ list_for_each_entry(cl_sta, &cl_hw->cl_sta_db.head, list)
+ if (cl_sta->wrs_sta.mode < wrs_mode_min)
+ wrs_mode_min = cl_sta->wrs_sta.mode;
+
+ cl_sta_unlock_bh(cl_hw);
+
+ _cl_dyn_mcast_rate_send(cl_hw, wrs_mode_min);
+}
+#endif /* CONFIG_CL8K_DYN_MCAST_RATE */
--
2.36.1


2022-05-24 15:16:39

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 81/96] cl8k: add temperature.h

From: Viktor Barna <[email protected]>

(Part of the split. Please, take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
.../net/wireless/celeno/cl8k/temperature.h | 71 +++++++++++++++++++
1 file changed, 71 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/temperature.h

diff --git a/drivers/net/wireless/celeno/cl8k/temperature.h b/drivers/net/wireless/celeno/cl8k/temperature.h
new file mode 100644
index 000000000000..e5ab770199e8
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/temperature.h
@@ -0,0 +1,71 @@
+/* SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause */
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#ifndef CL_TEMPERATURE_H
+#define CL_TEMPERATURE_H
+
+#include <net/mac80211.h>
+
+#define CL_TEMP_PROTECT_INTERVAL_MS 40000
+#define CL_TEMP_PROTECT_NUM_SAMPLES 4
+#define CL_TEMP_PROTECT_RADIO_OFF_HYST 10
+#define CL_TEMP_COMP_ITERATIONS 4
+#define CL_TEMPERATURE_TIMER_INTERVAL_MS 4000
+#define CL_TEMPERATURE_UPDATE_INTERVAL_MS (CL_TEMPERATURE_TIMER_INTERVAL_MS - 100)
+
+enum cl_temp_state {
+ TEMP_PROTECT_OFF,
+ TEMP_PROTECT_INTERNAL,
+ TEMP_PROTECT_EXTERNAL,
+ TEMP_PROTECT_DIFF
+};
+
+enum cl_temp_mode {
+ TEMP_MODE_INTERNAL,
+ TEMP_MODE_EXTERNAL
+};
+
+struct cl_temp_comp_db {
+ s8 calib_temperature;
+ s8 power_offset;
+ s32 acc_temp_delta;
+ s32 avg_temp_delta;
+};
+
+struct cl_temp_protect_db {
+ bool force_radio_off;
+ u8 duty_cycle;
+ u8 test_mode_duty_cycle;
+ u8 curr_idx;
+ s16 last_samples[CL_TEMP_PROTECT_NUM_SAMPLES];
+ unsigned long last_timestamp;
+};
+
+struct cl_temperature {
+ s8 diff_internal_external;
+ u8 comp_iterations;
+ struct cl_temp_protect_db protect_db;
+ struct task_struct *kthread;
+ wait_queue_head_t wait_done;
+ wait_queue_head_t measurement_done;
+ s16 internal_last;
+ s16 external_last;
+ unsigned long internal_read_timestamp;
+ unsigned long external_read_timestamp;
+ struct mutex mutex;
+ struct mutex hw_lock;
+ unsigned long used_hw_map;
+};
+
+struct cl_chip;
+
+void cl_temperature_init(struct cl_chip *chip);
+void cl_temperature_close(struct cl_chip *chip);
+s8 cl_temperature_read(struct cl_hw *cl_hw, enum cl_temp_mode mode);
+void cl_temperature_recovery(struct cl_hw *cl_hw);
+int cl_temperature_diff_e2p_read(struct cl_hw *cl_hw);
+s16 cl_temperature_calib_calc(struct cl_hw *cl_hw, u16 raw_bits);
+void cl_temperature_comp_update_calib(struct cl_hw *cl_hw);
+void cl_temperature_wait_for_measurement(struct cl_chip *chip, u8 tcv_idx);
+
+#endif /* CL_TEMPERATURE_H */
--
2.36.1


2022-05-24 15:21:43

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 08/96] cl8k: add bf.h

From: Viktor Barna <[email protected]>

(Part of the split. Please, take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/bf.h | 52 +++++++++++++++++++++++++++
1 file changed, 52 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/bf.h

diff --git a/drivers/net/wireless/celeno/cl8k/bf.h b/drivers/net/wireless/celeno/cl8k/bf.h
new file mode 100644
index 000000000000..efe433f55f7f
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/bf.h
@@ -0,0 +1,52 @@
+/* SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause */
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#ifndef CL_BF_H
+#define CL_BF_H
+
+#include "debug.h"
+#include "sounding.h"
+
+/*
+ * BF (=BeamForming, 802.11)
+ */
+
+struct cl_bf_db {
+ bool enable;
+ bool force;
+ enum cl_dbg_level dbg_level;
+};
+
+struct cl_bf_sta_db {
+ bool traffic_active;
+ bool sounding_start;
+ bool sounding_remove_required;
+ bool indication_timeout;
+ bool synced;
+ bool is_on;
+ bool is_on_fallback;
+ u8 num_ss;
+ u8 num_ss_fallback;
+ u8 beamformee_sts;
+ u8 nc;
+ u32 sounding_indications;
+ struct timer_list timer;
+};
+
+void cl_bf_init(struct cl_hw *cl_hw);
+void cl_bf_update_rate(struct cl_hw *cl_hw, struct cl_sta *cl_sta);
+void cl_bf_sta_add(struct cl_hw *cl_hw, struct cl_sta *cl_sta, struct ieee80211_sta *sta);
+void cl_bf_sta_remove(struct cl_hw *cl_hw, struct cl_sta *cl_sta);
+void cl_bf_sta_active(struct cl_hw *cl_hw, struct cl_sta *cl_sta, bool active);
+void cl_bf_reset_sounding_ind(struct cl_hw *cl_hw, struct cl_sta *cl_sta);
+bool cl_bf_is_enabled(struct cl_hw *cl_hw);
+bool cl_bf_is_on(struct cl_hw *cl_hw, struct cl_sta *cl_sta, u8 nss);
+void cl_bf_enable(struct cl_hw *cl_hw, bool enable, bool trigger_decision);
+void cl_bf_sounding_start(struct cl_hw *cl_hw, enum sounding_type type, struct cl_sta **cl_sta_arr,
+ u8 sta_num, struct cl_sounding_info *recovery_elem);
+void cl_bf_sounding_stop(struct cl_hw *cl_hw, struct cl_sta *cl_sta);
+void cl_bf_sounding_decision(struct cl_hw *cl_hw, struct cl_sta *cl_sta);
+void cl_bf_sounding_req_success(struct cl_hw *cl_hw, struct cl_sounding_info *new_elem);
+void cl_bf_sounding_req_failure(struct cl_hw *cl_hw, struct cl_sounding_info *new_elem);
+
+#endif /* CL_BF_H */
--
2.36.1


2022-05-24 15:22:36

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 69/96] cl8k: add rx.h

From: Viktor Barna <[email protected]>

(Part of the split. Please, take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/rx.h | 505 ++++++++++++++++++++++++++
1 file changed, 505 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/rx.h

diff --git a/drivers/net/wireless/celeno/cl8k/rx.h b/drivers/net/wireless/celeno/cl8k/rx.h
new file mode 100644
index 000000000000..f460c3c37475
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/rx.h
@@ -0,0 +1,505 @@
+/* SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause */
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#ifndef CL_RX_H
+#define CL_RX_H
+
+#include <linux/skbuff.h>
+#include <net/mac80211.h>
+
+#include "ipc_shared.h"
+
+/* Decryption status subfields */
+enum cl_rx_hdr_decr {
+ CL_RX_HDR_DECR_UNENC,
+ CL_RX_HDR_DECR_ICVFAIL,
+ CL_RX_HDR_DECR_CCMPFAIL,
+ CL_RX_HDR_DECR_AMSDUDISCARD,
+ CL_RX_HDR_DECR_NULLKEY,
+ CL_RX_HDR_DECR_WEPSUCCESS,
+ CL_RX_HDR_DECR_TKIPSUCCESS,
+ CL_RX_HDR_DECR_CCMPSUCCESS
+};
+
+#define CL_PADDING_IN_BYTES 2
+
+struct hw_rxhdr {
+#if defined(__LITTLE_ENDIAN_BITFIELD)
+ u32 msdu_cnt : 8, /* [7:0] */
+ corrupted_amsdu : 1, /* [8] */
+ reserved : 1, /* [9] */
+ msdu_dma_align : 2, /* [11:10] */
+ amsdu_error_code : 4, /* [15:12] */
+ reserved_2 :16; /* [31:16] */
+#else
+ u32 reserved_2 :16, /* [15:0] */
+ amsdu_error_code : 4, /* [19:16] */
+ msdu_dma_align : 2, /* [21:20] */
+ reserved : 1, /* [22] */
+ corrupted_amsdu : 1, /* [23] */
+ msdu_cnt : 8; /* [31:24] */
+#endif
+ /* Total length for the MPDU transfer */
+#if defined(__LITTLE_ENDIAN_BITFIELD)
+ u32 len :14, /* [13:0] */
+ ampdu_cnt : 2, /* [15:14] */
+ rx_padding_done : 1, /* [16] */
+ rx_class_rule_res : 7, /* [23:17] */
+ /* AMPDU Status Information */
+ mpdu_cnt : 8; /* [31:24] */
+#else
+ u32 mpdu_cnt : 8, /* [7:0] */
+ rx_class_rule_res : 7, /* [14:8] */
+ rx_padding_done : 1, /* [15] */
+ ampdu_cnt : 2, /* [17:16] */
+ len :14; /* [31:18] */
+#endif
+
+ /* TSF Low */
+ __le32 tsf_lo;
+ /* TSF High */
+ __le32 tsf_hi;
+
+ /* Receive Vector 1a */
+#if defined(__LITTLE_ENDIAN_BITFIELD)
+ u32 leg_length :12, /* [11:0] */
+ leg_rate : 4, /* [15:12] */
+ ht_length_l :16; /* [31:16] */
+#else
+ u32 ht_length_l :16, /* [15:0] */
+ leg_rate :4, /* [19:16] */
+ leg_length :12; /* [31:20] */
+#endif
+
+ /* Receive Vector 1b */
+#if defined(__LITTLE_ENDIAN_BITFIELD)
+ u32 ht_length_h : 8, /* [7:0] */
+ mcs : 7, /* [14:8] */
+ pre_type : 1, /* [15] */
+ format_mod : 4, /* [19:16] */
+ reserved_1b : 1, /* [20] */
+ n_sts : 3, /* [23:21] */
+ lsig_valid : 1, /* [24] */
+ sounding : 1, /* [25] */
+ num_extn_ss : 2, /* [27:26] */
+ aggregation : 1, /* [28] */
+ fec_coding : 1, /* [29] */
+ dyn_bw : 1, /* [30] */
+ doze_not_allowed : 1; /* [31] */
+#else
+ u32 doze_not_allowed : 1, /* [0] */
+ dyn_bw : 1, /* [1] */
+ fec_coding : 1, /* [2] */
+ aggregation : 1, /* [3] */
+ num_extn_ss : 2, /* [5:4] */
+ sounding : 1, /* [6] */
+ lsig_valid : 1, /* [7] */
+ n_sts : 3, /* [10:8] */
+ reserved_1b : 1, /* [11] */
+ format_mod : 4, /* [15:12] */
+ pre_type : 1, /* [16] */
+ mcs : 7, /* [23:17] */
+ ht_length_h : 8; /* [31:24] */
+#endif
+
+ /* Receive Vector 1c */
+#if defined(__LITTLE_ENDIAN_BITFIELD)
+ u32 sn : 4, /* [3:0] */
+ warn_1c : 1, /* [4] */
+ stbc : 2, /* [6:5] */
+ smoothing : 1, /* [7] */
+ partial_aid : 9, /* [16:8] */
+ group_id : 6, /* [22:17] */
+ reserved_1c : 1, /* [23] */
+ rssi1 : 8; /* [31:24] */
+#else
+ u32 rssi1 : 8, /* [7:0] */
+ reserved_1c : 1, /* [8] */
+ group_id : 6, /* [14:9] */
+ partial_aid : 9, /* [23:15] */
+ smoothing : 1, /* [24] */
+ stbc : 2, /* [26:25] */
+ warn_1c : 1, /* [27] */
+ sn : 4; /* [31:28] */
+#endif
+
+ /* Receive Vector 1d */
+#if defined(__LITTLE_ENDIAN_BITFIELD)
+ s32 rssi2 : 8, /* [7:0] */
+ rssi3 : 8, /* [15:8] */
+ rssi4 : 8, /* [23:16] */
+ rx_chains : 8; /* [31:24] */
+#else
+ s32 rx_chains : 8, /* [7:0] */
+ rssi4 : 8, /* [15:8] */
+ rssi3 : 8, /* [23:16] */
+ rssi2 : 8; /* [31:24] */
+#endif
+
+ /* Receive Vector 1e */
+#if defined(__LITTLE_ENDIAN_BITFIELD)
+ u32 txop_duration : 7, /* [6:0] */
+ beam_change : 1, /* [7] */
+ mcs_sig_b : 3, /* [10:8] */
+ dcm : 1, /* [11] */
+ dcm_sig_b : 1, /* [12] */
+ beamformed : 1, /* [13] */
+ ltf_type : 2, /* [15:14] */
+ ru_band : 1, /* [16] */
+ ru_type : 3, /* [19:17] */
+ ru_index : 6, /* [25:20] */
+ pe_duration : 3, /* [28:26] */
+ ch_bw : 2, /* [30:29] */
+ reserved_1e : 1; /* [31] */
+#else
+ u32 reserved_1e : 1, /* [0] */
+ ch_bw : 2, /* [2:1] */
+ pe_duration : 3, /* [5:3] */
+ ru_index : 6, /* [11:6] */
+ ru_type : 3, /* [14:12] */
+ ru_band : 1, /* [15] */
+ ltf_type : 2, /* [17:16] */
+ beamformed : 1, /* [18] */
+ dcm_sig_b : 1, /* [19] */
+ dcm : 1, /* [20] */
+ mcs_sig_b : 3, /* [23:21] */
+ beam_change : 1, /* [24] */
+ txop_duration : 7; /* [31:25] */
+#endif
+
+ /* Receive Vector 1f */
+#if defined(__LITTLE_ENDIAN_BITFIELD)
+ u32 spatial_reuse : 16, /* [15:0] */
+ service : 16; /* [31:16] */
+#else
+ u32 service : 16, /* [15:0] */
+ spatial_reuse : 16; /* [31:16] */
+#endif
+
+ /* Receive Vector 1g */
+#if defined(__LITTLE_ENDIAN_BITFIELD)
+ u32 bss_color : 6, /* [5:0] */
+ gi_type : 2, /* [7:6] */
+ antenna_set : 16, /* [23:8] */
+ rssi5 : 8; /* [31:24] */
+#else
+ u32 rssi5 : 8, /* [7:0] */
+ antenna_set : 16, /* [23:8] */
+ gi_type : 2, /* [25:24] */
+ bss_color : 6; /* [31:26] */
+#endif
+
+ /* Receive Vector 1h */
+#if defined(__LITTLE_ENDIAN_BITFIELD)
+ s32 rssi6 : 8, /* [7:0] */
+ rssi7 : 8, /* [15:8] */
+ rssi8 : 8, /* [23:16] */
+ future_1 : 8; /* [31:24] */
+#else
+ s32 future_1 : 8, /* [7:0] */
+ rssi8 : 8, /* [15:8] */
+ rssi7 : 8, /* [23:16] */
+ rssi6 : 8; /* [31:24] */
+#endif
+
+ /* Receive Vector 2a */
+#if defined(__LITTLE_ENDIAN_BITFIELD)
+ u32 rcpi : 8, /* [7:0] */
+ evm1 : 8, /* [15:8] */
+ evm2 : 8, /* [23:16] */
+ evm3 : 8; /* [31:24] */
+#else
+ u32 evm3 : 8, /* [7:0] */
+ evm2 : 8, /* [15:8] */
+ evm1 : 8, /* [23:16] */
+ rcpi : 8; /* [31:24] */
+#endif
+
+ /* Receive Vector 2b */
+#if defined(__LITTLE_ENDIAN_BITFIELD)
+ u32 evm4 : 8, /* [7:0] */
+ warn_2b : 1, /* [8] */
+ reserved2b_1 : 7, /* [15:9] */
+ reserved2b_2 : 8, /* [23:16] */
+ reserved2b_3 : 8; /* [31:24] */
+#else
+ u32 reserved2b_3 : 8, /* [7:0] */
+ reserved2b_2 : 8, /* [15:8] */
+ reserved2b_1 : 7, /* [22:16] */
+ warn_2b : 1, /* [23] */
+ evm4 : 8; /* [31:24] */
+#endif
+
+	/* Status */
+#if defined(__LITTLE_ENDIAN_BITFIELD)
+ u32 rx_vect2_valid : 1, /* [0] */
+ resp_frame : 1, /* [1] */
+ decr_status : 3, /* [4:2] */
+ rx_fifo_oflow : 1, /* [5] */
+ undef_err : 1, /* [6] */
+ phy_err : 1, /* [7] */
+ fcs_err : 1, /* [8] */
+ addr_mismatch : 1, /* [9] */
+ ga_frame : 1, /* [10] */
+ first_qos_data : 1, /* [11] */
+ amsdu_present : 1, /* [12] */
+ frm_successful_rx : 1, /* [13] */
+ desc_done_rx : 1, /* [14] */
+ desc_spare : 1, /* [15] */
+ key_sram_index : 9, /* [24:16] */
+ key_sram_v : 1, /* [25] */
+ type : 2, /* [27:26] */
+ subtype : 4; /* [31:28] */
+#else
+ u32 subtype : 4, /* [3:0] */
+ type : 2, /* [5:4] */
+ key_sram_v : 1, /* [6] */
+ key_sram_index : 9, /* [15:7] */
+ desc_spare : 1, /* [16] */
+ desc_done_rx : 1, /* [17] */
+ frm_successful_rx : 1, /* [18] */
+ amsdu_present : 1, /* [19] */
+ first_qos_data : 1, /* [20] */
+ ga_frame : 1, /* [21] */
+ addr_mismatch : 1, /* [22] */
+ fcs_err : 1, /* [23] */
+ phy_err : 1, /* [24] */
+ undef_err : 1, /* [25] */
+ rx_fifo_oflow : 1, /* [26] */
+ decr_status : 3, /* [29:27] */
+ resp_frame : 1, /* [30] */
+ rx_vect2_valid : 1; /* [31] */
+#endif
+
+ /* PHY channel information 1 */
+#if defined(__LITTLE_ENDIAN_BITFIELD)
+ u32 phy_band : 8, /* [7:0] */
+ phy_channel_type : 8, /* [15:8] */
+ phy_prim20_freq : 16; /* [31:16] */
+#else
+ u32 phy_prim20_freq : 16, /* [15:0] */
+ phy_channel_type : 8, /* [23:16] */
+ phy_band : 8; /* [31:24] */
+#endif
+
+ /* PHY channel information 2 */
+#if defined(__LITTLE_ENDIAN_BITFIELD)
+ u32 phy_center1_freq : 16, /* [15:0] */
+ phy_center2_freq : 16; /* [31:16] */
+#else
+ u32 phy_center2_freq : 16, /* [15:0] */
+ phy_center1_freq : 16; /* [31:16] */
+#endif
+
+	/* Pattern */
+ __le32 pattern;
+};
+
+enum cl_radio_stats {
+ CL_RADIO_FCS_ERROR = 0,
+ CL_RADIO_PHY_ERROR,
+ CL_RADIO_RX_FIFO_OVERFLOW,
+ CL_RADIO_ADDRESS_MISMATCH,
+ CL_RADIO_UNDEFINED_ERROR,
+ CL_RADIO_ERRORS_MAX
+};
+
+struct cl_rx_path_info {
+ u32 rx_desc[CL_RX_BUF_MAX];
+ u32 netif_rx;
+ u32 elem_alloc_fail;
+ u32 skb_null;
+ u32 pkt_drop_amsdu_corrupted;
+ u32 pkt_drop_sub_amsdu_corrupted;
+ u32 pkt_drop_amsdu_len_error;
+ u32 pkt_drop_sub_amsdu_len_error;
+ u32 pkt_drop_wrong_pattern;
+ u32 pkt_drop_not_success;
+ u32 pkt_drop_sub_amsdu_not_success;
+ u32 pkt_drop_unencrypted;
+ u32 pkt_drop_sub_amsdu_unencrypted;
+ u32 pkt_drop_decrypt_fail;
+ u32 pkt_drop_sub_amsdu_decrypt_fail;
+ u32 pkt_drop_tailroom_error;
+ u32 pkt_drop_sub_amsdu_tailroom_error;
+ u32 pkt_drop_amsdu_inj_attack;
+ u32 pkt_drop_sta_null;
+ u32 pkt_drop_host_limit;
+ u32 pkt_drop_invalid_pn;
+ u32 remote_cpu[CPU_MAX_NUM];
+ u32 exceed_pkt_budget;
+ u32 pkt_handle_bucket_rxm[IPC_RXBUF_NUM_BUCKETS_RXM];
+ u32 pkt_handle_bucket_fw[IPC_RXBUF_NUM_BUCKETS_FW];
+ u32 amsdu_cnt[RX_MAX_MSDU_IN_AMSDU];
+ u32 non_amsdu;
+ u32 buffer_process_irq;
+ u32 buffer_process_tasklet;
+};
+
+struct cl_hw;
+struct cl_vif;
+struct cl_sta;
+struct mm_agg_rx_ind;
+
+void cl_rx_init(struct cl_hw *cl_hw);
+void cl_rx_off(struct cl_hw *cl_hw);
+void cl_rx_remote_tasklet_sched(void *t);
+void cl_rx_remote_cpu_info(struct cl_hw *cl_hw);
+void cl_rx_push_queue(struct cl_hw *cl_hw, struct sk_buff *skb);
+void cl_rx_skb_alloc_handler(struct sk_buff *skb);
+void cl_rx_skb_error(struct cl_hw *cl_hw);
+void cl_rx_skb_drop(struct cl_hw *cl_hw, struct sk_buff *skb, u8 cnt);
+void cl_rx_post_recovery(struct cl_hw *cl_hw);
+void cl_rx_finish(struct cl_hw *cl_hw, struct sk_buff *skb, struct ieee80211_sta *sta);
+u8 cl_rx_get_skb_ac(struct ieee80211_hdr *hdr);
+bool cl_rx_process_in_irq(struct cl_hw *cl_hw);
+void cl_agg_rx_report_handler(struct cl_hw *cl_hw, struct cl_sta *cl_sta, u8 sta_loc,
+ struct mm_agg_rx_ind *agg_report);
+
+enum rx_amsdu_error {
+ RX_AMSDU_ERR_CORRUPTED = 0x1,
+ RX_AMSDU_ERR_LENGTH = 0x2,
+ RX_AMSDU_ERR_NOT_SUCCESS = 0x4,
+ RX_AMSDU_ERR_UNENCRYPTED = 0x8,
+ RX_AMSDU_ERR_DECRYPT_FAIL = 0x10,
+ RX_AMSDU_ERR_INVALID_TAILROOM = 0x20,
+};
+
+struct cl_amsdu_rx_state {
+ u8 msdu_cnt;
+ u8 msdu_remaining_cnt;
+	/*
+	 * MSDU padding - all MSDU packets within an A-MSDU are 4-byte
+	 * aligned (this value holds the alignment value).
+	 * According to the IEEE 802.11 spec all MSDUs share the same alignment.
+	 */
+ u8 msdu_dma_align;
+ u8 amsdu_error;
+ u8 encrypt_len;
+ u8 sta_idx;
+ u8 tid;
+ u32 packet_len;
+ struct hw_rxhdr *rxhdr;
+ struct sk_buff *first_skb;
+ struct sk_buff_head frames;
+};
+
+void cl_rx_amsdu_first(struct cl_hw *cl_hw, struct sk_buff *skb, struct hw_rxhdr *rxhdr,
+ u8 sta_idx, u8 tid, u8 encrypt_len);
+bool cl_rx_amsdu_sub(struct cl_hw *cl_hw, struct sk_buff *skb);
+bool cl_rx_amsdu_check_aggregation_attack(struct cl_amsdu_rx_state *amsdu_rx_state);
+void cl_rx_amsdu_first_corrupted(struct cl_hw *cl_hw, struct sk_buff *skb,
+ struct hw_rxhdr *rxhdr);
+void cl_rx_amsdu_sub_error(struct cl_hw *cl_hw, struct sk_buff *skb);
+void cl_rx_amsdu_set_state_error(struct cl_hw *cl_hw,
+ struct hw_rxhdr *rxhdr,
+ enum rx_amsdu_error err);
+void cl_rx_amsdu_reset(struct cl_hw *cl_hw);
+void cl_rx_amsdu_stats(struct cl_hw *cl_hw, u8 msdu_cnt);
+void cl_rx_amsdu_hw_en(struct ieee80211_hw *hw, bool rxamsdu_en);
+
+/* Field definitions */
+#define RX_CNTRL_EN_DUPLICATE_DETECTION_BIT ((u32)0x80000000)
+#define RX_CNTRL_EN_DUPLICATE_DETECTION_POS 31
+#define RX_CNTRL_ACCEPT_UNKNOWN_BIT ((u32)0x40000000)
+#define RX_CNTRL_ACCEPT_UNKNOWN_POS 30
+#define RX_CNTRL_ACCEPT_OTHER_DATA_FRAMES_BIT ((u32)0x20000000)
+#define RX_CNTRL_ACCEPT_OTHER_DATA_FRAMES_POS 29
+#define RX_CNTRL_ACCEPT_QO_S_NULL_BIT ((u32)0x10000000)
+#define RX_CNTRL_ACCEPT_QO_S_NULL_POS 28
+#define RX_CNTRL_ACCEPT_QCFWO_DATA_BIT ((u32)0x08000000)
+#define RX_CNTRL_ACCEPT_QCFWO_DATA_POS 27
+#define RX_CNTRL_ACCEPT_Q_DATA_BIT ((u32)0x04000000)
+#define RX_CNTRL_ACCEPT_Q_DATA_POS 26
+#define RX_CNTRL_ACCEPT_CFWO_DATA_BIT ((u32)0x02000000)
+#define RX_CNTRL_ACCEPT_CFWO_DATA_POS 25
+#define RX_CNTRL_ACCEPT_DATA_BIT ((u32)0x01000000)
+#define RX_CNTRL_ACCEPT_DATA_POS 24
+#define RX_CNTRL_ACCEPT_OTHER_CNTRL_FRAMES_BIT ((u32)0x00800000)
+#define RX_CNTRL_ACCEPT_OTHER_CNTRL_FRAMES_POS 23
+#define RX_CNTRL_ACCEPT_CF_END_BIT ((u32)0x00400000)
+#define RX_CNTRL_ACCEPT_CF_END_POS 22
+#define RX_CNTRL_ACCEPT_ACK_BIT ((u32)0x00200000)
+#define RX_CNTRL_ACCEPT_ACK_POS 21
+#define RX_CNTRL_ACCEPT_CTS_BIT ((u32)0x00100000)
+#define RX_CNTRL_ACCEPT_CTS_POS 20
+#define RX_CNTRL_ACCEPT_RTS_BIT ((u32)0x00080000)
+#define RX_CNTRL_ACCEPT_RTS_POS 19
+#define RX_CNTRL_ACCEPT_PS_POLL_BIT ((u32)0x00040000)
+#define RX_CNTRL_ACCEPT_PS_POLL_POS 18
+#define RX_CNTRL_ACCEPT_BA_BIT ((u32)0x00020000)
+#define RX_CNTRL_ACCEPT_BA_POS 17
+#define RX_CNTRL_ACCEPT_BAR_BIT ((u32)0x00010000)
+#define RX_CNTRL_ACCEPT_BAR_POS 16
+#define RX_CNTRL_ACCEPT_OTHER_MGMT_FRAMES_BIT ((u32)0x00008000)
+#define RX_CNTRL_ACCEPT_OTHER_MGMT_FRAMES_POS 15
+#define RX_CNTRL_ACCEPT_ALL_BEACON_BIT ((u32)0x00002000)
+#define RX_CNTRL_ACCEPT_ALL_BEACON_POS 13
+#define RX_CNTRL_ACCEPT_NOT_EXPECTED_BA_BIT ((u32)0x00001000)
+#define RX_CNTRL_ACCEPT_NOT_EXPECTED_BA_POS 12
+#define RX_CNTRL_ACCEPT_DECRYPT_ERROR_FRAMES_BIT ((u32)0x00000800)
+#define RX_CNTRL_ACCEPT_DECRYPT_ERROR_FRAMES_POS 11
+#define RX_CNTRL_ACCEPT_BEACON_BIT ((u32)0x00000400)
+#define RX_CNTRL_ACCEPT_BEACON_POS 10
+#define RX_CNTRL_ACCEPT_PROBE_RESP_BIT ((u32)0x00000200)
+#define RX_CNTRL_ACCEPT_PROBE_RESP_POS 9
+#define RX_CNTRL_ACCEPT_PROBE_REQ_BIT ((u32)0x00000100)
+#define RX_CNTRL_ACCEPT_PROBE_REQ_POS 8
+#define RX_CNTRL_ACCEPT_MY_UNICAST_BIT ((u32)0x00000080)
+#define RX_CNTRL_ACCEPT_MY_UNICAST_POS 7
+#define RX_CNTRL_ACCEPT_UNICAST_BIT ((u32)0x00000040)
+#define RX_CNTRL_ACCEPT_UNICAST_POS 6
+#define RX_CNTRL_ACCEPT_ERROR_FRAMES_BIT ((u32)0x00000020)
+#define RX_CNTRL_ACCEPT_ERROR_FRAMES_POS 5
+#define RX_CNTRL_ACCEPT_OTHER_BSSID_BIT ((u32)0x00000010)
+#define RX_CNTRL_ACCEPT_OTHER_BSSID_POS 4
+#define RX_CNTRL_ACCEPT_BROADCAST_BIT ((u32)0x00000008)
+#define RX_CNTRL_ACCEPT_BROADCAST_POS 3
+#define RX_CNTRL_ACCEPT_MULTICAST_BIT ((u32)0x00000004)
+#define RX_CNTRL_ACCEPT_MULTICAST_POS 2
+#define RX_CNTRL_DONT_DECRYPT_BIT ((u32)0x00000002)
+#define RX_CNTRL_DONT_DECRYPT_POS 1
+#define RX_CNTRL_EXC_UNENCRYPTED_BIT ((u32)0x00000001)
+#define RX_CNTRL_EXC_UNENCRYPTED_POS 0
+
+/* Default MAC Rx filters that cannot be changed by mac80211 */
+#define CL_MAC80211_NOT_CHANGEABLE ( \
+ RX_CNTRL_ACCEPT_QO_S_NULL_BIT | \
+ RX_CNTRL_ACCEPT_Q_DATA_BIT | \
+ RX_CNTRL_ACCEPT_DATA_BIT | \
+ RX_CNTRL_ACCEPT_OTHER_MGMT_FRAMES_BIT | \
+ RX_CNTRL_ACCEPT_MY_UNICAST_BIT | \
+ RX_CNTRL_ACCEPT_BROADCAST_BIT | \
+ RX_CNTRL_ACCEPT_BEACON_BIT | \
+ RX_CNTRL_ACCEPT_PROBE_RESP_BIT \
+ )
+
+u32 cl_rx_filter_update_flags(struct cl_hw *cl_hw, u32 filter);
+void cl_rx_filter_restore_flags(struct cl_hw *cl_hw);
+void cl_rx_filter_set_promiscuous(struct cl_hw *cl_hw);
+
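+/* Per-TID RX A-MPDU reorder state */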
+struct cl_tid_ampdu_rx {
+ spinlock_t reorder_lock;
+ u64 reorder_buf_filtered;
+ struct sk_buff_head *reorder_buf;
+ unsigned long *reorder_time;
+ struct ieee80211_sta *sta;
+ struct timer_list reorder_timer;
+ struct cl_hw *cl_hw;
+ u16 head_seq_num;
+ u16 stored_mpdu_num;
+ u16 ssn;
+ u16 buf_size;
+ u16 timeout;
+ u8 tid;
+ u8 auto_seq:1,
+ removed:1,
+ started:1;
+};
+
+void cl_rx_reorder_ampdu(struct cl_hw *cl_hw, struct cl_sta *cl_sta,
+ struct sk_buff *skb, struct sk_buff_head *ordered_mpdu);
+void cl_rx_reorder_close(struct cl_sta *cl_sta, u8 tid);
+void cl_rx_reorder_init(struct cl_hw *cl_hw, struct cl_sta *cl_sta, u8 tid, u16 buf_size);
+
+#endif /* CL_RX_H */
--
2.36.1


2022-05-24 15:23:22

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 44/96] cl8k: add maintenance.c

From: Viktor Barna <[email protected]>

(Part of the split. Please, take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
.../net/wireless/celeno/cl8k/maintenance.c | 81 +++++++++++++++++++
1 file changed, 81 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/maintenance.c

diff --git a/drivers/net/wireless/celeno/cl8k/maintenance.c b/drivers/net/wireless/celeno/cl8k/maintenance.c
new file mode 100644
index 000000000000..3230757edacc
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/maintenance.c
@@ -0,0 +1,81 @@
+// SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#include "maintenance.h"
+#include "traffic.h"
+#include "vns.h"
+#include "reg/reg_access.h"
+#include "sounding.h"
+#include "sta.h"
+#include "motion_sense.h"
+#include "radio.h"
+
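+/*
+ * Slow maintenance timer callback - reschedules itself every
+ * CL_MAINTENANCE_PERIOD_SLOW_MS.
+ */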
+static void cl_maintenance_callback_slow(struct timer_list *t)
+{
+ struct cl_hw *cl_hw = from_timer(cl_hw, t, maintenance_slow_timer);
+
+ cl_cca_maintenance(cl_hw);
+ cl_noise_maintenance(cl_hw);
+
+ if (cl_hw_is_prod_or_listener(cl_hw))
+ goto reschedule_timer;
+
+ cl_vns_maintenance(cl_hw);
+
+ if (cl_hw->conf->ci_traffic_mon_en)
+ cl_sta_loop(cl_hw, cl_traffic_mon_sta_maintenance);
+
+ if (cl_sta_num(cl_hw) == 0)
+ goto reschedule_timer;
+
+ cl_motion_sense_maintenance(cl_hw);
+ cl_sounding_maintenance(cl_hw);
+
+reschedule_timer:
+ mod_timer(&cl_hw->maintenance_slow_timer,
+ jiffies + msecs_to_jiffies(CL_MAINTENANCE_PERIOD_SLOW_MS));
+}
+
+static void cl_maintenance_callback_fast(struct timer_list *t)
+{
+ struct cl_hw *cl_hw = from_timer(cl_hw, t, maintenance_fast_timer);
+
+ if (cl_sta_num(cl_hw) == 0)
+ goto reschedule_timer;
+
+ cl_traffic_maintenance(cl_hw);
+
+reschedule_timer:
+ mod_timer(&cl_hw->maintenance_fast_timer,
+ jiffies + msecs_to_jiffies(CL_MAINTENANCE_PERIOD_FAST_MS));
+}
+
+void cl_maintenance_init(struct cl_hw *cl_hw)
+{
+ timer_setup(&cl_hw->maintenance_slow_timer, cl_maintenance_callback_slow, 0);
+ timer_setup(&cl_hw->maintenance_fast_timer, cl_maintenance_callback_fast, 0);
+
+ cl_maintenance_start(cl_hw);
+}
+
+void cl_maintenance_close(struct cl_hw *cl_hw)
+{
+ del_timer_sync(&cl_hw->maintenance_slow_timer);
+ del_timer_sync(&cl_hw->maintenance_fast_timer);
+}
+
+void cl_maintenance_stop(struct cl_hw *cl_hw)
+{
+ del_timer(&cl_hw->maintenance_slow_timer);
+ del_timer(&cl_hw->maintenance_fast_timer);
+}
+
+void cl_maintenance_start(struct cl_hw *cl_hw)
+{
+ mod_timer(&cl_hw->maintenance_slow_timer,
+ jiffies + msecs_to_jiffies(CL_MAINTENANCE_PERIOD_SLOW_MS));
+
+ if (!cl_hw_is_prod_or_listener(cl_hw))
+ mod_timer(&cl_hw->maintenance_fast_timer,
+ jiffies + msecs_to_jiffies(CL_MAINTENANCE_PERIOD_FAST_MS));
+}
--
2.36.1


2022-05-24 15:34:11

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 25/96] cl8k: add e2p.h

From: Viktor Barna <[email protected]>

(Part of the split. Please, take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/e2p.h | 25 +++++++++++++++++++++++++
1 file changed, 25 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/e2p.h

diff --git a/drivers/net/wireless/celeno/cl8k/e2p.h b/drivers/net/wireless/celeno/cl8k/e2p.h
new file mode 100644
index 000000000000..3545e1d110f1
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/e2p.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause */
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#ifndef CL_E2P_H
+#define CL_E2P_H
+
+#include <linux/types.h>
+
+#include "chip.h"
+#include "eeprom.h"
+
+enum {
+ E2P_MODE_BIN,
+ E2P_MODE_EEPROM,
+
+ E2P_MODE_MAX
+};
+
+int cl_e2p_init(struct cl_chip *chip);
+void cl_e2p_close(struct cl_chip *chip);
+int cl_e2p_write(struct cl_chip *chip, u8 *data, u16 size, u16 addr);
+int cl_e2p_read(struct cl_chip *chip, u8 *data, u16 size, u16 addr);
+void cl_e2p_read_eeprom_start_work(struct cl_chip *chip);
+
+#endif /* CL_E2P_H */
--
2.36.1


2022-05-24 15:35:53

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 10/96] cl8k: add calib.h

From: Viktor Barna <[email protected]>

(Part of the split. Please, take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/calib.h | 390 +++++++++++++++++++++++
1 file changed, 390 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/calib.h

diff --git a/drivers/net/wireless/celeno/cl8k/calib.h b/drivers/net/wireless/celeno/cl8k/calib.h
new file mode 100644
index 000000000000..6eb286392dd6
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/calib.h
@@ -0,0 +1,390 @@
+/* SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause */
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#ifndef CL_CALIB_H
+#define CL_CALIB_H
+
+#include <linux/workqueue.h>
+#include <net/cfg80211.h>
+
+#include "def.h"
+
+#define DCOC_LNA_GAIN_NUM 8
+#define MAX_SX 2
+#define IQ_NUM_TONES_REQ 8
+#define IQ_NUM_TONES_CFM (2 * IQ_NUM_TONES_REQ)
+#define SINGLETONS_MAX_NUM 1
+#define LOOPS_MAX_NUM (2 + SINGLETONS_MAX_NUM) /* 1: pre, 2-11: singletones, 12: post */
+#define SX_FREQ_OFFSET_Q2 5
+
+enum calib_cfm_id_type {
+ CALIB_CFM_DCOC,
+ CALIB_CFM_IQ,
+ CALIB_CFM_MAX
+};
+
+enum calib_channel_idx_24g {
+ CALIB_CHAN_24G_1,
+ CALIB_CHAN_24G_6,
+ CALIB_CHAN_24G_11,
+ CALIB_CHAN_24G_MAX,
+};
+
+enum calib_channel_idx_5g {
+ CALIB_CHAN_5G_36,
+ CALIB_CHAN_5G_40,
+ CALIB_CHAN_5G_44,
+ CALIB_CHAN_5G_48,
+ CALIB_CHAN_5G_52,
+ CALIB_CHAN_5G_56,
+ CALIB_CHAN_5G_60,
+ CALIB_CHAN_5G_64,
+ CALIB_CHAN_5G_100,
+ CALIB_CHAN_5G_104,
+ CALIB_CHAN_5G_108,
+ CALIB_CHAN_5G_112,
+ CALIB_CHAN_5G_116,
+ CALIB_CHAN_5G_120,
+ CALIB_CHAN_5G_124,
+ CALIB_CHAN_5G_128,
+ CALIB_CHAN_5G_132,
+ CALIB_CHAN_5G_136,
+ CALIB_CHAN_5G_140,
+ CALIB_CHAN_5G_144,
+ CALIB_CHAN_5G_149,
+ CALIB_CHAN_5G_153,
+ CALIB_CHAN_5G_157,
+ CALIB_CHAN_5G_161,
+ CALIB_CHAN_5G_165,
+ CALIB_CHAN_5G_MAX
+};
+
+enum calib_channel_idx_6g {
+ CALIB_CHAN_6G_1,
+ CALIB_CHAN_6G_5,
+ CALIB_CHAN_6G_9,
+ CALIB_CHAN_6G_13,
+ CALIB_CHAN_6G_17,
+ CALIB_CHAN_6G_21,
+ CALIB_CHAN_6G_25,
+ CALIB_CHAN_6G_29,
+ CALIB_CHAN_6G_33,
+ CALIB_CHAN_6G_37,
+ CALIB_CHAN_6G_41,
+ CALIB_CHAN_6G_45,
+ CALIB_CHAN_6G_49,
+ CALIB_CHAN_6G_53,
+ CALIB_CHAN_6G_57,
+ CALIB_CHAN_6G_61,
+ CALIB_CHAN_6G_65,
+ CALIB_CHAN_6G_69,
+ CALIB_CHAN_6G_73,
+ CALIB_CHAN_6G_77,
+ CALIB_CHAN_6G_81,
+ CALIB_CHAN_6G_85,
+ CALIB_CHAN_6G_89,
+ CALIB_CHAN_6G_93,
+ CALIB_CHAN_6G_97,
+ CALIB_CHAN_6G_101,
+ CALIB_CHAN_6G_105,
+ CALIB_CHAN_6G_109,
+ CALIB_CHAN_6G_113,
+ CALIB_CHAN_6G_117,
+ CALIB_CHAN_6G_121,
+ CALIB_CHAN_6G_125,
+ CALIB_CHAN_6G_129,
+ CALIB_CHAN_6G_133,
+ CALIB_CHAN_6G_137,
+ CALIB_CHAN_6G_141,
+ CALIB_CHAN_6G_145,
+ CALIB_CHAN_6G_149,
+ CALIB_CHAN_6G_153,
+ CALIB_CHAN_6G_157,
+ CALIB_CHAN_6G_161,
+ CALIB_CHAN_6G_165,
+ CALIB_CHAN_6G_169,
+ CALIB_CHAN_6G_173,
+ CALIB_CHAN_6G_177,
+ CALIB_CHAN_6G_181,
+ CALIB_CHAN_6G_185,
+ CALIB_CHAN_6G_189,
+ CALIB_CHAN_6G_193,
+ CALIB_CHAN_6G_197,
+ CALIB_CHAN_6G_201,
+ CALIB_CHAN_6G_205,
+ CALIB_CHAN_6G_209,
+ CALIB_CHAN_6G_213,
+ CALIB_CHAN_6G_217,
+ CALIB_CHAN_6G_221,
+ CALIB_CHAN_6G_225,
+ CALIB_CHAN_6G_229,
+ CALIB_CHAN_6G_233,
+ CALIB_CHAN_6G_MAX,
+};
+
+/* MAX(CALIB_CHAN_24G_MAX, CALIB_CHAN_5G_MAX, CALIB_CHAN_6G_MAX) */
+#define CALIB_CHAN_MAX CALIB_CHAN_6G_MAX
+
+struct cl_dcoc_calib {
+ s8 i;
+ s8 q;
+};
+
+struct cl_dcoc_report {
+ __le16 i_dc;
+ __le16 i_iterations;
+ __le16 q_dc;
+ __le16 q_iterations;
+};
+
+struct cl_iq_report {
+ u8 status;
+ __le16 amplitude_mismatch[IQ_NUM_TONES_CFM];
+ __le16 phase_mismatch[IQ_NUM_TONES_CFM];
+ s8 ir_db[LOOPS_MAX_NUM][IQ_NUM_TONES_CFM];
+ s8 ir_db_avg_post;
+};
+
+struct cl_iq_calib {
+ __le32 coef0;
+ __le32 coef1;
+ __le32 coef2;
+ __le32 gain;
+};
+
+struct cl_lolc_report {
+ u8 status;
+ u8 n_iter;
+ __le16 lolc_qual;
+};
+
+struct cl_gain_report {
+ u8 status;
+ u8 rx_gain;
+ u8 tx_gain;
+ u8 gain_quality;
+ __le16 final_p2p;
+ __le16 initial_p2p;
+};
+
+struct cl_iq_dcoc_conf {
+ bool dcoc_calib_needed[TCV_MAX];
+ u8 dcoc_file_num_ant[TCV_MAX];
+ bool iq_calib_needed;
+ u8 iq_file_num_ant[TCV_MAX];
+ bool force_calib;
+};
+
+struct cl_iq_dcoc_info {
+ struct cl_dcoc_calib dcoc[DCOC_LNA_GAIN_NUM][MAX_ANTENNAS];
+ struct cl_iq_calib iq_tx[MAX_ANTENNAS];
+ __le32 iq_tx_lolc[MAX_ANTENNAS];
+ struct cl_iq_calib iq_rx[MAX_ANTENNAS];
+};
+
+struct cl_iq_dcoc_report {
+ struct cl_dcoc_report dcoc[DCOC_LNA_GAIN_NUM][MAX_ANTENNAS];
+ struct cl_gain_report gain_tx[MAX_ANTENNAS];
+ struct cl_gain_report gain_rx[MAX_ANTENNAS];
+ struct cl_lolc_report lolc_report[MAX_ANTENNAS];
+ struct cl_iq_report iq_tx[MAX_ANTENNAS];
+ struct cl_iq_report iq_rx[MAX_ANTENNAS];
+};
+
+struct calib_cfm {
+ u8 status;
+ __le16 raw_bits_data_0;
+ __le16 raw_bits_data_1;
+};
+
+struct cl_iq_dcoc_data {
+ struct cl_iq_dcoc_info iq_dcoc_db;
+ struct cl_iq_dcoc_report report;
+ struct calib_cfm dcoc_iq_cfm[CALIB_CFM_MAX];
+};
+
+struct cl_iq_dcoc_data_info {
+ struct cl_iq_dcoc_data *iq_dcoc_data;
+ u32 dma_addr;
+};
+
+struct cl_calib_params {
+ u8 mode;
+ bool first_channel;
+ s8 sx_freq_offset_mhz;
+ u32 plan_bitmap;
+};
+
+struct cl_calib_work {
+ struct work_struct ws;
+ struct cl_hw *cl_hw;
+};
+
+struct cl_calib_chain {
+ u8 pair;
+ u8 initial_tx_gain;
+ u8 initial_rx_gain;
+};
+
+enum cl_calib_flags {
+ CALIB_FLAG_CREATE = 1 << 0,
+ CALIB_FLAG_VERSION = 1 << 1,
+ CALIB_FLAG_TITLE = 1 << 2,
+ CALIB_FLAG_HEADER_TCV0 = 1 << 3,
+ CALIB_FLAG_HEADER_TCV1 = 1 << 4,
+
+ CALIB_FLAG_HEADER_TCV01 = (CALIB_FLAG_HEADER_TCV0 |
+ CALIB_FLAG_HEADER_TCV1),
+ CALIB_FLAG_ALL_REPORT = (CALIB_FLAG_CREATE |
+ CALIB_FLAG_VERSION |
+ CALIB_FLAG_TITLE),
+ CALIB_FLAG_ALL = (CALIB_FLAG_CREATE |
+ CALIB_FLAG_VERSION |
+ CALIB_FLAG_TITLE |
+ CALIB_FLAG_HEADER_TCV0 |
+ CALIB_FLAG_HEADER_TCV1)
+};
+
+struct cl_calib_file_flags {
+ u8 dcoc;
+ u8 dcoc_report;
+ u8 lolc;
+ u8 lolc_report;
+ u8 iq_tx;
+ u8 iq_tx_report;
+ u8 iq_rx;
+ u8 iq_rx_report;
+ u8 rx_gain_report;
+ bool iq_plan;
+};
+
+struct cl_calib_errors {
+ u16 dcoc;
+ u16 lolc;
+ u16 iq_tx;
+ u16 iq_rx;
+};
+
+struct cl_calib_db {
+ struct cl_dcoc_calib
+ dcoc[TCV_MAX][CALIB_CHAN_MAX][CHNL_BW_MAX][MAX_SX][MAX_ANTENNAS][DCOC_LNA_GAIN_NUM];
+ u32 iq_tx_lolc[TCV_MAX][CALIB_CHAN_MAX][CHNL_BW_MAX][MAX_SX][MAX_ANTENNAS];
+ struct cl_iq_calib iq_tx[TCV_MAX][CALIB_CHAN_MAX][CHNL_BW_MAX][MAX_SX][MAX_ANTENNAS];
+ struct cl_iq_calib iq_rx[TCV_MAX][CALIB_CHAN_MAX][CHNL_BW_MAX][MAX_SX][MAX_ANTENNAS];
+ struct cl_calib_file_flags file_flags;
+ struct cl_calib_errors errors[TCV_MAX];
+ struct list_head plan[TCV_MAX][CALIB_CHAN_MAX][CHNL_BW_MAX];
+ bool is_plan_initialized;
+};
+
+#define SET_PHY_DATA_FLAGS_DCOC 0x1 /* Set DCOC calibration data.*/
+#define SET_PHY_DATA_FLAGS_IQ_TX 0x2 /* Set IQ Tx calibration data.*/
+#define SET_PHY_DATA_FLAGS_IQ_RX 0x4 /* Set IQ Rx calibration data.*/
+#define SET_PHY_DATA_FLAGS_IQ_TX_LOLC 0x8 /* Set IQ Tx LOLC calibration data.*/
+#define SET_PHY_DATA_FLAGS_ALL ( \
+ SET_PHY_DATA_FLAGS_DCOC | \
+ SET_PHY_DATA_FLAGS_IQ_TX | \
+ SET_PHY_DATA_FLAGS_IQ_RX | \
+ SET_PHY_DATA_FLAGS_IQ_TX_LOLC)
+#define SET_PHY_DATA_FLAGS_LISTENER ( \
+ SET_PHY_DATA_FLAGS_DCOC | \
+ SET_PHY_DATA_FLAGS_IQ_RX)
+
+#define CL_CALIB_PARAMS_DEFAULT_STRUCT \
+ ((struct cl_calib_params){SET_CHANNEL_MODE_OPERETIONAL, false, 0, 0})
+
+#define CALIB_CHAN_5G_PLAN 6
+#define CALIB_CHAN_6G_PLAN 15
+
+struct cl_chip;
+
+void cl_calib_dcoc_init_calibration(struct cl_hw *cl_hw);
+u8 cl_calib_dcoc_channel_bw_to_idx(struct cl_hw *cl_hw, u8 channel, u8 bw);
+void cl_calib_dcoc_fill_data(struct cl_hw *cl_hw, struct cl_iq_dcoc_info *iq_dcoc_db);
+u8 cl_calib_dcoc_tcv_channel_to_idx(struct cl_chip *chip, u8 tcv_idx, u8 channel, u8 bw);
+void cl_calib_dcoc_handle_set_channel_cfm(struct cl_hw *cl_hw, bool first_channel);
+void cl_calib_common_start_work(struct cl_hw *cl_hw);
+void cl_calib_common_fill_phy_data(struct cl_hw *cl_hw, struct cl_iq_dcoc_info *iq_dcoc_db,
+ u8 flags);
+int cl_calib_common_tables_alloc(struct cl_hw *cl_hw);
+void cl_calib_common_tables_free(struct cl_hw *cl_hw);
+int cl_calib_common_handle_set_channel_cfm(struct cl_hw *cl_hw,
+ struct cl_calib_params calib_params);
+int cl_calib_common_check_errors(struct cl_hw *cl_hw);
+s16 cl_calib_common_get_temperature(struct cl_hw *cl_hw, u8 cfm_type);
+
+/* Calibration constants */
+#define CALIB_TX_GAIN_DEFAULT (0x75)
+#define GAIN_SLEEVE_TRSHLD_DEFAULT (2)
+#define CALIB_NCO_AMP_DEFAULT (-10)
+#define CALIB_NCO_FREQ_DEFAULT (16) /* 5M/312.5K for LO & RGC */
+#define LO_P_THRESH (1000000)
+#define N_SAMPLES_EXP_LOLC (13)
+#define N_SAMPLES_EXP_IQC (13)
+#define N_BIT_FIR_SCALE (11)
+#define N_BIT_AMP_SCALE (10)
+#define N_BIT_PHASE_SCALE (10)
+#define GP_RAD_TRSHLD_DEFAULT (1144) /* Represents 1 degree in Q(16,16): 1*(pi/180) */
+#define GA_LIN_UPPER_TRSHLD_DEFAULT (66295) /* Represents 0.1 db in Q(16,16): 10^( 0.1/20)*2^16 */
+#define GA_LIN_LOWER_TRSHLD_DEFAULT (64786) /* Represents -0.1 db in Q(16,16): 10^(-0.1/20)*2^16 */
+#define COMP_FILTER_LEN_DEFAULT (9)
+#define SINGLETONS_NUM_DEFAULT (10) /* Set to SINGLETONS_MAX_NUM for now */
+#define IQ_POST_IDX (LOOPS_MAX_NUM - 1)
+#define RAMPUP_TIME (50)
+#define LO_COARSE_STEP (20)
+#define LO_FINE_STEP (1)
+
+#define DCOC_MAX_VGA 0x14
+#define CALIB_RX_GAIN_DEFAULT 0x83
+#define CALIB_RX_GAIN_UPPER_LIMIT 0x14
+#define CALIB_RX_GAIN_LOWER_LIMIT 0x0
+#define DCOC_MAX_VGA_ATHOS 0x1E
+#define CALIB_RX_GAIN_DEFAULT_ATHOS 0x8D
+#define CALIB_RX_GAIN_UPPER_LIMIT_ATHOS 0x1E
+#define CALIB_RX_GAIN_LOWER_LIMIT_ATHOS 0x0A
+#define DCOC_MAX_VGA_ATHOS_B 0x14
+#define CALIB_RX_GAIN_DEFAULT_ATHOS_B 0x81
+#define CALIB_RX_GAIN_UPPER_LIMIT_ATHOS_B 0x14
+#define CALIB_RX_GAIN_LOWER_LIMIT_ATHOS_B 0x0
+
+struct cl_calib_iq_restore {
+ u8 bw;
+ u32 primary;
+ u32 center;
+ u8 channel;
+};
+
+bool cl_calib_iq_calibration_needed(struct cl_hw *cl_hw);
+void cl_calib_iq_file_flags_clear(struct cl_chip *chip);
+void cl_calib_iq_file_flags_set(struct cl_chip *chip);
+int cl_calib_iq_post_read_actions(struct cl_chip *chip, char *buf);
+void cl_calib_iq_init_calibration(struct cl_hw *cl_hw);
+void cl_calib_iq_fill_data(struct cl_hw *cl_hw, struct cl_iq_calib *iq_data,
+ struct cl_iq_calib *iq_chip_data);
+void cl_calib_iq_lolc_fill_data(struct cl_hw *cl_hw, __le32 *iq_lolc);
+void cl_calib_iq_handle_set_channel_cfm(struct cl_hw *cl_hw, u8 plan_bitmap);
+void cl_calib_iq_lolc_handle_set_channel_cfm(struct cl_hw *cl_hw, u8 plan_bitmap);
+int cl_calib_iq_lolc_write_version(struct cl_hw *cl_hw);
+int cl_calib_iq_lolc_report_write_version(struct cl_hw *cl_hw);
+int cl_calib_iq_lolc_write_file(struct cl_hw *cl_hw, s32 *params);
+int cl_calib_iq_lolc_report_write_file(struct cl_hw *cl_hw, s32 *params);
+void cl_calib_iq_get_tone_vector(struct cl_hw *cl_hw, __le16 *tone_vector);
+void cl_calib_iq_init_production(struct cl_hw *cl_hw);
+int cl_calib_iq_set_idle(struct cl_hw *cl_hw, bool idle);
+void cl_calib_restore_channel(struct cl_hw *cl_hw, struct cl_calib_iq_restore *iq_restore);
+void cl_calib_save_channel(struct cl_hw *cl_hw, struct cl_calib_iq_restore *iq_restore);
+
+#define UNCALIBRATED_POWER 15
+#define UNCALIBRATED_POWER_OFFSET 0
+#define UNCALIBRATED_TEMPERATURE 35
+
+struct point;
+void cl_calib_power_read(struct cl_hw *cl_hw);
+void cl_calib_power_offset_fill(struct cl_hw *cl_hw, u8 channel,
+ u8 bw, u8 offset[MAX_ANTENNAS]);
+int cl_calib_runtime_and_switch_channel(struct cl_hw *cl_hw, u32 channel, u8 bw, u32 primary,
+ u32 center);
+void cl_calib_runtime_work(struct cl_hw *cl_hw, u32 channel, u8 bw, u16 primary, u16 center);
+bool cl_calib_runtime_is_allowed(struct cl_hw *cl_hw);
+
+#endif /* CL_CALIB_H */
--
2.36.1


2022-05-24 15:40:47

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 04/96] cl8k: add Makefile

From: Viktor Barna <[email protected]>

(Part of the split. Please, take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/Makefile | 66 +++++++++++++++++++++++
1 file changed, 66 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/Makefile

diff --git a/drivers/net/wireless/celeno/cl8k/Makefile b/drivers/net/wireless/celeno/cl8k/Makefile
new file mode 100644
index 000000000000..9ff98cda5261
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/Makefile
@@ -0,0 +1,66 @@
+# SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
+obj-$(CONFIG_CL8K) += cl8k.o
+
+ccflags-y += -I$(src) -I$(srctree)/net/wireless -I$(srctree)/net/mac80211/
+ccflags-y += -Werror
+
+define cc_ver_cmp
+$(shell [ "$$($(CC) -dumpversion | cut -d. -f1)" -$(1) "$(2)" ] && echo "true" || echo "false")
+endef
+
+ifeq ($(call cc_ver_cmp,ge,8),true)
+ccflags-y += -Wno-error=stringop-truncation
+ccflags-y += -Wno-error=format-truncation
+endif
+
+# Stop these C90 warnings. We use C99.
+ccflags-y += -Wno-declaration-after-statement -g
+
+cl-objs += \
+ wrs.o \
+ phy.o \
+ key.o \
+ sta.o \
+ hw.o \
+ chip.o \
+ fw.o \
+ utils.o \
+ channel.o \
+ rx.o \
+ tx.o \
+ main.o \
+ mac_addr.o \
+ ampdu.o \
+ dfs.o \
+ enhanced_tim.o \
+ e2p.o \
+ calib.o \
+ stats.o \
+ power.o \
+ motion_sense.o \
+ bf.o \
+ sounding.o \
+ debug.o \
+ temperature.o \
+ recovery.o \
+ rates.o \
+ radio.o \
+ config.o \
+ tcv.o \
+ traffic.o \
+ vns.o \
+ maintenance.o \
+ ela.o \
+ rfic.o \
+ vif.o \
+ dsp.o \
+ pci.o \
+ version.o \
+ regdom.o \
+ mac80211.o \
+ platform.o \
+ scan.o
+
+ifneq ($(CONFIG_CL8K),)
+cl8k-y += $(cl-objs)
+endif
--
2.36.1


2022-05-24 16:00:04

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 43/96] cl8k: add main.h

From: Viktor Barna <[email protected]>

(Part of the split. Please, take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/main.h | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/main.h

diff --git a/drivers/net/wireless/celeno/cl8k/main.h b/drivers/net/wireless/celeno/cl8k/main.h
new file mode 100644
index 000000000000..69f4ae599902
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/main.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause */
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#ifndef CL_MAIN_H
+#define CL_MAIN_H
+
+#include "chip.h"
+#include "hw.h"
+
+int cl_main_init(struct cl_chip *chip, const struct cl_driver_ops *drv_ops);
+void cl_main_deinit(struct cl_chip *chip);
+void cl_main_reset(struct cl_chip *chip, struct cl_controller_reg *controller_reg);
+int cl_main_on(struct cl_hw *cl_hw);
+void cl_main_off(struct cl_hw *cl_hw);
+
+#endif /* CL_MAIN_H */
--
2.36.1


2022-05-24 16:01:33

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 11/96] cl8k: add channel.c

From: Viktor Barna <[email protected]>

(Part of the split. Please, take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/channel.c | 1656 ++++++++++++++++++++
1 file changed, 1656 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/channel.c

diff --git a/drivers/net/wireless/celeno/cl8k/channel.c b/drivers/net/wireless/celeno/cl8k/channel.c
new file mode 100644
index 000000000000..777c5f749059
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/channel.c
@@ -0,0 +1,1656 @@
+// SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#include "vif.h"
+#include "dfs.h"
+#include "reg/reg_defs.h"
+#include "hw.h"
+#include "utils.h"
+#include "channel.h"
+
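+/*
+ * Helpers that map channel numbers to per-band index values and map
+ * those indices back to center frequencies.
+ */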
+#define CASE_CHAN2BITMAP_IDX_6G(_chan) { case _chan: return (b6g_ch ## _chan); }
+#define CASE_CHAN2EXT_IDX_6G(_chan) { case _chan: return (ext_b6g_ch ## _chan); }
+#define CASE_CHAN2IDX_5G(_chan) { case _chan: return (b5g_ch ## _chan); }
+#define CASE_CHAN2IDX_2G(_chan) { case _chan: return (b24g_ch ## _chan); }
+
+#define CASE_BITMAP_IDX2FREQ_6G(_chan) { case (b6g_ch ## _chan): return FREQ6G(_chan); }
+#define CASE_EXT_IDX2FREQ_6G(_chan) { case (ext_b6g_ch ## _chan): return FREQ6G(_chan); }
+#define CASE_IDX2FREQ_5G(_chan) { case (b5g_ch ## _chan): return FREQ5G(_chan); }
+#define CASE_IDX2FREQ_2G(_chan) { case (b24g_ch ## _chan): return FREQ2G(_chan); }
+
+#define INVALID_FREQ 0xffff
+
+static u8 cl_channel_to_bitmap_index_6g(struct cl_hw *cl_hw, u32 channel)
+{
+ switch (channel) {
+ CASE_CHAN2BITMAP_IDX_6G(1);
+ CASE_CHAN2BITMAP_IDX_6G(2);
+ CASE_CHAN2BITMAP_IDX_6G(5);
+ CASE_CHAN2BITMAP_IDX_6G(9);
+ CASE_CHAN2BITMAP_IDX_6G(13);
+ CASE_CHAN2BITMAP_IDX_6G(17);
+ CASE_CHAN2BITMAP_IDX_6G(21);
+ CASE_CHAN2BITMAP_IDX_6G(25);
+ CASE_CHAN2BITMAP_IDX_6G(29);
+ CASE_CHAN2BITMAP_IDX_6G(33);
+ CASE_CHAN2BITMAP_IDX_6G(37);
+ CASE_CHAN2BITMAP_IDX_6G(41);
+ CASE_CHAN2BITMAP_IDX_6G(45);
+ CASE_CHAN2BITMAP_IDX_6G(49);
+ CASE_CHAN2BITMAP_IDX_6G(53);
+ CASE_CHAN2BITMAP_IDX_6G(57);
+ CASE_CHAN2BITMAP_IDX_6G(61);
+ CASE_CHAN2BITMAP_IDX_6G(65);
+ CASE_CHAN2BITMAP_IDX_6G(69);
+ CASE_CHAN2BITMAP_IDX_6G(73);
+ CASE_CHAN2BITMAP_IDX_6G(77);
+ CASE_CHAN2BITMAP_IDX_6G(81);
+ CASE_CHAN2BITMAP_IDX_6G(85);
+ CASE_CHAN2BITMAP_IDX_6G(89);
+ CASE_CHAN2BITMAP_IDX_6G(93);
+ CASE_CHAN2BITMAP_IDX_6G(97);
+ CASE_CHAN2BITMAP_IDX_6G(101);
+ CASE_CHAN2BITMAP_IDX_6G(105);
+ CASE_CHAN2BITMAP_IDX_6G(109);
+ CASE_CHAN2BITMAP_IDX_6G(113);
+ CASE_CHAN2BITMAP_IDX_6G(117);
+ CASE_CHAN2BITMAP_IDX_6G(121);
+ CASE_CHAN2BITMAP_IDX_6G(125);
+ CASE_CHAN2BITMAP_IDX_6G(129);
+ CASE_CHAN2BITMAP_IDX_6G(133);
+ CASE_CHAN2BITMAP_IDX_6G(137);
+ CASE_CHAN2BITMAP_IDX_6G(141);
+ CASE_CHAN2BITMAP_IDX_6G(145);
+ CASE_CHAN2BITMAP_IDX_6G(149);
+ CASE_CHAN2BITMAP_IDX_6G(153);
+ CASE_CHAN2BITMAP_IDX_6G(157);
+ CASE_CHAN2BITMAP_IDX_6G(161);
+ CASE_CHAN2BITMAP_IDX_6G(165);
+ CASE_CHAN2BITMAP_IDX_6G(169);
+ CASE_CHAN2BITMAP_IDX_6G(173);
+ CASE_CHAN2BITMAP_IDX_6G(177);
+ CASE_CHAN2BITMAP_IDX_6G(181);
+ CASE_CHAN2BITMAP_IDX_6G(185);
+ CASE_CHAN2BITMAP_IDX_6G(189);
+ CASE_CHAN2BITMAP_IDX_6G(193);
+ CASE_CHAN2BITMAP_IDX_6G(197);
+ CASE_CHAN2BITMAP_IDX_6G(201);
+ CASE_CHAN2BITMAP_IDX_6G(205);
+ CASE_CHAN2BITMAP_IDX_6G(209);
+ CASE_CHAN2BITMAP_IDX_6G(213);
+ CASE_CHAN2BITMAP_IDX_6G(217);
+ CASE_CHAN2BITMAP_IDX_6G(221);
+ CASE_CHAN2BITMAP_IDX_6G(225);
+ CASE_CHAN2BITMAP_IDX_6G(229);
+ CASE_CHAN2BITMAP_IDX_6G(233);
+ };
+
+ return INVALID_CHAN_IDX;
+}
+
+u8 cl_channel_to_ext_index_6g(struct cl_hw *cl_hw, u32 channel)
+{
+ switch (channel) {
+ CASE_CHAN2EXT_IDX_6G(1);
+ CASE_CHAN2EXT_IDX_6G(2);
+ CASE_CHAN2EXT_IDX_6G(3);
+ CASE_CHAN2EXT_IDX_6G(5);
+ CASE_CHAN2EXT_IDX_6G(7);
+ CASE_CHAN2EXT_IDX_6G(9);
+ CASE_CHAN2EXT_IDX_6G(11);
+ CASE_CHAN2EXT_IDX_6G(13);
+ CASE_CHAN2EXT_IDX_6G(15);
+ CASE_CHAN2EXT_IDX_6G(17);
+ CASE_CHAN2EXT_IDX_6G(19);
+ CASE_CHAN2EXT_IDX_6G(21);
+ CASE_CHAN2EXT_IDX_6G(23);
+ CASE_CHAN2EXT_IDX_6G(25);
+ CASE_CHAN2EXT_IDX_6G(27);
+ CASE_CHAN2EXT_IDX_6G(29);
+ CASE_CHAN2EXT_IDX_6G(31);
+ CASE_CHAN2EXT_IDX_6G(33);
+ CASE_CHAN2EXT_IDX_6G(35);
+ CASE_CHAN2EXT_IDX_6G(37);
+ CASE_CHAN2EXT_IDX_6G(39);
+ CASE_CHAN2EXT_IDX_6G(41);
+ CASE_CHAN2EXT_IDX_6G(43);
+ CASE_CHAN2EXT_IDX_6G(45);
+ CASE_CHAN2EXT_IDX_6G(47);
+ CASE_CHAN2EXT_IDX_6G(49);
+ CASE_CHAN2EXT_IDX_6G(51);
+ CASE_CHAN2EXT_IDX_6G(53);
+ CASE_CHAN2EXT_IDX_6G(55);
+ CASE_CHAN2EXT_IDX_6G(57);
+ CASE_CHAN2EXT_IDX_6G(59);
+ CASE_CHAN2EXT_IDX_6G(61);
+ CASE_CHAN2EXT_IDX_6G(63);
+ CASE_CHAN2EXT_IDX_6G(65);
+ CASE_CHAN2EXT_IDX_6G(67);
+ CASE_CHAN2EXT_IDX_6G(69);
+ CASE_CHAN2EXT_IDX_6G(71);
+ CASE_CHAN2EXT_IDX_6G(73);
+ CASE_CHAN2EXT_IDX_6G(75);
+ CASE_CHAN2EXT_IDX_6G(77);
+ CASE_CHAN2EXT_IDX_6G(79);
+ CASE_CHAN2EXT_IDX_6G(81);
+ CASE_CHAN2EXT_IDX_6G(83);
+ CASE_CHAN2EXT_IDX_6G(85);
+ CASE_CHAN2EXT_IDX_6G(87);
+ CASE_CHAN2EXT_IDX_6G(89);
+ CASE_CHAN2EXT_IDX_6G(91);
+ CASE_CHAN2EXT_IDX_6G(93);
+ CASE_CHAN2EXT_IDX_6G(95);
+ CASE_CHAN2EXT_IDX_6G(97);
+ CASE_CHAN2EXT_IDX_6G(99);
+ CASE_CHAN2EXT_IDX_6G(101);
+ CASE_CHAN2EXT_IDX_6G(103);
+ CASE_CHAN2EXT_IDX_6G(105);
+ CASE_CHAN2EXT_IDX_6G(107);
+ CASE_CHAN2EXT_IDX_6G(109);
+ CASE_CHAN2EXT_IDX_6G(111);
+ CASE_CHAN2EXT_IDX_6G(113);
+ CASE_CHAN2EXT_IDX_6G(115);
+ CASE_CHAN2EXT_IDX_6G(117);
+ CASE_CHAN2EXT_IDX_6G(119);
+ CASE_CHAN2EXT_IDX_6G(121);
+ CASE_CHAN2EXT_IDX_6G(123);
+ CASE_CHAN2EXT_IDX_6G(125);
+ CASE_CHAN2EXT_IDX_6G(127);
+ CASE_CHAN2EXT_IDX_6G(129);
+ CASE_CHAN2EXT_IDX_6G(131);
+ CASE_CHAN2EXT_IDX_6G(133);
+ CASE_CHAN2EXT_IDX_6G(135);
+ CASE_CHAN2EXT_IDX_6G(137);
+ CASE_CHAN2EXT_IDX_6G(139);
+ CASE_CHAN2EXT_IDX_6G(141);
+ CASE_CHAN2EXT_IDX_6G(143);
+ CASE_CHAN2EXT_IDX_6G(145);
+ CASE_CHAN2EXT_IDX_6G(147);
+ CASE_CHAN2EXT_IDX_6G(149);
+ CASE_CHAN2EXT_IDX_6G(151);
+ CASE_CHAN2EXT_IDX_6G(153);
+ CASE_CHAN2EXT_IDX_6G(155);
+ CASE_CHAN2EXT_IDX_6G(157);
+ CASE_CHAN2EXT_IDX_6G(159);
+ CASE_CHAN2EXT_IDX_6G(161);
+ CASE_CHAN2EXT_IDX_6G(163);
+ CASE_CHAN2EXT_IDX_6G(165);
+ CASE_CHAN2EXT_IDX_6G(167);
+ CASE_CHAN2EXT_IDX_6G(169);
+ CASE_CHAN2EXT_IDX_6G(171);
+ CASE_CHAN2EXT_IDX_6G(173);
+ CASE_CHAN2EXT_IDX_6G(175);
+ CASE_CHAN2EXT_IDX_6G(177);
+ CASE_CHAN2EXT_IDX_6G(179);
+ CASE_CHAN2EXT_IDX_6G(181);
+ CASE_CHAN2EXT_IDX_6G(183);
+ CASE_CHAN2EXT_IDX_6G(185);
+ CASE_CHAN2EXT_IDX_6G(187);
+ CASE_CHAN2EXT_IDX_6G(189);
+ CASE_CHAN2EXT_IDX_6G(191);
+ CASE_CHAN2EXT_IDX_6G(193);
+ CASE_CHAN2EXT_IDX_6G(195);
+ CASE_CHAN2EXT_IDX_6G(197);
+ CASE_CHAN2EXT_IDX_6G(199);
+ CASE_CHAN2EXT_IDX_6G(201);
+ CASE_CHAN2EXT_IDX_6G(203);
+ CASE_CHAN2EXT_IDX_6G(205);
+ CASE_CHAN2EXT_IDX_6G(207);
+ CASE_CHAN2EXT_IDX_6G(209);
+ CASE_CHAN2EXT_IDX_6G(211);
+ CASE_CHAN2EXT_IDX_6G(213);
+ CASE_CHAN2EXT_IDX_6G(215);
+ CASE_CHAN2EXT_IDX_6G(217);
+ CASE_CHAN2EXT_IDX_6G(219);
+ CASE_CHAN2EXT_IDX_6G(221);
+ CASE_CHAN2EXT_IDX_6G(223);
+ CASE_CHAN2EXT_IDX_6G(225);
+ CASE_CHAN2EXT_IDX_6G(227);
+ CASE_CHAN2EXT_IDX_6G(229);
+ CASE_CHAN2EXT_IDX_6G(231);
+ CASE_CHAN2EXT_IDX_6G(233);
+ };
+
+ return INVALID_CHAN_IDX;
+}
+
+static u8 cl_channel_to_index_5g(struct cl_hw *cl_hw, u32 channel)
+{
+ switch (channel) {
+ CASE_CHAN2IDX_5G(36);
+ CASE_CHAN2IDX_5G(38);
+ CASE_CHAN2IDX_5G(40);
+ CASE_CHAN2IDX_5G(42);
+ CASE_CHAN2IDX_5G(44);
+ CASE_CHAN2IDX_5G(46);
+ CASE_CHAN2IDX_5G(48);
+ CASE_CHAN2IDX_5G(50);
+ CASE_CHAN2IDX_5G(52);
+ CASE_CHAN2IDX_5G(54);
+ CASE_CHAN2IDX_5G(56);
+ CASE_CHAN2IDX_5G(58);
+ CASE_CHAN2IDX_5G(60);
+ CASE_CHAN2IDX_5G(62);
+ CASE_CHAN2IDX_5G(64);
+ CASE_CHAN2IDX_5G(100);
+ CASE_CHAN2IDX_5G(102);
+ CASE_CHAN2IDX_5G(104);
+ CASE_CHAN2IDX_5G(106);
+ CASE_CHAN2IDX_5G(108);
+ CASE_CHAN2IDX_5G(110);
+ CASE_CHAN2IDX_5G(112);
+ CASE_CHAN2IDX_5G(114);
+ CASE_CHAN2IDX_5G(116);
+ CASE_CHAN2IDX_5G(118);
+ CASE_CHAN2IDX_5G(120);
+ CASE_CHAN2IDX_5G(122);
+ CASE_CHAN2IDX_5G(124);
+ CASE_CHAN2IDX_5G(126);
+ CASE_CHAN2IDX_5G(128);
+ /* 130 - invalid */
+ CASE_CHAN2IDX_5G(132);
+ CASE_CHAN2IDX_5G(134);
+ CASE_CHAN2IDX_5G(136);
+ CASE_CHAN2IDX_5G(138);
+ CASE_CHAN2IDX_5G(140);
+ CASE_CHAN2IDX_5G(142);
+ CASE_CHAN2IDX_5G(144);
+ CASE_CHAN2IDX_5G(149);
+ CASE_CHAN2IDX_5G(151);
+ CASE_CHAN2IDX_5G(153);
+ CASE_CHAN2IDX_5G(155);
+ CASE_CHAN2IDX_5G(157);
+ CASE_CHAN2IDX_5G(159);
+ CASE_CHAN2IDX_5G(161);
+ /* 163 - invalid */
+ CASE_CHAN2IDX_5G(165);
+ }
+
+ return INVALID_CHAN_IDX;
+}
+
+static u8 cl_channel_to_index_24g(struct cl_hw *cl_hw, u32 channel)
+{
+ switch (channel) {
+ CASE_CHAN2IDX_2G(1);
+ CASE_CHAN2IDX_2G(2);
+ CASE_CHAN2IDX_2G(3);
+ CASE_CHAN2IDX_2G(4);
+ CASE_CHAN2IDX_2G(5);
+ CASE_CHAN2IDX_2G(6);
+ CASE_CHAN2IDX_2G(7);
+ CASE_CHAN2IDX_2G(8);
+ CASE_CHAN2IDX_2G(9);
+ CASE_CHAN2IDX_2G(10);
+ CASE_CHAN2IDX_2G(11);
+ CASE_CHAN2IDX_2G(12);
+ CASE_CHAN2IDX_2G(13);
+ CASE_CHAN2IDX_2G(14);
+ }
+
+ return INVALID_CHAN_IDX;
+}
+
+u8 cl_channel_to_index(struct cl_hw *cl_hw, u32 channel)
+{
+ /* Calculate index for a given channel */
+ if (cl_band_is_6g(cl_hw))
+ return cl_channel_to_ext_index_6g(cl_hw, channel);
+ else if (cl_band_is_5g(cl_hw))
+ return cl_channel_to_index_5g(cl_hw, channel);
+ else
+ return cl_channel_to_index_24g(cl_hw, channel);
+}
+
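+/*
+ * Two 6 GHz indexing schemes are used: the extended index covers channel 2
+ * and every odd channel up to 233, while the bitmap index covers only the
+ * 20 MHz channel set (2 and 1 + 4 * n, up to 233).
+ */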
+u8 cl_channel_to_bitmap_index(struct cl_hw *cl_hw, u32 channel)
+{
+ /* Calculate index for a given channel */
+ if (cl_band_is_6g(cl_hw))
+ return cl_channel_to_bitmap_index_6g(cl_hw, channel);
+ else if (cl_band_is_5g(cl_hw))
+ return cl_channel_to_index_5g(cl_hw, channel);
+ else
+ return cl_channel_to_index_24g(cl_hw, channel);
+}
+
+static u16 cl_channel_bitmap_idx_to_freq_6g(struct cl_hw *cl_hw, u8 index)
+{
+ switch (index) {
+ CASE_BITMAP_IDX2FREQ_6G(1);
+ CASE_BITMAP_IDX2FREQ_6G(2);
+ CASE_BITMAP_IDX2FREQ_6G(5);
+ CASE_BITMAP_IDX2FREQ_6G(9);
+ CASE_BITMAP_IDX2FREQ_6G(13);
+ CASE_BITMAP_IDX2FREQ_6G(17);
+ CASE_BITMAP_IDX2FREQ_6G(21);
+ CASE_BITMAP_IDX2FREQ_6G(25);
+ CASE_BITMAP_IDX2FREQ_6G(29);
+ CASE_BITMAP_IDX2FREQ_6G(33);
+ CASE_BITMAP_IDX2FREQ_6G(37);
+ CASE_BITMAP_IDX2FREQ_6G(41);
+ CASE_BITMAP_IDX2FREQ_6G(45);
+ CASE_BITMAP_IDX2FREQ_6G(49);
+ CASE_BITMAP_IDX2FREQ_6G(53);
+ CASE_BITMAP_IDX2FREQ_6G(57);
+ CASE_BITMAP_IDX2FREQ_6G(61);
+ CASE_BITMAP_IDX2FREQ_6G(65);
+ CASE_BITMAP_IDX2FREQ_6G(69);
+ CASE_BITMAP_IDX2FREQ_6G(73);
+ CASE_BITMAP_IDX2FREQ_6G(77);
+ CASE_BITMAP_IDX2FREQ_6G(81);
+ CASE_BITMAP_IDX2FREQ_6G(85);
+ CASE_BITMAP_IDX2FREQ_6G(89);
+ CASE_BITMAP_IDX2FREQ_6G(93);
+ CASE_BITMAP_IDX2FREQ_6G(97);
+ CASE_BITMAP_IDX2FREQ_6G(101);
+ CASE_BITMAP_IDX2FREQ_6G(105);
+ CASE_BITMAP_IDX2FREQ_6G(109);
+ CASE_BITMAP_IDX2FREQ_6G(113);
+ CASE_BITMAP_IDX2FREQ_6G(117);
+ CASE_BITMAP_IDX2FREQ_6G(121);
+ CASE_BITMAP_IDX2FREQ_6G(125);
+ CASE_BITMAP_IDX2FREQ_6G(129);
+ CASE_BITMAP_IDX2FREQ_6G(133);
+ CASE_BITMAP_IDX2FREQ_6G(137);
+ CASE_BITMAP_IDX2FREQ_6G(141);
+ CASE_BITMAP_IDX2FREQ_6G(145);
+ CASE_BITMAP_IDX2FREQ_6G(149);
+ CASE_BITMAP_IDX2FREQ_6G(153);
+ CASE_BITMAP_IDX2FREQ_6G(157);
+ CASE_BITMAP_IDX2FREQ_6G(161);
+ CASE_BITMAP_IDX2FREQ_6G(165);
+ CASE_BITMAP_IDX2FREQ_6G(169);
+ CASE_BITMAP_IDX2FREQ_6G(173);
+ CASE_BITMAP_IDX2FREQ_6G(177);
+ CASE_BITMAP_IDX2FREQ_6G(181);
+ CASE_BITMAP_IDX2FREQ_6G(185);
+ CASE_BITMAP_IDX2FREQ_6G(189);
+ CASE_BITMAP_IDX2FREQ_6G(193);
+ CASE_BITMAP_IDX2FREQ_6G(197);
+ CASE_BITMAP_IDX2FREQ_6G(201);
+ CASE_BITMAP_IDX2FREQ_6G(205);
+ CASE_BITMAP_IDX2FREQ_6G(209);
+ CASE_BITMAP_IDX2FREQ_6G(213);
+ CASE_BITMAP_IDX2FREQ_6G(217);
+ CASE_BITMAP_IDX2FREQ_6G(221);
+ CASE_BITMAP_IDX2FREQ_6G(225);
+ CASE_BITMAP_IDX2FREQ_6G(229);
+ CASE_BITMAP_IDX2FREQ_6G(233);
+ }
+
+ return INVALID_FREQ;
+}
+
+u16 cl_channel_ext_idx_to_freq_6g(struct cl_hw *cl_hw, u8 index)
+{
+ switch (index) {
+ CASE_EXT_IDX2FREQ_6G(1);
+ CASE_EXT_IDX2FREQ_6G(2);
+ CASE_EXT_IDX2FREQ_6G(3);
+ CASE_EXT_IDX2FREQ_6G(5);
+ CASE_EXT_IDX2FREQ_6G(7);
+ CASE_EXT_IDX2FREQ_6G(9);
+ CASE_EXT_IDX2FREQ_6G(11);
+ CASE_EXT_IDX2FREQ_6G(13);
+ CASE_EXT_IDX2FREQ_6G(15);
+ CASE_EXT_IDX2FREQ_6G(17);
+ CASE_EXT_IDX2FREQ_6G(19);
+ CASE_EXT_IDX2FREQ_6G(21);
+ CASE_EXT_IDX2FREQ_6G(23);
+ CASE_EXT_IDX2FREQ_6G(25);
+ CASE_EXT_IDX2FREQ_6G(27);
+ CASE_EXT_IDX2FREQ_6G(29);
+ CASE_EXT_IDX2FREQ_6G(31);
+ CASE_EXT_IDX2FREQ_6G(33);
+ CASE_EXT_IDX2FREQ_6G(35);
+ CASE_EXT_IDX2FREQ_6G(37);
+ CASE_EXT_IDX2FREQ_6G(39);
+ CASE_EXT_IDX2FREQ_6G(41);
+ CASE_EXT_IDX2FREQ_6G(43);
+ CASE_EXT_IDX2FREQ_6G(45);
+ CASE_EXT_IDX2FREQ_6G(47);
+ CASE_EXT_IDX2FREQ_6G(49);
+ CASE_EXT_IDX2FREQ_6G(51);
+ CASE_EXT_IDX2FREQ_6G(53);
+ CASE_EXT_IDX2FREQ_6G(55);
+ CASE_EXT_IDX2FREQ_6G(57);
+ CASE_EXT_IDX2FREQ_6G(59);
+ CASE_EXT_IDX2FREQ_6G(61);
+ CASE_EXT_IDX2FREQ_6G(63);
+ CASE_EXT_IDX2FREQ_6G(65);
+ CASE_EXT_IDX2FREQ_6G(67);
+ CASE_EXT_IDX2FREQ_6G(69);
+ CASE_EXT_IDX2FREQ_6G(71);
+ CASE_EXT_IDX2FREQ_6G(73);
+ CASE_EXT_IDX2FREQ_6G(75);
+ CASE_EXT_IDX2FREQ_6G(77);
+ CASE_EXT_IDX2FREQ_6G(79);
+ CASE_EXT_IDX2FREQ_6G(81);
+ CASE_EXT_IDX2FREQ_6G(83);
+ CASE_EXT_IDX2FREQ_6G(85);
+ CASE_EXT_IDX2FREQ_6G(87);
+ CASE_EXT_IDX2FREQ_6G(89);
+ CASE_EXT_IDX2FREQ_6G(91);
+ CASE_EXT_IDX2FREQ_6G(93);
+ CASE_EXT_IDX2FREQ_6G(95);
+ CASE_EXT_IDX2FREQ_6G(97);
+ CASE_EXT_IDX2FREQ_6G(99);
+ CASE_EXT_IDX2FREQ_6G(101);
+ CASE_EXT_IDX2FREQ_6G(103);
+ CASE_EXT_IDX2FREQ_6G(105);
+ CASE_EXT_IDX2FREQ_6G(107);
+ CASE_EXT_IDX2FREQ_6G(109);
+ CASE_EXT_IDX2FREQ_6G(111);
+ CASE_EXT_IDX2FREQ_6G(113);
+ CASE_EXT_IDX2FREQ_6G(115);
+ CASE_EXT_IDX2FREQ_6G(117);
+ CASE_EXT_IDX2FREQ_6G(119);
+ CASE_EXT_IDX2FREQ_6G(121);
+ CASE_EXT_IDX2FREQ_6G(123);
+ CASE_EXT_IDX2FREQ_6G(125);
+ CASE_EXT_IDX2FREQ_6G(127);
+ CASE_EXT_IDX2FREQ_6G(129);
+ CASE_EXT_IDX2FREQ_6G(131);
+ CASE_EXT_IDX2FREQ_6G(133);
+ CASE_EXT_IDX2FREQ_6G(135);
+ CASE_EXT_IDX2FREQ_6G(137);
+ CASE_EXT_IDX2FREQ_6G(139);
+ CASE_EXT_IDX2FREQ_6G(141);
+ CASE_EXT_IDX2FREQ_6G(143);
+ CASE_EXT_IDX2FREQ_6G(145);
+ CASE_EXT_IDX2FREQ_6G(147);
+ CASE_EXT_IDX2FREQ_6G(149);
+ CASE_EXT_IDX2FREQ_6G(151);
+ CASE_EXT_IDX2FREQ_6G(153);
+ CASE_EXT_IDX2FREQ_6G(155);
+ CASE_EXT_IDX2FREQ_6G(157);
+ CASE_EXT_IDX2FREQ_6G(159);
+ CASE_EXT_IDX2FREQ_6G(161);
+ CASE_EXT_IDX2FREQ_6G(163);
+ CASE_EXT_IDX2FREQ_6G(165);
+ CASE_EXT_IDX2FREQ_6G(167);
+ CASE_EXT_IDX2FREQ_6G(169);
+ CASE_EXT_IDX2FREQ_6G(171);
+ CASE_EXT_IDX2FREQ_6G(173);
+ CASE_EXT_IDX2FREQ_6G(175);
+ CASE_EXT_IDX2FREQ_6G(177);
+ CASE_EXT_IDX2FREQ_6G(179);
+ CASE_EXT_IDX2FREQ_6G(181);
+ CASE_EXT_IDX2FREQ_6G(183);
+ CASE_EXT_IDX2FREQ_6G(185);
+ CASE_EXT_IDX2FREQ_6G(187);
+ CASE_EXT_IDX2FREQ_6G(189);
+ CASE_EXT_IDX2FREQ_6G(191);
+ CASE_EXT_IDX2FREQ_6G(193);
+ CASE_EXT_IDX2FREQ_6G(195);
+ CASE_EXT_IDX2FREQ_6G(197);
+ CASE_EXT_IDX2FREQ_6G(199);
+ CASE_EXT_IDX2FREQ_6G(201);
+ CASE_EXT_IDX2FREQ_6G(203);
+ CASE_EXT_IDX2FREQ_6G(205);
+ CASE_EXT_IDX2FREQ_6G(207);
+ CASE_EXT_IDX2FREQ_6G(209);
+ CASE_EXT_IDX2FREQ_6G(211);
+ CASE_EXT_IDX2FREQ_6G(213);
+ CASE_EXT_IDX2FREQ_6G(215);
+ CASE_EXT_IDX2FREQ_6G(217);
+ CASE_EXT_IDX2FREQ_6G(219);
+ CASE_EXT_IDX2FREQ_6G(221);
+ CASE_EXT_IDX2FREQ_6G(223);
+ CASE_EXT_IDX2FREQ_6G(225);
+ CASE_EXT_IDX2FREQ_6G(227);
+ CASE_EXT_IDX2FREQ_6G(229);
+ CASE_EXT_IDX2FREQ_6G(231);
+ CASE_EXT_IDX2FREQ_6G(233);
+ }
+
+ return INVALID_FREQ;
+}
+
+static u16 cl_channel_idx_to_freq_5g(struct cl_hw *cl_hw, u8 index)
+{
+ switch (index) {
+ CASE_IDX2FREQ_5G(36);
+ CASE_IDX2FREQ_5G(38);
+ CASE_IDX2FREQ_5G(40);
+ CASE_IDX2FREQ_5G(42);
+ CASE_IDX2FREQ_5G(44);
+ CASE_IDX2FREQ_5G(46);
+ CASE_IDX2FREQ_5G(48);
+ CASE_IDX2FREQ_5G(50);
+ CASE_IDX2FREQ_5G(52);
+ CASE_IDX2FREQ_5G(54);
+ CASE_IDX2FREQ_5G(56);
+ CASE_IDX2FREQ_5G(58);
+ CASE_IDX2FREQ_5G(60);
+ CASE_IDX2FREQ_5G(62);
+ CASE_IDX2FREQ_5G(64);
+ CASE_IDX2FREQ_5G(100);
+ CASE_IDX2FREQ_5G(102);
+ CASE_IDX2FREQ_5G(104);
+ CASE_IDX2FREQ_5G(106);
+ CASE_IDX2FREQ_5G(108);
+ CASE_IDX2FREQ_5G(110);
+ CASE_IDX2FREQ_5G(112);
+ CASE_IDX2FREQ_5G(114);
+ CASE_IDX2FREQ_5G(116);
+ CASE_IDX2FREQ_5G(118);
+ CASE_IDX2FREQ_5G(120);
+ CASE_IDX2FREQ_5G(122);
+ CASE_IDX2FREQ_5G(124);
+ CASE_IDX2FREQ_5G(126);
+ CASE_IDX2FREQ_5G(128);
+ CASE_IDX2FREQ_5G(132);
+ CASE_IDX2FREQ_5G(134);
+ CASE_IDX2FREQ_5G(136);
+ CASE_IDX2FREQ_5G(138);
+ CASE_IDX2FREQ_5G(140);
+ CASE_IDX2FREQ_5G(142);
+ CASE_IDX2FREQ_5G(144);
+ CASE_IDX2FREQ_5G(149);
+ CASE_IDX2FREQ_5G(151);
+ CASE_IDX2FREQ_5G(153);
+ CASE_IDX2FREQ_5G(155);
+ CASE_IDX2FREQ_5G(157);
+ CASE_IDX2FREQ_5G(159);
+ CASE_IDX2FREQ_5G(161);
+ CASE_IDX2FREQ_5G(165);
+ }
+
+ return INVALID_FREQ;
+}
+
+static u16 cl_channel_idx_to_freq_24g(struct cl_hw *cl_hw, u8 index)
+{
+ switch (index) {
+ CASE_IDX2FREQ_2G(1);
+ CASE_IDX2FREQ_2G(2);
+ CASE_IDX2FREQ_2G(3);
+ CASE_IDX2FREQ_2G(4);
+ CASE_IDX2FREQ_2G(5);
+ CASE_IDX2FREQ_2G(6);
+ CASE_IDX2FREQ_2G(7);
+ CASE_IDX2FREQ_2G(8);
+ CASE_IDX2FREQ_2G(9);
+ CASE_IDX2FREQ_2G(10);
+ CASE_IDX2FREQ_2G(11);
+ CASE_IDX2FREQ_2G(12);
+ CASE_IDX2FREQ_2G(13);
+ CASE_IDX2FREQ_2G(14);
+ }
+
+ return INVALID_FREQ;
+}
+
+u16 cl_channel_idx_to_freq(struct cl_hw *cl_hw, u8 index)
+{
+ /* Calculate frequency of a given index */
+ if (cl_band_is_6g(cl_hw))
+ return cl_channel_bitmap_idx_to_freq_6g(cl_hw, index);
+ else if (cl_band_is_5g(cl_hw))
+ return cl_channel_idx_to_freq_5g(cl_hw, index);
+ else
+ return cl_channel_idx_to_freq_24g(cl_hw, index);
+}
+
+bool cl_channel_is_valid(struct cl_hw *cl_hw, u8 channel)
+{
+ if (cl_band_is_24g(cl_hw)) {
+ return (channel > 0 && channel <= 14);
+ } else if (cl_band_is_5g(cl_hw)) {
+ if (channel >= 36 && channel <= 64)
+ return ((channel & 0x1) == 0x0);
+
+ if (channel >= 100 && channel <= 144)
+ return ((channel & 0x1) == 0x0);
+
+ if (channel >= 149 && channel <= 161)
+ return ((channel & 0x1) == 0x1);
+
+ if (channel == 165)
+ return true;
+ } else {
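+ /* 6 GHz: valid channels are 2 and 1 + 4 * n (1, 5, 9, ..., 233) */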
+ if (channel == 2)
+ return true;
+
+ if (channel >= 1 && channel <= 233)
+ if ((channel & 0x3) == 0x1)
+ return true;
+ }
+
+ return false;
+}
+
+u32 cl_channel_num(struct cl_hw *cl_hw)
+{
+ if (cl_hw->conf->ci_band_num == 6)
+ return NUM_BITMAP_CHANNELS_6G;
+
+ if (cl_hw->conf->ci_band_num == 5)
+ return NUM_CHANNELS_5G;
+
+ return NUM_CHANNELS_24G;
+}
+
+bool cl_channel_is_dfs(struct cl_hw *cl_hw, u8 channel)
+{
+ if (!cl_band_is_5g(cl_hw))
+ return false;
+
+ return channel >= 36 && channel <= 144;
+}
+
+u32 cl_channel_get_cac_time_ms(struct cl_hw *cl_hw, u8 channel)
+{
+ if (!cl_band_is_5g(cl_hw))
+ return 0;
+
+ if (!cl_channel_is_dfs(cl_hw, channel))
+ return 0;
+
+ /* FIXME: CAC time for weather channels may differ for some regions */
+ if (channel >= 120 && channel <= 128)
+ return IEEE80211_DFS_WEATHER_MIN_CAC_TIME_MS;
+
+ return IEEE80211_DFS_MIN_CAC_TIME_MS;
+}
+
+static void _cl_fill_channel_info(struct cl_hw *cl_hw, u8 bw, u8 ch_idx, u8 channel,
+ u8 country_max_power_q2, u8 max_power_q2,
+ u32 flags, u32 dfs_cac_ms)
+{
+ struct cl_chan_info *chan_info = &cl_hw->channel_info.channels[bw][ch_idx];
+
+ chan_info->channel = channel;
+ chan_info->country_max_power_q2 = country_max_power_q2;
+ chan_info->max_power_q2 = max_power_q2;
+ chan_info->flags = flags;
+ chan_info->dfs_cac_ms = dfs_cac_ms;
+}
+
+static void cl_fill_channel_info(struct cl_hw *cl_hw, u8 bw, u8 ch_idx, u8 channel,
+ u8 country_max_power_q2, u8 max_power_q2)
+{
+ _cl_fill_channel_info(cl_hw, bw, ch_idx, channel, country_max_power_q2, max_power_q2, 0, 0);
+}
+
+static void cl_fill_channel_info_5g(struct cl_hw *cl_hw, u8 bw, u8 ch_idx, u8 channel,
+ u8 country_max_power_q2, u8 max_power_q2)
+{
+ u32 flags = 0;
+ u32 dfs_cac_ms = 0;
+
+ if (cl_hw->conf->ci_ieee80211h && cl_channel_is_dfs(cl_hw, channel)) {
+ flags |= IEEE80211_CHAN_RADAR;
+ dfs_cac_ms = cl_channel_get_cac_time_ms(cl_hw, channel);
+ }
+
+ _cl_fill_channel_info(cl_hw, bw, ch_idx, channel, country_max_power_q2,
+ max_power_q2, flags, dfs_cac_ms);
+}
+
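+/*
+ * Convert a power string with up to two decimal digits into Q2 (0.25 dB)
+ * units, e.g. "22.75" -> 91 and "22.5" -> 90.
+ */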
+static inline s32 cl_convert_str_int_q2(s8 *str)
+{
+ s32 x, y;
+
+ if (!str)
+ return 0;
+ if (sscanf(str, "%d.%2d", &x, &y) != 2)
+ return 0;
+ if (!strstr(str, "."))
+ return x * 4;
+ if (y < 10 && (*(strstr(str, ".") + 1) != '0'))
+ y *= 10;
+ return (x * 100 + y) * 4 / 100;
+}
+
+static int cl_parse_reg_domain(struct cl_hw *cl_hw, char **str)
+{
+ /* Check if current line contains "FCC" or "ETSI" */
+ char *token = strsep(str, "\n");
+
+ if (!token)
+ goto err;
+
+ if (strstr(token, "FCC")) {
+ cl_hw->channel_info.standard = NL80211_DFS_FCC;
+ cl_dbg_info(cl_hw, "Standard = FCC\n");
+ return 0;
+ }
+
+ if (strstr(token, "ETSI")) {
+ cl_hw->channel_info.standard = NL80211_DFS_ETSI;
+ cl_dbg_info(cl_hw, "Standard = ETSI\n");
+ return 0;
+ }
+
+err:
+ cl_dbg_err(cl_hw, "Illegal regulatory domain\n");
+ cl_hw->channel_info.standard = NL80211_DFS_UNSET;
+ return -1;
+}
+
+#define MAX_CC_STR 4
+#define MAX_BW_STR 8
+
+static bool cl_parse_channel_info_txt(struct cl_hw *cl_hw)
+{
+ /*
+ * Example of country information in channel_info.txt:
+ *
+ * [EU (European Union)ETSI]
+ * 2.4GHz/20MHz: 2412(1,20) 2417(2,20) 2422(3,20) 2427(4,20) 2432(5,20) 2437(6,20)
+ * 2442(7,20) 2447(8,20) 2452(9,20) 2457(10,20) 2462(11,20) 2467(12,20)
+ * 2472(13,20)
+ * 2.4GHz/40MHz: 2422(1,20) 2427(2,20) 2432(3,20) 2437(4,20) 2442(5,20) 2447(6,20)
+ * 2452(7,20) 2457(8,20) 2462(9,20) 2467(10,20) 2472(11,20)
+ * 5.2GHz/20MHz: 5180(36,23) 5200(40,23) 5220(44,23) 5240(48,23) 5260(52,23) 5280(56,23)
+ * 5300(60,23) 5320(64,23) 5500(100,30) 5520(104,30) 5540(108,30)
+ * 5560(112,30)5580(116,30) 5600(120,30) 5620(124,30) 5640(128,30)
+ * 5660(132,30) 5680(136,30) 5700(140,30)
+ * 5.2GHz/40MHz: 5180(36,23) 5200(40,23) 5220(44,23) 5240(48,23) 5260(52,23) 5280(56,23)
+ * 5300(60,23) 5310(64,23) 5510(100,30) 5510(104,30) 5550(108,30)
+ * 5550(112,30) 5590(116,30) 5590(120,30) 5630(124,30) 5630(128,30)
+ * 5670(132,30) 5670(136,30)
+ * 5.2GHz/80MHz: 5180(36,23) 5200(40,23) 5220(44,23) 5240(48,23) 5260(52,23) 5280(56,23)
+ * 5300(60,23) 5310(64,23) 5510(100,30) 5510(104,30) 5550(108,30)
+ * 5550(112,30) 5590(116,30) 5590(120,30) 5630(124,30) 5630(128,30)
+ * 5.2GHz/160MHz: 5180(36,23) 5200(40,23) 5220(44,23) 5240(48,23) 5260(52,23) 5280(56,23)
+ * 5300(60,23) 5310(64,23) 5510(100,30) 5510(104,30) 5550(108,30)
+ * 5550(112,30) 5590(116,30) 5590(120,30) 5630(124,30) 5630(128,30)
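+ * Each entry has the form freq(channel,max power in dBm).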
+ */
+
+ char *buf = NULL, *ptr = NULL;
+ char cc_str[MAX_CC_STR] = {0};
+ char bw_str[MAX_BW_STR] = {0};
+ size_t size;
+ u8 bw, bw_mhz, bw_max, max_power, channel, i;
+ char file_name[CL_FILENAME_MAX] = {0};
+
+ snprintf(file_name, sizeof(file_name), "channel_info_chip%d.txt", cl_hw->chip->idx);
+
+ /* Read channel_info.txt into buf */
+ size = cl_file_open_and_read(cl_hw->chip, file_name, &buf);
+
+ if (!buf)
+ return false;
+
+ /* Jump to the correct country in the file */
+ snprintf(cc_str, sizeof(cc_str), "[%s", cl_hw->chip->conf->ci_country_code);
+ ptr = strstr(buf, cc_str);
+ if (!ptr)
+ goto out;
+
+ if (cl_parse_reg_domain(cl_hw, &ptr))
+ goto out;
+
+ /* Jump to the relevant band */
+ if (cl_band_is_24g(cl_hw)) {
+ bw_max = CHNL_BW_40;
+ ptr = strstr(ptr, "2.4GHz");
+ } else if (cl_band_is_5g(cl_hw)) {
+ ptr = strstr(ptr, "5.2GHz");
+ bw_max = CHNL_BW_160;
+ } else {
+ ptr = strstr(ptr, "6GHz");
+ bw_max = CHNL_BW_160;
+ }
+
+ for (bw = 0; bw <= bw_max; bw++) {
+ if (!ptr)
+ goto out;
+
+ i = 0;
+
+ /* Jump to relevant bandwidth */
+ bw_mhz = BW_TO_MHZ(bw);
+ snprintf(bw_str, sizeof(bw_str), "%uMHz:", bw_mhz);
+ ptr = strstr(ptr, bw_str);
+
+ /* Iterate until end of line and parse (channel, max_power) */
+ while (ptr && (ptr + 1) && (*(ptr + 1) != '\n')) {
+ u32 flags = 0, dfs_cac_ms = 0;
+
+ ptr = strstr(ptr, "(");
+ if (!ptr)
+ goto out;
+
+ if (sscanf(ptr, "(%hhu,%hhu)", &channel, &max_power) != 2)
+ goto out;
+
+ if (!cl_channel_is_valid(cl_hw, channel) ||
+ i == cl_channel_num(cl_hw))
+ goto out;
+
+ if (cl_hw->conf->ci_ieee80211h && cl_channel_is_dfs(cl_hw, channel)) {
+ flags |= IEEE80211_CHAN_RADAR;
+ dfs_cac_ms = cl_channel_get_cac_time_ms(cl_hw, channel);
+ }
+
+ _cl_fill_channel_info(cl_hw, bw, i, channel, max_power << 2,
+ max_power << 2, flags, dfs_cac_ms);
+
+ i++;
+ ptr = strstr(ptr, ")");
+ }
+ }
+
+ kfree(buf);
+ return true;
+
+out:
+ kfree(buf);
+ return false;
+}
+
+static bool cl_is_parsing_success(struct cl_hw *cl_hw)
+{
+ /* Check that there is at least one channel in any bw */
+ u8 bw;
+ u8 max_bw = BAND_IS_5G_6G(cl_hw) ? CHNL_BW_160 : CHNL_BW_40;
+
+ for (bw = 0; bw <= max_bw; bw++)
+ if (!cl_hw->channel_info.channels[bw][0].channel)
+ return false;
+
+ return true;
+}
+
+static void cl_chan_info_set_max_bw_6g(struct cl_hw *cl_hw)
+{
+ u8 i, bw, bw_cnt, channel, channel_gap;
+ struct cl_chan_info *chan_info;
+
+ for (bw = 0; bw < CHNL_BW_MAX; bw++) {
+ chan_info = cl_hw->channel_info.channels[bw];
+ bw_cnt = 0;
+
+ for (i = 0; i < NUM_BITMAP_CHANNELS_6G; i++) {
+ channel = chan_info[i].channel;
+
+ if (channel == 0)
+ break;
+
+ channel_gap = channel - START_CHAN_IDX_6G;
+
+ /*
+ * Verify that we don't combine channels
+ * from different 80MHz sections
+ */
+ if ((channel_gap % CL_160MHZ_CH_GAP) == 0)
+ bw_cnt = 0;
+
+ if (i > 0)
+ bw_cnt++;
+
+ /* Verify that we don't make illegal 80MHz combination */
+ if ((channel_gap % CL_80MHZ_CH_GAP == 0) && bw_cnt == 3)
+ bw_cnt = 0;
+
+ /* Verify that we don't make illegal 40MHz combination */
+ if ((channel_gap % CL_40MHZ_CH_GAP == 0) && bw_cnt == 1)
+ bw_cnt = 0;
+
+ if ((((bw_cnt + 1) % CL_160MHZ_HOP) == 0) && bw == CHNL_BW_160) {
+ chan_info[i].max_bw = CHNL_BW_160;
+ chan_info[i - 1].max_bw = CHNL_BW_160;
+ chan_info[i - 2].max_bw = CHNL_BW_160;
+ chan_info[i - 3].max_bw = CHNL_BW_160;
+ chan_info[i - 4].max_bw = CHNL_BW_160;
+ chan_info[i - 5].max_bw = CHNL_BW_160;
+ chan_info[i - 6].max_bw = CHNL_BW_160;
+ chan_info[i - 7].max_bw = CHNL_BW_160;
+ } else if ((((bw_cnt + 1) % CL_80MHZ_HOP) == 0) && (bw == CHNL_BW_80)) {
+ chan_info[i].max_bw = CHNL_BW_80;
+ chan_info[i - 1].max_bw = CHNL_BW_80;
+ chan_info[i - 2].max_bw = CHNL_BW_80;
+ chan_info[i - 3].max_bw = CHNL_BW_80;
+ } else if ((((bw_cnt + 1) % CL_40MHZ_HOP) == 0) && (bw >= CHNL_BW_40)) {
+ chan_info[i].max_bw = CHNL_BW_40;
+ chan_info[i - 1].max_bw = CHNL_BW_40;
+ } else {
+ chan_info[i].max_bw = CHNL_BW_20;
+ }
+ }
+ }
+}
+
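+/*
+ * Example (assuming the CL_*_CH_GAP/CL_*_HOP constants encode the usual
+ * 4-channel spacing, e.g. CL_80MHZ_CH_GAP == 16 and CL_80MHZ_HOP == 4):
+ * if channels 36, 40, 44 and 48 are all present and bw == CHNL_BW_80,
+ * bw_cnt reaches 3 at channel 48 and all four entries are marked with
+ * max_bw = CHNL_BW_80.
+ */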
+static void cl_chan_info_set_max_bw_5g(struct cl_hw *cl_hw)
+{
+ u8 i, bw, bw_cnt, channel, channel_gap;
+ struct cl_chan_info *chan_info;
+
+ for (bw = 0; bw < CHNL_BW_MAX; bw++) {
+ chan_info = cl_hw->channel_info.channels[bw];
+ bw_cnt = 0;
+
+ for (i = 0; i < NUM_CHANNELS_5G; i++) {
+ channel = chan_info[i].channel;
+
+ if (channel == 0)
+ break;
+
+ if (channel < 149)
+ channel_gap = channel - 36;
+ else
+ channel_gap = channel - 149;
+
+ /*
+ * Verify that we don't combine channels from
+ * different 80MHz sections
+ * (i.e. 36-48 can be combined into 80MHz channels, unlike 40-52)
+ */
+ if ((channel_gap % CL_160MHZ_CH_GAP) == 0)
+ bw_cnt = 0;
+
+ /* Check if 20MHz channels can be combined into 40MHz or 80MHz channels */
+ if (i > 0) {
+ /*
+ * Verify that we don't combine non-consecutive channels
+ * (like 36 and 44 when 40 is missing)
+ */
+ if ((chan_info[i].channel - chan_info[i - 1].channel) >
+ CL_20MHZ_CH_GAP)
+ bw_cnt = 0;
+ else
+ bw_cnt++;
+ }
+
+ /* Verify that we don't make illegal 80MHz combination (like 44-56) */
+ if ((channel_gap % CL_80MHZ_CH_GAP == 0) && bw_cnt == 3)
+ bw_cnt = 0;
+
+ /* Verify that we don't make illegal 40MHz combination (like 40-44) */
+ if ((channel_gap % CL_40MHZ_CH_GAP == 0) && bw_cnt == 1)
+ bw_cnt = 0;
+
+ if ((((bw_cnt + 1) % CL_160MHZ_HOP) == 0) && bw == CHNL_BW_160) {
+ chan_info[i].max_bw = CHNL_BW_160;
+ chan_info[i - 1].max_bw = CHNL_BW_160;
+ chan_info[i - 2].max_bw = CHNL_BW_160;
+ chan_info[i - 3].max_bw = CHNL_BW_160;
+ chan_info[i - 4].max_bw = CHNL_BW_160;
+ chan_info[i - 5].max_bw = CHNL_BW_160;
+ chan_info[i - 6].max_bw = CHNL_BW_160;
+ chan_info[i - 7].max_bw = CHNL_BW_160;
+ } else if ((((bw_cnt + 1) % CL_80MHZ_HOP) == 0) && bw == CHNL_BW_80) {
+ chan_info[i].max_bw = CHNL_BW_80;
+ chan_info[i - 1].max_bw = CHNL_BW_80;
+ chan_info[i - 2].max_bw = CHNL_BW_80;
+ chan_info[i - 3].max_bw = CHNL_BW_80;
+ } else if ((((bw_cnt + 1) % CL_40MHZ_HOP) == 0) && bw >= CHNL_BW_40) {
+ chan_info[i].max_bw = CHNL_BW_40;
+ chan_info[i - 1].max_bw = CHNL_BW_40;
+ } else {
+ chan_info[i].max_bw = CHNL_BW_20;
+ }
+ }
+ }
+}
+
+static void cl_chan_info_set_max_bw_24g(struct cl_hw *cl_hw)
+{
+ u8 i, bw, channel;
+ struct cl_chan_info *chan_info;
+
+ for (bw = 0; bw < CHNL_BW_80; bw++) {
+ chan_info = cl_hw->channel_info.channels[bw];
+
+ for (i = 0; i < NUM_CHANNELS_24G; i++) {
+ channel = chan_info[i].channel;
+
+ if (channel == 0)
+ break;
+
+ if (channel < 14)
+ chan_info[i].max_bw = CHNL_BW_40;
+ else
+ chan_info[i].max_bw = CHNL_BW_20;
+ }
+ }
+}
+
+static void cl_chan_info_set_max_bw(struct cl_hw *cl_hw)
+{
+ if (cl_band_is_6g(cl_hw))
+ cl_chan_info_set_max_bw_6g(cl_hw);
+ else if (cl_band_is_5g(cl_hw))
+ cl_chan_info_set_max_bw_5g(cl_hw);
+ else
+ cl_chan_info_set_max_bw_24g(cl_hw);
+}
+
+static void cl_chan_info_dbg(struct cl_hw *cl_hw)
+{
+ struct cl_chan_info *chan_info;
+ u32 max_power_integer, max_power_fraction;
+ u8 i, j;
+
+ for (i = 0; i < CHNL_BW_MAX; i++) {
+ cl_dbg_info(cl_hw, "Bandwidth = %uMHz\n", BW_TO_MHZ(i));
+ for (j = 0; j < cl_channel_num(cl_hw); j++) {
+ chan_info = &cl_hw->channel_info.channels[i][j];
+
+ if (chan_info->channel == 0)
+ continue;
+
+ max_power_integer = (chan_info->max_power_q2 / 4);
+ max_power_fraction =
+ (100 * (chan_info->max_power_q2 - 4 * max_power_integer) / 4);
+
+ cl_dbg_info(cl_hw, "Channel = %u, max EIRP = %3u.%02u, bw = %uMHz\n",
+ chan_info->channel, max_power_integer,
+ max_power_fraction, BW_TO_MHZ(chan_info->max_bw));
+ }
+ }
+}
+
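+/*
+ * The default power values below are dBm in Q2 (0.25 dB) units,
+ * e.g. (27 << 2) = 27.00 dBm.
+ */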
+/* Band 6G - default power */
+#define UNII_5_POW_Q2 (27 << 2)
+#define UNII_6_POW_Q2 (27 << 2)
+#define UNII_7_POW_Q2 (27 << 2)
+#define UNII_8_POW_Q2 (27 << 2)
+
+/* Band 5G - default power */
+/* Default regulatory domain:
+ * country US: DFS-FCC
+ * (2400 - 2472 @ 40), (N/A, 30), (N/A)
+ * (5150 - 5250 @ 80), (N/A, 23), (N/A), AUTO-BW
+ * (5250 - 5350 @ 80), (N/A, 23), (0 ms), DFS, AUTO-BW
+ * (5470 - 5730 @ 160), (N/A, 23), (0 ms), DFS
+ * (5730 - 5850 @ 80), (N/A, 30), (N/A), AUTO-BW
+ * (5850 - 5895 @ 40), (N/A, 27), (N/A), NO-OUTDOOR, AUTO-BW, PASSIVE-SCAN
+ * (57240 - 71000 @ 2160), (N/A, 40), (N/A)
+ */
+#define UNII_1_POW_Q2 (23 << 2)
+#define UNII_2_POW_Q2 (23 << 2)
+#define UNII_2_EXT_POW_Q2 (23 << 2)
+#define UNII_3_POW_Q2 (30 << 2)
+
+/* Band 2.4G - default power */
+#define BAND_24G_POW_Q2 (30 << 2)
+
+static void cl_set_default_channel_info_6g(struct cl_hw *cl_hw)
+{
+ u8 i, j, k;
+
+ for (i = 0; i < CHNL_BW_MAX; i++) {
+ k = 0;
+
+ /* Ch2 is a special case */
+ cl_fill_channel_info(cl_hw, i, k, 2, UNII_5_POW_Q2, UNII_5_POW_Q2);
+ k++;
+
+ for (j = START_CHAN_IDX_UNII5; j <= END_CHAN_IDX_UNII5; j += 4) {
+ cl_fill_channel_info(cl_hw, i, k, j, UNII_5_POW_Q2, UNII_5_POW_Q2);
+ k++;
+ }
+
+ for (j = START_CHAN_IDX_UNII6; j <= END_CHAN_IDX_UNII6; j += 4) {
+ cl_fill_channel_info(cl_hw, i, k, j, UNII_6_POW_Q2, UNII_6_POW_Q2);
+ k++;
+ }
+
+ for (j = START_CHAN_IDX_UNII7; j <= END_CHAN_IDX_UNII7; j += 4) {
+ cl_fill_channel_info(cl_hw, i, k, j, UNII_7_POW_Q2, UNII_7_POW_Q2);
+ k++;
+ }
+
+ for (j = START_CHAN_IDX_UNII8; j <= END_CHAN_IDX_UNII8; j += 4) {
+ /* Channel 233 is valid only in 20MHz */
+ if (i != CHNL_BW_20 && j == END_CHAN_IDX_UNII8)
+ break;
+
+ cl_fill_channel_info(cl_hw, i, k, j, UNII_8_POW_Q2, UNII_8_POW_Q2);
+ k++;
+ }
+ }
+}
+
+static void cl_set_default_channel_info_5g(struct cl_hw *cl_hw)
+{
+ u8 i, j, k;
+
+ for (i = 0; i < CHNL_BW_MAX; i++) {
+ k = 0;
+
+ for (j = 36; j <= 48; j += 4) {
+ cl_fill_channel_info_5g(cl_hw, i, k, j, UNII_1_POW_Q2, UNII_1_POW_Q2);
+ k++;
+ }
+
+ for (j = 52; j <= 64; j += 4) {
+ cl_fill_channel_info_5g(cl_hw, i, k, j, UNII_2_POW_Q2, UNII_2_POW_Q2);
+ k++;
+ }
+
+ for (j = 100; j <= 144; j += 4) {
+ /* 160MHz is supported only in channels 36-64 and 100-128 */
+ if (i == CHNL_BW_160 && j == 132)
+ return;
+
+ cl_fill_channel_info_5g(cl_hw, i, k, j,
+ UNII_2_EXT_POW_Q2, UNII_2_EXT_POW_Q2);
+ k++;
+ }
+
+ for (j = 149; j <= 165; j += 4) {
+ /* Channel 165 is valid only in 20MHz */
+ if (i != CHNL_BW_20 && j == 165)
+ break;
+
+ cl_fill_channel_info_5g(cl_hw, i, k, j, UNII_3_POW_Q2, UNII_3_POW_Q2);
+ k++;
+ }
+ }
+}
+
+static void cl_set_default_channel_info_24g(struct cl_hw *cl_hw)
+{
+ u8 i, j;
+
+ for (i = 0; i <= CHNL_BW_40; i++)
+ for (j = 0; j < 13; j++)
+ cl_fill_channel_info(cl_hw, i, j, j + 1, BAND_24G_POW_Q2, BAND_24G_POW_Q2);
+}
+
+static void cl_set_default_channel_info(struct cl_hw *cl_hw)
+{
+ struct cl_channel_info *channel_info = &cl_hw->channel_info;
+
+ memset(channel_info->channels, 0, sizeof(channel_info->channels));
+
+ channel_info->standard = NL80211_DFS_FCC;
+
+ if (cl_band_is_6g(cl_hw))
+ cl_set_default_channel_info_6g(cl_hw);
+ else if (cl_band_is_5g(cl_hw))
+ cl_set_default_channel_info_5g(cl_hw);
+ else
+ cl_set_default_channel_info_24g(cl_hw);
+}
+
+/*
+ * cl_hardware_power_table_update: Apply an individual regulatory table entry
+ * Inputs: cl_hw - pointer to cl_hw
+ * bw_mhz - current bandwidth in MHz
+ * chan_start - match channels greater than or equal to chan_start
+ * chan_end - match channels less than chan_end
+ * pwr_q2 - ensure channel_info.channels[bw][ch_idx].max_power_q2 does not exceed this
+ * Output: updated channel_info.channels[bw][ch_idx].max_power_q2
+ * and channel_info.channels[bw][ch_idx].hardware_max_power_q2
+ * on all channels that match the specified range
+ */
+static void cl_hardware_power_table_update(struct cl_hw *cl_hw, u8 bw_mhz,
+ u8 chan_start, u8 chan_end, u8 pwr_q2)
+{
+ struct cl_chan_info *chan_info = NULL;
+ u8 bw = 0;
+ u8 ch_idx = 0;
+ bool ch_found = false;
+ bool is_24g = cl_band_is_24g(cl_hw);
+
+ if (bw_mhz == 20 || bw_mhz == 40 || (!is_24g && (bw_mhz == 80 || bw_mhz == 160))) {
+ bw = MHZ_TO_BW(bw_mhz);
+ } else {
+ cl_dbg_err(cl_hw, "Invalid bw %u\n", bw_mhz);
+ return;
+ }
+
+ /* Iterate through all cl_channels[bw][ch_idx] - to find all matches */
+ for (ch_idx = 0; ch_idx < cl_channel_num(cl_hw); ch_idx++) {
+ chan_info = &cl_hw->channel_info.channels[bw][ch_idx];
+
+ if (chan_start <= chan_info->channel && chan_info->channel < chan_end) {
+ ch_found = true;
+
+ /*
+ * Max-Power =
+ * minimum between hardware_power_table and country code definition
+ */
+ chan_info->max_power_q2 = min(pwr_q2, chan_info->max_power_q2);
+ chan_info->hardware_max_power_q2 = pwr_q2;
+ }
+ }
+
+ if (!ch_found)
+ cl_dbg_info(cl_hw, "Skipping invalid channel range: %u - %u\n",
+ chan_start, chan_end);
+}
+
+/*
+ * cl_hardware_power_table_parse():
+ * Iterate through hardware power table entries and apply each one.
+ * Expected format:
+ * bw1(chan1=reg_pwr1;chan2=reg_pwr2;...)#bw2(chan3=reg_pwr3;chan4=reg_pwr4;...) ...
+ * Example:
+ * 20(36=22.0;40=22.75;149=21.75)#40(36=22.5;40=23.0;149=21.75)#80(36=21.75;40=21.5;149=22.25)
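+ * Each entry applies its power from its own channel (inclusive) up to the
+ * next entry's channel (exclusive); the last entry extends to the end of
+ * the band.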
+ */
+static void cl_hardware_power_table_parse(struct cl_hw *cl_hw)
+{
+ char *table_str = NULL;
+ char *table_str_p = NULL;
+ char *channel_str = NULL;
+ char *channel_str_p = NULL;
+ char *bw_set = NULL;
+ char *out_tok = NULL;
+ s8 in_reg_pwr[16] = {0};
+ u8 bw_mhz = 0;
+ u8 chan_start = 0;
+ u8 chan_end = 0;
+ u8 curr_pwr_q2 = 0;
+ u8 next_pwr_q2 = 0;
+
+ if (strlen(cl_hw->conf->ce_hardware_power_table) == 0)
+ return; /* Not configured */
+
+ table_str_p = kzalloc(CL_MAX_STR_BUFFER_SIZE / 2, GFP_KERNEL);
+ if (!table_str_p)
+ return;
+
+ channel_str_p = kzalloc(CL_MAX_STR_BUFFER_SIZE / 2, GFP_KERNEL);
+ if (!channel_str_p) {
+ kfree(table_str_p);
+ cl_dbg_err(cl_hw, "Failed to allocate channel_str\n");
+ return;
+ }
+
+ table_str = table_str_p;
+
+ strncpy(table_str,
+ cl_hw->conf->ce_hardware_power_table,
+ (CL_MAX_STR_BUFFER_SIZE / 2) - 1);
+
+ /* Iterate through all bandwidth sets included in table_str */
+ bw_set = strsep(&table_str, "#");
+ while (bw_set) {
+ channel_str = channel_str_p;
+ if (sscanf(bw_set, "%hhu(%s)", &bw_mhz, channel_str) != 2) {
+ bw_set = strsep(&table_str, "#");
+ continue;
+ }
+
+ /* Iterate through each channel in this bandwidth set */
+ chan_start = 0;
+ chan_end = 0;
+ curr_pwr_q2 = 0;
+ next_pwr_q2 = 0;
+ out_tok = strsep(&channel_str, ";");
+
+ while (out_tok) {
+ if (sscanf(out_tok, "%hhu=%s", &chan_end, in_reg_pwr) == 2) {
+ next_pwr_q2 = cl_convert_str_int_q2(in_reg_pwr);
+
+ /* Apply regulatory table rule. Skip initial case */
+ if (curr_pwr_q2 != 0 && chan_start != 0)
+ cl_hardware_power_table_update(cl_hw, bw_mhz, chan_start,
+ chan_end, curr_pwr_q2);
+
+ /* Prepare next iteration */
+ chan_start = chan_end;
+ curr_pwr_q2 = next_pwr_q2;
+ }
+ out_tok = strsep(&channel_str, ";");
+ }
+
+ /* Handle last channel case */
+ if (next_pwr_q2 != 0 && chan_start != 0) {
+ u8 chan_end;
+
+ if (cl_band_is_6g(cl_hw))
+ chan_end = 234;
+ else if (cl_band_is_5g(cl_hw))
+ chan_end = 166;
+ else
+ chan_end = 15;
+
+ cl_hardware_power_table_update(cl_hw, bw_mhz, chan_start,
+ chan_end, curr_pwr_q2);
+ }
+
+ bw_set = strsep(&table_str, "#");
+ }
+
+ kfree(table_str_p);
+ kfree(channel_str_p);
+}
+
+static void cl_chan_info_ieee80211_update_max_power(struct cl_hw *cl_hw)
+{
+ struct ieee80211_supported_band *sband = &cl_hw->sband;
+ struct ieee80211_channel *chan = NULL;
+ int i = 0, channel;
+
+ for (i = 0; i < sband->n_channels; i++) {
+ chan = &sband->channels[i];
+ channel = ieee80211_frequency_to_channel(chan->center_freq);
+ chan->max_power = cl_chan_info_get_max_power(cl_hw, channel);
+ }
+}
+
+void cl_chan_info_init(struct cl_hw *cl_hw)
+{
+ struct cl_channel_info *channel_info = &cl_hw->channel_info;
+
+ channel_info->use_channel_info = true;
+
+ if (channel_info->use_channel_info) {
+ if (!cl_parse_channel_info_txt(cl_hw) || !cl_is_parsing_success(cl_hw)) {
+ CL_DBG_WARNING(cl_hw, "Error parsing channel_info.txt. Using default!\n");
+ cl_set_default_channel_info(cl_hw);
+ }
+
+ cl_chan_info_ieee80211_update_max_power(cl_hw);
+ cl_chan_info_set_max_bw(cl_hw);
+ cl_chan_info_dbg(cl_hw);
+ } else {
+ cl_set_default_channel_info(cl_hw);
+ }
+
+ cl_hardware_power_table_parse(cl_hw);
+}
+
+void cl_chan_info_deinit(struct cl_hw *cl_hw)
+{
+ if (cl_hw->channel_info.rd &&
+ cl_hw->channel_info.use_channel_info)
+ kfree(cl_hw->channel_info.rd);
+}
+
+struct cl_chan_info *cl_chan_info_get(struct cl_hw *cl_hw, u8 channel, u8 bw)
+{
+ int i = 0;
+ struct cl_chan_info *chan_info;
+
+ for (i = 0; i < cl_channel_num(cl_hw); i++) {
+ chan_info = &cl_hw->channel_info.channels[bw][i];
+
+ if (chan_info->channel == channel)
+ return chan_info;
+ }
+
+ return NULL;
+}
+
+u8 cl_chan_info_get_max_bw(struct cl_hw *cl_hw, u8 channel)
+{
+ s8 bw = 0;
+
+ for (bw = CHNL_BW_160; bw > CHNL_BW_20; bw--)
+ if (cl_chan_info_get(cl_hw, channel, bw))
+ return (u8)bw;
+
+ return CHNL_BW_20;
+}
+
+s16 cl_chan_info_get_eirp_limit_q8(struct cl_hw *cl_hw, u8 bw)
+{
+ /* EIRP limit = min(country limit, hardware limit) */
+ struct cl_chan_info *chan_info = cl_chan_info_get(cl_hw, cl_hw->channel, bw);
+
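+ /* max_power_q2 is in 0.25 dB units; << 6 converts Q2 to Q8 (1/256 dB) */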
+ return chan_info ? (chan_info->max_power_q2 << 6) : CL_DEFAULT_CHANNEL_POWER_Q8;
+}
+
+u8 cl_chan_info_get_max_power(struct cl_hw *cl_hw, u8 channel)
+{
+ struct cl_chan_info *chan_info;
+ u8 bw = 0;
+ u8 max_power_q2 = 0;
+
+ for (bw = 0; bw < CHNL_BW_MAX; bw++) {
+ chan_info = cl_chan_info_get(cl_hw, channel, bw);
+
+ if (!chan_info)
+ continue;
+
+ if (chan_info->max_power_q2 > max_power_q2)
+ max_power_q2 = chan_info->max_power_q2;
+ }
+
+ return max_power_q2 >> 2;
+}
+
+static void cl_chan_update_channel_info(struct cl_hw *cl_hw, struct ieee80211_channel *channel)
+{
+ u8 bw;
+ u32 chan = ieee80211_frequency_to_channel(channel->center_freq);
+ struct cl_chan_info *chan_info;
+
+ for (bw = CHNL_BW_20; bw < CHNL_BW_MAX; bw++) {
+ chan_info = cl_chan_info_get(cl_hw, chan, bw);
+ if (!chan_info || chan_info->channel == 0)
+ continue;
+
+ chan_info->max_power_q2 = channel->max_power << 2;
+ chan_info->country_max_power_q2 = channel->max_reg_power << 2;
+ chan_info->flags = channel->flags;
+ chan_info->dfs_cac_ms = channel->dfs_cac_ms;
+ }
+}
+
+void cl_chan_update_channels_info(struct cl_hw *cl_hw,
+ const struct ieee80211_supported_band *cfg_band)
+{
+ int i = 0;
+
+ spin_lock_bh(&cl_hw->channel_info_lock);
+ for (i = 0; i < cfg_band->n_channels; ++i)
+ cl_chan_update_channel_info(cl_hw, &cfg_band->channels[i]);
+ spin_unlock_bh(&cl_hw->channel_info_lock);
+}
+
+#define CENTER_FREQ_24G_BW_80MHZ 2442
+#define CENTER_FREQ_24G_BW_160MHZ 2482
+
+static int cl_chandef_calc_6g(struct cl_hw *cl_hw, u16 freq, u32 bw, u32 offset,
+ u32 *primary, u32 *center)
+{
+ u32 bw_mhz = BW_TO_MHZ(bw);
+ u32 min_freq = 0;
+
+ if (freq == FREQ6G(2)) {
+ min_freq = FREQ6G(2);
+ } else if (freq >= FREQ6G(1) && freq <= FREQ6G(233)) {
+ min_freq = FREQ6G(1);
+ } else {
+ cl_dbg_err(cl_hw, "Invalid frequecy - %u\n", freq);
+ return -EINVAL;
+ }
+
+ *primary = freq - (freq - min_freq) % 20;
+ *center = *primary - (*primary - min_freq) % bw_mhz + offset;
+
+ return 0;
+}
+
+static int cl_chandef_calc_5g(struct cl_hw *cl_hw, u16 freq, u32 bw, u32 offset,
+ u32 *primary, u32 *center)
+{
+ u32 bw_mhz = BW_TO_MHZ(bw);
+ u32 min_freq = 0;
+
+ if ((freq >= FREQ5G(36) && freq <= FREQ5G(64)) ||
+ (freq >= FREQ5G(100) && freq <= FREQ5G(144))) {
+ min_freq = FREQ5G(36);
+ } else if (freq >= FREQ5G(149) && freq <= FREQ5G(165)) {
+ min_freq = FREQ5G(149);
+ } else {
+ cl_dbg_err(cl_hw, "Invalid frequecy - %u\n", freq);
+ return -EINVAL;
+ }
+
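+ /*
+ * E.g. channel 44 (5220 MHz) at 80 MHz (offset = 30):
+ * primary = 5220 - (40 % 20) = 5220,
+ * center = 5220 - (40 % 80) + 30 = 5210.
+ */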
+ *primary = freq - (freq - min_freq) % 20;
+ *center = *primary - (*primary - min_freq) % bw_mhz + offset;
+
+ return 0;
+}
+
+static int cl_chandef_calc_24g(struct cl_hw *cl_hw, u16 freq, u32 bw, u32 offset,
+ u32 *primary, u32 *center)
+{
+ u32 min_freq = 0;
+
+ if (freq < FREQ2G(1) || freq > FREQ2G(14)) {
+ cl_dbg_err(cl_hw, "Invalid frequecy - %u\n", freq);
+ return -EINVAL;
+ }
+
+ min_freq = freq < FREQ2G(14) ? FREQ2G(1) : FREQ2G(14);
+ *primary = freq - (freq - min_freq) % 5;
+
+ if (bw == CHNL_BW_20) {
+ *center = *primary;
+ } else if (bw == CHNL_BW_40) {
+ if (freq <= FREQ2G(4)) {
+ /* Above extension channel */
+ *center = *primary + offset;
+ } else if (freq >= FREQ2G(10)) {
+ /* Below extension channel */
+ *center = *primary - offset;
+ } else {
+ /* Channels 8-9 must be below if channel 13 is not supported */
+ if (freq >= FREQ2G(8) && !cl_chan_info_get(cl_hw, 13, CHNL_BW_20) &&
+ /* For Calibration, when using 2.4GHz channels on TCV0 to set SX0. */
+ !cl_chan_info_get(cl_hw->chip->cl_hw_tcv1, 13, CHNL_BW_20)) {
+ *center = *primary - offset;
+ } else {
+ /*
+ * Set below/above according to the current hostapd configuration.
+ * If undefined, prefer the above offset.
+ */
+ if (cl_hw->ht40_preffered_ch_type == NL80211_CHAN_HT40MINUS)
+ *center = *primary - offset;
+ else
+ *center = *primary + offset;
+ }
+ }
+ } else if (bw == CHNL_BW_80) {
+ *center = CENTER_FREQ_24G_BW_80MHZ;
+ } else {
+ /* 160MHz */
+ *center = CENTER_FREQ_24G_BW_160MHZ;
+ }
+
+ return 0;
+}
+
+int cl_chandef_calc(struct cl_hw *cl_hw, u32 channel, u32 bw,
+ enum nl80211_chan_width *width, u32 *primary, u32 *center)
+{
+ u16 freq = ieee80211_channel_to_frequency(channel, cl_hw->nl_band);
+ u32 offset = 0;
+ int ret = 0;
+
+ switch (bw) {
+ case CHNL_BW_20:
+ offset = 0;
+ if (channel == 14)
+ *width = NL80211_CHAN_WIDTH_20_NOHT;
+ else
+ *width = NL80211_CHAN_WIDTH_20;
+ break;
+ case CHNL_BW_40:
+ offset = 10;
+ *width = NL80211_CHAN_WIDTH_40;
+ break;
+ case CHNL_BW_80:
+ if (!cl_hw->chip->conf->ce_production_mode && cl_band_is_24g(cl_hw)) {
+ cl_dbg_err(cl_hw, "Invalid bandwidth - %u\n", bw);
+ return -EINVAL;
+ }
+
+ offset = 30;
+ *width = NL80211_CHAN_WIDTH_80;
+ break;
+ case CHNL_BW_160:
+ /* Verify 2.4G bandwidth validity only in operational mode */
+ if (!cl_hw->chip->conf->ce_production_mode && cl_band_is_24g(cl_hw)) {
+ cl_dbg_err(cl_hw, "Invalid bandwidth - %u\n", bw);
+ return -EINVAL;
+ }
+
+ offset = 70;
+ *width = NL80211_CHAN_WIDTH_160;
+ break;
+ default:
+ cl_dbg_err(cl_hw, "Invalid bandwidth - %u\n", bw);
+ return -EINVAL;
+ }
+
+ if (cl_band_is_6g(cl_hw))
+ ret = cl_chandef_calc_6g(cl_hw, freq, bw, offset, primary, center);
+ else if (cl_band_is_5g(cl_hw))
+ ret = cl_chandef_calc_5g(cl_hw, freq, bw, offset, primary, center);
+ else
+ ret = cl_chandef_calc_24g(cl_hw, freq, bw, offset, primary, center);
+
+ cl_dbg_trace(cl_hw, "primary=%u center=%u\n", *primary, *center);
+
+ return ret;
+}
+
+int cl_chandef_get_default(struct cl_hw *cl_hw, u32 *channel, u8 *bw,
+ enum nl80211_chan_width *width,
+ u32 *primary, u32 *center)
+{
+ *bw = cl_hw->conf->ci_chandef_bandwidth;
+ *channel = cl_hw->conf->ci_chandef_channel;
+
+ return cl_chandef_calc(cl_hw, *channel, *bw, width, primary, center);
+}
+
+int cl_init_channel_stats(struct cl_hw *cl_hw,
+ struct cl_channel_stats *ch_stats, u32 freq)
+{
+ memset(ch_stats, 0, sizeof(*ch_stats));
+
+ ch_stats->channel = ieee80211_frequency_to_channel(freq);
+ if (ch_stats->channel == INVALID_CHAN_IDX) {
+ cl_dbg_err(cl_hw, "Failed to get channel num for freq %u\n", freq);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+void cl_get_final_channel_stats(struct cl_hw *cl_hw, struct cl_channel_stats *ch_stats)
+{
+ u32 tx_mine, rx_mine, edca_cca;
+
+ /*
+ * Currently mac_hw_rx_mine_busy_get() doesn't work properly,
+ * so use mac_hw_edca_cca_busy_get() as a workaround.
+ * After a scan, mac_hw_rx_mine_busy_get() almost always returns zero
+ * or very small values.
+ * Using EDCA CCA is less accurate, since it also includes non-Wi-Fi noise.
+ */
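+ /*
+ * The *_total fields hold the counters snapshotted in
+ * cl_get_initial_channel_stats(); report the delta accumulated since then.
+ */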
+ tx_mine = mac_hw_tx_mine_busy_get(cl_hw);
+ edca_cca = mac_hw_edca_cca_busy_get(cl_hw);
+ rx_mine = edca_cca;
+
+ ch_stats->util_time_tx = max_t(s64, tx_mine - ch_stats->util_time_tx_total, 0);
+ ch_stats->util_time_rx = max_t(s64, rx_mine - ch_stats->util_time_rx_total, 0);
+ ch_stats->edca_cca_time = max_t(s64, edca_cca - ch_stats->edca_cca_time_total, 0);
+
+ ch_stats->util_time_busy = ch_stats->edca_cca_time + ch_stats->util_time_tx;
+
+ ch_stats->ch_noise = cl_calc_noise_floor(cl_hw, NULL);
+
+ ch_stats->scan_time_ms = jiffies_to_msecs(get_jiffies_64() - ch_stats->scan_start_jiffies);
+}
+
+void cl_get_initial_channel_stats(struct cl_hw *cl_hw, struct cl_channel_stats *ch_stats)
+{
+ ch_stats->util_time_tx = 0;
+ ch_stats->util_time_rx = 0;
+ ch_stats->edca_cca_time = 0;
+ ch_stats->util_time_busy = 0;
+ ch_stats->ch_noise = 0;
+ ch_stats->scan_time_ms = 0;
+
+ ch_stats->util_time_tx_total = mac_hw_tx_mine_busy_get(cl_hw);
+ ch_stats->edca_cca_time_total = mac_hw_edca_cca_busy_get(cl_hw);
+ ch_stats->util_time_rx_total = ch_stats->edca_cca_time_total;
+
+ ch_stats->scan_start_jiffies = get_jiffies_64();
+}
--
2.36.1


2022-05-24 16:02:23

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 01/96] celeno: add Kconfig

From: Viktor Barna <[email protected]>

(Part of the split. Please take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/Kconfig | 17 +++++++++++++++++
1 file changed, 17 insertions(+)
create mode 100755 drivers/net/wireless/celeno/Kconfig

diff --git a/drivers/net/wireless/celeno/Kconfig b/drivers/net/wireless/celeno/Kconfig
new file mode 100755
index 000000000000..a5e8a9af1ee1
--- /dev/null
+++ b/drivers/net/wireless/celeno/Kconfig
@@ -0,0 +1,17 @@
+# SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
+config WLAN_VENDOR_CELENO
+ bool "Celeno devices"
+ default y
+ help
+ If you have a wireless card belonging to this class, say Y.
+
+ Note that the answer to this question doesn't directly affect the
+ kernel: saying N will just cause the configurator to skip all the
+ questions about these cards. If you say Y, you will be asked for
+ your specific card in the following questions.
+
+if WLAN_VENDOR_CELENO
+
+source "drivers/net/wireless/celeno/cl8k/Kconfig"
+
+endif # WLAN_VENDOR_CELENO
--
2.36.1


2022-05-24 16:02:31

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 93/96] cl8k: add vns.h

From: Viktor Barna <[email protected]>

(Part of the split. Please take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/vns.h | 65 ++++++++++++++++++++++++++
1 file changed, 65 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/vns.h

diff --git a/drivers/net/wireless/celeno/cl8k/vns.h b/drivers/net/wireless/celeno/cl8k/vns.h
new file mode 100644
index 000000000000..904fa7a7fe1b
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/vns.h
@@ -0,0 +1,65 @@
+/* SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause */
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#ifndef CL_VNS_H
+#define CL_VNS_H
+
+#include "def.h"
+
+/**
+ * DOC: VNS (=Very Near STA)
+ *
+ * This feature is responsible for TX power adjustment according to STA
+ * location. Nearby stations should receive a lower-power signal to avoid
+ * saturation. Power is controlled for both transmitted data (%VNS_MODE_DATA)
+ * and auto-reply frames (%VNS_MODE_AUTO_REPLY), for both connected and
+ * not-yet-connected stations.
+ *
+ * To determine whether a station is in VNS range, we rely on the RSSI
+ * values received from the firmware for every RX frame.
+ */
+
+#define VNS_MODE_DATA 0x1
+#define VNS_MODE_AUTO_REPLY 0x2
+#define VNS_MODE_ALL (VNS_MODE_DATA | VNS_MODE_AUTO_REPLY)
+
+struct cl_vns_rssi_entry {
+ struct list_head list_all;
+ struct list_head list_addr;
+ unsigned long timestamp;
+ s8 strongset_rssi;
+ u8 addr[ETH_ALEN];
+};
+
+struct cl_vns_mgmt_db {
+ u32 num_entries;
+ struct list_head list_all;
+ struct list_head list_addr[STA_HASH_SIZE];
+};
+
+struct cl_vns_db {
+ bool enable;
+ bool dbg;
+ bool dbg_per_packet;
+ u16 interval_period;
+ spinlock_t lock;
+ struct cl_vns_mgmt_db mgmt_db;
+};
+
+struct cl_vns_sta_db {
+ bool is_very_near;
+ bool prev_decision;
+ s32 rssi_sum[MAX_ANTENNAS];
+ s32 rssi_samples;
+};
+
+int cl_vns_init(struct cl_hw *cl_hw);
+void cl_vns_close(struct cl_hw *cl_hw);
+void cl_vns_maintenance(struct cl_hw *cl_hw);
+void cl_vns_mgmt_handler(struct cl_hw *cl_hw, u8 *addr, s8 rssi[MAX_ANTENNAS]);
+bool cl_vns_is_very_near(struct cl_hw *cl_hw, struct cl_sta *cl_sta, struct sk_buff *skb);
+void cl_vns_sta_add(struct cl_hw *cl_hw, struct cl_sta *cl_sta);
+void cl_vns_handle_rssi(struct cl_hw *cl_hw, struct cl_sta *cl_sta, s8 rssi[MAX_ANTENNAS]);
+void cl_vns_recovery(struct cl_hw *cl_hw);
+
+#endif /* CL_VNS_H */
--
2.36.1


2022-05-24 16:03:24

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 91/96] cl8k: add vif.h

From: Viktor Barna <[email protected]>

(Part of the split. Please take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/vif.h | 81 ++++++++++++++++++++++++++
1 file changed, 81 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/vif.h

diff --git a/drivers/net/wireless/celeno/cl8k/vif.h b/drivers/net/wireless/celeno/cl8k/vif.h
new file mode 100644
index 000000000000..2563c6c3222d
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/vif.h
@@ -0,0 +1,81 @@
+/* SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause */
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#ifndef CL_VIF_H
+#define CL_VIF_H
+
+#include <linux/netdevice.h>
+
+#include "wrs.h"
+
+struct cl_connection_data {
+ u32 max_client; /* MAX Clients of the SSID */
+ u32 max_client_timestamp; /* MAX Clients Timestamp of the SSID */
+ u32 watermark_threshold; /* Number of clients threshold for watermark */
+ u32 watermark_reached_cnt; /* Number of times the watermark threshold was reached */
+};
+
+struct cl_traffic_counters {
+ u64 tx_packets;
+ u64 tx_bytes;
+ u64 rx_packets;
+ u64 rx_bytes;
+ u32 tx_errors;
+ u32 rx_errors;
+ u32 tx_dropped;
+ u32 rx_dropped;
+};
+
+/*
+ * Structure used to save information related to the managed interfaces.
+ * It is used as the 'drv_priv' field of struct ieee80211_vif.
+ * This is also linked within the cl_hw vifs list.
+ */
+struct cl_vif {
+ struct list_head list;
+ struct cl_hw *cl_hw;
+ struct ieee80211_vif *vif;
+ struct net_device *dev;
+ struct list_head key_list_head;
+ struct cl_key_conf *key;
+ int key_idx_default;
+ u32 unicast_tx;
+ u32 unicast_rx;
+ u32 multicast_tx;
+ u32 multicast_rx;
+ u32 broadcast_tx;
+ u32 broadcast_rx;
+ u16 sequence_number;
+ u8 num_sta; /* Number of stations connected per SSID */
+ struct cl_connection_data *conn_data;
+ u8 vif_index;
+ bool tx_en;
+ /* Holds info for channel utilization stats */
+ u32 chan_util_last_tx_bytes;
+ u32 chan_util;
+ struct mcast_table *mcast_table;
+ struct cl_wrs_rate_params fixed_params;
+ struct cl_traffic_counters trfc_cntrs[AC_MAX];
+ bool wmm_enabled;
+ u16 mesh_basic_rates;
+};
+
+struct cl_vif_db {
+ struct list_head head;
+ rwlock_t lock;
+ u8 num_iface_bcn;
+};
+
+void cl_vif_init(struct cl_hw *cl_hw);
+void cl_vif_add(struct cl_hw *cl_hw, struct cl_vif *cl_vif);
+void cl_vif_remove(struct cl_hw *cl_hw, struct cl_vif *cl_vif);
+struct cl_vif *cl_vif_get_next(struct cl_hw *cl_hw, struct cl_vif *cl_vif);
+struct cl_vif *cl_vif_get_by_dev(struct cl_hw *cl_hw, struct net_device *dev);
+struct cl_vif *cl_vif_get_by_mac(struct cl_hw *cl_hw, u8 *mac_addr);
+struct cl_vif *cl_vif_get_first(struct cl_hw *cl_hw);
+struct cl_vif *cl_vif_get_first_ap(struct cl_hw *cl_hw);
+struct net_device *cl_vif_get_first_net_device(struct cl_hw *cl_hw);
+struct net_device *cl_vif_get_dev_by_index(struct cl_hw *cl_hw, u8 index);
+void cl_vif_ap_tx_enable(struct cl_hw *cl_hw, bool enable);
+
+#endif /* CL_VIF_H */
--
2.36.1


2022-05-24 16:04:06

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 38/96] cl8k: add mac80211.c

From: Viktor Barna <[email protected]>

(Part of the split. Please take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/mac80211.c | 2392 +++++++++++++++++++
1 file changed, 2392 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/mac80211.c

diff --git a/drivers/net/wireless/celeno/cl8k/mac80211.c b/drivers/net/wireless/celeno/cl8k/mac80211.c
new file mode 100644
index 000000000000..13989327ccdb
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/mac80211.c
@@ -0,0 +1,2392 @@
+// SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#include <linux/log2.h>
+#include <net/mac80211.h>
+
+#include "debug.h"
+#include "utils.h"
+#include "sta.h"
+#include "dfs.h"
+#include "regdom.h"
+#include "hw.h"
+#include "mac80211.h"
+#include "ampdu.h"
+#include "tx.h"
+#include "radio.h"
+#include "recovery.h"
+#include "rates.h"
+#include "temperature.h"
+#include "vns.h"
+#include "key.h"
+#include "version.h"
+#include "power.h"
+#include "stats.h"
+#include "scan.h"
+#include "mac_addr.h"
+#include "chip.h"
+
+#define RATE_1_MBPS 10
+#define RATE_2_MBPS 20
+#define RATE_5_5_MBPS 55
+#define RATE_11_MBPS 110
+#define RATE_6_MBPS 60
+#define RATE_9_MBPS 90
+#define RATE_12_MBPS 120
+#define RATE_18_MBPS 180
+#define RATE_24_MBPS 240
+#define RATE_36_MBPS 360
+#define RATE_48_MBPS 480
+#define RATE_54_MBPS 540
+
+#define CL_HT_CAPABILITIES \
+{ \
+ .ht_supported = true, \
+ .cap = IEEE80211_HT_CAP_DSSSCCK40 | IEEE80211_HT_CAP_MAX_AMSDU, \
+ .ampdu_factor = IEEE80211_HT_MAX_AMPDU_64K, \
+ .ampdu_density = IEEE80211_HT_MPDU_DENSITY_1, \
+ .mcs = { \
+ .rx_mask = { 0xff, 0, 0, 0, 0, 0, 0, 0, 0, 0 }, \
+ .tx_params = IEEE80211_HT_MCS_TX_DEFINED, \
+ }, \
+}
+
+#define CL_VHT_CAPABILITIES \
+{ \
+ .vht_supported = false, \
+ .cap = 0, \
+ .vht_mcs = { \
+ .rx_mcs_map = cpu_to_le16( \
+ IEEE80211_VHT_MCS_SUPPORT_0_7 << 0 | \
+ IEEE80211_VHT_MCS_NOT_SUPPORTED << 2 | \
+ IEEE80211_VHT_MCS_NOT_SUPPORTED << 4 | \
+ IEEE80211_VHT_MCS_NOT_SUPPORTED << 6 | \
+ IEEE80211_VHT_MCS_NOT_SUPPORTED << 8 | \
+ IEEE80211_VHT_MCS_NOT_SUPPORTED << 10 | \
+ IEEE80211_VHT_MCS_NOT_SUPPORTED << 12 | \
+ IEEE80211_VHT_MCS_NOT_SUPPORTED << 14), \
+ .tx_mcs_map = cpu_to_le16( \
+ IEEE80211_VHT_MCS_SUPPORT_0_7 << 0 | \
+ IEEE80211_VHT_MCS_NOT_SUPPORTED << 2 | \
+ IEEE80211_VHT_MCS_NOT_SUPPORTED << 4 | \
+ IEEE80211_VHT_MCS_NOT_SUPPORTED << 6 | \
+ IEEE80211_VHT_MCS_NOT_SUPPORTED << 8 | \
+ IEEE80211_VHT_MCS_NOT_SUPPORTED << 10 | \
+ IEEE80211_VHT_MCS_NOT_SUPPORTED << 12 | \
+ IEEE80211_VHT_MCS_NOT_SUPPORTED << 14), \
+ } \
+}
+
+#define CL_HE_CAP_ELEM_STATION \
+{ \
+ .mac_cap_info[0] = IEEE80211_HE_MAC_CAP0_HTC_HE, \
+ .mac_cap_info[1] = 0, \
+ .mac_cap_info[2] = 0, \
+ .mac_cap_info[3] = IEEE80211_HE_MAC_CAP3_MAX_AMPDU_LEN_EXP_EXT_2, \
+ .mac_cap_info[4] = IEEE80211_HE_MAC_CAP4_BQR, \
+ .mac_cap_info[5] = IEEE80211_HE_MAC_CAP5_HT_VHT_TRIG_FRAME_RX, \
+ .phy_cap_info[0] = IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_RU_MAPPING_IN_2G, \
+ .phy_cap_info[1] = IEEE80211_HE_PHY_CAP1_DEVICE_CLASS_A, \
+ .phy_cap_info[2] = 0, \
+ .phy_cap_info[3] = 0, \
+ .phy_cap_info[4] = 0, \
+ .phy_cap_info[5] = 0, \
+ .phy_cap_info[6] = 0, \
+ .phy_cap_info[7] = 0, \
+ .phy_cap_info[8] = IEEE80211_HE_PHY_CAP8_20MHZ_IN_40MHZ_HE_PPDU_IN_2G, \
+ .phy_cap_info[9] = IEEE80211_HE_PHY_CAP9_NOMINAL_PKT_PADDING_16US, \
+ .phy_cap_info[10] = 0, \
+}
+
+#define CL_HE_CAP_ELEM_AP \
+{ \
+ .mac_cap_info[0] = IEEE80211_HE_MAC_CAP0_HTC_HE, \
+ .mac_cap_info[1] = 0, \
+ .mac_cap_info[2] = 0, \
+ .mac_cap_info[3] = IEEE80211_HE_MAC_CAP3_MAX_AMPDU_LEN_EXP_EXT_2, \
+ .mac_cap_info[4] = IEEE80211_HE_MAC_CAP4_BQR, \
+ .mac_cap_info[5] = 0, \
+ .phy_cap_info[0] = 0, \
+ .phy_cap_info[1] = IEEE80211_HE_PHY_CAP1_DEVICE_CLASS_A, \
+ .phy_cap_info[2] = 0, \
+ .phy_cap_info[3] = 0, \
+ .phy_cap_info[4] = 0, \
+ .phy_cap_info[5] = 0, \
+ .phy_cap_info[6] = 0, \
+ .phy_cap_info[7] = 0, \
+ .phy_cap_info[8] = 0, \
+ .phy_cap_info[9] = IEEE80211_HE_PHY_CAP9_NOMINAL_PKT_PADDING_16US, \
+ .phy_cap_info[10] = 0, \
+}
+
+#define CL_HE_CAP_ELEM_MESH_POINT \
+{ \
+ .mac_cap_info[0] = IEEE80211_HE_MAC_CAP0_HTC_HE, \
+ .mac_cap_info[1] = 0, \
+ .mac_cap_info[2] = 0, \
+ .mac_cap_info[3] = IEEE80211_HE_MAC_CAP3_MAX_AMPDU_LEN_EXP_EXT_2, \
+ .mac_cap_info[4] = IEEE80211_HE_MAC_CAP4_BQR, \
+ .mac_cap_info[5] = 0, \
+ .phy_cap_info[0] = 0, \
+ .phy_cap_info[1] = IEEE80211_HE_PHY_CAP1_DEVICE_CLASS_A, \
+ .phy_cap_info[2] = 0, \
+ .phy_cap_info[3] = 0, \
+ .phy_cap_info[4] = 0, \
+ .phy_cap_info[5] = 0, \
+ .phy_cap_info[6] = 0, \
+ .phy_cap_info[7] = 0, \
+ .phy_cap_info[8] = 0, \
+ .phy_cap_info[9] = IEEE80211_HE_PHY_CAP9_NOMINAL_PKT_PADDING_16US, \
+ .phy_cap_info[10] = 0, \
+}
+
+#define CL_HE_MCS_NSS_SUPP \
+{ \
+ .rx_mcs_80 = cpu_to_le16(0xff00), \
+ .tx_mcs_80 = cpu_to_le16(0xff00), \
+ .rx_mcs_160 = cpu_to_le16(0xff00), \
+ .tx_mcs_160 = cpu_to_le16(0xff00), \
+ .rx_mcs_80p80 = cpu_to_le16(0xffff), \
+ .tx_mcs_80p80 = cpu_to_le16(0xffff), \
+}
+
+#define RATE(_bitrate, _hw_rate, _flags) { \
+ .bitrate = (_bitrate), \
+ .flags = (_flags), \
+ .hw_value = (_hw_rate), \
+}
+
+#define CHAN(_freq, _idx) { \
+ .center_freq = (_freq), \
+ .hw_value = (_idx), \
+ .max_power = 18, \
+}
+
+static struct ieee80211_sband_iftype_data cl_he_data[] = {
+ {
+ .types_mask = BIT(NL80211_IFTYPE_STATION),
+ .he_cap = {
+ .has_he = true,
+ .he_cap_elem = CL_HE_CAP_ELEM_STATION,
+ .he_mcs_nss_supp = CL_HE_MCS_NSS_SUPP,
+ },
+ },
+ {
+ .types_mask = BIT(NL80211_IFTYPE_AP),
+ .he_cap = {
+ .has_he = true,
+ .he_cap_elem = CL_HE_CAP_ELEM_AP,
+ .he_mcs_nss_supp = CL_HE_MCS_NSS_SUPP,
+ },
+ },
+ {
+ .types_mask = BIT(NL80211_IFTYPE_MESH_POINT),
+ .he_cap = {
+ .has_he = true,
+ .he_cap_elem = CL_HE_CAP_ELEM_MESH_POINT,
+ .he_mcs_nss_supp = CL_HE_MCS_NSS_SUPP,
+ },
+ },
+};
+
+static struct ieee80211_rate cl_ratetable[] = {
+ RATE(10, 0x00, 0),
+ RATE(20, 0x01, IEEE80211_RATE_SHORT_PREAMBLE),
+ RATE(55, 0x02, IEEE80211_RATE_SHORT_PREAMBLE),
+ RATE(110, 0x03, IEEE80211_RATE_SHORT_PREAMBLE),
+ RATE(60, 0x04, 0),
+ RATE(90, 0x05, 0),
+ RATE(120, 0x06, 0),
+ RATE(180, 0x07, 0),
+ RATE(240, 0x08, 0),
+ RATE(360, 0x09, 0),
+ RATE(480, 0x0A, 0),
+ RATE(540, 0x0B, 0),
+};
+
+/* The channels indexes here are not used anymore */
+static struct ieee80211_channel cl_2ghz_channels[] = {
+ CHAN(2412, 0),
+ CHAN(2417, 1),
+ CHAN(2422, 2),
+ CHAN(2427, 3),
+ CHAN(2432, 4),
+ CHAN(2437, 5),
+ CHAN(2442, 6),
+ CHAN(2447, 7),
+ CHAN(2452, 8),
+ CHAN(2457, 9),
+ CHAN(2462, 10),
+ CHAN(2467, 11),
+ CHAN(2472, 12),
+ CHAN(2484, 13),
+};
+
+static struct ieee80211_channel cl_5ghz_channels[] = {
+ CHAN(5180, 0), /* 36 - 20MHz */
+ CHAN(5200, 1), /* 40 - 20MHz */
+ CHAN(5220, 2), /* 44 - 20MHz */
+ CHAN(5240, 3), /* 48 - 20MHz */
+ CHAN(5260, 4), /* 52 - 20MHz */
+ CHAN(5280, 5), /* 56 - 20MHz */
+ CHAN(5300, 6), /* 60 - 20MHz */
+ CHAN(5320, 7), /* 64 - 20MHz */
+ CHAN(5500, 8), /* 100 - 20MHz */
+ CHAN(5520, 9), /* 104 - 20MHz */
+ CHAN(5540, 10), /* 108 - 20MHz */
+ CHAN(5560, 11), /* 112 - 20MHz */
+ CHAN(5580, 12), /* 116 - 20MHz */
+ CHAN(5600, 13), /* 120 - 20MHz */
+ CHAN(5620, 14), /* 124 - 20MHz */
+ CHAN(5640, 15), /* 128 - 20MHz */
+ CHAN(5660, 16), /* 132 - 20MHz */
+ CHAN(5680, 17), /* 136 - 20MHz */
+ CHAN(5700, 18), /* 140 - 20MHz */
+ CHAN(5720, 19), /* 144 - 20MHz */
+ CHAN(5745, 20), /* 149 - 20MHz */
+ CHAN(5765, 21), /* 153 - 20MHz */
+ CHAN(5785, 22), /* 157 - 20MHz */
+ CHAN(5805, 23), /* 161 - 20MHz */
+ CHAN(5825, 24), /* 165 - 20MHz */
+};
+
+static struct ieee80211_channel cl_6ghz_channels[] = {
+ CHAN(5955, 1), /* 1 - 20MHz */
+ CHAN(5935, 2), /* 2 - 20MHz */
+ CHAN(5975, 5), /* 5 - 20MHz */
+ CHAN(5995, 9), /* 9 - 20MHz */
+ CHAN(6015, 13), /* 13 - 20MHz */
+ CHAN(6035, 17), /* 17 - 20MHz */
+ CHAN(6055, 21), /* 21 - 20MHz */
+ CHAN(6075, 25), /* 25 - 20MHz */
+ CHAN(6095, 29), /* 29 - 20MHz */
+ CHAN(6115, 33), /* 33 - 20MHz */
+ CHAN(6135, 37), /* 37 - 20MHz */
+ CHAN(6155, 41), /* 41 - 20MHz */
+ CHAN(6175, 45), /* 45 - 20MHz */
+ CHAN(6195, 49), /* 49 - 20MHz */
+ CHAN(6215, 53), /* 53 - 20MHz */
+ CHAN(6235, 57), /* 57 - 20MHz */
+ CHAN(6255, 61), /* 61 - 20MHz */
+ CHAN(6275, 65), /* 65 - 20MHz */
+ CHAN(6295, 69), /* 69 - 20MHz */
+ CHAN(6315, 73), /* 73 - 20MHz */
+ CHAN(6335, 77), /* 77 - 20MHz */
+ CHAN(6355, 81), /* 81 - 20MHz */
+ CHAN(6375, 85), /* 85 - 20MHz */
+ CHAN(6395, 89), /* 89 - 20MHz */
+ CHAN(6415, 93), /* 93 - 20MHz */
+ CHAN(6435, 97), /* 97 - 20MHz */
+ CHAN(6455, 101), /* 101 - 20MHz */
+ CHAN(6475, 105), /* 105 - 20MHz */
+ CHAN(6495, 109), /* 109 - 20MHz */
+ CHAN(6515, 113), /* 113 - 20MHz */
+ CHAN(6535, 117), /* 117 - 20MHz */
+ CHAN(6555, 121), /* 121 - 20MHz */
+ CHAN(6575, 125), /* 125 - 20MHz */
+ CHAN(6595, 129), /* 129 - 20MHz */
+ CHAN(6615, 133), /* 133 - 20MHz */
+ CHAN(6635, 137), /* 137 - 20MHz */
+ CHAN(6655, 141), /* 141 - 20MHz */
+ CHAN(6675, 145), /* 145 - 20MHz */
+ CHAN(6695, 149), /* 149 - 20MHz */
+ CHAN(6715, 153), /* 153 - 20MHz */
+ CHAN(6735, 157), /* 157 - 20MHz */
+ CHAN(6755, 161), /* 161 - 20MHz */
+ CHAN(6775, 165), /* 165 - 20MHz */
+ CHAN(6795, 169), /* 169 - 20MHz */
+ CHAN(6815, 173), /* 173 - 20MHz */
+ CHAN(6835, 177), /* 177 - 20MHz */
+ CHAN(6855, 181), /* 181 - 20MHz */
+ CHAN(6875, 185), /* 185 - 20MHz */
+ CHAN(6895, 189), /* 189 - 20MHz */
+ CHAN(6915, 193), /* 193 - 20MHz */
+ CHAN(6935, 197), /* 197 - 20MHz */
+ CHAN(6955, 201), /* 201 - 20MHz */
+ CHAN(6975, 205), /* 205 - 20MHz */
+ CHAN(6995, 209), /* 209 - 20MHz */
+ CHAN(7015, 213), /* 213 - 20MHz */
+ CHAN(7035, 217), /* 217 - 20MHz */
+ CHAN(7055, 221), /* 221 - 20MHz */
+ CHAN(7075, 225), /* 225 - 20MHz */
+ CHAN(7095, 229), /* 229 - 20MHz */
+ CHAN(7115, 233), /* 233 - 20MHz */
+};
+
+static struct ieee80211_supported_band cl_band_2ghz = {
+ .channels = cl_2ghz_channels,
+ .n_channels = ARRAY_SIZE(cl_2ghz_channels),
+ .bitrates = cl_ratetable,
+ .n_bitrates = ARRAY_SIZE(cl_ratetable),
+ .ht_cap = CL_HT_CAPABILITIES,
+ .vht_cap = CL_VHT_CAPABILITIES,
+};
+
+static struct ieee80211_supported_band cl_band_5ghz = {
+ .channels = cl_5ghz_channels,
+ .n_channels = ARRAY_SIZE(cl_5ghz_channels),
+ .bitrates = &cl_ratetable[4],
+ .n_bitrates = ARRAY_SIZE(cl_ratetable) - 4,
+ .ht_cap = CL_HT_CAPABILITIES,
+ .vht_cap = CL_VHT_CAPABILITIES,
+};
+
+static struct ieee80211_supported_band cl_band_6ghz = {
+ .channels = cl_6ghz_channels,
+ .n_channels = ARRAY_SIZE(cl_6ghz_channels),
+ .bitrates = &cl_ratetable[4],
+ .n_bitrates = ARRAY_SIZE(cl_ratetable) - 4,
+};
+
+static const struct ieee80211_iface_limit cl_limits[] = {
+ {
+ .max = ARRAY_SIZE(((struct cl_hw *)0)->addresses),
+ .types = BIT(NL80211_IFTYPE_AP) |
+ BIT(NL80211_IFTYPE_STATION) |
+ BIT(NL80211_IFTYPE_MESH_POINT),
+ },
+};
+
+#define WLAN_EXT_CAPA1_2040_BSS_COEX_MGMT_ENABLED BIT(0)
+
+static u8 cl_if_types_ext_capa_ap_24g[] = {
+ [0] = WLAN_EXT_CAPA1_EXT_CHANNEL_SWITCHING,
+ [7] = WLAN_EXT_CAPA8_OPMODE_NOTIF,
+};
+
+static const struct wiphy_iftype_ext_capab cl_iftypes_ext_capa_24g[] = {
+ {
+ .iftype = NL80211_IFTYPE_AP,
+ .extended_capabilities = cl_if_types_ext_capa_ap_24g,
+ .extended_capabilities_mask = cl_if_types_ext_capa_ap_24g,
+ .extended_capabilities_len = sizeof(cl_if_types_ext_capa_ap_24g),
+ },
+};
+
+static u8 cl_if_types_ext_capa_ap_5g[] = {
+ [0] = WLAN_EXT_CAPA1_EXT_CHANNEL_SWITCHING,
+ [7] = WLAN_EXT_CAPA8_OPMODE_NOTIF,
+};
+
+static const struct wiphy_iftype_ext_capab cl_iftypes_ext_capa_5g[] = {
+ {
+ .iftype = NL80211_IFTYPE_AP,
+ .extended_capabilities = cl_if_types_ext_capa_ap_5g,
+ .extended_capabilities_mask = cl_if_types_ext_capa_ap_5g,
+ .extended_capabilities_len = sizeof(cl_if_types_ext_capa_ap_5g),
+ },
+};
+
+static u8 cl_if_types_ext_capa_ap_6g[] = {
+ [0] = WLAN_EXT_CAPA1_EXT_CHANNEL_SWITCHING,
+ [7] = WLAN_EXT_CAPA8_OPMODE_NOTIF,
+};
+
+static const struct wiphy_iftype_ext_capab cl_iftypes_ext_capa_6g[] = {
+ {
+ .iftype = NL80211_IFTYPE_AP,
+ .extended_capabilities = cl_if_types_ext_capa_ap_6g,
+ .extended_capabilities_mask = cl_if_types_ext_capa_ap_6g,
+ .extended_capabilities_len = sizeof(cl_if_types_ext_capa_ap_6g),
+ },
+};
+
+static struct ieee80211_iface_combination cl_combinations[] = {
+ {
+ .limits = cl_limits,
+ .n_limits = ARRAY_SIZE(cl_limits),
+ .num_different_channels = 1,
+ .max_interfaces = ARRAY_SIZE(((struct cl_hw *)0)->addresses),
+ .beacon_int_min_gcd = 100,
+ .radar_detect_widths = BIT(NL80211_CHAN_WIDTH_20) |
+ BIT(NL80211_CHAN_WIDTH_40) |
+ BIT(NL80211_CHAN_WIDTH_80) |
+ BIT(NL80211_CHAN_WIDTH_160),
+ }
+};
+
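+/* Map mac80211 AC indices (NL80211_TXQ_Q_*) to hardware queues and EDCA ACs. */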
+static const int cl_ac2hwq[AC_MAX] = {
+ [NL80211_TXQ_Q_VO] = CL_HWQ_VO,
+ [NL80211_TXQ_Q_VI] = CL_HWQ_VI,
+ [NL80211_TXQ_Q_BE] = CL_HWQ_BE,
+ [NL80211_TXQ_Q_BK] = CL_HWQ_BK
+};
+
+static const int cl_ac2edca[AC_MAX] = {
+ [NL80211_TXQ_Q_VO] = EDCA_AC_VO,
+ [NL80211_TXQ_Q_VI] = EDCA_AC_VI,
+ [NL80211_TXQ_Q_BE] = EDCA_AC_BE,
+ [NL80211_TXQ_Q_BK] = EDCA_AC_BK
+};
+
+static u8 cl_he_mcs_supp_tx(struct cl_hw *cl_hw, u8 nss)
+{
+ u8 mcs = cl_hw->conf->ce_he_mcs_nss_supp_tx[nss];
+
+ switch (mcs) {
+ case WRS_MCS_7:
+ return IEEE80211_HE_MCS_SUPPORT_0_7;
+ case WRS_MCS_9:
+ return IEEE80211_HE_MCS_SUPPORT_0_9;
+ case WRS_MCS_11:
+ return IEEE80211_HE_MCS_SUPPORT_0_11;
+ }
+
+ cl_dbg_err(cl_hw, "Invalid mcs %u for nss %u. Must be 7, 9 or 11!\n", mcs, nss);
+ return IEEE80211_HE_MCS_NOT_SUPPORTED;
+}
+
+static u8 cl_he_mcs_supp_rx(struct cl_hw *cl_hw, u8 nss)
+{
+ u8 mcs = cl_hw->conf->ce_he_mcs_nss_supp_rx[nss];
+
+ switch (mcs) {
+ case WRS_MCS_7:
+ return IEEE80211_HE_MCS_SUPPORT_0_7;
+ case WRS_MCS_9:
+ return IEEE80211_HE_MCS_SUPPORT_0_9;
+ case WRS_MCS_11:
+ return IEEE80211_HE_MCS_SUPPORT_0_11;
+ }
+
+ cl_dbg_err(cl_hw, "Invalid mcs %u for nss %u. Must be 7, 9 or 11!\n", mcs, nss);
+ return IEEE80211_HE_MCS_NOT_SUPPORTED;
+}
+
+static u8 cl_vht_mcs_supp_tx(struct cl_hw *cl_hw, u8 nss)
+{
+ u8 mcs = cl_hw->conf->ce_vht_mcs_nss_supp_tx[nss];
+
+ switch (mcs) {
+ case WRS_MCS_7:
+ return IEEE80211_VHT_MCS_SUPPORT_0_7;
+ case WRS_MCS_8:
+ return IEEE80211_VHT_MCS_SUPPORT_0_8;
+ case WRS_MCS_9:
+ return IEEE80211_VHT_MCS_SUPPORT_0_9;
+ }
+
+ cl_dbg_err(cl_hw, "Invalid mcs %u for nss %u. Must be 7-9!\n", mcs, nss);
+ return IEEE80211_VHT_MCS_NOT_SUPPORTED;
+}
+
+static u8 cl_vht_mcs_supp_rx(struct cl_hw *cl_hw, u8 nss)
+{
+ u8 mcs = cl_hw->conf->ce_vht_mcs_nss_supp_rx[nss];
+
+ switch (mcs) {
+ case WRS_MCS_7:
+ return IEEE80211_VHT_MCS_SUPPORT_0_7;
+ case WRS_MCS_8:
+ return IEEE80211_VHT_MCS_SUPPORT_0_8;
+ case WRS_MCS_9:
+ return IEEE80211_VHT_MCS_SUPPORT_0_9;
+ }
+
+ cl_dbg_err(cl_hw, "Invalid mcs %u for nss %u. Must be 7-9!\n", mcs, nss);
+ return IEEE80211_VHT_MCS_NOT_SUPPORTED;
+}
+
+static void cl_set_he_6ghz_capab(struct cl_hw *cl_hw)
+{
+ struct ieee80211_he_6ghz_capa *he_6ghz_cap0 = &cl_hw->iftype_data[0].he_6ghz_capa;
+ struct ieee80211_he_6ghz_capa *he_6ghz_cap1 = &cl_hw->iftype_data[1].he_6ghz_capa;
+ struct ieee80211_he_6ghz_capa *he_6ghz_cap2 = &cl_hw->iftype_data[2].he_6ghz_capa;
+
+ he_6ghz_cap0->capa = cpu_to_le16(IEEE80211_HT_MPDU_DENSITY_1);
+
+ he_6ghz_cap0->capa |=
+ cpu_to_le16(cl_hw->conf->ci_max_mpdu_len << HE_6GHZ_CAP_MAX_MPDU_LEN_OFFSET);
+ he_6ghz_cap0->capa |=
+ cpu_to_le16(IEEE80211_VHT_MAX_AMPDU_1024K << HE_6GHZ_CAP_MAX_AMPDU_LEN_EXP_OFFSET);
+
+ he_6ghz_cap0->capa |= cpu_to_le16(IEEE80211_HE_6GHZ_CAP_RX_ANTPAT_CONS |
+ IEEE80211_HE_6GHZ_CAP_TX_ANTPAT_CONS);
+
+ he_6ghz_cap1->capa = he_6ghz_cap0->capa;
+ he_6ghz_cap2->capa = he_6ghz_cap0->capa;
+}
+
+static void _cl_set_he_capab(struct cl_hw *cl_hw, u8 idx)
+{
+ struct ieee80211_sta_he_cap *he_cap = &cl_hw->iftype_data[idx].he_cap;
+ struct ieee80211_he_mcs_nss_supp *he_mcs_nss_supp = &he_cap->he_mcs_nss_supp;
+ struct ieee80211_he_cap_elem *he_cap_elem = &he_cap->he_cap_elem;
+ u8 rx_nss = cl_hw->conf->ce_rx_nss;
+ u8 tx_nss = cl_hw->conf->ce_tx_nss;
+ int i = 0;
+
+ if (BAND_IS_5G_6G(cl_hw)) {
+ he_cap_elem->phy_cap_info[0] |=
+ IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_160MHZ_IN_5G;
+
+ for (i = 0; i < rx_nss; i++)
+ he_mcs_nss_supp->rx_mcs_160 |=
+ cpu_to_le16(cl_he_mcs_supp_rx(cl_hw, i) << (i * 2));
+
+ for (i = 0; i < tx_nss; i++)
+ he_mcs_nss_supp->tx_mcs_160 |=
+ cpu_to_le16(cl_he_mcs_supp_tx(cl_hw, i) << (i * 2));
+
+ he_cap_elem->phy_cap_info[0] |=
+ IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_40MHZ_80MHZ_IN_5G;
+
+ for (i = 0; i < rx_nss; i++)
+ he_mcs_nss_supp->rx_mcs_80 |=
+ cpu_to_le16(cl_he_mcs_supp_rx(cl_hw, i) << (i * 2));
+
+ for (i = 0; i < tx_nss; i++)
+ he_mcs_nss_supp->tx_mcs_80 |=
+ cpu_to_le16(cl_he_mcs_supp_tx(cl_hw, i) << (i * 2));
+ } else {
+ he_cap_elem->phy_cap_info[0] |=
+ IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_40MHZ_IN_2G;
+
+ for (i = 0; i < rx_nss; i++)
+ he_mcs_nss_supp->rx_mcs_80 |=
+ cpu_to_le16(cl_he_mcs_supp_rx(cl_hw, i) << (i * 2));
+
+ for (i = 0; i < tx_nss; i++)
+ he_mcs_nss_supp->tx_mcs_80 |=
+ cpu_to_le16(cl_he_mcs_supp_tx(cl_hw, i) << (i * 2));
+ }
+
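+	/*
+	 * Each spatial stream occupies a 2-bit entry in the HE MCS maps;
+	 * mark streams above the configured RX/TX NSS as not supported.
+	 */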
+ for (i = rx_nss; i < 8; i++) {
+ he_mcs_nss_supp->rx_mcs_80 |=
+ cpu_to_le16(IEEE80211_HE_MCS_NOT_SUPPORTED << (i * 2));
+ he_mcs_nss_supp->rx_mcs_160 |=
+ cpu_to_le16(IEEE80211_HE_MCS_NOT_SUPPORTED << (i * 2));
+ }
+
+ for (i = tx_nss; i < 8; i++) {
+ he_mcs_nss_supp->tx_mcs_80 |=
+ cpu_to_le16(IEEE80211_HE_MCS_NOT_SUPPORTED << (i * 2));
+ he_mcs_nss_supp->tx_mcs_160 |=
+ cpu_to_le16(IEEE80211_HE_MCS_NOT_SUPPORTED << (i * 2));
+ }
+
+ if (cl_hw->conf->ci_he_rxldpc_en)
+ he_cap_elem->phy_cap_info[1] |=
+ IEEE80211_HE_PHY_CAP1_LDPC_CODING_IN_PAYLOAD;
+
+ if (cl_hw->conf->ci_rx_he_mu_ppdu)
+ he_cap_elem->phy_cap_info[3] |=
+ IEEE80211_HE_PHY_CAP3_RX_PARTIAL_BW_SU_IN_20MHZ_MU;
+
+ if (cl_hw->conf->ci_bf_en) {
+ he_cap_elem->phy_cap_info[3] |=
+ IEEE80211_HE_PHY_CAP3_SU_BEAMFORMER;
+ he_cap_elem->phy_cap_info[5] |=
+ IEEE80211_HE_PHY_CAP5_BEAMFORMEE_NUM_SND_DIM_UNDER_80MHZ_4;
+ }
+}
+
+static void cl_set_he_capab(struct cl_hw *cl_hw)
+{
+ struct ieee80211_sta_he_cap *he_cap0 = &cl_hw->iftype_data[0].he_cap;
+ struct ieee80211_he_cap_elem *he_cap_elem = &he_cap0->he_cap_elem;
+ struct cl_tcv_conf *conf = cl_hw->conf;
+ u8 tf_mac_pad_dur = conf->ci_tf_mac_pad_dur;
+
+ memcpy(&cl_hw->iftype_data, cl_he_data, sizeof(cl_hw->iftype_data));
+
+ if (tf_mac_pad_dur == 1)
+ he_cap_elem->mac_cap_info[1] |= IEEE80211_HE_MAC_CAP1_TF_MAC_PAD_DUR_8US;
+ else if (tf_mac_pad_dur == 2)
+ he_cap_elem->mac_cap_info[1] |= IEEE80211_HE_MAC_CAP1_TF_MAC_PAD_DUR_16US;
+
+ _cl_set_he_capab(cl_hw, 0);
+ _cl_set_he_capab(cl_hw, 1);
+ _cl_set_he_capab(cl_hw, 2);
+
+ if (cl_band_is_6g(cl_hw))
+ cl_set_he_6ghz_capab(cl_hw);
+
+ cl_hw->sband.n_iftype_data = ARRAY_SIZE(cl_he_data);
+ cl_hw->sband.iftype_data = cl_hw->iftype_data;
+}
+
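+/* Legacy rate values, in units of 100 kbps */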
+#define RATE_1_MBPS 10
+#define RATE_2_MBPS 20
+#define RATE_5_5_MBPS 55
+#define RATE_11_MBPS 110
+#define RATE_6_MBPS 60
+#define RATE_9_MBPS 90
+#define RATE_12_MBPS 120
+#define RATE_18_MBPS 180
+#define RATE_24_MBPS 240
+#define RATE_36_MBPS 360
+#define RATE_48_MBPS 480
+#define RATE_54_MBPS 540
+
+static u16 cl_cap_convert_rate_to_bitmap(u16 rate)
+{
+ switch (rate) {
+ case RATE_1_MBPS:
+ return BIT(0);
+ case RATE_2_MBPS:
+ return BIT(1);
+ case RATE_5_5_MBPS:
+ return BIT(2);
+ case RATE_11_MBPS:
+ return BIT(3);
+ case RATE_6_MBPS:
+ return BIT(4);
+ case RATE_9_MBPS:
+ return BIT(5);
+ case RATE_12_MBPS:
+ return BIT(6);
+ case RATE_18_MBPS:
+ return BIT(7);
+ case RATE_24_MBPS:
+ return BIT(8);
+ case RATE_36_MBPS:
+ return BIT(9);
+ case RATE_48_MBPS:
+ return BIT(10);
+ case RATE_54_MBPS:
+ return BIT(11);
+ default:
+ return 0;
+ }
+}
+
+u16 cl_cap_set_mesh_basic_rates(struct cl_hw *cl_hw)
+{
+ int i;
+ struct cl_tcv_conf *conf = cl_hw->conf;
+ u16 basic_rates = 0;
+
+ for (i = 0; i < MESH_BASIC_RATE_MAX; i++)
+ basic_rates |= cl_cap_convert_rate_to_bitmap(conf->ci_mesh_basic_rates[i]);
+
+ return basic_rates;
+}
+
+void cl_cap_dyn_params(struct cl_hw *cl_hw)
+{
+ struct ieee80211_hw *hw = cl_hw->hw;
+ struct wiphy *wiphy = hw->wiphy;
+ struct cl_tcv_conf *conf = cl_hw->conf;
+ u8 rx_nss = conf->ce_rx_nss;
+ u8 tx_nss = conf->ce_tx_nss;
+ u8 guard_interval = conf->ci_short_guard_interval;
+ u8 i;
+ u8 bw = cl_hw->conf->ci_cap_bandwidth;
+ struct ieee80211_supported_band *sband = &cl_hw->sband;
+ struct ieee80211_sta_ht_cap *sband_ht_cap = &sband->ht_cap;
+ struct ieee80211_sta_vht_cap *sband_vht_cap = &sband->vht_cap;
+
+ if (cl_band_is_6g(cl_hw)) {
+ memcpy(sband, &cl_band_6ghz, sizeof(struct ieee80211_supported_band));
+ } else if (cl_band_is_5g(cl_hw)) {
+ memcpy(sband, &cl_band_5ghz, sizeof(struct ieee80211_supported_band));
+ } else {
+ memcpy(sband, &cl_band_2ghz, sizeof(struct ieee80211_supported_band));
+
+ if (!conf->ci_vht_cap_24g)
+ memset(&sband->vht_cap, 0, sizeof(struct ieee80211_sta_vht_cap));
+ }
+
+ /* 6GHz doesn't support HT/VHT */
+ if (!cl_band_is_6g(cl_hw)) {
+ if (bw > CHNL_BW_20)
+ sband_ht_cap->cap |= IEEE80211_HT_CAP_SUP_WIDTH_20_40;
+
+ /* Guard_interval */
+ if (guard_interval) {
+ sband_ht_cap->cap |= IEEE80211_HT_CAP_SGI_20;
+
+ if (bw >= CHNL_BW_40)
+ sband_ht_cap->cap |= IEEE80211_HT_CAP_SGI_40;
+
+ if (bw >= CHNL_BW_80)
+ sband_vht_cap->cap |= IEEE80211_VHT_CAP_SHORT_GI_80;
+
+ if (bw == CHNL_BW_160)
+ sband_vht_cap->cap |= IEEE80211_VHT_CAP_SHORT_GI_160;
+ }
+ }
+
+ /* Amsdu */
+ cl_rx_amsdu_hw_en(hw, conf->ce_rxamsdu_en);
+ cl_hw->txamsdu_en = conf->ce_txamsdu_en;
+
+ /* Hw flags */
+ ieee80211_hw_set(hw, HOST_BROADCAST_PS_BUFFERING);
+ ieee80211_hw_set(hw, SIGNAL_DBM);
+ ieee80211_hw_set(hw, REPORTS_TX_ACK_STATUS);
+ ieee80211_hw_set(hw, QUEUE_CONTROL);
+ ieee80211_hw_set(hw, WANT_MONITOR_VIF);
+ ieee80211_hw_set(hw, SPECTRUM_MGMT);
+ ieee80211_hw_set(hw, SUPPORTS_HT_CCK_RATES);
+ ieee80211_hw_set(hw, HAS_RATE_CONTROL);
+ ieee80211_hw_set(hw, SUPPORT_FAST_XMIT);
+ ieee80211_hw_set(hw, NO_AUTO_VIF);
+ ieee80211_hw_set(hw, MFP_CAPABLE);
+ ieee80211_hw_set(hw, SUPPORTS_PER_STA_GTK);
+ ieee80211_hw_set(hw, SUPPORTS_TX_ENCAP_OFFLOAD);
+
+ wiphy->features |= NL80211_FEATURE_AP_MODE_CHAN_WIDTH_CHANGE;
+ wiphy->features |= NL80211_FEATURE_AP_SCAN;
+ wiphy->available_antennas_tx = ANT_MASK(cl_hw->max_antennas);
+ wiphy->available_antennas_rx = ANT_MASK(cl_hw->max_antennas);
+
+ wiphy_ext_feature_set(hw->wiphy, NL80211_EXT_FEATURE_SET_SCAN_DWELL);
+
+ if (conf->ci_fast_rx_en) {
+ ieee80211_hw_set(hw, SUPPORTS_REORDERING_BUFFER);
+ ieee80211_hw_set(hw, AP_LINK_PS);
+ }
+
+ if (cl_band_is_6g(cl_hw)) {
+ hw->wiphy->iftype_ext_capab = cl_iftypes_ext_capa_6g;
+ hw->wiphy->num_iftype_ext_capab = ARRAY_SIZE(cl_iftypes_ext_capa_6g);
+ } else if (cl_band_is_5g(cl_hw)) {
+ hw->wiphy->iftype_ext_capab = cl_iftypes_ext_capa_5g;
+ hw->wiphy->num_iftype_ext_capab = ARRAY_SIZE(cl_iftypes_ext_capa_5g);
+ } else if (cl_band_is_24g(cl_hw)) {
+ /* Turn on "20/40 Coex Mgmt Support" bit (24g only) */
+ if (conf->ce_acs_coex_en) {
+ u8 *ext_cap = (u8 *)cl_iftypes_ext_capa_24g[0].extended_capabilities;
+
+ ext_cap[0] |= WLAN_EXT_CAPA1_2040_BSS_COEX_MGMT_ENABLED;
+ }
+
+ hw->wiphy->iftype_ext_capab = cl_iftypes_ext_capa_24g;
+ hw->wiphy->num_iftype_ext_capab = ARRAY_SIZE(cl_iftypes_ext_capa_24g);
+ }
+
+	/*
+	 * To disable dynamic PS, we tell the stack that we support it in HW.
+	 * This forces mac80211 to rely on us to handle it.
+	 */
+ ieee80211_hw_set(hw, SUPPORTS_DYNAMIC_PS);
+
+ if (conf->ci_agg_tx)
+ ieee80211_hw_set(hw, AMPDU_AGGREGATION);
+
+ wiphy->interface_modes = BIT(NL80211_IFTYPE_STATION) |
+ BIT(NL80211_IFTYPE_AP) |
+ BIT(NL80211_IFTYPE_MESH_POINT);
+
+ wiphy->flags |= WIPHY_FLAG_HAS_REMAIN_ON_CHANNEL |
+ WIPHY_FLAG_HAS_CHANNEL_SWITCH |
+ WIPHY_FLAG_IBSS_RSN;
+
+ if (conf->ci_uapsd_en)
+ wiphy->flags |= WIPHY_FLAG_AP_UAPSD;
+
+ /* Modify MAX BSS num according to the desired config value */
+ for (i = 0; i < ARRAY_SIZE(cl_combinations); i++)
+ cl_combinations[i].max_interfaces = conf->ci_max_bss_num;
+ wiphy->iface_combinations = cl_combinations;
+ wiphy->n_iface_combinations = ARRAY_SIZE(cl_combinations);
+
+	/*
+	 * The hw_scan op may be asked to forge an active scan request. The
+	 * scan capabilities are therefore filled in (they are the same as
+	 * inside mac80211). However, they do not represent the real hw_scan
+	 * logic, since the driver falls back to sw_scan for active scan
+	 * requests.
+	 */
+ wiphy->max_scan_ssids = 4;
+ wiphy->max_scan_ie_len = IEEE80211_MAX_DATA_LEN;
+
+ hw->max_rates = IEEE80211_TX_MAX_RATES;
+ hw->max_report_rates = IEEE80211_TX_MAX_RATES;
+ hw->max_rate_tries = 1;
+
+ hw->max_tx_aggregation_subframes = conf->ce_max_agg_size_tx;
+ hw->max_rx_aggregation_subframes = conf->ce_max_agg_size_rx;
+
+ hw->vif_data_size = sizeof(struct cl_vif);
+ hw->sta_data_size = sizeof(struct cl_sta);
+
+ hw->extra_tx_headroom = 0;
+ hw->queues = IEEE80211_MAX_QUEUES;
+ hw->offchannel_tx_hw_queue = CL_HWQ_VO;
+
+ if (!cl_band_is_6g(cl_hw)) {
+ if (conf->ci_ht_rxldpc_en)
+ sband_ht_cap->cap |= IEEE80211_HT_CAP_LDPC_CODING;
+
+ sband_ht_cap->cap |= IEEE80211_HT_CAP_MAX_AMSDU;
+
+ sband_vht_cap->cap |= cl_hw->conf->ci_max_mpdu_len;
+ if (conf->ci_bf_en) {
+ sband_vht_cap->cap |=
+ IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE |
+ IEEE80211_VHT_CAP_SU_BEAMFORMEE_CAPABLE |
+ (3 << IEEE80211_VHT_CAP_SOUNDING_DIMENSIONS_SHIFT) |
+ (3 << IEEE80211_VHT_CAP_BEAMFORMEE_STS_SHIFT);
+ }
+ }
+
+ if (cl_band_is_5g(cl_hw) || (cl_band_is_24g(cl_hw) && conf->ci_vht_cap_24g)) {
+ if (bw == CHNL_BW_160)
+ sband_vht_cap->cap |= IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160MHZ;
+
+ sband_vht_cap->cap |= (conf->ci_max_ampdu_len_exp <<
+ IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_SHIFT);
+
+ if (conf->ci_vht_rxldpc_en)
+ sband_vht_cap->cap |= IEEE80211_VHT_CAP_RXLDPC;
+
+ sband_vht_cap->cap |= IEEE80211_VHT_CAP_RX_ANTENNA_PATTERN;
+ sband_vht_cap->cap |= IEEE80211_VHT_CAP_TX_ANTENNA_PATTERN;
+
+ sband_vht_cap->vht_mcs.rx_mcs_map = cpu_to_le16(0);
+ sband_vht_cap->vht_mcs.tx_mcs_map = cpu_to_le16(0);
+
+ for (i = 0; i < rx_nss; i++)
+ sband_vht_cap->vht_mcs.rx_mcs_map |=
+ cpu_to_le16(cl_vht_mcs_supp_rx(cl_hw, i) << (i * 2));
+
+ for (; i < 8; i++)
+ sband_vht_cap->vht_mcs.rx_mcs_map |=
+ cpu_to_le16(IEEE80211_VHT_MCS_NOT_SUPPORTED << (i * 2));
+
+ for (i = 0; i < tx_nss; i++)
+ sband_vht_cap->vht_mcs.tx_mcs_map |=
+ cpu_to_le16(cl_vht_mcs_supp_tx(cl_hw, i) << (i * 2));
+
+ for (; i < 8; i++)
+ sband_vht_cap->vht_mcs.tx_mcs_map |=
+ cpu_to_le16(IEEE80211_VHT_MCS_NOT_SUPPORTED << (i * 2));
+
+ sband_vht_cap->vht_supported = true;
+ }
+
+ /* 6GHz band supports HE only */
+ if (!cl_band_is_6g(cl_hw))
+ for (i = 0; i < rx_nss; i++)
+ sband_ht_cap->mcs.rx_mask[i] = U8_MAX;
+
+ cl_set_he_capab(cl_hw);
+
+ /* Get channels and power limitations information from ChannelInfo file */
+ cl_chan_info_init(cl_hw);
+
+ if (cl_band_is_6g(cl_hw)) {
+ wiphy->bands[NL80211_BAND_2GHZ] = NULL;
+ wiphy->bands[NL80211_BAND_5GHZ] = NULL;
+ wiphy->bands[NL80211_BAND_6GHZ] = sband;
+ } else if (cl_band_is_5g(cl_hw)) {
+ wiphy->bands[NL80211_BAND_2GHZ] = NULL;
+ wiphy->bands[NL80211_BAND_5GHZ] = sband;
+ wiphy->bands[NL80211_BAND_6GHZ] = NULL;
+ } else {
+ wiphy->bands[NL80211_BAND_2GHZ] = sband;
+ wiphy->bands[NL80211_BAND_5GHZ] = NULL;
+ wiphy->bands[NL80211_BAND_6GHZ] = NULL;
+ }
+}
+
+enum he_pkt_ext_constellations {
+ HE_PKT_EXT_BPSK = 0,
+ HE_PKT_EXT_QPSK,
+ HE_PKT_EXT_16QAM,
+ HE_PKT_EXT_64QAM,
+ HE_PKT_EXT_256QAM,
+ HE_PKT_EXT_1024QAM,
+ HE_PKT_EXT_RESERVED,
+ HE_PKT_EXT_NONE,
+};
+
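+/* Constellation used by each HE MCS index (MCS0 - MCS11) */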
+static u8 mcs_to_constellation[WRS_MCS_MAX_HE] = {
+ HE_PKT_EXT_BPSK,
+ HE_PKT_EXT_QPSK,
+ HE_PKT_EXT_QPSK,
+ HE_PKT_EXT_16QAM,
+ HE_PKT_EXT_16QAM,
+ HE_PKT_EXT_64QAM,
+ HE_PKT_EXT_64QAM,
+ HE_PKT_EXT_64QAM,
+ HE_PKT_EXT_256QAM,
+ HE_PKT_EXT_256QAM,
+ HE_PKT_EXT_1024QAM,
+ HE_PKT_EXT_1024QAM
+};
+
+#define QAM_THR_1 0
+#define QAM_THR_2 1
+#define QAM_THR_MAX 2
+
+static u8 cl_get_ppe_val(u8 *ppe, u8 ppe_pos_bit)
+{
+ u8 byte_num = ppe_pos_bit / 8;
+ u8 bit_num = ppe_pos_bit % 8;
+ u8 residue_bits;
+ u8 res;
+
+ if (bit_num <= 5)
+ return (ppe[byte_num] >> bit_num) &
+ (BIT(IEEE80211_PPE_THRES_INFO_PPET_SIZE) - 1);
+
+	/*
+	 * If bit_num > 5, we have to combine bits with the next byte.
+	 * Calculate how many bits we need to take from the current byte
+	 * (called "residue_bits" here), and add them to bits from the next
+	 * byte.
+	 */
+ residue_bits = 8 - bit_num;
+
+ res = (ppe[byte_num + 1] &
+ (BIT(IEEE80211_PPE_THRES_INFO_PPET_SIZE - residue_bits) - 1)) <<
+ residue_bits;
+ res += (ppe[byte_num] >> bit_num) & (BIT(residue_bits) - 1);
+
+ return res;
+}
+
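+/*
+ * Fill the whole PE duration table with a single duration, packing the same
+ * 2-bit value for each of the four possible spatial streams into every byte.
+ */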
+static void cl_set_fixed_ppe_val(u8 pe_dur[CHNL_BW_MAX][WRS_MCS_MAX_HE], u8 dur)
+{
+ u8 val = ((dur << 6) | (dur << 4) | (dur << 2) | dur);
+
+ memset(pe_dur, val, CHNL_BW_MAX * WRS_MCS_MAX_HE);
+}
+
+void cl_cap_ppe_duration(struct cl_hw *cl_hw, struct ieee80211_sta *sta,
+ u8 pe_dur[CHNL_BW_MAX][WRS_MCS_MAX_HE])
+{
+ /* Force NVRAM parameter */
+ if (cl_hw->conf->ci_pe_duration <= PPE_16US) {
+ cl_set_fixed_ppe_val(pe_dur, cl_hw->conf->ci_pe_duration);
+ return;
+ }
+
+	/*
+	 * If the STA sets the PPE Threshold Present subfield to 0, the value
+	 * should be set according to the Nominal Packet Padding subfield.
+	 */
+ if ((sta->he_cap.he_cap_elem.phy_cap_info[6] &
+ IEEE80211_HE_PHY_CAP6_PPE_THRESHOLD_PRESENT) == 0) {
+ switch (sta->he_cap.he_cap_elem.phy_cap_info[9] &
+ IEEE80211_HE_PHY_CAP9_NOMINAL_PKT_PADDING_MASK) {
+ case IEEE80211_HE_PHY_CAP9_NOMINAL_PKT_PADDING_0US:
+ cl_set_fixed_ppe_val(pe_dur, PPE_0US);
+ break;
+ case IEEE80211_HE_PHY_CAP9_NOMINAL_PKT_PADDING_8US:
+ cl_set_fixed_ppe_val(pe_dur, PPE_8US);
+ break;
+ case IEEE80211_HE_PHY_CAP9_NOMINAL_PKT_PADDING_16US:
+ default:
+ cl_set_fixed_ppe_val(pe_dur, PPE_16US);
+ break;
+ }
+
+ return;
+ }
+
+	/*
+	 * QAM thresholds:
+	 * The required PPE is set via the HE Capabilities IE, per NSS x BW x MCS.
+	 * The IE is organized in the following way:
+	 * Support for NSS x BW (or RU) matrix:
+	 * (0=SISO, 1=MIMO2) x (0-20MHz, 1-40MHz, 2-80MHz, 3-160MHz)
+	 * Each entry contains 2 QAM thresholds for 8us and 16us:
+	 * 0=BPSK, 1=QPSK, 2=16QAM, 3=64QAM, 4=256QAM, 5=1024QAM, 6=RES, 7=NONE
+	 * i.e. QAM_th1 < QAM_th2, such that if TX uses QAM_tx:
+	 * QAM_tx < QAM_th1 --> PPE=0us
+	 * QAM_th1 <= QAM_tx < QAM_th2 --> PPE=8us
+	 * QAM_th2 <= QAM_tx --> PPE=16us
+	 * @pkt_ext_qam_th: QAM thresholds
+	 * For each NSS/BW, define 2 QAM thresholds (0..5):
+	 * For rates below the low_th, no PPE is needed
+	 * For rates between low_th and high_th, an 8us PPE is needed
+	 * For rates equal to or higher than the high_th, a 16us PPE is needed
+	 * NSS (0-siso, 1-mimo2) x BW (0-20MHz, 1-40MHz, 2-80MHz, 3-160MHz) x
+	 * (0-low_th, 1-high_th)
+	 */
+ u8 pkt_ext_qam_th[WRS_SS_MAX][CHNL_BW_MAX][QAM_THR_MAX];
+
+ /* If PPE Thresholds exist, parse them into a FW-familiar format. */
+ u8 nss = (sta->he_cap.ppe_thres[0] & IEEE80211_PPE_THRES_NSS_MASK) + 1;
+ u8 ru_index_bitmap = u32_get_bits(sta->he_cap.ppe_thres[0],
+ IEEE80211_PPE_THRES_RU_INDEX_BITMASK_MASK);
+ u8 *ppe = &sta->he_cap.ppe_thres[0];
+ u8 ppe_pos_bit = 7; /* Starting after PPE header */
+ u8 bw, ss, mcs, constellation;
+
+ if (nss > WRS_SS_MAX)
+ nss = WRS_SS_MAX;
+
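+	/*
+	 * Parse the PPE Thresholds field per NSS/RU pair: the high (16us)
+	 * threshold is read first, followed by the low (8us) threshold.
+	 */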
+ for (ss = 0; ss < nss; ss++) {
+ u8 ru_index_tmp = ru_index_bitmap << 1;
+
+ for (bw = 0; bw <= cl_hw->bw; bw++) {
+ ru_index_tmp >>= 1;
+ if (!(ru_index_tmp & 1))
+ continue;
+
+ pkt_ext_qam_th[ss][bw][QAM_THR_2] = cl_get_ppe_val(ppe, ppe_pos_bit);
+ ppe_pos_bit += IEEE80211_PPE_THRES_INFO_PPET_SIZE;
+ pkt_ext_qam_th[ss][bw][QAM_THR_1] = cl_get_ppe_val(ppe, ppe_pos_bit);
+ ppe_pos_bit += IEEE80211_PPE_THRES_INFO_PPET_SIZE;
+ }
+ }
+
+ /* Reset PE duration before filling it */
+ memset(pe_dur, 0, CHNL_BW_MAX * WRS_MCS_MAX_HE);
+
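+	/*
+	 * For each BW/MCS entry, compare the MCS constellation against the two
+	 * QAM thresholds and pack the resulting PE duration (0/8/16us) into the
+	 * 2-bit slot of the corresponding spatial stream.
+	 */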
+ for (ss = 0; ss < nss; ss++) {
+ for (bw = 0; bw <= cl_hw->bw; bw++) {
+ for (mcs = 0; mcs < WRS_MCS_MAX_HE; mcs++) {
+ constellation = mcs_to_constellation[mcs];
+
+ if (constellation < pkt_ext_qam_th[ss][bw][QAM_THR_1])
+ pe_dur[bw][mcs] |= (PPE_0US << (ss * 2));
+ else if (constellation < pkt_ext_qam_th[ss][bw][QAM_THR_2])
+ pe_dur[bw][mcs] |= (PPE_8US << (ss * 2));
+ else
+ pe_dur[bw][mcs] |= (PPE_16US << (ss * 2));
+ }
+ }
+ }
+}
+
+static void cl_ops_tx_agg(struct cl_hw *cl_hw,
+ struct sk_buff *skb,
+ struct ieee80211_tx_info *tx_info,
+ struct cl_sta *cl_sta)
+{
+ cl_hw->tx_packet_cntr.forward.from_mac_agg++;
+
+ if (!cl_sta) {
+ struct cl_vif *cl_vif =
+ (struct cl_vif *)tx_info->control.vif->drv_priv;
+ u8 tid = skb->priority & IEEE80211_QOS_CTL_TAG1D_MASK;
+ u8 ac = tid_to_ac[tid];
+
+ kfree_skb(skb);
+ cl_dbg_err(cl_hw, "cl_sta null in agg packet\n");
+ cl_hw->tx_packet_cntr.drop.sta_null_in_agg++;
+ cl_vif->trfc_cntrs[ac].tx_errors++;
+ return;
+ }
+
+ /* AMSDU in HW can work only with header conversion. */
+ tx_info->control.flags &= ~IEEE80211_TX_CTRL_AMSDU;
+ cl_tx_agg(cl_hw, cl_sta, skb, false, true);
+}
+
+static void cl_ops_tx_single(struct cl_hw *cl_hw,
+ struct sk_buff *skb,
+ struct ieee80211_tx_info *tx_info,
+ struct cl_sta *cl_sta,
+ struct ieee80211_sta *sta)
+{
+ bool is_vns = cl_vns_is_very_near(cl_hw, cl_sta, skb);
+
+ cl_hw->tx_packet_cntr.forward.from_mac_single++;
+ if (cl_hw->tx_db.block_prob_resp) {
+ struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+
+ if (ieee80211_is_probe_resp(hdr->frame_control)) {
+ struct cl_vif *cl_vif = NETDEV_TO_CL_VIF(skb->dev);
+ u8 ac = cl_vif->vif->hw_queue[skb_get_queue_mapping(skb)];
+
+ cl_tx_single_free_skb(cl_hw, skb);
+ cl_hw->tx_packet_cntr.drop.probe_response++;
+ cl_vif->trfc_cntrs[ac].tx_dropped++;
+ return;
+ }
+ }
+
+ if (sta) {
+ u32 sta_vht_cap = sta->vht_cap.cap;
+ struct ieee80211_mgmt *mgmt = (struct ieee80211_mgmt *)skb->data;
+
+ if (!(sta_vht_cap & IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_MASK))
+ goto out_tx;
+
+ if (ieee80211_is_assoc_resp(mgmt->frame_control)) {
+ int len = skb->len - (mgmt->u.assoc_resp.variable - skb->data);
+ const u8 *vht_cap_addr = cfg80211_find_ie(WLAN_EID_VHT_CAPABILITY,
+ mgmt->u.assoc_resp.variable,
+ len);
+
+ if (vht_cap_addr) {
+ struct ieee80211_vht_cap *vht_cap =
+ (struct ieee80211_vht_cap *)(2 + vht_cap_addr);
+
+ vht_cap->vht_cap_info &=
+ ~(cpu_to_le32(IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_MASK |
+ IEEE80211_VHT_CAP_SHORT_GI_160));
+ }
+ }
+ }
+
+out_tx:
+ cl_tx_single(cl_hw, cl_sta, skb, is_vns, true);
+}
+
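+/*
+ * Return the smallest beacon interval among all non-station interfaces, or
+ * the combination's beacon_int_min_gcd if no such interval is configured.
+ */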
+static u16 cl_ops_recalc_smallest_tbtt(struct cl_hw *cl_hw)
+{
+ struct wiphy *wiphy = cl_hw->hw->wiphy;
+ u8 cmb_idx = wiphy->n_iface_combinations - 1;
+ struct cl_vif *cl_vif = NULL;
+ u16 ret = 0;
+ u8 topology = cl_hw_get_iface_conf(cl_hw);
+
+ read_lock_bh(&cl_hw->vif_db.lock);
+ list_for_each_entry(cl_vif, &cl_hw->vif_db.head, list) {
+ if (cl_vif->vif->type == NL80211_IFTYPE_STATION)
+ continue;
+ else if (ret == 0)
+ ret = cl_vif->vif->bss_conf.beacon_int;
+ else if (cl_vif->vif->bss_conf.beacon_int)
+ ret = min(ret, cl_vif->vif->bss_conf.beacon_int);
+ }
+ read_unlock_bh(&cl_hw->vif_db.lock);
+
+ if (ret == 0) {
+ WARN_ONCE(topology != CL_IFCONF_STA && topology != CL_IFCONF_MESH_ONLY,
+ "invalid smallest beacon interval");
+ return wiphy->iface_combinations[cmb_idx].beacon_int_min_gcd;
+ }
+ return ret;
+}
+
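+/*
+ * mesh_tbtt_div holds how many times the smallest beacon interval fits into
+ * this interface's beacon interval (at least 1).
+ */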
+static void cl_ops_set_mesh_tbtt(struct cl_hw *cl_hw, u16 this_beacon_int,
+ u16 smallest_beacon_int)
+{
+ u16 div = this_beacon_int / smallest_beacon_int;
+
+ cl_hw->mesh_tbtt_div = (div > 0) ? div : 1;
+}
+
+void cl_ops_tx(struct ieee80211_hw *hw, struct ieee80211_tx_control *control, struct sk_buff *skb)
+{
+	/*
+	 * Almost all traffic passing through here consists of singles.
+	 * Only when opening a BA session can some packets with
+	 * IEEE80211_TX_CTL_AMPDU set pass through here.
+	 * All skbs passing through here have undergone header conversion.
+	 */
+ struct cl_hw *cl_hw = (struct cl_hw *)hw->priv;
+ struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(skb);
+ struct ieee80211_sta *sta = control->sta;
+ struct cl_sta *cl_sta = NULL;
+
+ if (sta) {
+ cl_sta = IEEE80211_STA_TO_CL_STA(sta);
+
+		/*
+		 * Prior to STA connection, sta can be set, but we don't want
+		 * cl_sta to be used since it is not initialized yet.
+		 */
+ if (cl_sta->sta_idx == STA_IDX_INVALID)
+ cl_sta = NULL;
+ }
+
+ if (cl_recovery_in_progress(cl_hw)) {
+ cl_hw->tx_packet_cntr.drop.in_recovery++;
+
+ if (cl_sta) {
+ struct cl_vif *cl_vif = cl_sta->cl_vif;
+
+ if (cl_vif) {
+ struct ieee80211_vif *vif = cl_vif->vif;
+ u8 hw_queue;
+
+ if (vif) {
+ hw_queue = vif->hw_queue[skb_get_queue_mapping(skb)];
+ cl_vif->trfc_cntrs[hw_queue].tx_dropped++;
+ }
+ }
+ }
+
+ cl_tx_drop_skb(skb);
+ return;
+ }
+
+ if (cl_sta && (tx_info->flags & IEEE80211_TX_CTL_HW_80211_ENCAP))
+ goto fast_tx;
+
+ if (tx_info->flags & IEEE80211_TX_CTL_AMPDU)
+ cl_ops_tx_agg(cl_hw, skb, tx_info, cl_sta);
+ else
+ cl_ops_tx_single(cl_hw, skb, tx_info, cl_sta, sta);
+
+ return;
+
+fast_tx:
+ if (tx_info->flags & IEEE80211_TX_CTL_AMPDU)
+ cl_tx_fast_agg(cl_hw, cl_sta, skb, true);
+ else
+ cl_tx_fast_single(cl_hw, cl_sta, skb, true);
+}
+
+int cl_ops_start(struct ieee80211_hw *hw)
+{
+ /*
+ * Called before the first netdevice attached to the hardware
+ * is enabled. This should turn on the hardware and must turn on
+ * frame reception (for possibly enabled monitor interfaces.)
+ * Returns negative error codes, these may be seen in userspace,
+ * or zero.
+ * When the device is started it should not have a MAC address
+ * to avoid acknowledging frames before a non-monitor device
+ * is added.
+ * Must be implemented and can sleep.
+ * It does not return until the firmware is up and running.
+ */
+ int error = 0;
+ struct cl_hw *cl_hw = hw->priv;
+ struct cl_tcv_conf *conf = cl_hw->conf;
+ struct cl_hw *cl_hw_other = cl_hw_other_tcv(cl_hw);
+
+ if (!cl_hw->ipc_env) {
+ CL_DBG_ERROR(cl_hw, "ipc_env is NULL!\n");
+ return -ENODEV;
+ }
+
+	/* Exit if the device is already started */
+ if (WARN_ON(test_bit(CL_DEV_STARTED, &cl_hw->drv_flags)))
+ return -EBUSY;
+
+	/* Device is now started.
+	 * Set the CL_DEV_STARTED bit before sending further messages to the
+	 * firmware, to prevent them from being blocked.
+	 */
+	set_bit(CL_DEV_STARTED, &cl_hw->drv_flags);
+
+ if (!cl_recovery_in_progress(cl_hw)) {
+ /* Read version */
+ error = cl_version_update(cl_hw);
+ if (error)
+ return error;
+
+ error = cl_temperature_diff_e2p_read(cl_hw);
+ if (error)
+ return error;
+ }
+
+ /* Set firmware debug module filter */
+ error = cl_msg_tx_dbg_set_ce_mod_filter(cl_hw, conf->ci_fw_dbg_module);
+ if (error)
+ return error;
+
+ /* Set firmware debug severity level */
+ error = cl_msg_tx_dbg_set_sev_filter(cl_hw, conf->ci_fw_dbg_severity);
+ if (error)
+ return error;
+
+ /* Set firmware rate fallbacks */
+ error = cl_msg_tx_set_rate_fallback(cl_hw);
+ if (error)
+ return error;
+
+ error = cl_msg_tx_ndp_tx_control(cl_hw,
+ conf->ci_sensing_ndp_tx_chain_mask,
+ conf->ci_sensing_ndp_tx_bw,
+ conf->ci_sensing_ndp_tx_format,
+ conf->ci_sensing_ndp_tx_num_ltf);
+ if (error)
+ return error;
+
+ /* Set default, multicast, broadcast rate */
+ cl_rate_ctrl_set_default(cl_hw);
+#ifdef CONFIG_CL8K_DYN_MCAST_RATE
+ cl_dyn_mcast_rate_set(cl_hw);
+#endif /* CONFIG_CL8K_DYN_MCAST_RATE */
+#ifdef CONFIG_CL8K_DYN_BCAST_RATE
+ cl_dyn_bcast_rate_set(cl_hw, 0);
+#endif /* CONFIG_CL8K_DYN_BCAST_RATE */
+
+ ieee80211_wake_queues(hw);
+
+ clear_bit(CL_DEV_INIT, &cl_hw->drv_flags);
+
+ cl_edca_hw_conf(cl_hw);
+
+ if (!cl_hw->chip->conf->ce_calib_runtime_en) {
+ cl_calib_dcoc_init_calibration(cl_hw);
+
+ if (cl_hw->chip->conf->ce_production_mode)
+ cl_calib_iq_init_production(cl_hw);
+ else if (!cl_hw_other || test_bit(CL_DEV_STARTED, &cl_hw_other->drv_flags))
+ cl_calib_iq_init_calibration(cl_hw);
+ }
+
+ return error;
+}
+
+void cl_ops_stop(struct ieee80211_hw *hw)
+{
+ /*
+ * Called after last netdevice attached to the hardware
+ * is disabled. This should turn off the hardware (at least
+ * it must turn off frame reception.)
+ * May be called right after add_interface if that rejects
+ * an interface. If you added any work onto the mac80211 workqueue
+ * you should ensure to cancel it on this callback.
+ * Must be implemented and can sleep.
+ */
+ struct cl_hw *cl_hw = hw->priv;
+
+ /* Stop mac80211 queues */
+ ieee80211_stop_queues(hw);
+
+ /* Go to idle */
+ cl_msg_tx_set_idle(cl_hw, MAC_IDLE_SYNC, true);
+
+	/*
+	 * Clear CL_DEV_STARTED to prevent messages from being sent (besides
+	 * reset and start). It also blocks transmission of new packets.
+	 */
+ clear_bit(CL_DEV_STARTED, &cl_hw->drv_flags);
+
+ cl_hw->num_ap_started = 0;
+ cl_hw->channel = 0;
+ cl_hw->radio_status = RADIO_STATUS_OFF;
+}
+
+static int cl_add_interface_to_firmware(struct cl_hw *cl_hw,
+ struct ieee80211_vif *vif, u8 vif_index)
+{
+ struct mm_add_if_cfm *add_if_cfm;
+ int ret = 0;
+
+ /* Forward the information to the firmware */
+ ret = cl_msg_tx_add_if(cl_hw, vif, vif_index);
+ if (ret)
+ return ret;
+
+ add_if_cfm = (struct mm_add_if_cfm *)(cl_hw->msg_cfm_params[MM_ADD_IF_CFM]);
+ if (!add_if_cfm)
+ return -ENOMSG;
+
+ if (add_if_cfm->status != 0) {
+ cl_dbg_verbose(cl_hw, "Status Error (%u)\n", add_if_cfm->status);
+ ret = -EIO;
+ }
+
+ cl_msg_tx_free_cfm_params(cl_hw, MM_ADD_IF_CFM);
+
+ return ret;
+}
+
+static enum cl_iface_conf cl_recalc_hw_iface(struct cl_hw *cl_hw)
+{
+ struct cl_vif *cl_vif = NULL;
+ u8 num_ap = 0, num_sta = 0, num_mp = 0;
+
+ read_lock_bh(&cl_hw->vif_db.lock);
+ list_for_each_entry(cl_vif, &cl_hw->vif_db.head, list) {
+ switch (cl_vif->vif->type) {
+ case NL80211_IFTYPE_AP:
+ num_ap++;
+ break;
+ case NL80211_IFTYPE_STATION:
+ num_sta++;
+ break;
+ case NL80211_IFTYPE_MESH_POINT:
+ num_mp++;
+ break;
+ default:
+ read_unlock_bh(&cl_hw->vif_db.lock);
+ return CL_IFCONF_MAX;
+ }
+ }
+ read_unlock_bh(&cl_hw->vif_db.lock);
+
+ if (num_ap > 0 && num_sta == 0 && num_mp == 0)
+ return CL_IFCONF_AP;
+ if (num_ap == 0 && num_sta == 1 && num_mp == 0)
+ return CL_IFCONF_STA;
+ if (num_ap == 1 && num_sta == 1 && num_mp == 0)
+ return CL_IFCONF_REPEATER;
+ if (num_ap > 0 && num_sta == 0 && num_mp == 1)
+ return CL_IFCONF_MESH_AP;
+ if (num_ap == 0 && num_sta == 0 && num_mp == 1)
+ return CL_IFCONF_MESH_ONLY;
+
+ return CL_IFCONF_MAX;
+}
+
+int cl_ops_add_interface(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
+{
+ /*
+ * Called when a netdevice attached to the hardware is
+ * enabled. Because it is not called for monitor mode devices, start
+ * and stop must be implemented.
+ * The driver should perform any initialization it needs before
+ * the device can be enabled. The initial configuration for the
+ * interface is given in the conf parameter.
+ * The callback may refuse to add an interface by returning a
+ * negative error code (which will be seen in userspace.)
+ * Must be implemented and can sleep.
+ */
+ struct cl_hw *cl_hw = hw->priv;
+ struct cl_chip *chip = cl_hw->chip;
+ struct cl_vif *cl_vif = (struct cl_vif *)vif->drv_priv;
+ struct wireless_dev *wdev = ieee80211_vif_to_wdev(vif);
+ struct net_device *dev = NULL;
+ u8 ac;
+
+ if (!wdev)
+ return -ENODEV;
+
+ dev = wdev->netdev;
+ if (!dev)
+ return -ENODEV;
+
+ /*
+ * In recovery just send the message to firmware and exit
+ * (also make sure cl_vif already exists).
+ */
+ if (cl_recovery_in_progress(cl_hw) && cl_vif_get_by_dev(cl_hw, dev))
+ return cl_add_interface_to_firmware(cl_hw, vif, cl_vif->vif_index);
+
+ cl_vif->cl_hw = cl_hw;
+ cl_vif->vif = vif;
+ cl_vif->dev = dev;
+ cl_vif->vif_index = cl_mac_addr_find_idx(cl_hw, vif->addr);
+
+ /* MAC address not found - invalid address */
+ if (cl_vif->vif_index == BSS_INVALID_IDX) {
+ cl_dbg_err(cl_hw, "Error: Invalid MAC address %pM for vif %s\n",
+ vif->addr, dev->name);
+
+ return -EINVAL;
+ }
+
+ if (chip->conf->ce_production_mode || vif->type == NL80211_IFTYPE_STATION)
+ cl_vif->tx_en = true;
+
+ cl_vif_key_init(cl_vif);
+
+ if (cl_add_interface_to_firmware(cl_hw, vif, cl_vif->vif_index))
+ return -EINVAL;
+
+ cl_vif->conn_data = kzalloc(sizeof(*cl_vif->conn_data), GFP_KERNEL);
+ if (!cl_vif->conn_data) {
+		cl_dbg_verbose(cl_hw, "Memory allocation for conn_data failed\n");
+ return -ENOMEM;
+ }
+
+ if (vif->type != NL80211_IFTYPE_STATION)
+ vif->cab_queue = CL_HWQ_VO;
+
+ cl_vif_add(cl_hw, cl_vif);
+ cl_hw_set_iface_conf(cl_hw, cl_recalc_hw_iface(cl_hw));
+
+ for (ac = 0; ac < AC_MAX; ac++)
+ vif->hw_queue[ac] = cl_ac2hwq[ac];
+
+ if (cl_radio_is_on(cl_hw) && vif->type == NL80211_IFTYPE_AP)
+ cl_vif->tx_en = true;
+
+ /* Set active state in station mode after ifconfig down and up */
+ if (cl_hw->conf->ce_listener_en)
+ cl_radio_on(cl_hw);
+ else if (cl_radio_is_on(cl_hw) && vif->type == NL80211_IFTYPE_STATION)
+ cl_msg_tx_set_idle(cl_hw, MAC_ACTIVE, true);
+
+ if (vif->type == NL80211_IFTYPE_MESH_POINT) {
+ tasklet_init(&cl_hw->tx_mesh_bcn_task, cl_tx_bcn_mesh_task,
+ (unsigned long)cl_vif);
+ cl_radio_on(cl_hw);
+ cl_vif->tx_en = true;
+ cl_vif->mesh_basic_rates = cl_cap_set_mesh_basic_rates(cl_hw);
+ }
+
+ return 0;
+}
+
+void cl_ops_remove_interface(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
+{
+ /*
+ * Notifies a driver that an interface is going down.
+ * The stop callback is called after this if it is the last interface
+ * and no monitor interfaces are present.
+ * When all interfaces are removed, the MAC address in the hardware
+ * must be cleared so the device no longer acknowledges packets,
+ * the mac_addr member of the conf structure is, however, set to the
+ * MAC address of the device going away.
+ * Hence, this callback must be implemented. It can sleep.
+ */
+ struct cl_hw *cl_hw = hw->priv;
+ struct cl_vif *cl_vif = (struct cl_vif *)vif->drv_priv;
+
+ if (vif->type == NL80211_IFTYPE_MESH_POINT)
+ tasklet_kill(&cl_hw->tx_mesh_bcn_task);
+
+ if (!cl_recovery_in_progress(cl_hw)) {
+ kfree(cl_vif->conn_data);
+ cl_vif_remove(cl_hw, cl_vif);
+ cl_msg_tx_remove_if(cl_hw, cl_vif->vif_index);
+ } else {
+ cl_vif_remove(cl_hw, cl_vif);
+ }
+ cl_hw_set_iface_conf(cl_hw, cl_recalc_hw_iface(cl_hw));
+
+ cl_vif_key_deinit(cl_vif);
+
+ cl_vif->cl_hw = NULL;
+ cl_vif->vif = NULL;
+ cl_vif->dev = NULL;
+}
+
+static int cl_ops_conf_change_channel(struct ieee80211_hw *hw)
+{
+ struct cl_hw *cl_hw = hw->priv;
+ struct cl_chip *chip = cl_hw->chip;
+ struct cfg80211_chan_def *chandef = &hw->conf.chandef;
+ enum nl80211_chan_width width = chandef->width;
+ u32 primary = chandef->chan->center_freq;
+ u32 center = chandef->center_freq1;
+ u32 channel = ieee80211_frequency_to_channel(primary);
+ u8 bw = cl_width_to_bw(width);
+ int ret = 0;
+
+ if (!test_bit(CL_DEV_STARTED, &cl_hw->drv_flags))
+ return 0;
+
+	/* Workaround: for the first set-channel in production mode, use the NVRAM values */
+ if (cl_hw_is_prod_or_listener(cl_hw) || !IS_REAL_PHY(chip)) {
+ ret = cl_chandef_get_default(cl_hw, &channel, &bw,
+ &width, &primary, &center);
+
+ if (ret != 0)
+ return ret;
+ }
+
+ cl_dbg_trace(cl_hw,
+ "channel(%u), primary(%u), center(%u), width(%u), bw(%u)\n",
+ channel, primary, center, width, bw);
+
+ if (cl_hw->channel == channel &&
+ cl_hw->bw == bw &&
+ cl_hw->primary_freq == primary &&
+ cl_hw->center_freq == center)
+ goto dfs_cac;
+
+ /*
+ * Flush the pending data to ensure that we will finish the pending
+ * transmissions before changing the channel
+ */
+ if (IS_REAL_PHY(chip))
+ cl_ops_flush(hw, NULL, -1, false);
+
+ if (cl_hw->chip->conf->ce_calib_runtime_en)
+ ret = cl_calib_runtime_and_switch_channel(cl_hw, channel, bw, primary, center);
+ else
+ ret = cl_msg_tx_set_channel(cl_hw, channel, bw, primary, center,
+ CL_CALIB_PARAMS_DEFAULT_STRUCT);
+ if (ret)
+ return -EIO;
+
+	/*
+	 * Set the preferred channel type to HT+/- based on the current
+	 * hostapd configuration.
+	 */
+ if (cl_band_is_24g(cl_hw)) {
+ u8 ct = cfg80211_get_chandef_type(&hw->conf.chandef);
+
+ switch (ct) {
+ case NL80211_CHAN_HT40PLUS:
+ case NL80211_CHAN_HT40MINUS:
+ if (ct != cl_hw->ht40_preffered_ch_type) {
+				cl_dbg_info(cl_hw, "HT40 preferred channel type=%s\n",
+					    ct == NL80211_CHAN_HT40PLUS ? "HT+" : "HT-");
+ cl_hw->ht40_preffered_ch_type = ct;
+ }
+ }
+ }
+
+ cl_wrs_api_bss_set_bw(cl_hw, bw);
+
+dfs_cac:
+	/*
+	 * TODO: This callback is invoked even in STA mode; moreover,
+	 * "start_ap" comes later, so it is unclear whether we are an AP at
+	 * this stage. This can likely be solved by moving the "force_cac_*"
+	 * states to the beginning of "start_ap", but the request should stay
+	 * in the current callback.
+	 */
+ if (!cl_band_is_5g(cl_hw))
+ return 0;
+
+	/*
+	 * Radar listening may occur on DFS channels during in-service mode.
+	 * CAC may clear the channels, but radar listening should still be
+	 * active, so start it as soon as we can.
+	 */
+ if (hw->conf.radar_enabled) {
+		/* If the channel policy demands CAC, request it */
+ if (!cl_dfs_is_in_cac(cl_hw) &&
+ chandef->chan->dfs_state == NL80211_DFS_USABLE)
+ cl_dfs_request_cac(cl_hw, true);
+
+ if (!cl_dfs_radar_listening(cl_hw))
+ cl_dfs_radar_listen_start(cl_hw);
+ } else {
+		/*
+		 * There is no point in staying in silent mode if the channel
+		 * was cleared.
+		 */
+ if (cl_dfs_is_in_cac(cl_hw) &&
+ chandef->chan->dfs_state == NL80211_DFS_AVAILABLE)
+ cl_dfs_request_cac(cl_hw, false);
+
+ if (cl_dfs_radar_listening(cl_hw))
+ cl_dfs_radar_listen_end(cl_hw);
+ }
+	/*
+	 * We have just finished the channel switch.
+	 * Now check what to do with CAC.
+	 */
+ if (cl_dfs_requested_cac(cl_hw))
+ cl_dfs_force_cac_start(cl_hw);
+ else if (cl_dfs_is_in_cac(cl_hw))
+ cl_dfs_force_cac_end(cl_hw);
+
+ return 0;
+}
+
+int cl_ops_config(struct ieee80211_hw *hw, u32 changed)
+{
+ /*
+ * Handler for configuration requests. IEEE 802.11 code calls this
+ * function to change hardware configuration, e.g., channel.
+ * This function should never fail but returns a negative error code
+	 * if it does. The callback can sleep.
+ */
+ int error = 0;
+
+ if (changed & IEEE80211_CONF_CHANGE_CHANNEL)
+ error = cl_ops_conf_change_channel(hw);
+
+ return error;
+}
+
+/*
+ * @bss_info_changed: Handler for configuration requests related to BSS
+ * parameters that may vary during BSS's lifespan, and may affect low
+ * level driver (e.g. assoc/disassoc status, erp parameters).
+ * This function should not be used if no BSS has been set, unless
+ * for association indication. The @changed parameter indicates which
+ * of the bss parameters has changed when a call is made. The callback
+ * can sleep.
+ */
+void cl_ops_bss_info_changed(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *info,
+ u32 changed)
+{
+ struct cl_hw *cl_hw = hw->priv;
+ struct cl_vif *cl_vif = (struct cl_vif *)vif->drv_priv;
+
+ if (changed & BSS_CHANGED_ASSOC) {
+ if (cl_msg_tx_set_associated(cl_hw, info))
+ return;
+ }
+
+ if (changed & BSS_CHANGED_BSSID) {
+ if (cl_msg_tx_set_bssid(cl_hw, info->bssid, cl_vif->vif_index))
+ return;
+ }
+
+ if (changed & BSS_CHANGED_BEACON_INT) {
+ u16 smallest_int = cl_ops_recalc_smallest_tbtt(cl_hw);
+
+ cl_hw->smallest_beacon_int = smallest_int;
+
+ if (vif->type == NL80211_IFTYPE_AP ||
+ cl_hw_get_iface_conf(cl_hw) == CL_IFCONF_MESH_ONLY) {
+ if (cl_msg_tx_set_beacon_int(cl_hw, info->beacon_int,
+ cl_vif->vif_index))
+ return;
+ if (cl_msg_tx_dtim(cl_hw, info->dtim_period))
+ return;
+ }
+
+ if (vif->type == NL80211_IFTYPE_MESH_POINT &&
+ cl_hw_get_iface_conf(cl_hw) == CL_IFCONF_MESH_AP) {
+ cl_ops_set_mesh_tbtt(cl_hw, info->beacon_int, smallest_int);
+ }
+ }
+
+ if (changed & BSS_CHANGED_BASIC_RATES) {
+ int shift = hw->wiphy->bands[hw->conf.chandef.chan->band]->bitrates[0].hw_value;
+
+ if (vif->type == NL80211_IFTYPE_MESH_POINT)
+ if (cl_vif->mesh_basic_rates)
+ info->basic_rates = cl_vif->mesh_basic_rates;
+
+ if (cl_msg_tx_set_basic_rates(cl_hw, info->basic_rates << shift))
+ return;
+ /* TODO: check if cl_msg_tx_set_mode() should be called */
+ }
+
+ if (changed & BSS_CHANGED_ERP_SLOT) {
+ /*
+ * We must be in 11g mode here
+ * TODO: we can add a check on the mode
+ */
+ if (cl_msg_tx_set_slottime(cl_hw, info->use_short_slot))
+ return;
+ }
+
+ if (changed & BSS_CHANGED_BANDWIDTH)
+ cl_wrs_api_bss_set_bw(cl_hw, cl_width_to_bw(info->chandef.width));
+
+ if (changed & BSS_CHANGED_TXPOWER) {
+ if (info->txpower_type == NL80211_TX_POWER_FIXED) {
+ cl_hw->new_tx_power = info->txpower;
+ cl_power_tables_update(cl_hw, &cl_hw->phy_data_info.data->pwr_tables);
+ cl_msg_tx_refresh_power(cl_hw);
+ }
+ }
+
+ if (changed & BSS_CHANGED_BEACON) {
+ struct beacon_data *beacon = NULL;
+ struct ieee80211_sub_if_data *sdata =
+ container_of(vif, struct ieee80211_sub_if_data, vif);
+ struct ieee80211_ht_cap *ht_cap = NULL;
+ struct ieee80211_vht_cap *vht_cap = NULL;
+ struct ieee80211_he_cap_elem *he_cap = NULL;
+ bool sgi_en = false;
+ u8 hw_mode = cl_hw->hw_mode;
+ enum cl_wireless_mode wireless_mode = cl_hw->wireless_mode;
+
+ rcu_read_lock();
+
+ if (sdata->vif.type == NL80211_IFTYPE_AP)
+ beacon = rcu_dereference(sdata->u.ap.beacon);
+ else if (ieee80211_vif_is_mesh(&sdata->vif))
+ beacon = rcu_dereference(sdata->u.mesh.beacon);
+
+ if (beacon) {
+ size_t ies_len = beacon->tail_len;
+ const u8 *ies = beacon->tail;
+ const u8 *cap = NULL;
+ int var_offset = offsetof(struct ieee80211_mgmt, u.beacon.variable);
+ int len = beacon->head_len - var_offset;
+ const u8 *var_pos = beacon->head + var_offset;
+ const u8 *rate_ie = NULL;
+
+ cl_vif->wmm_enabled = cfg80211_find_vendor_ie(WLAN_OUI_MICROSOFT,
+ WLAN_OUI_TYPE_MICROSOFT_WMM,
+ ies,
+ ies_len);
+ cl_dbg_info(cl_hw, "vif=%d wmm_enabled=%d\n",
+ cl_vif->vif_index,
+ cl_vif->wmm_enabled);
+
+ cap = cfg80211_find_ie(WLAN_EID_HT_CAPABILITY, ies, ies_len);
+ if (cap && cap[1] >= sizeof(*ht_cap)) {
+ ht_cap = (void *)(cap + 2);
+ sgi_en |= (le16_to_cpu(ht_cap->cap_info) &
+ IEEE80211_HT_CAP_SGI_20) ||
+ (le16_to_cpu(ht_cap->cap_info) &
+ IEEE80211_HT_CAP_SGI_40);
+ }
+
+ cap = cfg80211_find_ie(WLAN_EID_VHT_CAPABILITY, ies, ies_len);
+ if (cap && cap[1] >= sizeof(*vht_cap)) {
+ vht_cap = (void *)(cap + 2);
+ sgi_en |= (le32_to_cpu(vht_cap->vht_cap_info) &
+ IEEE80211_VHT_CAP_SHORT_GI_80) ||
+ (le32_to_cpu(vht_cap->vht_cap_info) &
+ IEEE80211_VHT_CAP_SHORT_GI_160);
+ }
+
+ cap = cfg80211_find_ext_ie(WLAN_EID_EXT_HE_CAPABILITY, ies, ies_len);
+ if (cap && cap[1] >= sizeof(*he_cap) + 1)
+ he_cap = (void *)(cap + 3);
+
+ rate_ie = cfg80211_find_ie(WLAN_EID_SUPP_RATES, var_pos, len);
+			if (rate_ie) {
+				if (cl_band_is_24g(cl_hw)) {
+					if (cl_is_valid_g_rates(rate_ie))
+						hw_mode = cl_hw->conf->ci_cck_in_hw_mode ?
+							HW_MODE_BG : HW_MODE_G;
+					else
+						hw_mode = HW_MODE_B;
+				} else {
+					hw_mode = HW_MODE_A;
+				}
+			}
+ } else {
+ cl_dbg_warn(cl_hw, "beacon_data not set!\n");
+ }
+
+ rcu_read_unlock();
+
+		/*
+		 * FIXME: 1. WRS has no VIF-specific capability settings.
+		 * 2. WRS has no BW-specific SGI configuration support.
+		 */
+
+		/* If any capability info was found and the state differs, update SGI */
+ if ((ht_cap || vht_cap) && (cl_wrs_api_bss_is_sgi_en(cl_hw) != sgi_en))
+ cl_wrs_api_bss_set_sgi(cl_hw, sgi_en);
+
+ if (hw_mode != cl_hw->hw_mode) {
+ cl_hw->hw_mode = hw_mode;
+ sgi_en =
+ (ht_cap || vht_cap) ? sgi_en : cl_hw->conf->ci_short_guard_interval;
+#ifdef CONFIG_CL8K_DYN_BCAST_RATE
+ cl_dyn_bcast_update(cl_hw);
+#endif /* CONFIG_CL8K_DYN_BCAST_RATE */
+#ifdef CONFIG_CL8K_DYN_MCAST_RATE
+ cl_dyn_mcast_update(cl_hw);
+#endif /* CONFIG_CL8K_DYN_MCAST_RATE */
+ cl_wrs_api_bss_capab_update(cl_hw, cl_hw->bw, sgi_en);
+ }
+
+ wireless_mode = cl_recalc_wireless_mode(cl_hw, !!ht_cap, !!vht_cap, !!he_cap);
+ if (wireless_mode != cl_hw->wireless_mode) {
+ sgi_en =
+ (ht_cap || vht_cap) ? sgi_en : cl_hw->conf->ci_short_guard_interval;
+ cl_hw->wireless_mode = wireless_mode;
+#ifdef CONFIG_CL8K_DYN_MCAST_RATE
+ cl_dyn_mcast_update(cl_hw);
+#endif /* CONFIG_CL8K_DYN_MCAST_RATE */
+ cl_wrs_api_bss_capab_update(cl_hw, cl_hw->bw, sgi_en);
+ }
+ }
+}
+
+int cl_ops_start_ap(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
+{
+ struct cl_hw *cl_hw = hw->priv;
+
+ set_bit(CL_DEV_AP_STARTED, &cl_hw->drv_flags);
+
+ cl_hw->num_ap_started++;
+ if (cl_hw->conf->ce_radio_on) {
+ if (cl_radio_is_off(cl_hw))
+ cl_radio_on(cl_hw);
+
+ return 0;
+ }
+
+	/*
+	 * Set the active state when cl_ops_start_ap() is not called during the
+	 * first driver start, but after all interfaces were removed and one
+	 * interface was brought up again.
+	 */
+ if (cl_radio_is_on(cl_hw) && !cl_recovery_in_progress(cl_hw))
+ cl_msg_tx_set_idle(cl_hw, MAC_ACTIVE, true);
+
+ return 0;
+}
+
+void cl_ops_stop_ap(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
+{
+ struct cl_hw *cl_hw = hw->priv;
+
+ /*
+ * Unset CL_DEV_AP_STARTED in order to avoid
+ * calling cl_ops_conf_change_channel after unloading the driver
+ */
+ clear_bit(CL_DEV_AP_STARTED, &cl_hw->drv_flags);
+
+ cl_hw->num_ap_started--;
+
+ if (!cl_hw->num_ap_started)
+ cl_hw->channel = 0;
+}
+
+u64 cl_ops_prepare_multicast(struct ieee80211_hw *hw, struct netdev_hw_addr_list *mc_list)
+{
+ return netdev_hw_addr_list_count(mc_list);
+}
+
+void cl_ops_configure_filter(struct ieee80211_hw *hw, u32 changed_flags,
+ u32 *total_flags, u64 multicast)
+{
+ /*
+ * configure_filter: Configure the device's RX filter.
+ * See the section "Frame filtering" for more information.
+ * This callback must be implemented and can sleep.
+ */
+ struct cl_hw *cl_hw = hw->priv;
+
+ cl_dbg_trace(cl_hw, "total_flags = 0x%08x\n", *total_flags);
+
+ /*
+ * Reset our filter flags since our start/stop ops reset
+ * the programmed settings
+ */
+ if (!test_bit(CL_DEV_STARTED, &cl_hw->drv_flags)) {
+ *total_flags = 0;
+ return;
+ }
+
+ if (multicast)
+ *total_flags |= FIF_ALLMULTI;
+ else
+ *total_flags &= ~FIF_ALLMULTI;
+
+ /* TODO: optimize with changed_flags vs multicast */
+ cl_msg_tx_set_filter(cl_hw, *total_flags, false);
+
+	*total_flags &= ~BIT(31);
+}
+
+int cl_ops_set_key(struct ieee80211_hw *hw,
+ enum set_key_cmd cmd,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ struct ieee80211_key_conf *key)
+{
+ struct cl_hw *cl_hw = hw->priv;
+
+ return cl_key_set(cl_hw, cmd, vif, sta, key);
+}
+
+void cl_ops_sw_scan_start(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ const u8 *mac_addr)
+{
+ struct cl_hw *cl_hw = hw->priv;
+
+ cl_hw->sw_scan_in_progress = 1;
+
+ if (cl_hw->conf->ce_radio_on &&
+ cl_radio_is_off(cl_hw) &&
+ vif->type == NL80211_IFTYPE_STATION)
+ cl_radio_on(cl_hw);
+
+ if (cl_dfs_is_in_cac(cl_hw))
+ cl_dfs_force_cac_end(cl_hw);
+}
+
+void cl_ops_sw_scan_complete(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
+{
+ struct cl_hw *cl_hw = hw->priv;
+
+ cl_hw->sw_scan_in_progress = 0;
+}
+
+int cl_ops_sta_state(struct ieee80211_hw *hw, struct ieee80211_vif *vif, struct ieee80211_sta *sta,
+ enum ieee80211_sta_state old_state, enum ieee80211_sta_state new_state)
+{
+ struct cl_hw *cl_hw = hw->priv;
+ int error = 0;
+
+ if (old_state == new_state)
+ return 0;
+
+ if (old_state == IEEE80211_STA_NOTEXIST &&
+ new_state == IEEE80211_STA_NONE) {
+ cl_sta_init_sta(cl_hw, sta);
+ } else if (old_state == IEEE80211_STA_AUTH &&
+ new_state == IEEE80211_STA_ASSOC) {
+ error = cl_sta_add(cl_hw, vif, sta);
+ } else if (old_state == IEEE80211_STA_ASSOC &&
+ new_state == IEEE80211_STA_AUTH) {
+ cl_sta_remove(cl_hw, vif, sta);
+ }
+
+ return error;
+}
+
+void cl_ops_sta_notify(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ enum sta_notify_cmd cmd, struct ieee80211_sta *sta)
+{
+ struct cl_hw *cl_hw = (struct cl_hw *)hw->priv;
+ struct cl_sta *cl_sta = IEEE80211_STA_TO_CL_STA(sta);
+ bool is_ps = (bool)!cmd;
+
+ cl_sta_ps_notify(cl_hw, cl_sta, is_ps);
+}
+
+int cl_ops_conf_tx(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ u16 ac_queue,
+ const struct ieee80211_tx_queue_params *params)
+{
+ /*
+ * Configure TX queue parameters (EDCF (aifs, cw_min, cw_max),
+ * bursting) for a hardware TX queue.
+ * Returns a negative error code on failure.
+ * The callback can sleep.
+ */
+
+ /* We only handle STA edca here */
+ if (vif->type == NL80211_IFTYPE_STATION) {
+ struct cl_hw *cl_hw = hw->priv;
+ struct ieee80211_he_mu_edca_param_ac_rec mu_edca = {0};
+ struct edca_params edca_params = {
+ .aifsn = (u8)(params->aifs),
+ .cw_min = (u8)(ilog2(params->cw_min + 1)),
+ .cw_max = (u8)(ilog2(params->cw_max + 1)),
+ .txop = (u8)(params->txop)
+ };
+
+ if (cl_hw->wireless_mode > WIRELESS_MODE_HT_VHT)
+ memcpy(&mu_edca, &params->mu_edca_param_rec, sizeof(mu_edca));
+
+ cl_edca_set(cl_hw, cl_ac2edca[ac_queue], &edca_params, &mu_edca);
+ }
+ return 0;
+}
+
+void cl_ops_sta_rc_update(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ u32 changed)
+{
+ struct cl_hw *cl_hw = (struct cl_hw *)hw->priv;
+ struct cl_sta *cl_sta = IEEE80211_STA_TO_CL_STA(sta);
+ struct cl_wrs_sta *wrs_sta = &cl_sta->wrs_sta;
+ u8 bw = wrs_sta->max_rate_cap.bw;
+ u8 nss = wrs_sta->max_rate_cap.nss;
+
+ if (changed & IEEE80211_RC_SMPS_CHANGED)
+ cl_wrs_api_set_smps_mode(cl_hw, sta, sta->bandwidth);
+
+ WARN_ON(sta->rx_nss == 0);
+ if (changed & IEEE80211_RC_NSS_CHANGED)
+ nss = min_t(u8, sta->rx_nss, WRS_SS_MAX) - 1;
+
+ if (changed & IEEE80211_RC_BW_CHANGED)
+ bw = sta->bandwidth;
+
+ if ((changed & IEEE80211_RC_NSS_CHANGED) || (changed & IEEE80211_RC_BW_CHANGED))
+ cl_wrs_api_nss_or_bw_changed(cl_hw, sta, nss, bw);
+}
+
+int cl_ops_ampdu_action(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_ampdu_params *params)
+{
+ struct cl_hw *cl_hw = (struct cl_hw *)hw->priv;
+ struct cl_sta *cl_sta = IEEE80211_STA_TO_CL_STA(params->sta);
+ int ret = 0;
+
+ switch (params->action) {
+ case IEEE80211_AMPDU_RX_START:
+ ret = cl_ampdu_rx_start(cl_hw, cl_sta, params->tid,
+ params->ssn, params->buf_size);
+ break;
+ case IEEE80211_AMPDU_RX_STOP:
+ cl_ampdu_rx_stop(cl_hw, cl_sta, params->tid);
+ break;
+ case IEEE80211_AMPDU_TX_START:
+ ret = cl_ampdu_tx_start(cl_hw, vif, cl_sta, params->tid,
+ params->ssn);
+ break;
+ case IEEE80211_AMPDU_TX_OPERATIONAL:
+ ret = cl_ampdu_tx_operational(cl_hw, cl_sta, params->tid,
+ params->buf_size, params->amsdu);
+ break;
+ case IEEE80211_AMPDU_TX_STOP_CONT:
+ case IEEE80211_AMPDU_TX_STOP_FLUSH:
+ case IEEE80211_AMPDU_TX_STOP_FLUSH_CONT:
+ ret = cl_ampdu_tx_stop(cl_hw, vif, params->action, cl_sta,
+ params->tid);
+ break;
+ default:
+ pr_warn("Error: Unknown AMPDU action (%d)\n", params->action);
+ }
+
+ return ret;
+}
+
+int cl_ops_post_channel_switch(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif)
+{
+ /* TODO: Need to handle post switch */
+ return 0;
+}
+
+void cl_ops_flush(struct ieee80211_hw *hw, struct ieee80211_vif *vif, u32 queues, bool drop)
+{
+ struct cl_hw *cl_hw = hw->priv;
+ int flush_duration;
+
+ if (test_bit(CL_DEV_HW_RESTART, &cl_hw->drv_flags)) {
+ cl_dbg_verbose(cl_hw, ": bypassing (CL_DEV_HW_RESTART set)\n");
+ return;
+ }
+
+ /* Wait for a maximum time of 200ms until all pending frames are flushed */
+ for (flush_duration = 0; flush_duration < 200; flush_duration++) {
+ if (!cl_txq_frames_pending(cl_hw))
+ return;
+
+		/* Let's sleep and hope for the best */
+ usleep_range(1000, 2000);
+ }
+}
+
+bool cl_ops_tx_frames_pending(struct ieee80211_hw *hw)
+{
+ struct cl_hw *cl_hw = hw->priv;
+
+ return cl_txq_frames_pending(cl_hw);
+}
+
+void cl_ops_reconfig_complete(struct ieee80211_hw *hw,
+ enum ieee80211_reconfig_type reconfig_type)
+{
+ struct cl_hw *cl_hw = hw->priv;
+
+ cl_recovery_reconfig_complete(cl_hw);
+}
+
+int cl_ops_get_txpower(struct ieee80211_hw *hw, struct ieee80211_vif *vif, int *dbm)
+{
+ struct cl_hw *cl_hw = hw->priv;
+
+ if (cl_hw->phy_data_info.data)
+ *dbm = cl_power_get_max(cl_hw);
+ else
+ *dbm = 0;
+
+ return 0;
+}
+
+int cl_ops_set_rts_threshold(struct ieee80211_hw *hw, u32 value)
+{
+ /* TODO: Fix this call */
+ return 0;
+}
+
+static void cl_ops_mgd_assoc(struct cl_hw *cl_hw, struct ieee80211_vif *vif)
+{
+ struct ieee80211_sub_if_data *sdata = container_of(vif, struct ieee80211_sub_if_data, vif);
+ struct cl_vif *cl_vif = (struct cl_vif *)vif->drv_priv;
+ struct ieee80211_sta *sta = ieee80211_find_sta(vif, sdata->u.mgd.bssid);
+
+ if (!sta) {
+ /* Should never happen */
+ cl_dbg_verbose(cl_hw, "sta is NULL !!!\n");
+ return;
+ }
+
+ cl_sta_mgd_add(cl_hw, cl_vif, sta);
+
+ if (cl_hw_get_iface_conf(cl_hw) == CL_IFCONF_REPEATER) {
+ cl_vif_ap_tx_enable(cl_hw, true);
+ set_bit(CL_DEV_REPEATER, &cl_hw->drv_flags);
+ }
+}
+
+static void cl_ops_mgd_disassoc(struct cl_hw *cl_hw)
+{
+ if (cl_hw_get_iface_conf(cl_hw) == CL_IFCONF_REPEATER) {
+ cl_vif_ap_tx_enable(cl_hw, false);
+ clear_bit(CL_DEV_REPEATER, &cl_hw->drv_flags);
+ }
+}
+
+void cl_ops_event_callback(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ const struct ieee80211_event *event)
+{
+ struct cl_hw *cl_hw = hw->priv;
+
+ if (event->type == MLME_EVENT) {
+ if (event->u.mlme.data == ASSOC_EVENT &&
+ event->u.mlme.status == MLME_SUCCESS)
+ cl_ops_mgd_assoc(cl_hw, vif);
+ else if (event->u.mlme.data == DEAUTH_TX_EVENT ||
+ event->u.mlme.data == DEAUTH_RX_EVENT)
+ cl_ops_mgd_disassoc(cl_hw);
+ }
+}
+
+/* This function is required for PS flow - do not remove */
+int cl_ops_set_tim(struct ieee80211_hw *hw, struct ieee80211_sta *sta, bool set)
+{
+ return 0;
+}
+
+int cl_ops_get_antenna(struct ieee80211_hw *hw, u32 *tx_ant, u32 *rx_ant)
+{
+ struct cl_hw *cl_hw = hw->priv;
+
+ *rx_ant = cl_hw->mask_num_antennas;
+ *tx_ant = cl_hw->mask_num_antennas;
+
+ return 0;
+}
+
+u32 cl_ops_get_expected_throughput(struct ieee80211_hw *hw,
+ struct ieee80211_sta *sta)
+{
+ struct cl_sta *cl_sta = (struct cl_sta *)sta->drv_priv;
+
+ return cl_sta->wrs_sta.tx_su_params.data_rate;
+}
+
+void cl_ops_sta_statistics(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ struct station_info *sinfo)
+{
+ struct cl_hw *cl_hw = hw->priv;
+ struct cl_sta *cl_sta = NULL;
+ u64 total_tx_success = 0, total_tx_fail = 0;
+ struct cl_wrs_params *wrs_params = NULL;
+
+ if (!sta)
+ return;
+
+ cl_sta = IEEE80211_STA_TO_CL_STA(sta);
+
+	/*
+	 * Since cl8k implements its own rate control algorithm (it sets
+	 * IEEE80211_HW_HAS_RATE_CONTROL), the rx/tx bitrates must be
+	 * initialized manually.
+	 */
+ cl_wrs_lock_bh(&cl_hw->wrs_db);
+ wrs_params = &cl_sta->wrs_sta.tx_su_params;
+ cl_wrs_fill_sinfo_rates(&sinfo->txrate, wrs_params, cl_sta);
+ sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_BITRATE);
+ cl_wrs_unlock_bh(&cl_hw->wrs_db);
+
+	/* mac80211 fills the sinfo stats if the driver does not set the sinfo->filled flags */
+ if (!cl_hw->conf->ci_stats_en)
+ return;
+
+ cl_stats_get_tx(cl_hw, cl_sta, &total_tx_success, &total_tx_fail);
+ sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_PACKETS);
+ sinfo->tx_packets = total_tx_success;
+ sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_FAILED);
+ sinfo->tx_failed = total_tx_fail;
+
+ sinfo->filled |= BIT_ULL(NL80211_STA_INFO_RX_PACKETS);
+ sinfo->rx_packets = cl_stats_get_rx(cl_hw, cl_sta);
+
+ sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_BYTES64);
+ sinfo->tx_bytes = cl_sta->tx_bytes;
+ sinfo->filled |= BIT_ULL(NL80211_STA_INFO_RX_BYTES64);
+ sinfo->rx_bytes = cl_sta->rx_bytes;
+ sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_RETRIES);
+ sinfo->tx_retries = cl_sta->retry_count;
+
+ sinfo->filled |= BIT_ULL(NL80211_STA_INFO_SIGNAL_AVG);
+ sinfo->signal_avg = cl_stats_get_rssi(cl_hw, cl_sta);
+}
+
+int cl_ops_get_survey(struct ieee80211_hw *hw, int idx, struct survey_info *survey)
+{
+ struct ieee80211_conf *conf = &hw->conf;
+ struct cl_hw *cl_hw = hw->priv;
+ struct ieee80211_supported_band *sband = hw->wiphy->bands[conf->chandef.chan->band];
+ struct cl_chan_scanner *scanner = cl_hw->scanner;
+ struct cl_channel_stats *scanned_channel = NULL;
+ int chan_num;
+ u8 i;
+
+ if (idx >= sband->n_channels)
+ return -ENOENT;
+
+ survey->channel = &sband->channels[idx];
+ chan_num = ieee80211_frequency_to_channel(sband->channels[idx].center_freq);
+
+ for (i = 0; i < scanner->channels_num; i++) {
+ if (scanner->channels[i].channel == chan_num) {
+ scanned_channel = &scanner->channels[i];
+ break;
+ }
+ }
+
+ if (!scanned_channel) {
+ survey->filled = 0;
+ return 0;
+ }
+
+ survey->filled = SURVEY_INFO_TIME | SURVEY_INFO_TIME_SCAN |
+ SURVEY_INFO_NOISE_DBM | SURVEY_INFO_TIME_TX |
+ SURVEY_INFO_TIME_RX | SURVEY_INFO_TIME_BUSY |
+ SURVEY_INFO_TIME_EXT_BUSY;
+
+ survey->noise = scanned_channel->ch_noise;
+
+ survey->time = scanned_channel->scan_time_ms;
+ survey->time_scan = survey->time;
+
+ survey->time_rx = div64_u64(scanned_channel->util_time_rx, USEC_PER_MSEC);
+ survey->time_tx = div64_u64(scanned_channel->util_time_tx, USEC_PER_MSEC);
+
+ survey->time_busy = div64_u64(scanned_channel->util_time_busy, USEC_PER_MSEC);
+ survey->time_ext_busy = survey->time_busy;
+
+ return 0;
+}
+
+static void cl_scan_completion_cb(struct cl_hw *cl_hw, void *arg)
+{
+ struct cl_chan_scanner *scanner = cl_hw->scanner;
+ struct cfg80211_scan_info info = {
+ .aborted = scanner->scan_aborted,
+ };
+
+ cl_dbg_trace(cl_hw, "Completed scan request, aborted: %u\n", info.aborted);
+
+ cl_scan_channel_switch(cl_hw, scanner->prescan_channel, scanner->prescan_bw, true);
+ ieee80211_scan_completed(cl_hw->hw, &info);
+}
+
+int cl_ops_hw_scan(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ struct ieee80211_scan_request *hw_req)
+{
+ struct cfg80211_scan_request *req = &hw_req->req;
+ struct cl_hw *cl_hw = hw->priv;
+ struct cl_chan_scanner *scanner = cl_hw->scanner;
+ u8 scan_channels[MAX_CHANNELS] = {0};
+ u8 i;
+ int ret = 0;
+
+ cl_dbg_trace(cl_hw, "Hardware scan request: n_channels:%u, n_ssids:%d\n",
+ req->n_channels, req->n_ssids);
+
+ if (cl_hw->conf->ce_radio_on && cl_radio_is_off(cl_hw))
+ cl_radio_on(cl_hw);
+
+ ret = mutex_lock_interruptible(&scanner->cl_hw->set_channel_mutex);
+ if (ret != 0)
+ return ret;
+ scanner->prescan_bw = cl_hw->bw;
+ scanner->prescan_channel = cl_hw->channel;
+ mutex_unlock(&scanner->cl_hw->set_channel_mutex);
+
+ if (req->n_ssids > 0) {
+ /*
+ * This is an active scan request. We do not support it yet, so
+ * force mac80211 to fall back to a software scan.
+ */
+ cl_dbg_trace(cl_hw, "activating fall-back strategy - sw_scan\n");
+ return 1;
+ }
+
+ if (req->n_channels > ARRAY_SIZE(scan_channels)) {
+ cl_dbg_warn(cl_hw, "invalid number of channels to scan: %u\n",
+ req->n_channels);
+ return -ERANGE;
+ }
+
+ for (i = 0; i < req->n_channels; ++i) {
+ if (req->channels[i]->band != cl_hw->nl_band) {
+ cl_dbg_warn(cl_hw, "band %u is invalid\n", req->channels[i]->band);
+ return -EINVAL;
+ }
+ scan_channels[i] = ieee80211_frequency_to_channel(req->channels[i]->center_freq);
+ }
+
+ ret = cl_trigger_off_channel_scan(scanner, req->duration, 0,
+ scan_channels, CHNL_BW_20, req->n_channels,
+ cl_scan_completion_cb, NULL);
+ return ret;
+}
+
+void cl_ops_cancel_hw_scan(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
+{
+ struct cl_hw *cl_hw = hw->priv;
+ struct cl_chan_scanner *scanner = cl_hw->scanner;
+
+ if (!cl_is_scan_in_progress(scanner))
+ return;
+
+ cl_abort_scan(scanner);
+ wait_event_interruptible_timeout(scanner->wq,
+ !cl_is_scan_in_progress(scanner),
+ msecs_to_jiffies(MSEC_PER_SEC));
+}
--
2.36.1


2022-05-24 16:08:24

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 78/96] cl8k: add tcv.c

From: Viktor Barna <[email protected]>

(Part of the split. Please take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/tcv.c | 1259 ++++++++++++++++++++++++
1 file changed, 1259 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/tcv.c

diff --git a/drivers/net/wireless/celeno/cl8k/tcv.c b/drivers/net/wireless/celeno/cl8k/tcv.c
new file mode 100644
index 000000000000..1a17c4f4445a
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/tcv.c
@@ -0,0 +1,1259 @@
+// SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#include <linux/slab.h>
+#include <linux/moduleparam.h>
+#include <linux/uaccess.h>
+
+#include "chip.h"
+#include "recovery.h"
+#include "vns.h"
+#include "mac80211.h"
+#include "config.h"
+#include "rfic.h"
+#include "e2p.h"
+#include "hw.h"
+#include "radio.h"
+#include "utils.h"
+#include "tcv.h"
+
+#define CL_TX_BCN_PENDING_CHAIN_MIN_TIME 10 /* Usec */
+#define CL_MAX_NUM_OF_RETRY 15
+#define INVALID_CALIB_RX_GAIN 0xff
+
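+/*
+ * Default per-TCV configuration. cl_tcv_config_alloc() copies these values
+ * and cl_tcv_config_read() overrides them with entries from cl_tcv<idx>.dat.
+ */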
+static struct cl_tcv_conf conf = {
+ .ce_debug_level = DBG_LVL_ERROR,
+ .ce_radio_on = true,
+ .ce_ps_ctrl_enabled = true,
+ .ci_ieee80211h = false,
+ .ci_max_bss_num = ARRAY_SIZE(((struct cl_hw *)0)->addresses),
+ .ci_short_guard_interval = 1,
+ .ci_max_mpdu_len = IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_7991,
+ .ci_max_ampdu_len_exp = IEEE80211_VHT_MAX_AMPDU_1024K,
+ .ce_dsp_code = "fwC.hex",
+ .ce_dsp_data = "fwD.hex",
+ .ce_dsp_external_data = "fwD.ext.hex",
+ .ci_uapsd_en = true,
+ .ce_eirp_regulatory_op_en = true,
+ .ce_eirp_regulatory_prod_en = false,
+ .ci_agg_tx = true,
+ .ci_agg_rx = true,
+ .ce_txldpc_en = true,
+ .ci_ht_rxldpc_en = true,
+ .ci_vht_rxldpc_en = true,
+ .ci_he_rxldpc_en = true,
+ .ci_cs_required = false,
+ .ci_rx_sensitivity_prod = {
+ [0 ... MAX_ANTENNAS - 1] = -100,
+ },
+ .ci_rx_sensitivity_op = {
+ [0 ... MAX_ANTENNAS - 1] = -100,
+ },
+ .ci_min_he_en = false,
+ .ce_cck_tx_ant_mask = 0x1,
+ .ce_cck_rx_ant_mask = 0x1,
+ .ce_rx_nss = 4,
+ .ce_tx_nss = 4,
+ .ce_num_antennas = 4,
+ .ce_max_agg_size_tx = IEEE80211_MAX_AMPDU_BUF_HE,
+ .ce_max_agg_size_rx = IEEE80211_MAX_AMPDU_BUF_HE,
+ .ce_rxamsdu_en = true,
+ .ce_txamsdu_en = CL_AMSDU_TX_PAYLOAD_MAX,
+ .ci_tx_amsdu_min_data_rate = 26, /* 26Mbps (= BW 20, NSS 0, MCS 3, GI 0) */
+ .ci_tx_sw_amsdu_max_packets = 0,
+ .ci_tx_packet_limit = 5000,
+ .ci_sw_txhdr_pool = 0,
+ .ci_amsdu_txhdr_pool = 0,
+ .ci_tx_queue_size_agg = 500,
+ .ci_tx_queue_size_single = 50,
+ .ci_tx_push_cntrs_stat_en = false,
+ .ci_traffic_mon_en = false,
+ .ci_ipc_rxbuf_size = {
+ [CL_RX_BUF_RXM] = IPC_RXBUF_SIZE,
+ [CL_RX_BUF_FW] = IPC_RXBUF_SIZE
+ },
+ .ce_max_retry = 8,
+ .ce_short_retry_limit = 4,
+ .ce_long_retry_limit = 4,
+ .ci_assoc_auth_retry_limit = 0,
+ .ci_cap_bandwidth = CHNL_BW_MAX,
+ .ci_chandef_channel = INVALID_CHAN_IDX,
+ .ci_chandef_bandwidth = CHNL_BW_MAX,
+ .ci_cck_in_hw_mode = true,
+ .ce_temp_comp_slope = 8,
+ .ci_fw_dbg_severity = CL_MACFW_DBG_SEV_WARNING,
+ .ci_fw_dbg_module = 0x0FFFFF,
+ .ci_lcu_dbg_cfg_inx = 4,
+ .ci_dsp_lcu_mode = 0,
+ .ci_hal_idle_to = CL_DEFAULT_HAL_IDLE_TIMEOUT,
+ .ci_tx_ac0_to = CL_TX_DEFAULT_AC0_TIMEOUT,
+ .ci_tx_ac1_to = CL_TX_DEFAULT_AC1_TIMEOUT,
+ .ci_tx_ac2_to = CL_TX_DEFAULT_AC2_TIMEOUT,
+ .ci_tx_ac3_to = CL_TX_DEFAULT_AC3_TIMEOUT,
+ .ci_tx_bcn_to = CL_TX_DEFAULT_BCN_TIMEOUT,
+ .ce_hardware_power_table = {0},
+ .ce_arr_gain = "0,3,4.75,6,7,7.75",
+ .ce_bf_gain_2_ant = "0",
+ .ce_bf_gain_3_ant = "0",
+ .ce_bf_gain_4_ant = "0",
+ .ce_bf_gain_5_ant = "0",
+ .ce_bf_gain_6_ant = "0",
+ .ce_ant_gain = "0",
+ .ce_ant_gain_36_64 = "0",
+ .ce_ant_gain_100_140 = "0",
+ .ce_ant_gain_149_165 = "0",
+ .ci_min_ant_pwr = "0",
+ .ci_bw_factor = "0,0,0,0",
+ .ce_mcast_rate = 0,
+ .ce_dyn_mcast_rate_en = false,
+ .ce_dyn_bcast_rate_en = false,
+ .ce_default_mcs_ofdm = 0,
+ .ce_default_mcs_cck = 0,
+ .ce_prot_log_nav_en = false,
+ .ce_prot_mode = TXL_PROT_RTS_FW,
+ .ce_prot_rate_format = 1,
+ .ce_prot_rate_mcs = 4,
+ .ce_prot_rate_pre_type = 0,
+ .ce_bw_signaling_mode = 0,
+ .ci_dyn_cts_sta_thr = 2,
+ .ci_vns_pwr_limit = 0,
+ .ci_vns_pwr_mode = VNS_MODE_ALL,
+ .ci_vns_rssi_auto_resp_thr = -40,
+ .ci_vns_rssi_thr = -40,
+ .ci_vns_rssi_hys = 3,
+ .ci_vns_maintenance_time = 2000,
+ .ce_bcn_tx_path_min_time = 1000,
+ .ci_backup_bcn_en = true,
+ .ce_tx_txop_cut_en = true,
+ .ci_bcns_flushed_cnt_thr = 3,
+ .ci_phy_err_prevents_phy_dump = false,
+ .ci_tx_rx_delay = 0,
+ .ci_fw_assert_time_diff_sec = 5,
+ .ci_fw_assert_storm_detect_thd = 10,
+ .ce_hw_assert_time_max = CL_HW_ASSERT_TIME_MAX,
+ .ce_bg_assert_print = 1,
+ .ce_fw_watchdog_mode = FW_WD_INTERNAL_RECOVERY,
+ .ce_fw_watchdog_limit_count = 5,
+ .ce_fw_watchdog_limit_time = 30 * 1000, /* Msecs */
+ .ci_rx_remote_cpu_drv = -1,
+ .ci_rx_remote_cpu_mac = -1,
+ .ci_pending_queue_size = 500,
+ .ce_tx_power_control = 100,
+ .ce_acs_coex_en = false,
+ .ci_dfs_initial_gain = 77,
+ .ci_dfs_agc_cd_th = 48,
+ .ci_dfs_long_pulse_min = 100,
+ .ci_dfs_long_pulse_max = 5000,
+ .ce_dfs_tbl_overwrite = {0},
+ /* 6G */
+ .ce_ppmcs_offset_he_6g = {
+ [WRS_MCS_0] = 0,
+ [WRS_MCS_1] = 0,
+ [WRS_MCS_2] = 0,
+ [WRS_MCS_3] = 0,
+ [WRS_MCS_4] = 0,
+ [WRS_MCS_5] = 0,
+ [WRS_MCS_6] = 0,
+ [WRS_MCS_7] = 0,
+ [WRS_MCS_8] = 0,
+ [WRS_MCS_9] = 0,
+ [WRS_MCS_10] = 0,
+ [WRS_MCS_11] = 0,
+ },
+ /* 5G */
+ .ce_ppmcs_offset_he_36_64 = {
+ [WRS_MCS_0] = 0,
+ [WRS_MCS_1] = 0,
+ [WRS_MCS_2] = 0,
+ [WRS_MCS_3] = 0,
+ [WRS_MCS_4] = 0,
+ [WRS_MCS_5] = 0,
+ [WRS_MCS_6] = 0,
+ [WRS_MCS_7] = 0,
+ [WRS_MCS_8] = 0,
+ [WRS_MCS_9] = 0,
+ [WRS_MCS_10] = 0,
+ [WRS_MCS_11] = 0,
+ },
+ .ce_ppmcs_offset_he_100_140 = {
+ [WRS_MCS_0] = 0,
+ [WRS_MCS_1] = 0,
+ [WRS_MCS_2] = 0,
+ [WRS_MCS_3] = 0,
+ [WRS_MCS_4] = 0,
+ [WRS_MCS_5] = 0,
+ [WRS_MCS_6] = 0,
+ [WRS_MCS_7] = 0,
+ [WRS_MCS_8] = 0,
+ [WRS_MCS_9] = 0,
+ [WRS_MCS_10] = 0,
+ [WRS_MCS_11] = 0,
+ },
+ .ce_ppmcs_offset_he_149_165 = {
+ [WRS_MCS_0] = 0,
+ [WRS_MCS_1] = 0,
+ [WRS_MCS_2] = 0,
+ [WRS_MCS_3] = 0,
+ [WRS_MCS_4] = 0,
+ [WRS_MCS_5] = 0,
+ [WRS_MCS_6] = 0,
+ [WRS_MCS_7] = 0,
+ [WRS_MCS_8] = 0,
+ [WRS_MCS_9] = 0,
+ [WRS_MCS_10] = 0,
+ [WRS_MCS_11] = 0,
+ },
+ .ce_ppmcs_offset_ht_vht_36_64 = {
+ [WRS_MCS_0] = 0,
+ [WRS_MCS_1] = 0,
+ [WRS_MCS_2] = 0,
+ [WRS_MCS_3] = 0,
+ [WRS_MCS_4] = 0,
+ [WRS_MCS_5] = 0,
+ [WRS_MCS_6] = 0,
+ [WRS_MCS_7] = 0,
+ [WRS_MCS_8] = 0,
+ [WRS_MCS_9] = 0,
+ },
+ .ce_ppmcs_offset_ht_vht_100_140 = {
+ [WRS_MCS_0] = 0,
+ [WRS_MCS_1] = 0,
+ [WRS_MCS_2] = 0,
+ [WRS_MCS_3] = 0,
+ [WRS_MCS_4] = 0,
+ [WRS_MCS_5] = 0,
+ [WRS_MCS_6] = 0,
+ [WRS_MCS_7] = 0,
+ [WRS_MCS_8] = 0,
+ [WRS_MCS_9] = 0,
+ },
+ .ce_ppmcs_offset_ht_vht_149_165 = {
+ [WRS_MCS_0] = 0,
+ [WRS_MCS_1] = 0,
+ [WRS_MCS_2] = 0,
+ [WRS_MCS_3] = 0,
+ [WRS_MCS_4] = 0,
+ [WRS_MCS_5] = 0,
+ [WRS_MCS_6] = 0,
+ [WRS_MCS_7] = 0,
+ [WRS_MCS_8] = 0,
+ [WRS_MCS_9] = 0,
+ },
+ .ce_ppmcs_offset_ofdm_36_64 = {
+ [WRS_MCS_0] = 0,
+ [WRS_MCS_1] = 0,
+ [WRS_MCS_2] = 0,
+ [WRS_MCS_3] = 0,
+ [WRS_MCS_4] = 0,
+ [WRS_MCS_5] = 0,
+ [WRS_MCS_6] = 0,
+ [WRS_MCS_7] = 0,
+ },
+ .ce_ppmcs_offset_ofdm_100_140 = {
+ [WRS_MCS_0] = 0,
+ [WRS_MCS_1] = 0,
+ [WRS_MCS_2] = 0,
+ [WRS_MCS_3] = 0,
+ [WRS_MCS_4] = 0,
+ [WRS_MCS_5] = 0,
+ [WRS_MCS_6] = 0,
+ [WRS_MCS_7] = 0,
+ },
+ .ce_ppmcs_offset_ofdm_149_165 = {
+ [WRS_MCS_0] = 0,
+ [WRS_MCS_1] = 0,
+ [WRS_MCS_2] = 0,
+ [WRS_MCS_3] = 0,
+ [WRS_MCS_4] = 0,
+ [WRS_MCS_5] = 0,
+ [WRS_MCS_6] = 0,
+ [WRS_MCS_7] = 0,
+ },
+ /* 24G */
+ .ce_ppmcs_offset_he = {
+ [WRS_MCS_0] = 0,
+ [WRS_MCS_1] = 0,
+ [WRS_MCS_2] = 0,
+ [WRS_MCS_3] = 0,
+ [WRS_MCS_4] = 0,
+ [WRS_MCS_5] = 0,
+ [WRS_MCS_6] = 0,
+ [WRS_MCS_7] = 0,
+ [WRS_MCS_8] = 0,
+ [WRS_MCS_9] = 0,
+ [WRS_MCS_10] = 0,
+ [WRS_MCS_11] = 0,
+ },
+ .ce_ppmcs_offset_ht = {
+ [WRS_MCS_0] = 0,
+ [WRS_MCS_1] = 0,
+ [WRS_MCS_2] = 0,
+ [WRS_MCS_3] = 0,
+ [WRS_MCS_4] = 0,
+ [WRS_MCS_5] = 0,
+ [WRS_MCS_6] = 0,
+ [WRS_MCS_7] = 0,
+ },
+ .ce_ppmcs_offset_ofdm = {
+ [WRS_MCS_0] = 0,
+ [WRS_MCS_1] = 0,
+ [WRS_MCS_2] = 0,
+ [WRS_MCS_3] = 0,
+ [WRS_MCS_4] = 0,
+ [WRS_MCS_5] = 0,
+ [WRS_MCS_6] = 0,
+ [WRS_MCS_7] = 0,
+ },
+ .ce_ppmcs_offset_cck = {
+ [WRS_MCS_0] = 0,
+ [WRS_MCS_1] = 0,
+ [WRS_MCS_2] = 0,
+ [WRS_MCS_3] = 0,
+ },
+ .ce_ppbw_offset = {
+ [CHNL_BW_20] = 0,
+ [CHNL_BW_40] = 0,
+ [CHNL_BW_80] = 0,
+ [CHNL_BW_160] = 0,
+ },
+ .ce_power_offset_prod_en = true,
+ .ci_bf_en = false,
+ .ci_bf_max_nss = 2,
+ .ce_sounding_interval_coefs = {
+ [SOUNDING_INTERVAL_COEF_MIN_INTERVAL] = 100,
+ [SOUNDING_INTERVAL_COEF_STA_STEP] = 4,
+ [SOUNDING_INTERVAL_COEF_INTERVAL_STEP] = 50,
+ [SOUNDING_INTERVAL_COEF_MAX_INTERVAL] = 500,
+ },
+ .ci_rate_fallback = {
+ [CL_RATE_FALLBACK_COUNT_SU] = 4,
+ [CL_RATE_FALLBACK_COUNT_MU] = 2,
+ [CL_RATE_FALLBACK_RETRY_COUNT_THR] = 2,
+ [CL_RATE_FALLBACK_BA_PER_THR] = 25,
+ [CL_RATE_FALLBACK_BA_NOT_RECEIVED_THR] = 2,
+ [CL_RATE_FALLBACK_DISABLE_MCS] = 1
+ },
+ .ce_rx_pkts_budget = 512,
+ .ci_band_num = 5,
+ .ci_mult_ampdu_in_txop_en = false,
+ .ce_wmm_aifsn = {
+ [AC_BK] = 3,
+ [AC_BE] = 7,
+ [AC_VI] = 1,
+ [AC_VO] = 1
+ },
+ .ce_wmm_cwmin = {
+ [AC_BK] = 4,
+ [AC_BE] = 4,
+ [AC_VI] = 3,
+ [AC_VO] = 2
+ },
+ .ce_wmm_cwmax = {
+ [AC_BK] = 10,
+ [AC_BE] = 10,
+ [AC_VI] = 4,
+ [AC_VO] = 3
+ },
+ .ce_wmm_txop = {
+ [AC_BK] = 0,
+ [AC_BE] = 0,
+ [AC_VI] = 94,
+ [AC_VO] = 47
+ },
+ .ci_su_force_min_spacing = CL_TX_MPDU_SPACING_INVALID,
+ .ci_mu_force_min_spacing = CL_TX_MPDU_SPACING_INVALID,
+ .ci_tf_mac_pad_dur = 0,
+ .ci_cca_timeout = 300,
+ .ce_tx_ba_session_timeout = 30000,
+ .ci_motion_sense_en = true,
+ .ci_motion_sense_rssi_thr = 8,
+ .ci_wrs_max_bw = CHNL_BW_160,
+ .ci_wrs_min_bw = CHNL_BW_20,
+ .ci_wrs_fixed_rate = {
+ [WRS_FIXED_PARAM_MODE] = -1,
+ [WRS_FIXED_PARAM_BW] = -1,
+ [WRS_FIXED_PARAM_NSS] = -1,
+ [WRS_FIXED_PARAM_MCS] = -1,
+ [WRS_FIXED_PARAM_GI] = -1
+ },
+ .ce_he_mcs_nss_supp_tx = {
+ [WRS_SS_1 ... WRS_SS_4] = 11,
+ },
+ .ce_he_mcs_nss_supp_rx = {
+ [WRS_SS_1 ... WRS_SS_4] = 11,
+ },
+ .ce_vht_mcs_nss_supp_tx = {
+ [WRS_SS_1 ... WRS_SS_4] = 9,
+ },
+ .ce_vht_mcs_nss_supp_rx = {
+ [WRS_SS_1 ... WRS_SS_4] = 9,
+ },
+ .ci_pe_duration = U8_MAX,
+ .ci_pe_duration_bcast = PPE_16US,
+ .ci_gain_update_enable = 1,
+ .ci_mcs_sig_b = 0,
+ .ci_spp_ksr_value = 1,
+ .ci_rx_padding_en = false,
+ .ci_stats_en = false,
+ .ci_bar_disable = false,
+ .ci_ofdm_only = true,
+ .ci_hw_bsr = false,
+ .ci_drop_to_lower_bw = false,
+ .ci_force_icmp_single = false,
+ .ce_wrs_rx_en = false,
+ .ci_hr_factor = {
+ [CHNL_BW_20] = 2,
+ [CHNL_BW_40] = 2,
+ [CHNL_BW_80] = 2,
+ [CHNL_BW_160] = 1
+ },
+ .ci_csd_en = true,
+ .ci_signal_extension_en = false,
+ .ci_vht_cap_24g = false,
+ .ci_tx_digital_gain = 0x28282828,
+ .ci_tx_digital_gain_cck = 0x63636363,
+ .ci_ofdm_cck_power_offset = -13,
+ .ci_mac_clk_gating_en = true,
+ .ci_phy_clk_gating_en = true,
+ .ci_imaging_blocker = false,
+ .ci_sensing_ndp_tx_chain_mask = NDP_TX_PHY0,
+ .ci_sensing_ndp_tx_bw = CHNL_BW_MAX,
+ .ci_sensing_ndp_tx_format = FORMATMOD_NON_HT,
+ .ci_sensing_ndp_tx_num_ltf = LTF_X1,
+ .ci_calib_ant_tx = {
+ [0 ... MAX_ANTENNAS - 1] = U8_MAX,
+ },
+ .ci_calib_ant_rx = {
+ [0 ... MAX_ANTENNAS - 1] = U8_MAX,
+ },
+ .ci_cca_ed_rise_thr_dbm = -62,
+ .ci_cca_ed_fall_thr_dbm = -65,
+ .ci_cca_cs_en = 1,
+ .ci_cca_modem_en = 0xf,
+ .ci_cca_main_ant = 0,
+ .ci_cca_second_ant = 1,
+ .ci_cca_flag0_ctrl = 0x8,
+ .ci_cca_flag1_ctrl = 0x8,
+ .ci_cca_flag2_ctrl = 0x2,
+ .ci_cca_flag3_ctrl = 0xa,
+ .ci_cca_gi_rise_thr_dbm = -72,
+ .ci_cca_gi_fall_thr_dbm = -75,
+ .ci_cca_gi_pow_lim_dbm = -59,
+ .ci_cca_ed_en = 0x7ff,
+ .ci_cca_gi_en = 0,
+ .ci_rx_he_mu_ppdu = false,
+ .ci_fast_rx_en = true,
+ .ci_distance_auto_resp_all = 0,
+ .ci_distance_auto_resp_msta = 0,
+ .ci_fw_disable_recovery = false,
+ .ce_listener_en = 0,
+ .ci_tx_delay_tstamp_en = false,
+ .ci_calib_tx_init_tx_gain = {
+ [0 ... MAX_ANTENNAS - 1] = CALIB_TX_GAIN_DEFAULT,
+ },
+ .ci_calib_tx_init_rx_gain = {
+ [0 ... MAX_ANTENNAS - 1] = INVALID_CALIB_RX_GAIN,
+ },
+ .ci_calib_rx_init_tx_gain = {
+ [0 ... MAX_ANTENNAS - 1] = CALIB_TX_GAIN_DEFAULT,
+ },
+ .ci_calib_rx_init_rx_gain = {
+ [0 ... MAX_ANTENNAS - 1] = INVALID_CALIB_RX_GAIN,
+ },
+ .ci_calib_conf_rx_gain_upper_limit = INVALID_CALIB_RX_GAIN,
+ .ci_calib_conf_rx_gain_lower_limit = INVALID_CALIB_RX_GAIN,
+ .ci_calib_conf_tone_vector_20bw = {6, 10, 14, 18, 22, 24, 26, 27},
+ .ci_calib_conf_tone_vector_40bw = {10, 18, 26, 34, 41, 48, 53, 58},
+ .ci_calib_conf_tone_vector_80bw = {18, 34, 50, 66, 82, 98, 110, 122},
+ .ci_calib_conf_tone_vector_160bw = {18, 34, 66, 98, 130, 164, 224, 250},
+ .ci_calib_conf_gp_rad_trshld = GP_RAD_TRSHLD_DEFAULT,
+ .ci_calib_conf_ga_lin_upper_trshld = GA_LIN_UPPER_TRSHLD_DEFAULT,
+ .ci_calib_conf_ga_lin_lower_trshld = GA_LIN_LOWER_TRSHLD_DEFAULT,
+ .ci_calib_conf_singletons_num = SINGLETONS_NUM_DEFAULT,
+ .ci_calib_conf_rampup_time = RAMPUP_TIME,
+ .ci_calib_conf_lo_coarse_step = LO_COARSE_STEP,
+#ifdef CONFIG_CL8K_EEPROM_STM24256
+ .ci_calib_conf_lo_fine_step = LO_FINE_STEP,
+ .ci_calib_eeprom_channels_20mhz = {0},
+ .ci_calib_eeprom_channels_40mhz = {0},
+ .ci_calib_eeprom_channels_80mhz = {0},
+ .ci_calib_eeprom_channels_160mhz = {0},
+#endif
+ .ci_mesh_basic_rates = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+};
+
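+/*
+ * Apply a single "name=value" pair to the configuration. Each READ_* helper
+ * compares 'name' with the corresponding field name and, on a match, parses
+ * 'value' into cl_hw->conf and updates 'ret'.
+ */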
+static int cl_tcv_update_config(struct cl_hw *cl_hw, char *name, char *value)
+{
+ struct cl_tcv_conf *conf = cl_hw->conf;
+ int ret = -ENOENT;
+
+ do {
+ READ_S8(ce_debug_level);
+ READ_BOOL(ce_radio_on);
+ READ_BOOL(ce_ps_ctrl_enabled);
+ READ_BOOL(ci_ieee80211h);
+ READ_U8(ci_max_bss_num);
+ READ_U8(ci_short_guard_interval);
+ READ_U8(ci_max_mpdu_len);
+ READ_U8(ci_max_ampdu_len_exp);
+ READ_STR(ce_dsp_code);
+ READ_STR(ce_dsp_data);
+ READ_STR(ce_dsp_external_data);
+ READ_BOOL(ci_uapsd_en);
+ READ_BOOL(ce_eirp_regulatory_op_en);
+ READ_BOOL(ce_eirp_regulatory_prod_en);
+ READ_BOOL(ci_agg_tx);
+ READ_BOOL(ci_agg_rx);
+ READ_BOOL(ce_txldpc_en);
+ READ_BOOL(ci_ht_rxldpc_en);
+ READ_BOOL(ci_vht_rxldpc_en);
+ READ_BOOL(ci_he_rxldpc_en);
+ READ_BOOL(ci_cs_required);
+ READ_S8_ARR(ci_rx_sensitivity_prod, MAX_ANTENNAS);
+ READ_S8_ARR(ci_rx_sensitivity_op, MAX_ANTENNAS);
+ READ_BOOL(ci_min_he_en);
+ READ_U8(ce_cck_tx_ant_mask);
+ READ_U8(ce_cck_rx_ant_mask);
+ READ_U8(ce_rx_nss);
+ READ_U8(ce_tx_nss);
+ READ_U8(ce_num_antennas);
+ READ_U16(ce_max_agg_size_tx);
+ READ_U16(ce_max_agg_size_rx);
+ READ_BOOL(ce_rxamsdu_en);
+ READ_U8(ce_txamsdu_en);
+ READ_U16(ci_tx_amsdu_min_data_rate);
+ READ_U8(ci_tx_sw_amsdu_max_packets);
+ READ_U16(ci_tx_packet_limit);
+ READ_U16(ci_sw_txhdr_pool);
+ READ_U16(ci_amsdu_txhdr_pool);
+ READ_U16(ci_tx_queue_size_agg);
+ READ_U16(ci_tx_queue_size_single);
+ READ_BOOL(ci_tx_push_cntrs_stat_en);
+ READ_BOOL(ci_traffic_mon_en);
+ READ_U16_ARR(ci_ipc_rxbuf_size, CL_RX_BUF_MAX, true);
+ READ_U16(ce_max_retry);
+ READ_U8(ce_short_retry_limit);
+ READ_U8(ce_long_retry_limit);
+ READ_U8(ci_assoc_auth_retry_limit);
+ READ_U8(ci_cap_bandwidth);
+ READ_U32(ci_chandef_channel);
+ READ_U8(ci_chandef_bandwidth);
+ READ_BOOL(ci_cck_in_hw_mode);
+ READ_S8(ce_temp_comp_slope);
+ READ_U32(ci_fw_dbg_severity);
+ READ_U32(ci_fw_dbg_module);
+ READ_U8(ci_lcu_dbg_cfg_inx);
+ READ_U8(ci_dsp_lcu_mode);
+ READ_U32(ci_hal_idle_to);
+ READ_U32(ci_tx_ac0_to);
+ READ_U32(ci_tx_ac1_to);
+ READ_U32(ci_tx_ac2_to);
+ READ_U32(ci_tx_ac3_to);
+ READ_U32(ci_tx_bcn_to);
+ READ_STR(ce_hardware_power_table);
+ READ_STR(ce_arr_gain);
+ READ_STR(ce_bf_gain_2_ant);
+ READ_STR(ce_bf_gain_3_ant);
+ READ_STR(ce_bf_gain_4_ant);
+ READ_STR(ce_bf_gain_5_ant);
+ READ_STR(ce_bf_gain_6_ant);
+ READ_STR(ce_ant_gain);
+ READ_STR(ce_ant_gain_36_64);
+ READ_STR(ce_ant_gain_100_140);
+ READ_STR(ce_ant_gain_149_165);
+ READ_STR(ci_min_ant_pwr);
+ READ_STR(ci_bw_factor);
+ READ_U8(ce_mcast_rate);
+ READ_BOOL(ce_dyn_mcast_rate_en);
+ READ_BOOL(ce_dyn_bcast_rate_en);
+ READ_U8(ce_default_mcs_ofdm);
+ READ_U8(ce_default_mcs_cck);
+ READ_BOOL(ce_prot_log_nav_en);
+ READ_U8(ce_prot_mode);
+ READ_U8(ce_prot_rate_format);
+ READ_U8(ce_prot_rate_mcs);
+ READ_U8(ce_prot_rate_pre_type);
+ READ_U8(ce_bw_signaling_mode);
+ READ_U8(ci_dyn_cts_sta_thr);
+ READ_S8(ci_vns_pwr_limit);
+ READ_U8(ci_vns_pwr_mode);
+ READ_S8(ci_vns_rssi_auto_resp_thr);
+ READ_S8(ci_vns_rssi_thr);
+ READ_S8(ci_vns_rssi_hys);
+ READ_U16(ci_vns_maintenance_time);
+ READ_U16(ce_bcn_tx_path_min_time);
+ READ_BOOL(ci_backup_bcn_en);
+ READ_BOOL(ce_tx_txop_cut_en);
+ READ_U8(ci_bcns_flushed_cnt_thr);
+ READ_BOOL(ci_phy_err_prevents_phy_dump);
+ READ_U8(ci_tx_rx_delay);
+ READ_U8(ci_fw_assert_time_diff_sec);
+ READ_U8(ci_fw_assert_storm_detect_thd);
+ READ_U32(ce_hw_assert_time_max);
+ READ_U8(ce_bg_assert_print);
+ READ_U8(ce_fw_watchdog_mode);
+ READ_U8(ce_fw_watchdog_limit_count);
+ READ_U32(ce_fw_watchdog_limit_time);
+ READ_S8(ci_rx_remote_cpu_drv);
+ READ_S8(ci_rx_remote_cpu_mac);
+ READ_U16(ci_pending_queue_size);
+ READ_U8(ce_tx_power_control);
+ READ_BOOL(ce_acs_coex_en);
+ READ_U8(ci_dfs_initial_gain);
+ READ_U8(ci_dfs_agc_cd_th);
+ READ_U16(ci_dfs_long_pulse_min);
+ READ_U16(ci_dfs_long_pulse_max);
+ READ_STR(ce_dfs_tbl_overwrite);
+ READ_S8_ARR(ce_ppmcs_offset_he_6g, WRS_MCS_MAX_HE);
+ READ_S8_ARR(ce_ppmcs_offset_he_36_64, WRS_MCS_MAX_HE);
+ READ_S8_ARR(ce_ppmcs_offset_he_100_140, WRS_MCS_MAX_HE);
+ READ_S8_ARR(ce_ppmcs_offset_he_149_165, WRS_MCS_MAX_HE);
+ READ_S8_ARR(ce_ppmcs_offset_ht_vht_36_64, WRS_MCS_MAX_VHT);
+ READ_S8_ARR(ce_ppmcs_offset_ht_vht_100_140, WRS_MCS_MAX_VHT);
+ READ_S8_ARR(ce_ppmcs_offset_ht_vht_149_165, WRS_MCS_MAX_VHT);
+ READ_S8_ARR(ce_ppmcs_offset_ofdm_36_64, WRS_MCS_MAX_OFDM);
+ READ_S8_ARR(ce_ppmcs_offset_ofdm_100_140, WRS_MCS_MAX_OFDM);
+ READ_S8_ARR(ce_ppmcs_offset_ofdm_149_165, WRS_MCS_MAX_OFDM);
+ READ_S8_ARR(ce_ppmcs_offset_he, WRS_MCS_MAX_HE);
+ READ_S8_ARR(ce_ppmcs_offset_ht, WRS_MCS_MAX_HT);
+ READ_S8_ARR(ce_ppmcs_offset_ofdm, WRS_MCS_MAX_OFDM);
+ READ_S8_ARR(ce_ppmcs_offset_cck, WRS_MCS_MAX_CCK);
+ READ_S8_ARR(ce_ppbw_offset, CHNL_BW_MAX);
+ READ_BOOL(ce_power_offset_prod_en);
+ READ_BOOL(ci_bf_en);
+ READ_U8(ci_bf_max_nss);
+ READ_U16_ARR(ce_sounding_interval_coefs, SOUNDING_INTERVAL_COEF_MAX, true);
+ READ_U8_ARR(ci_rate_fallback, CL_RATE_FALLBACK_MAX, true);
+ READ_U16(ce_rx_pkts_budget);
+ READ_U8(ci_band_num);
+ READ_BOOL(ci_mult_ampdu_in_txop_en);
+ READ_U8_ARR(ce_wmm_aifsn, AC_MAX, true);
+ READ_U8_ARR(ce_wmm_cwmin, AC_MAX, true);
+ READ_U8_ARR(ce_wmm_cwmax, AC_MAX, true);
+ READ_U16_ARR(ce_wmm_txop, AC_MAX, true);
+ READ_U8(ci_su_force_min_spacing);
+ READ_U8(ci_mu_force_min_spacing);
+ READ_U8(ci_tf_mac_pad_dur);
+ READ_U32(ci_cca_timeout);
+ READ_U16(ce_tx_ba_session_timeout);
+ READ_BOOL(ci_motion_sense_en);
+ READ_S8(ci_motion_sense_rssi_thr);
+ READ_U8(ci_wrs_max_bw);
+ READ_U8(ci_wrs_min_bw);
+ READ_S8_ARR(ci_wrs_fixed_rate, WRS_FIXED_PARAM_MAX);
+ READ_U8_ARR(ce_he_mcs_nss_supp_tx, WRS_SS_MAX, true);
+ READ_U8_ARR(ce_he_mcs_nss_supp_rx, WRS_SS_MAX, true);
+ READ_U8_ARR(ce_vht_mcs_nss_supp_tx, WRS_SS_MAX, true);
+ READ_U8_ARR(ce_vht_mcs_nss_supp_rx, WRS_SS_MAX, true);
+ READ_U8(ci_pe_duration);
+ READ_U8(ci_pe_duration_bcast);
+ READ_U8(ci_gain_update_enable);
+ READ_U8(ci_mcs_sig_b);
+ READ_U8(ci_spp_ksr_value);
+ READ_BOOL(ci_rx_padding_en);
+ READ_BOOL(ci_stats_en);
+ READ_BOOL(ci_bar_disable);
+ READ_BOOL(ci_ofdm_only);
+ READ_BOOL(ci_hw_bsr);
+ READ_BOOL(ci_drop_to_lower_bw);
+ READ_BOOL(ci_force_icmp_single);
+ READ_BOOL(ci_csd_en);
+ READ_BOOL(ce_wrs_rx_en);
+ READ_U8_ARR(ci_hr_factor, CHNL_BW_MAX, true);
+ READ_BOOL(ci_signal_extension_en);
+ READ_BOOL(ci_vht_cap_24g);
+ READ_U32(ci_tx_digital_gain);
+ READ_U32(ci_tx_digital_gain_cck);
+ READ_S8(ci_ofdm_cck_power_offset);
+ READ_BOOL(ci_mac_clk_gating_en);
+ READ_BOOL(ci_phy_clk_gating_en);
+ READ_BOOL(ci_imaging_blocker);
+ READ_U8(ci_sensing_ndp_tx_chain_mask);
+ READ_U8(ci_sensing_ndp_tx_bw);
+ READ_U8(ci_sensing_ndp_tx_format);
+ READ_U8(ci_sensing_ndp_tx_num_ltf);
+ READ_U8_ARR(ci_calib_ant_tx, MAX_ANTENNAS, true);
+ READ_U8_ARR(ci_calib_ant_rx, MAX_ANTENNAS, true);
+ READ_S8(ci_cca_ed_rise_thr_dbm);
+ READ_S8(ci_cca_ed_fall_thr_dbm);
+ READ_U8(ci_cca_cs_en);
+ READ_U8(ci_cca_modem_en);
+ READ_U8(ci_cca_main_ant);
+ READ_U8(ci_cca_second_ant);
+ READ_U8(ci_cca_flag0_ctrl);
+ READ_U8(ci_cca_flag1_ctrl);
+ READ_U8(ci_cca_flag2_ctrl);
+ READ_U8(ci_cca_flag3_ctrl);
+ READ_S8(ci_cca_gi_rise_thr_dbm);
+ READ_S8(ci_cca_gi_fall_thr_dbm);
+ READ_S8(ci_cca_gi_pow_lim_dbm);
+ READ_U16(ci_cca_ed_en);
+ READ_U8(ci_cca_gi_en);
+ READ_BOOL(ci_rx_he_mu_ppdu);
+ READ_BOOL(ci_fast_rx_en);
+ READ_U8(ci_distance_auto_resp_all);
+ READ_U8(ci_distance_auto_resp_msta);
+ READ_BOOL(ci_fw_disable_recovery);
+ READ_BOOL(ce_listener_en);
+ READ_BOOL(ci_tx_delay_tstamp_en);
+ READ_U8_ARR(ci_calib_tx_init_tx_gain, MAX_ANTENNAS, true);
+ READ_U8_ARR(ci_calib_tx_init_rx_gain, MAX_ANTENNAS, true);
+ READ_U8_ARR(ci_calib_rx_init_tx_gain, MAX_ANTENNAS, true);
+ READ_U8_ARR(ci_calib_rx_init_rx_gain, MAX_ANTENNAS, true);
+ READ_U8(ci_calib_conf_rx_gain_upper_limit);
+ READ_U8(ci_calib_conf_rx_gain_lower_limit);
+ READ_U8_ARR(ci_calib_conf_tone_vector_20bw, IQ_NUM_TONES_REQ, true);
+ READ_U8_ARR(ci_calib_conf_tone_vector_40bw, IQ_NUM_TONES_REQ, true);
+ READ_U8_ARR(ci_calib_conf_tone_vector_80bw, IQ_NUM_TONES_REQ, true);
+ READ_U8_ARR(ci_calib_conf_tone_vector_160bw, IQ_NUM_TONES_REQ, true);
+ READ_U32(ci_calib_conf_gp_rad_trshld);
+ READ_U32(ci_calib_conf_ga_lin_upper_trshld);
+ READ_U32(ci_calib_conf_ga_lin_lower_trshld);
+ READ_U8(ci_calib_conf_singletons_num);
+ READ_U16(ci_calib_conf_rampup_time);
+ READ_U16(ci_calib_conf_lo_coarse_step);
+ READ_U16(ci_calib_conf_lo_fine_step);
+
+#ifdef CONFIG_CL8K_EEPROM_STM24256
+ if (cl_hw_is_tcv0(cl_hw)) {
+ READ_U16_ARR(ci_calib_eeprom_channels_20mhz,
+ EEPROM_CALIB_DATA_ELEM_NUM_20MHZ_TCV0, false);
+ READ_U16_ARR(ci_calib_eeprom_channels_40mhz,
+ EEPROM_CALIB_DATA_ELEM_NUM_40MHZ_TCV0, false);
+ READ_U16_ARR(ci_calib_eeprom_channels_80mhz,
+ EEPROM_CALIB_DATA_ELEM_NUM_80MHZ_TCV0, false);
+ READ_U16_ARR(ci_calib_eeprom_channels_160mhz,
+ EEPROM_CALIB_DATA_ELEM_NUM_160MHZ_TCV0, false);
+ }
+ if (cl_hw_is_tcv1(cl_hw)) {
+ READ_U16_ARR(ci_calib_eeprom_channels_20mhz,
+ EEPROM_CALIB_DATA_ELEM_NUM_20MHZ_TCV1, false);
+ READ_U16_ARR(ci_calib_eeprom_channels_40mhz,
+ EEPROM_CALIB_DATA_ELEM_NUM_40MHZ_TCV1, false);
+ READ_U16_ARR(ci_calib_eeprom_channels_80mhz,
+ EEPROM_CALIB_DATA_ELEM_NUM_80MHZ_TCV1, false);
+ READ_U16_ARR(ci_calib_eeprom_channels_160mhz,
+ EEPROM_CALIB_DATA_ELEM_NUM_160MHZ_TCV1, false);
+ }
+#endif
+ READ_U16_ARR(ci_mesh_basic_rates, MESH_BASIC_RATE_MAX, false);
+ } while (0);
+
+ if (ret == -ENOENT) {
+ if (cl_config_is_non_driver_param(name))
+ ret = 0;
+ else
+ CL_DBG_ERROR(cl_hw, "No matching conf for nvram parameter %s\n", name);
+ }
+
+ return ret;
+}
+
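+/*
+ * Parse the configuration buffer line by line. Lines have the form
+ * "name=value" (e.g. "ce_max_retry=8"); '#' comments and blank lines
+ * are skipped.
+ */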
+static int cl_tcv_set_all_params_from_buf(struct cl_hw *cl_hw, char *buf, size_t size)
+{
+ char *line = buf;
+ char name[MAX_PARAM_NAME_LENGTH];
+ char value[STR_LEN_256B];
+ char *begin;
+ char *end;
+ char *newline;
+ int ret = 0;
+ int name_length = 0;
+ int value_length = 0;
+
+ while (line && strlen(line) && (line != (buf + size))) {
+ if ((*line == '#') || (*line == '\n')) {
+ /* Skip comment or blank line */
+ newline = strstr(line, "\n");
+ if (!newline)
+ break;
+ line = newline + 1;
+ } else if (*line) {
+ begin = line;
+ end = strstr(begin, "=");
+
+ if (!end) {
+ ret = -EBADMSG;
+ goto exit;
+ }
+
+ end++;
+ name_length = end - begin;
+ newline = strstr(end, "\n");
+ if (!newline) {
+ ret = -EBADMSG;
+ goto exit;
+ }
+ value_length = newline - end + 1;
+
+ if (name_length >= MAX_PARAM_NAME_LENGTH) {
+ cl_dbg_err(cl_hw,
+ "Name too long (%u)\n", name_length);
+ ret = -EBADMSG;
+ goto exit;
+ }
+ if (value_length >= STR_LEN_256B) {
+ cl_dbg_err(cl_hw,
+ "Value too long (%u)\n", value_length);
+ ret = -EBADMSG;
+ goto exit;
+ }
+
+ snprintf(name, name_length, "%s", begin);
+ snprintf(value, value_length, "%s", end);
+
+ ret = cl_tcv_update_config(cl_hw, name, value);
+ if (ret)
+ goto exit;
+
+ line = newline + 1;
+ }
+ }
+
+exit:
+
+ return ret;
+}
+
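+/* Forced SU/MU minimum MPDU spacing is limited to this discrete set of values */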
+static bool cl_tcv_is_valid_min_spacing(u8 min_spacing)
+{
+ return ((min_spacing == 0) ||
+ (min_spacing == 1) ||
+ (min_spacing == 2) ||
+ (min_spacing == 3) ||
+ (min_spacing == 4) ||
+ (min_spacing == 6) ||
+ (min_spacing == 8) ||
+ (min_spacing == 10) ||
+ (min_spacing == 12) ||
+ (min_spacing == 14) ||
+ (min_spacing == 16) ||
+ (min_spacing == 18) ||
+ (min_spacing == 20) ||
+ (min_spacing == 24) ||
+ (min_spacing == CL_TX_MPDU_SPACING_INVALID));
+}
+
+static bool cl_tcv_is_valid_cca_config(struct cl_hw *cl_hw, struct cl_tcv_conf *conf)
+{
+ if (conf->ci_cca_ed_rise_thr_dbm <= conf->ci_cca_ed_fall_thr_dbm) {
+ CL_DBG_ERROR(cl_hw, "cca_ed_rise_thr_dbm (%d) <= cca_ed_fall_thr_dbm (%d)\n",
+ conf->ci_cca_ed_rise_thr_dbm, conf->ci_cca_ed_fall_thr_dbm);
+ return false;
+ }
+
+ if (conf->ci_cca_gi_rise_thr_dbm <= conf->ci_cca_gi_fall_thr_dbm) {
+ CL_DBG_ERROR(cl_hw, "cca_gi_rise_thr_dbm (%d) <= cca_gi_fall_thr_dbm (%d)\n",
+ conf->ci_cca_gi_rise_thr_dbm, conf->ci_cca_gi_fall_thr_dbm);
+ return false;
+ }
+
+ if (conf->ci_cca_gi_pow_lim_dbm <= conf->ci_cca_ed_rise_thr_dbm) {
+ CL_DBG_ERROR(cl_hw, "cca_gi_pow_lim_dbm (%d) <= cca_ed_rise_thr_dbm (%d)\n",
+ conf->ci_cca_gi_pow_lim_dbm, conf->ci_cca_ed_rise_thr_dbm);
+ return false;
+ }
+
+ return true;
+}
+
+static inline void cl_tcv_set_default_channel(u32 *channel, u32 value)
+{
+ if (*channel == INVALID_CHAN_IDX)
+ *channel = value;
+}
+
+static inline void cl_tcv_set_default_bandwidth(u8 *bw, u8 value)
+{
+ if (*bw == CHNL_BW_MAX)
+ *bw = value;
+}
+
+static inline bool cl_tcv_is_valid_bandwidth(u8 *bw, u8 max_value)
+{
+ return *bw <= max_value;
+}
+
+static bool cl_tcv_is_valid_channeling_context(struct cl_hw *cl_hw)
+{
+ struct cl_tcv_conf *conf = cl_hw->conf;
+ u32 dflt_channel = 1;
+ u8 bw_limit = CHNL_BW_20;
+ char *band_str = "?";
+
+ if (cl_band_is_24g(cl_hw)) {
+ dflt_channel = 1;
+ bw_limit = CHNL_BW_40;
+ band_str = "24g";
+ } else if (cl_band_is_5g(cl_hw)) {
+ dflt_channel = 36;
+ bw_limit = CHNL_BW_160;
+ band_str = "5g";
+ } else {
+ dflt_channel = 1;
+ bw_limit = CHNL_BW_160;
+ band_str = "6g";
+ }
+
+ cl_tcv_set_default_channel(&conf->ci_chandef_channel, dflt_channel);
+ cl_tcv_set_default_bandwidth(&conf->ci_chandef_bandwidth, bw_limit);
+ cl_tcv_set_default_bandwidth(&conf->ci_cap_bandwidth, bw_limit);
+
+ /* Forcibly change BW limit for production mode in 24g */
+ if (cl_band_is_24g(cl_hw) && cl_hw->chip->conf->ce_production_mode)
+ bw_limit = CHNL_BW_160;
+
+ if (!cl_tcv_is_valid_bandwidth(&conf->ci_cap_bandwidth, bw_limit)) {
+ CL_DBG_ERROR(cl_hw, "Invalid channel bandwidth (%u) for %s\n",
+ conf->ci_cap_bandwidth, band_str);
+ return false;
+ }
+
+ return true;
+}
+
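+/*
+ * Validate the parsed configuration: reject invalid combinations and clamp
+ * out-of-range values back to safe defaults.
+ */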
+static int cl_tcv_post_configuration(struct cl_hw *cl_hw, const char *buf)
+{
+ struct cl_tcv_conf *conf = cl_hw->conf;
+ struct cl_chip *chip = cl_hw->chip;
+
+ if (conf->ci_max_bss_num > ARRAY_SIZE(cl_hw->addresses)) {
+ CL_DBG_ERROR(cl_hw, "Invalid ci_max_bss_num (%u)\n",
+ conf->ci_max_bss_num);
+ return -EINVAL;
+ }
+
+ /* Production mode */
+ if (chip->conf->ce_production_mode) {
+ memcpy(cl_hw->rx_sensitivity, conf->ci_rx_sensitivity_prod, MAX_ANTENNAS);
+ conf->ce_prot_mode = 0;
+ /* Production is done in station mode */
+ cl_hw_set_iface_conf(cl_hw, CL_IFCONF_STA);
+
+ } else {
+ if (chip->conf->ci_phy_dev == PHY_DEV_LOOPBACK) {
+ s8 rx_sens_loopback[MAX_ANTENNAS] = { [0 ... MAX_ANTENNAS - 1] = -96 };
+
+ memcpy(cl_hw->rx_sensitivity, rx_sens_loopback, MAX_ANTENNAS);
+ } else {
+ memcpy(cl_hw->rx_sensitivity, conf->ci_rx_sensitivity_op, MAX_ANTENNAS);
+ }
+ }
+
+ if (cl_hw_set_antennas(cl_hw)) {
+ CL_DBG_ERROR(cl_hw, "hw set antennas failed!\n");
+ return -EINVAL;
+ }
+
+ if (!cl_tcv_is_valid_cca_config(cl_hw, conf))
+ return -EINVAL;
+
+ if (conf->ce_num_antennas) {
+ /* Validate: ce_num_antennas, ce_rx_nss, ce_tx_nss */
+ if (conf->ce_num_antennas < MIN_ANTENNAS ||
+ conf->ce_num_antennas > MAX_ANTENNAS) {
+ CL_DBG_ERROR(cl_hw, "Invalid ce_num_antennas (%u)\n",
+ conf->ce_num_antennas);
+ return -EINVAL;
+ }
+
+ if (conf->ce_rx_nss < 1 ||
+ conf->ce_rx_nss > WRS_SS_MAX ||
+ conf->ce_rx_nss > conf->ce_num_antennas) {
+ CL_DBG_ERROR(cl_hw, "Invalid ce_rx_nss (%u)\n", conf->ce_rx_nss);
+ return -EINVAL;
+ }
+
+ if (conf->ce_tx_nss < 1 ||
+ conf->ce_tx_nss > WRS_SS_MAX ||
+ conf->ce_tx_nss > conf->ce_num_antennas) {
+ CL_DBG_ERROR(cl_hw, "Invalid ce_tx_nss (%u)\n", conf->ce_tx_nss);
+ return -EINVAL;
+ }
+
+ /* Validate: ce_cck_tx_ant_mask and ce_cck_rx_ant_mask */
+ if (cl_band_is_24g(cl_hw)) {
+ u8 ant_shift = cl_hw_ant_shift(cl_hw);
+ u8 ant_bitmap = (((1 << conf->ce_num_antennas) - 1) << ant_shift);
+ u8 num_cck_ant_tx = hweight8(conf->ce_cck_tx_ant_mask);
+ u8 num_cck_ant_rx = hweight8(conf->ce_cck_rx_ant_mask);
+
+ if ((ant_bitmap & conf->ce_cck_tx_ant_mask) != conf->ce_cck_tx_ant_mask) {
+ CL_DBG_ERROR(cl_hw, "Invalid ce_cck_tx_ant_mask (0x%x), "
+ "does not match ce_num_antennas mask (0x%x)\n",
+ conf->ce_cck_tx_ant_mask, ant_bitmap);
+ return -EINVAL;
+ }
+
+ if ((ant_bitmap & conf->ce_cck_rx_ant_mask) != conf->ce_cck_rx_ant_mask) {
+ CL_DBG_ERROR(cl_hw, "Invalid ce_cck_rx_ant_mask (0x%x), "
+ "does not match ce_num_antennas mask (0x%x)\n",
+ conf->ce_cck_rx_ant_mask, ant_bitmap);
+ return -EINVAL;
+ }
+
+ if (conf->ce_cck_tx_ant_mask == 0) {
+ CL_DBG_ERROR(cl_hw, "Invalid ce_cck_tx_ant_mask, can't be 0x0\n");
+ return -EINVAL;
+ }
+
+ if (conf->ce_cck_rx_ant_mask == 0) {
+ CL_DBG_ERROR(cl_hw, "Invalid ce_cck_rx_ant_mask, can't be 0x0\n");
+ return -EINVAL;
+ }
+
+ if (num_cck_ant_tx > MAX_ANTENNAS_CCK) {
+ CL_DBG_ERROR(cl_hw, "Invalid ce_cck_tx_ant_mask (0x%x), "
+ "number of set bits (%u) exceeds %u\n",
+ conf->ce_cck_tx_ant_mask, num_cck_ant_tx, MAX_ANTENNAS_CCK);
+ return -EINVAL;
+ }
+
+ if (num_cck_ant_rx > MAX_ANTENNAS_CCK) {
+ CL_DBG_ERROR(cl_hw, "Invalid ce_cck_rx_ant_mask (0x%x), "
+ "number of set bits (%u) exceeds %u\n",
+ conf->ce_cck_rx_ant_mask, num_cck_ant_rx, MAX_ANTENNAS_CCK);
+ return -EINVAL;
+ }
+ }
+ }
+
+ if (conf->ce_prot_mode == TXL_PROT_RTS) {
+ CL_DBG_ERROR(cl_hw, "ce_prot_mode %u is not supported\n", TXL_PROT_RTS);
+ return -EINVAL;
+ }
+
+ if (!cl_tcv_is_valid_channeling_context(cl_hw))
+ return -EINVAL;
+
+ if (cl_band_is_5g(cl_hw)) {
+ if (!conf->ci_ofdm_only) {
+ CL_DBG_ERROR(cl_hw, "ci_ofdm_only must be set to 1 for 5g band\n");
+ return -EINVAL;
+ }
+ }
+
+ /* Validate ce_bcn_tx_path_min_time */
+ if (conf->ce_bcn_tx_path_min_time <= CL_TX_BCN_PENDING_CHAIN_MIN_TIME) {
+ CL_DBG_ERROR(cl_hw, "Invalid ce_bcn_tx_path_min_time (%u)\n",
+ conf->ce_bcn_tx_path_min_time);
+ return -EINVAL;
+ }
+
+ if (conf->ci_tx_sw_amsdu_max_packets > MAX_TX_SW_AMSDU_PACKET) {
+ cl_dbg_err(cl_hw, "ERROR: Invalid ci_tx_sw_amsdu_max_packets (%u), set default (%u)\n",
+ conf->ci_tx_sw_amsdu_max_packets, MAX_TX_SW_AMSDU_PACKET);
+
+ conf->ci_tx_sw_amsdu_max_packets = MAX_TX_SW_AMSDU_PACKET;
+ }
+
+ if (conf->ce_tx_power_control > 100 || conf->ce_tx_power_control < 1) {
+ cl_dbg_err(cl_hw, "ERROR: Invalid ce_tx_power_control (%u), set default 100\n",
+ conf->ce_tx_power_control);
+
+ conf->ce_tx_power_control = 100;
+ }
+
+ if (conf->ce_max_retry > CL_MAX_NUM_OF_RETRY) {
+ cl_dbg_err(cl_hw, "ERROR: Invalid ce_max_retry (%u), set default to maximum (%u)\n",
+ conf->ce_max_retry, CL_MAX_NUM_OF_RETRY);
+
+ conf->ce_max_retry = CL_MAX_NUM_OF_RETRY;
+ }
+
+ if (!cl_tcv_is_valid_min_spacing(conf->ci_su_force_min_spacing)) {
+ cl_dbg_err(cl_hw, "ERROR: Invalid ci_su_force_min_spacing (%u), must be 0/1/2/3/4/6/8/10/12/14/16/18/20/24, set default %u\n",
+ conf->ci_su_force_min_spacing, CL_TX_MPDU_SPACING_INVALID);
+
+ conf->ci_su_force_min_spacing = CL_TX_MPDU_SPACING_INVALID;
+ }
+
+ if (!cl_tcv_is_valid_min_spacing(conf->ci_mu_force_min_spacing)) {
+ cl_dbg_err(cl_hw, "ERROR: Invalid ci_mu_force_min_spacing (%u), must be 0/1/2/3/4/6/8/10/12/14/16/18/20/24, set default %u\n",
+ conf->ci_mu_force_min_spacing, CL_TX_MPDU_SPACING_INVALID);
+
+ conf->ci_mu_force_min_spacing = CL_TX_MPDU_SPACING_INVALID;
+ }
+
+ if (conf->ci_max_mpdu_len != IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_3895 &&
+ conf->ci_max_mpdu_len != IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_7991 &&
+ conf->ci_max_mpdu_len != IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_11454) {
+ cl_dbg_err(cl_hw, "ERROR: Invalid 'ci_max_mpdu_len' (%u). Must be 0/1/2. Setting to 0\n",
+ conf->ci_max_mpdu_len);
+
+ conf->ci_max_mpdu_len = IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_3895;
+ }
+
+ if (cl_hw_is_tcv1(cl_hw) && cl_chip_is_both_enabled(chip)) {
+ /* Check that sum of ce_num_antennas in both TCV's is smaller than max_antennas */
+ struct cl_hw *cl_hw_tcv0 = chip->cl_hw_tcv0;
+ u8 num_ant_tcv0 = cl_hw_tcv0->conf->ce_num_antennas;
+ u8 num_ant_tcv1 = conf->ce_num_antennas;
+ u8 total_ant = num_ant_tcv0 + num_ant_tcv1;
+
+ if (total_ant > chip->max_antennas) {
+ CL_DBG_ERROR(cl_hw,
+ "Invalid ce_num_antennas tcv0=%u, tcv1=%u, total=%u, max=%u\n",
+ num_ant_tcv0, num_ant_tcv1, total_ant, chip->max_antennas);
+ return -EINVAL;
+ }
+ }
+
+ if (cl_hw_is_prod_or_listener(cl_hw) && !conf->ce_power_offset_prod_en) {
+ cl_dbg_err(cl_hw, "Disable PPMCS/PPBW in production mode\n");
+
+ if (cl_band_is_6g(cl_hw)) {
+ memset(conf->ce_ppmcs_offset_he_6g, 0,
+ sizeof(conf->ce_ppmcs_offset_he_6g));
+ } else if (cl_band_is_5g(cl_hw)) {
+ memset(conf->ce_ppmcs_offset_he_36_64, 0,
+ sizeof(conf->ce_ppmcs_offset_he_36_64));
+ memset(conf->ce_ppmcs_offset_he_100_140, 0,
+ sizeof(conf->ce_ppmcs_offset_he_100_140));
+ memset(conf->ce_ppmcs_offset_he_149_165, 0,
+ sizeof(conf->ce_ppmcs_offset_he_149_165));
+ memset(conf->ce_ppmcs_offset_ht_vht_36_64, 0,
+ sizeof(conf->ce_ppmcs_offset_ht_vht_36_64));
+ memset(conf->ce_ppmcs_offset_ht_vht_100_140, 0,
+ sizeof(conf->ce_ppmcs_offset_ht_vht_100_140));
+ memset(conf->ce_ppmcs_offset_ht_vht_149_165, 0,
+ sizeof(conf->ce_ppmcs_offset_ht_vht_149_165));
+ memset(conf->ce_ppmcs_offset_ofdm_36_64, 0,
+ sizeof(conf->ce_ppmcs_offset_ofdm_36_64));
+ memset(conf->ce_ppmcs_offset_ofdm_100_140, 0,
+ sizeof(conf->ce_ppmcs_offset_ofdm_100_140));
+ memset(conf->ce_ppmcs_offset_ofdm_149_165, 0,
+ sizeof(conf->ce_ppmcs_offset_ofdm_149_165));
+ } else {
+ memset(conf->ce_ppmcs_offset_he, 0, sizeof(conf->ce_ppmcs_offset_he));
+ memset(conf->ce_ppmcs_offset_ht, 0, sizeof(conf->ce_ppmcs_offset_ht));
+ memset(conf->ce_ppmcs_offset_ofdm, 0, sizeof(conf->ce_ppmcs_offset_ofdm));
+ memset(conf->ce_ppmcs_offset_cck, 0, sizeof(conf->ce_ppmcs_offset_cck));
+ }
+
+ memset(conf->ce_ppbw_offset, 0, sizeof(conf->ce_ppbw_offset));
+ }
+
+ if (!cl_band_is_24g(cl_hw) && cl_hw->conf->ci_signal_extension_en) {
+ cl_dbg_err(cl_hw, "ERROR: Invalid 'ci_signal_extension_en' (%u). Must be 0 for non-2.4GHz band. Setting to 0\n",
+ conf->ci_signal_extension_en);
+
+ conf->ci_signal_extension_en = false;
+ }
+
+ if (conf->ce_dyn_mcast_rate_en && cl_band_is_6g(cl_hw)) {
+ cl_dbg_err(cl_hw, "ERROR: Invalid 'ce_dyn_mcast_rate_en' (%u). Must be 0 on 6GHz band. Setting to 0\n",
+ conf->ce_dyn_mcast_rate_en);
+
+ conf->ce_dyn_mcast_rate_en = 0;
+ }
+
+ if (conf->ce_dyn_bcast_rate_en && cl_band_is_6g(cl_hw)) {
+ cl_dbg_err(cl_hw, "ERROR: Invalid 'ce_dyn_bcast_rate_en' (%u). Must be 0 on 6GHz band. Setting to 0\n",
+ conf->ce_dyn_bcast_rate_en);
+
+ conf->ce_dyn_bcast_rate_en = 0;
+ }
+
+ return 0;
+}
+
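+/*
+ * Read cl_tcv<idx>.dat for this TCV, apply every "name=value" override and
+ * run the post-configuration validation.
+ */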
+int cl_tcv_config_read(struct cl_hw *cl_hw)
+{
+ struct cl_chip *chip = cl_hw->chip;
+ char *buf = NULL;
+ size_t size = 0;
+ int ret = 0;
+ char filename[CL_FILENAME_MAX] = {0};
+ u8 tcv_idx = cl_hw->idx;
+
+ snprintf(filename, sizeof(filename), "cl_tcv%u.dat", tcv_idx);
+ pr_debug("%s: %s\n", __func__, filename);
+ size = cl_file_open_and_read(chip, filename, &buf);
+
+ if (!buf) {
+ pr_err("read %s failed !!!\n", filename);
+ return -ENODATA;
+ }
+
+ ret = cl_tcv_set_all_params_from_buf(cl_hw, buf, size);
+ if (ret) {
+ kfree(buf);
+ return ret;
+ }
+
+ ret = cl_tcv_post_configuration(cl_hw, NULL);
+ if (ret) {
+ kfree(buf);
+ return ret;
+ }
+
+ kfree(buf);
+
+ return ret;
+}
+
+int cl_tcv_config_alloc(struct cl_hw *cl_hw)
+{
+ cl_hw->conf = kzalloc(sizeof(*cl_hw->conf), GFP_KERNEL);
+
+ if (!cl_hw->conf)
+ return -ENOMEM;
+
+ /* Copy default values */
+ memcpy(cl_hw->conf, &conf, sizeof(struct cl_tcv_conf));
+
+ return 0;
+}
+
+void cl_tcv_config_free(struct cl_hw *cl_hw)
+{
+ kfree(cl_hw->conf);
+ cl_hw->conf = NULL;
+}
+
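+/*
+ * Replace INVALID_CALIB_RX_GAIN placeholders with the default gains that
+ * match the detected RFIC type and version.
+ */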
+void cl_tcv_config_validate_calib_params(struct cl_hw *cl_hw)
+{
+ struct cl_tcv_conf *conf = cl_hw->conf;
+ u8 chain = 0;
+
+ if (cl_hw->chip->conf->ci_phy_dev == PHY_DEV_ATHOS) {
+ if (cl_hw->chip->rfic_version == ATHOS_B_VER) {
+ for (chain = 0; chain < MAX_ANTENNAS; chain++) {
+ if (conf->ci_calib_tx_init_rx_gain[chain] == INVALID_CALIB_RX_GAIN)
+ conf->ci_calib_tx_init_rx_gain[chain] =
+ CALIB_RX_GAIN_DEFAULT_ATHOS_B;
+ if (conf->ci_calib_rx_init_rx_gain[chain] == INVALID_CALIB_RX_GAIN)
+ conf->ci_calib_rx_init_rx_gain[chain] =
+ CALIB_RX_GAIN_DEFAULT_ATHOS_B;
+ }
+
+ if (conf->ci_calib_conf_rx_gain_upper_limit == INVALID_CALIB_RX_GAIN)
+ conf->ci_calib_conf_rx_gain_upper_limit =
+ CALIB_RX_GAIN_UPPER_LIMIT_ATHOS_B;
+ if (conf->ci_calib_conf_rx_gain_lower_limit == INVALID_CALIB_RX_GAIN)
+ conf->ci_calib_conf_rx_gain_lower_limit =
+ CALIB_RX_GAIN_LOWER_LIMIT_ATHOS_B;
+ } else {
+ for (chain = 0; chain < MAX_ANTENNAS; chain++) {
+ if (conf->ci_calib_tx_init_rx_gain[chain] == INVALID_CALIB_RX_GAIN)
+ conf->ci_calib_tx_init_rx_gain[chain] =
+ CALIB_RX_GAIN_DEFAULT_ATHOS;
+ if (conf->ci_calib_rx_init_rx_gain[chain] == INVALID_CALIB_RX_GAIN)
+ conf->ci_calib_rx_init_rx_gain[chain] =
+ CALIB_RX_GAIN_DEFAULT_ATHOS;
+ }
+
+ if (conf->ci_calib_conf_rx_gain_upper_limit == INVALID_CALIB_RX_GAIN)
+ conf->ci_calib_conf_rx_gain_upper_limit =
+ CALIB_RX_GAIN_UPPER_LIMIT_ATHOS;
+ if (conf->ci_calib_conf_rx_gain_lower_limit == INVALID_CALIB_RX_GAIN)
+ conf->ci_calib_conf_rx_gain_lower_limit =
+ CALIB_RX_GAIN_LOWER_LIMIT_ATHOS;
+ }
+ } else {
+ for (chain = 0; chain < MAX_ANTENNAS; chain++) {
+ if (conf->ci_calib_tx_init_rx_gain[chain] == INVALID_CALIB_RX_GAIN)
+ conf->ci_calib_tx_init_rx_gain[chain] = CALIB_RX_GAIN_DEFAULT;
+
+ if (conf->ci_calib_rx_init_rx_gain[chain] == INVALID_CALIB_RX_GAIN)
+ conf->ci_calib_rx_init_rx_gain[chain] = CALIB_RX_GAIN_DEFAULT;
+ }
+
+ if (conf->ci_calib_conf_rx_gain_upper_limit == INVALID_CALIB_RX_GAIN)
+ conf->ci_calib_conf_rx_gain_upper_limit = CALIB_RX_GAIN_UPPER_LIMIT;
+ if (conf->ci_calib_conf_rx_gain_lower_limit == INVALID_CALIB_RX_GAIN)
+ conf->ci_calib_conf_rx_gain_lower_limit = CALIB_RX_GAIN_LOWER_LIMIT;
+ }
+}
--
2.36.1


2022-05-24 16:29:33

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 02/96] celeno: add Makefile

From: Viktor Barna <[email protected]>

(Part of the split. Please take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/Makefile | 2 ++
1 file changed, 2 insertions(+)
create mode 100755 drivers/net/wireless/celeno/Makefile

diff --git a/drivers/net/wireless/celeno/Makefile b/drivers/net/wireless/celeno/Makefile
new file mode 100755
index 000000000000..b7a44afcb867
--- /dev/null
+++ b/drivers/net/wireless/celeno/Makefile
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
+obj-$(CONFIG_CL8K) += cl8k/
--
2.36.1


2022-05-24 16:34:04

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 65/96] cl8k: add reg/reg_defs.h

From: Viktor Barna <[email protected]>

(Part of the split. Please take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
.../net/wireless/celeno/cl8k/reg/reg_defs.h | 5494 +++++++++++++++++
1 file changed, 5494 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/reg/reg_defs.h

diff --git a/drivers/net/wireless/celeno/cl8k/reg/reg_defs.h b/drivers/net/wireless/celeno/cl8k/reg/reg_defs.h
new file mode 100644
index 000000000000..6dd137629ab5
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/reg/reg_defs.h
@@ -0,0 +1,5494 @@
+/* SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause */
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#ifndef CL_REG_DEFS_H
+#define CL_REG_DEFS_H
+
+#include <linux/types.h>
+#include "reg/reg_access.h"
+#include "hw.h"
+#include "chip.h"
+
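+/*
+ * The register accessors below follow a common naming convention:
+ * <reg>_set() writes the whole register, <reg>_<field>_setf() updates a
+ * single field (read-modify-write where the register has other fields),
+ * <reg>_<field>_getf() reads back a single field and <reg>_pack() combines
+ * several field values into one register write.
+ */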
+#define CEVA_MCCI_BASE_ADDR 0x004B0000
+
+#define CEVA_CORE_VERSION_ADDR (CEVA_MCCI_BASE_ADDR + 0x1000)
+#define CEVA_CORE_ID_ADDR (CEVA_MCCI_BASE_ADDR + 0x1004)
+
+#define CEVA_CPM_BASE_ADDR (CEVA_MCCI_BASE_ADDR + 0x2000)
+
+/* PDMA */
+#define CEVA_CPM_PDEA_REG (CEVA_CPM_BASE_ADDR + 0x0008) /* External */
+#define CEVA_CPM_PDIA_REG (CEVA_CPM_BASE_ADDR + 0x000C) /* Internal */
+#define CEVA_CPM_PDTC_REG (CEVA_CPM_BASE_ADDR + 0x0010) /* Control */
+
+/* DDMA */
+#define CEVA_CPM_DDEA_REG (CEVA_CPM_BASE_ADDR + 0x001c) /* External */
+#define CEVA_CPM_DDIA_REG (CEVA_CPM_BASE_ADDR + 0x0020) /* Internal */
+#define CEVA_CPM_DDTC_REG (CEVA_CPM_BASE_ADDR + 0x0024) /* Control */
+
+#define CEVA_CPM_DDTC_WRITE_COMMAND 0x40000000
+
+/* Translated internally to 0x60600000. */
+#define CEVA_SHARED_PMEM_BASE_ADDR_FROM_HOST 0x004C0000
+
+/* Internal address to access Shared */
+#define CEVA_SHARED_PMEM_BASE_ADDR_INTERNAL 0x60600000 /* PMEM */
+#define CEVA_SHARED_XMEM_BASE_ADDR_INTERNAL 0x60900000 /* XMEM */
+
+#define CEVA_DSP_DATA_SIZE 0x80000 /* 512 KB */
+#define CEVA_DSP_EXT_DATA_SIZE 0x60000 /* 384 KB */
+
+/* 512 KB */
+#define CEVA_INTERNAL_PMEM_SIZE 0x80000
+
+#define CEVA_SHARED_PMEM_SIZE 0x20000 /* 128 KB */
+#define CEVA_SHARED_XMEM_SIZE 0x60000 /* 384 KB */
+
+#define CEVA_MAX_PAGES (CEVA_INTERNAL_PMEM_SIZE / CEVA_SHARED_PMEM_SIZE)
+
+#define CEVA_SHARED_PMEM_CACHE_SIZE 0x8000 /* 32 KB */
+
+#define REG_CMU_BASE_ADDR 0x007C6000
+
+/*
+ * @brief CMU_CLK_EN register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31 spare_afe_gnrl_en 0
+ * 30 spare_sys_gnrl_en 0
+ * 27 spare_riu44_clk_en 0
+ * 26 spare_riu_clk_en 0
+ * 25 spare_riu2x_clk_en 0
+ * 24 spare_riu4x_clk_en 0
+ * 23 spare_phy_clk_en 0
+ * 22 spare_phy2x_clk_en 0
+ * 21 spare_sysx_clk_en 0
+ * 20 spare_sys2x_clk_en 0
+ * 19 ricu_clk_en 0
+ * 05 smac_proc_clk_en 1
+ * 04 umac_proc_clk_en 1
+ * 03 lmac_proc_clk_en 1
+ * </pre>
+ */
+#define CMU_CLK_EN_ADDR (REG_CMU_BASE_ADDR + 0x00000000)
+#define CMU_CLK_EN_OFFSET 0x00000000
+#define CMU_CLK_EN_INDEX 0x00000000
+#define CMU_CLK_EN_RESET 0x00000038
+
+static inline void cmu_clk_en_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, CMU_CLK_EN_ADDR, value);
+}
+
+/* Field definitions */
+#define CMU_SPARE_AFE_GNRL_EN_BIT ((u32)0x80000000)
+#define CMU_SPARE_AFE_GNRL_EN_POS 31
+#define CMU_SPARE_SYS_GNRL_EN_BIT ((u32)0x40000000)
+#define CMU_SPARE_SYS_GNRL_EN_POS 30
+#define CMU_SPARE_RIU_44_CLK_EN_BIT ((u32)0x08000000)
+#define CMU_SPARE_RIU_44_CLK_EN_POS 27
+#define CMU_SPARE_RIU_CLK_EN_BIT ((u32)0x04000000)
+#define CMU_SPARE_RIU_CLK_EN_POS 26
+#define CMU_SPARE_RIU_2_X_CLK_EN_BIT ((u32)0x02000000)
+#define CMU_SPARE_RIU_2_X_CLK_EN_POS 25
+#define CMU_SPARE_RIU_4_X_CLK_EN_BIT ((u32)0x01000000)
+#define CMU_SPARE_RIU_4_X_CLK_EN_POS 24
+#define CMU_SPARE_PHY_CLK_EN_BIT ((u32)0x00800000)
+#define CMU_SPARE_PHY_CLK_EN_POS 23
+#define CMU_SPARE_PHY_2_X_CLK_EN_BIT ((u32)0x00400000)
+#define CMU_SPARE_PHY_2_X_CLK_EN_POS 22
+#define CMU_SPARE_SYSX_CLK_EN_BIT ((u32)0x00200000)
+#define CMU_SPARE_SYSX_CLK_EN_POS 21
+#define CMU_SPARE_SYS_2_X_CLK_EN_BIT ((u32)0x00100000)
+#define CMU_SPARE_SYS_2_X_CLK_EN_POS 20
+#define CMU_RICU_CLK_EN_BIT ((u32)0x00080000)
+#define CMU_RICU_CLK_EN_POS 19
+#define CMU_SMAC_PROC_CLK_EN_BIT ((u32)0x00000020)
+#define CMU_SMAC_PROC_CLK_EN_POS 5
+#define CMU_UMAC_PROC_CLK_EN_BIT ((u32)0x00000010)
+#define CMU_UMAC_PROC_CLK_EN_POS 4
+#define CMU_LMAC_PROC_CLK_EN_BIT ((u32)0x00000008)
+#define CMU_LMAC_PROC_CLK_EN_POS 3
+
+#define CMU_MAC_ALL_CLK_EN (CMU_RICU_CLK_EN_BIT | \
+ CMU_SMAC_PROC_CLK_EN_BIT | \
+ CMU_UMAC_PROC_CLK_EN_BIT | \
+ CMU_LMAC_PROC_CLK_EN_BIT)
+
+/*
+ * @brief CMU_PHY_0_CLK_EN register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 02 ceva0_clk_en 0
+ * 01 phy0_apb_clk_en 0
+ * 00 phy0_main_clk_en 0
+ * </pre>
+ */
+#define CMU_PHY_0_CLK_EN_ADDR (REG_CMU_BASE_ADDR + 0x00000004)
+#define CMU_PHY_0_CLK_EN_OFFSET 0x00000004
+#define CMU_PHY_0_CLK_EN_INDEX 0x00000001
+#define CMU_PHY_0_CLK_EN_RESET 0x00000000
+
+static inline void cmu_phy_0_clk_en_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, CMU_PHY_0_CLK_EN_ADDR, value);
+}
+
+/* Field definitions */
+#define CMU_CEVA_0_CLK_EN_BIT ((u32)0x00000004)
+#define CMU_CEVA_0_CLK_EN_POS 2
+#define CMU_PHY_0_APB_CLK_EN_BIT ((u32)0x00000002)
+#define CMU_PHY_0_APB_CLK_EN_POS 1
+#define CMU_PHY_0_MAIN_CLK_EN_BIT ((u32)0x00000001)
+#define CMU_PHY_0_MAIN_CLK_EN_POS 0
+
+#define CMU_PHY0_CLK_EN (CMU_CEVA_0_CLK_EN_BIT | \
+ CMU_PHY_0_APB_CLK_EN_BIT | \
+ CMU_PHY_0_MAIN_CLK_EN_BIT)
+
+static inline void cmu_phy_0_clk_en_pack(struct cl_chip *chip, u8 ceva0clken,
+ u8 phy0apbclken, u8 phy0mainclken)
+{
+ ASSERT_ERR_CHIP((((u32)ceva0clken << 2) & ~((u32)0x00000004)) == 0);
+ ASSERT_ERR_CHIP((((u32)phy0apbclken << 1) & ~((u32)0x00000002)) == 0);
+ ASSERT_ERR_CHIP((((u32)phy0mainclken << 0) & ~((u32)0x00000001)) == 0);
+ cl_reg_write_chip(chip, CMU_PHY_0_CLK_EN_ADDR,
+ ((u32)ceva0clken << 2) | ((u32)phy0apbclken << 1) |
+ ((u32)phy0mainclken << 0));
+}
+
+static inline void cmu_phy_0_clk_en_ceva_0_clk_en_setf(struct cl_chip *chip, u8 ceva0clken)
+{
+ ASSERT_ERR_CHIP((((u32)ceva0clken << 2) & ~((u32)0x00000004)) == 0);
+ cl_reg_write_chip(chip, CMU_PHY_0_CLK_EN_ADDR,
+ (cl_reg_read_chip(chip, CMU_PHY_0_CLK_EN_ADDR) & ~((u32)0x00000004)) |
+ ((u32)ceva0clken << 2));
+}
+
+static inline void cmu_phy_0_clk_en_phy_0_apb_clk_en_setf(struct cl_chip *chip, u8 phy0apbclken)
+{
+ ASSERT_ERR_CHIP((((u32)phy0apbclken << 1) & ~((u32)0x00000002)) == 0);
+ cl_reg_write_chip(chip, CMU_PHY_0_CLK_EN_ADDR,
+ (cl_reg_read_chip(chip, CMU_PHY_0_CLK_EN_ADDR) & ~((u32)0x00000002)) |
+ ((u32)phy0apbclken << 1));
+}
+
+/*
+ * @brief CMU_PHY_1_CLK_EN register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 02 ceva1_clk_en 0
+ * 01 phy1_apb_clk_en 0
+ * 00 phy1_main_clk_en 0
+ * </pre>
+ */
+#define CMU_PHY_1_CLK_EN_ADDR (REG_CMU_BASE_ADDR + 0x00000008)
+#define CMU_PHY_1_CLK_EN_OFFSET 0x00000008
+#define CMU_PHY_1_CLK_EN_INDEX 0x00000002
+#define CMU_PHY_1_CLK_EN_RESET 0x00000000
+
+static inline void cmu_phy_1_clk_en_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, CMU_PHY_1_CLK_EN_ADDR, value);
+}
+
+/* Field definitions */
+#define CMU_CEVA_1_CLK_EN_BIT ((u32)0x00000004)
+#define CMU_CEVA_1_CLK_EN_POS 2
+#define CMU_PHY_1_APB_CLK_EN_BIT ((u32)0x00000002)
+#define CMU_PHY_1_APB_CLK_EN_POS 1
+#define CMU_PHY_1_MAIN_CLK_EN_BIT ((u32)0x00000001)
+#define CMU_PHY_1_MAIN_CLK_EN_POS 0
+
+#define CMU_PHY1_CLK_EN (CMU_CEVA_1_CLK_EN_BIT | \
+ CMU_PHY_1_APB_CLK_EN_BIT | \
+ CMU_PHY_1_MAIN_CLK_EN_BIT)
+
+static inline void cmu_phy_1_clk_en_pack(struct cl_chip *chip, u8 ceva1clken,
+ u8 phy1apbclken, u8 phy1mainclken)
+{
+ ASSERT_ERR_CHIP((((u32)ceva1clken << 2) & ~((u32)0x00000004)) == 0);
+ ASSERT_ERR_CHIP((((u32)phy1apbclken << 1) & ~((u32)0x00000002)) == 0);
+ ASSERT_ERR_CHIP((((u32)phy1mainclken << 0) & ~((u32)0x00000001)) == 0);
+ cl_reg_write_chip(chip, CMU_PHY_1_CLK_EN_ADDR,
+ ((u32)ceva1clken << 2) | ((u32)phy1apbclken << 1) |
+ ((u32)phy1mainclken << 0));
+}
+
+static inline void cmu_phy_1_clk_en_ceva_1_clk_en_setf(struct cl_chip *chip, u8 ceva1clken)
+{
+ ASSERT_ERR_CHIP((((u32)ceva1clken << 2) & ~((u32)0x00000004)) == 0);
+ cl_reg_write_chip(chip, CMU_PHY_1_CLK_EN_ADDR,
+ (cl_reg_read_chip(chip, CMU_PHY_1_CLK_EN_ADDR) & ~((u32)0x00000004)) |
+ ((u32)ceva1clken << 2));
+}
+
+static inline void cmu_phy_1_clk_en_phy_1_apb_clk_en_setf(struct cl_chip *chip, u8 phy1apbclken)
+{
+ ASSERT_ERR_CHIP((((u32)phy1apbclken << 1) & ~((u32)0x00000002)) == 0);
+ cl_reg_write_chip(chip, CMU_PHY_1_CLK_EN_ADDR,
+ (cl_reg_read_chip(chip, CMU_PHY_1_CLK_EN_ADDR) & ~((u32)0x00000002)) |
+ ((u32)phy1apbclken << 1));
+}
+
+/*
+ * @brief CMU_CONTROL register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 00 gl_mux_sel 0
+ * </pre>
+ */
+#define CMU_CONTROL_ADDR (REG_CMU_BASE_ADDR + 0x0000000C)
+#define CMU_CONTROL_OFFSET 0x0000000C
+#define CMU_CONTROL_INDEX 0x00000003
+#define CMU_CONTROL_RESET 0x00000000
+
+static inline void cmu_control_gl_mux_sel_setf(struct cl_chip *chip, u8 glmuxsel)
+{
+ ASSERT_ERR_CHIP((((u32)glmuxsel << 0) & ~((u32)0x00000001)) == 0);
+ cl_reg_write_chip(chip, CMU_CONTROL_ADDR, (u32)glmuxsel << 0);
+}
+
+/*
+ * @brief CMU_RST register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 18 spare_riu44_reset_n 0
+ * 17 spare_modem_reset_n 0
+ * 16 spare_sys_reset_n 0
+ * 15 n_RICURst 1
+ * </pre>
+ */
+#define CMU_RST_ADDR (REG_CMU_BASE_ADDR + 0x00000010)
+#define CMU_RST_OFFSET 0x00000010
+#define CMU_RST_INDEX 0x00000004
+#define CMU_RST_RESET 0x0000FF80
+
+static inline void cmu_rst_n_ricurst_setf(struct cl_chip *chip, u8 nricurst)
+{
+ ASSERT_ERR_CHIP((((u32)nricurst << 15) & ~((u32)0x00008000)) == 0);
+ cl_reg_write_chip(chip, CMU_RST_ADDR,
+ (cl_reg_read_chip(chip, CMU_RST_ADDR) & ~((u32)0x00008000)) |
+ ((u32)nricurst << 15));
+}
+
+/*
+ * @brief CMU_PHY_0_RST register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 03 ceva0_global_rst_n 1
+ * 02 mpif0_rst_n 1
+ * 01 phy0_preset_n 1
+ * 00 phy0_rst_n 1
+ * </pre>
+ */
+#define CMU_PHY_0_RST_ADDR (REG_CMU_BASE_ADDR + 0x00000014)
+#define CMU_PHY_0_RST_OFFSET 0x00000014
+#define CMU_PHY_0_RST_INDEX 0x00000005
+#define CMU_PHY_0_RST_RESET 0x0000000F
+
+static inline void cmu_phy_0_rst_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, CMU_PHY_0_RST_ADDR, value);
+}
+
+/* Field definitions */
+#define CMU_CEVA_0_GLOBAL_RST_N_BIT ((u32)0x00000008)
+#define CMU_CEVA_0_GLOBAL_RST_N_POS 3
+#define CMU_MPIF_0_RST_N_BIT ((u32)0x00000004)
+#define CMU_MPIF_0_RST_N_POS 2
+#define CMU_PHY_0_PRESET_N_BIT ((u32)0x00000002)
+#define CMU_PHY_0_PRESET_N_POS 1
+#define CMU_PHY_0_RST_N_BIT ((u32)0x00000001)
+#define CMU_PHY_0_RST_N_POS 0
+
+#define CMU_PHY0_RST_EN (CMU_PHY_0_PRESET_N_BIT | \
+ CMU_MPIF_0_RST_N_BIT | \
+ CMU_PHY_0_RST_N_BIT | \
+ CMU_CEVA_0_GLOBAL_RST_N_BIT)
+
+static inline void cmu_phy_0_rst_ceva_0_global_rst_n_setf(struct cl_chip *chip, u8 ceva0globalrstn)
+{
+ ASSERT_ERR_CHIP((((u32)ceva0globalrstn << 3) & ~((u32)0x00000008)) == 0);
+ cl_reg_write_chip(chip, CMU_PHY_0_RST_ADDR,
+ (cl_reg_read_chip(chip, CMU_PHY_0_RST_ADDR) & ~((u32)0x00000008)) |
+ ((u32)ceva0globalrstn << 3));
+}
+
+/*
+ * @brief CMU_PHY_1_RST register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 03 ceva1_global_rst_n 1
+ * 02 mpif1_rst_n 1
+ * 01 phy1_preset_n 1
+ * 00 phy1_rst_n 1
+ * </pre>
+ */
+#define CMU_PHY_1_RST_ADDR (REG_CMU_BASE_ADDR + 0x00000018)
+#define CMU_PHY_1_RST_OFFSET 0x00000018
+#define CMU_PHY_1_RST_INDEX 0x00000006
+#define CMU_PHY_1_RST_RESET 0x0000000F
+
+static inline void cmu_phy_1_rst_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, CMU_PHY_1_RST_ADDR, value);
+}
+
+/* Field definitions */
+#define CMU_CEVA_1_GLOBAL_RST_N_BIT ((u32)0x00000008)
+#define CMU_CEVA_1_GLOBAL_RST_N_POS 3
+#define CMU_MPIF_1_RST_N_BIT ((u32)0x00000004)
+#define CMU_MPIF_1_RST_N_POS 2
+#define CMU_PHY_1_PRESET_N_BIT ((u32)0x00000002)
+#define CMU_PHY_1_PRESET_N_POS 1
+#define CMU_PHY_1_RST_N_BIT ((u32)0x00000001)
+#define CMU_PHY_1_RST_N_POS 0
+
+#define CMU_PHY1_RST_EN (CMU_PHY_1_PRESET_N_BIT | \
+ CMU_MPIF_1_RST_N_BIT | \
+ CMU_PHY_1_RST_N_BIT | \
+ CMU_CEVA_1_GLOBAL_RST_N_BIT)
+
+static inline void cmu_phy_1_rst_ceva_1_global_rst_n_setf(struct cl_chip *chip, u8 ceva1globalrstn)
+{
+ ASSERT_ERR_CHIP((((u32)ceva1globalrstn << 3) & ~((u32)0x00000008)) == 0);
+ cl_reg_write_chip(chip, CMU_PHY_1_RST_ADDR,
+ (cl_reg_read_chip(chip, CMU_PHY_1_RST_ADDR) & ~((u32)0x00000008)) |
+ ((u32)ceva1globalrstn << 3));
+}
+
+/*
+ * @brief CMU_PLL_0_STAT register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31 pll_lock 0
+ * </pre>
+ */
+#define CMU_PLL_0_STAT_ADDR (REG_CMU_BASE_ADDR + 0x00000040)
+#define CMU_PLL_0_STAT_OFFSET 0x00000040
+#define CMU_PLL_0_STAT_INDEX 0x00000010
+#define CMU_PLL_0_STAT_RESET 0x00000000
+
+static inline u8 cmu_pll_0_stat_pll_lock_getf(struct cl_chip *chip)
+{
+ u32 local_val = cl_reg_read_chip(chip, CMU_PLL_0_STAT_ADDR);
+
+ ASSERT_ERR_CHIP((local_val & ~((u32)0x80000000)) == 0);
+ return (local_val >> 31);
+}
+
+/*
+ * @brief CMU_PLL_1_STAT register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31 pll_lock 0
+ * </pre>
+ */
+#define CMU_PLL_1_STAT_ADDR (REG_CMU_BASE_ADDR + 0x00000050)
+#define CMU_PLL_1_STAT_OFFSET 0x00000050
+#define CMU_PLL_1_STAT_INDEX 0x00000014
+#define CMU_PLL_1_STAT_RESET 0x00000000
+
+static inline u8 cmu_pll_1_stat_pll_lock_getf(struct cl_chip *chip)
+{
+ u32 local_val = cl_reg_read_chip(chip, CMU_PLL_1_STAT_ADDR);
+
+ ASSERT_ERR_CHIP((local_val & ~((u32)0x80000000)) == 0);
+ return (local_val >> 31);
+}
+
+/*
+ * @brief CMU_PHASE_SEL register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 20 gp_clk_phase_sel 1
+ * 19 dac_cdb_clk_phase_sel 0
+ * 18 adc_cdb_clk_phase_sel 0
+ * 17 dac_clk_phase_sel 0
+ * 16 adc_clk_phase_sel 0
+ * </pre>
+ */
+#define CMU_PHASE_SEL_ADDR (REG_CMU_BASE_ADDR + 0x00000060)
+#define CMU_PHASE_SEL_OFFSET 0x00000060
+#define CMU_PHASE_SEL_INDEX 0x00000018
+#define CMU_PHASE_SEL_RESET 0x00100000
+
+static inline void cmu_phase_sel_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, CMU_PHASE_SEL_ADDR, value);
+}
+
+/* Field definitions */
+#define CMU_GP_CLK_PHASE_SEL_BIT ((u32)0x00100000)
+#define CMU_GP_CLK_PHASE_SEL_POS 20
+#define CMU_DAC_CDB_CLK_PHASE_SEL_BIT ((u32)0x00080000)
+#define CMU_DAC_CDB_CLK_PHASE_SEL_POS 19
+#define CMU_ADC_CDB_CLK_PHASE_SEL_BIT ((u32)0x00040000)
+#define CMU_ADC_CDB_CLK_PHASE_SEL_POS 18
+#define CMU_DAC_CLK_PHASE_SEL_BIT ((u32)0x00020000)
+#define CMU_DAC_CLK_PHASE_SEL_POS 17
+#define CMU_ADC_CLK_PHASE_SEL_BIT ((u32)0x00010000)
+#define CMU_ADC_CLK_PHASE_SEL_POS 16
+
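+/*
+ * GPIO bitmaps driving the front-end module (FEM) control lines:
+ * LNA enable, PA enable and RX active.
+ */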
+struct cl_fem_lna_enable_gpio {
+ union {
+ struct {
+#ifdef __LITTLE_ENDIAN_BITFIELD
+ u16 b0 : 1,
+ b1 : 1,
+ b2 : 1,
+ b3 : 1,
+ b4 : 1,
+ b5 : 1,
+ b6 : 1,
+ b7 : 1,
+ b8 : 1,
+ b9 : 1,
+ b10 : 1,
+ b11 : 1,
+ rsv : 4;
+#else /* __BIG_ENDIAN_BITFIELD */
+ u16 rsv : 4,
+ b11 : 1,
+ b10 : 1,
+ b9 : 1,
+ b8 : 1,
+ b7 : 1,
+ b6 : 1,
+ b5 : 1,
+ b4 : 1,
+ b3 : 1,
+ b2 : 1,
+ b1 : 1,
+ b0 : 1;
+#endif
+ } bits;
+ u16 val;
+ };
+};
+
+struct cl_fem_pa_enable_gpio {
+ union {
+ struct {
+#ifdef __LITTLE_ENDIAN_BITFIELD
+ u16 b0 : 1,
+ b1 : 1,
+ b2 : 1,
+ b3 : 1,
+ b4 : 1,
+ b5 : 1,
+ b6 : 1,
+ b7 : 1,
+ b8 : 1,
+ b9 : 1,
+ b10 : 1,
+ b11 : 1,
+ rsv : 4;
+#else /* __BIG_ENDIAN_BITFIELD */
+ u16 rsv : 4,
+ b11 : 1,
+ b10 : 1,
+ b9 : 1,
+ b8 : 1,
+ b7 : 1,
+ b6 : 1,
+ b5 : 1,
+ b4 : 1,
+ b3 : 1,
+ b2 : 1,
+ b1 : 1,
+ b0 : 1;
+#endif
+ } bits;
+ u16 val;
+ };
+};
+
+struct cl_fem_rx_active_gpio {
+ union {
+ struct {
+#ifdef __LITTLE_ENDIAN_BITFIELD
+ u8 b0 : 1,
+ b1 : 1,
+ b2 : 1,
+ b3 : 1,
+ b4 : 1,
+ b5 : 1,
+ b6 : 1,
+ b7 : 1;
+#else /* __BIG_ENDIAN_BITFIELD */
+ u8 b7 : 1,
+ b6 : 1,
+ b5 : 1,
+ b4 : 1,
+ b3 : 1,
+ b2 : 1,
+ b1 : 1,
+ b0 : 1;
+#endif
+ } bits;
+ u8 val;
+ };
+};
+
+#define EXTRACT_BYPASS_LUT(lut) ((lut) & 0x7)
+#define FEM_LUT_MASK 0x7777
+
+#define PA_ENABLE_POS 0
+#define LNA_ENABLE_POS 1
+#define RX_ACTIVE_POS 2
+#define GET_BIT(reg, pos) (((reg) >> (pos)) & 0x1)
+
+#define LNA_ENABLE_GPIO_OUT_CFG(val) \
+ (((1 << IO_CTRL_LNA_ENABLE_0_GPIO_ENABLE_POS) & IO_CTRL_LNA_ENABLE_0_GPIO_ENABLE_BIT) | \
+ ((1 << IO_CTRL_LNA_ENABLE_0_GPIO_OE_POS) & IO_CTRL_LNA_ENABLE_0_GPIO_OE_BIT) | \
+ (((u32)(val) << IO_CTRL_LNA_ENABLE_0_GPIO_OUT_POS) & IO_CTRL_LNA_ENABLE_0_GPIO_OUT_BIT))
+#define PA_ENABLE_GPIO_OUT_CFG(val) \
+ (((1 << IO_CTRL_PA_ENABLE_0_GPIO_ENABLE_POS) & IO_CTRL_PA_ENABLE_0_GPIO_ENABLE_BIT) | \
+ ((1 << IO_CTRL_PA_ENABLE_0_GPIO_OE_POS) & IO_CTRL_PA_ENABLE_0_GPIO_OE_BIT) | \
+ (((u32)(val) << IO_CTRL_PA_ENABLE_0_GPIO_OUT_POS) & IO_CTRL_PA_ENABLE_0_GPIO_OUT_BIT))
+#define RX_ACTIVE_GPIO_OUT_CFG(val) \
+ (((1 << IO_CTRL_RX_ACTIVE_0_GPIO_ENABLE_POS) & IO_CTRL_RX_ACTIVE_0_GPIO_ENABLE_BIT) | \
+ ((1 << IO_CTRL_RX_ACTIVE_0_GPIO_OE_POS) & IO_CTRL_RX_ACTIVE_0_GPIO_OE_BIT) | \
+ (((u32)(val) << IO_CTRL_RX_ACTIVE_0_GPIO_OUT_POS) & IO_CTRL_RX_ACTIVE_0_GPIO_OUT_BIT))
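+
+/*
+ * Illustrative sketch only: one way the three FEM control bits packed into a
+ * LUT entry could be split out with GET_BIT() and turned into the GPIO
+ * output configuration words defined above. The lut_entry layout (pa_enable
+ * at bit 0, lna_enable at bit 1, rx_active at bit 2) follows the *_POS
+ * defines; everything else in this example is hypothetical.
+ *
+ *	u8 entry = EXTRACT_BYPASS_LUT(lut_entry);
+ *	u32 pa_cfg = PA_ENABLE_GPIO_OUT_CFG(GET_BIT(entry, PA_ENABLE_POS));
+ *	u32 lna_cfg = LNA_ENABLE_GPIO_OUT_CFG(GET_BIT(entry, LNA_ENABLE_POS));
+ *	u32 rx_cfg = RX_ACTIVE_GPIO_OUT_CFG(GET_BIT(entry, RX_ACTIVE_POS));
+ */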
+
+#define REG_IPC_BASE_ADDR 0x007C4000
+
+/*
+ * @brief XMAC_2_HOST_RAW_STATUS register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31:00 xmac2host_raw_status 0x0
+ * </pre>
+ */
+#define IPC_XMAC_2_HOST_RAW_STATUS_ADDR (REG_IPC_BASE_ADDR + 0x00000004)
+#define IPC_XMAC_2_HOST_RAW_STATUS_OFFSET 0x00000004
+#define IPC_XMAC_2_HOST_RAW_STATUS_INDEX 0x00000001
+#define IPC_XMAC_2_HOST_RAW_STATUS_RESET 0x00000000
+
+static inline u32 ipc_xmac_2_host_raw_status_get(struct cl_chip *chip)
+{
+ return cl_reg_read_chip(chip, IPC_XMAC_2_HOST_RAW_STATUS_ADDR);
+}
+
+/*
+ * @brief XMAC_2_HOST_ACK register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31:00 xmac2host_trigger_clr 0x0
+ * </pre>
+ */
+#define IPC_XMAC_2_HOST_ACK_ADDR (REG_IPC_BASE_ADDR + 0x00000008)
+#define IPC_XMAC_2_HOST_ACK_OFFSET 0x00000008
+#define IPC_XMAC_2_HOST_ACK_INDEX 0x00000002
+#define IPC_XMAC_2_HOST_ACK_RESET 0x00000000
+
+static inline void ipc_xmac_2_host_ack_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IPC_XMAC_2_HOST_ACK_ADDR, value);
+}
+
+/*
+ * @brief XMAC_2_HOST_ENABLE_SET register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31:00 xmac2host_enable_set 0x0
+ * </pre>
+ */
+#define IPC_XMAC_2_HOST_ENABLE_SET_ADDR (REG_IPC_BASE_ADDR + 0x0000000C)
+#define IPC_XMAC_2_HOST_ENABLE_SET_OFFSET 0x0000000C
+#define IPC_XMAC_2_HOST_ENABLE_SET_INDEX 0x00000003
+#define IPC_XMAC_2_HOST_ENABLE_SET_RESET 0x00000000
+
+static inline void ipc_xmac_2_host_enable_set_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IPC_XMAC_2_HOST_ENABLE_SET_ADDR, value);
+}
+
+/*
+ * @brief XMAC_2_HOST_ENABLE_CLEAR register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31:00 xmac2host_enable_clear 0x0
+ * </pre>
+ */
+#define IPC_XMAC_2_HOST_ENABLE_CLEAR_ADDR (REG_IPC_BASE_ADDR + 0x00000010)
+#define IPC_XMAC_2_HOST_ENABLE_CLEAR_OFFSET 0x00000010
+#define IPC_XMAC_2_HOST_ENABLE_CLEAR_INDEX 0x00000004
+#define IPC_XMAC_2_HOST_ENABLE_CLEAR_RESET 0x00000000
+
+static inline void ipc_xmac_2_host_enable_clear_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IPC_XMAC_2_HOST_ENABLE_CLEAR_ADDR, value);
+}
+
+/*
+ * @brief XMAC_2_HOST_STATUS register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31:00 xmac2host_status 0x0
+ * </pre>
+ */
+#define IPC_XMAC_2_HOST_STATUS_ADDR (REG_IPC_BASE_ADDR + 0x00000014)
+#define IPC_XMAC_2_HOST_STATUS_OFFSET 0x00000014
+#define IPC_XMAC_2_HOST_STATUS_INDEX 0x00000005
+#define IPC_XMAC_2_HOST_STATUS_RESET 0x00000000
+
+static inline u32 ipc_xmac_2_host_status_get(struct cl_chip *chip)
+{
+ return cl_reg_read_chip(chip, IPC_XMAC_2_HOST_STATUS_ADDR);
+}
+
+/*
+ * @brief HOST_GLOBAL_INT_EN register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 00 master_int_enable 0
+ * </pre>
+ */
+#define IPC_HOST_GLOBAL_INT_EN_ADDR (REG_IPC_BASE_ADDR + 0x00000030)
+#define IPC_HOST_GLOBAL_INT_EN_OFFSET 0x00000030
+#define IPC_HOST_GLOBAL_INT_EN_INDEX 0x0000000C
+#define IPC_HOST_GLOBAL_INT_EN_RESET 0x00000000
+
+static inline void ipc_host_global_int_en_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IPC_HOST_GLOBAL_INT_EN_ADDR, value);
+}
+
+/*
+ * @brief HOST_2_LMAC_TRIGGER register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31:00 host2lmac_trigger 0x0
+ * </pre>
+ */
+#define IPC_HOST_2_LMAC_TRIGGER_ADDR (REG_IPC_BASE_ADDR + 0x00000080)
+#define IPC_HOST_2_LMAC_TRIGGER_OFFSET 0x00000080
+#define IPC_HOST_2_LMAC_TRIGGER_INDEX 0x00000020
+#define IPC_HOST_2_LMAC_TRIGGER_RESET 0x00000000
+
+static inline void ipc_host_2_lmac_trigger_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IPC_HOST_2_LMAC_TRIGGER_ADDR, value);
+}
+
+/*
+ * @brief HOST_2_UMAC_TRIGGER register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31:00 host2umac_trigger 0x0
+ * </pre>
+ */
+#define IPC_HOST_2_UMAC_TRIGGER_ADDR (REG_IPC_BASE_ADDR + 0x00000084)
+#define IPC_HOST_2_UMAC_TRIGGER_OFFSET 0x00000084
+#define IPC_HOST_2_UMAC_TRIGGER_INDEX 0x00000021
+#define IPC_HOST_2_UMAC_TRIGGER_RESET 0x00000000
+
+static inline void ipc_host_2_umac_trigger_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IPC_HOST_2_UMAC_TRIGGER_ADDR, value);
+}
+
+/*
+ * @brief HOST_2_SMAC_TRIGGER register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31:00 host2smac_trigger 0x0
+ * </pre>
+ */
+#define IPC_HOST_2_SMAC_TRIGGER_ADDR (REG_IPC_BASE_ADDR + 0x00000088)
+#define IPC_HOST_2_SMAC_TRIGGER_OFFSET 0x00000088
+#define IPC_HOST_2_SMAC_TRIGGER_INDEX 0x00000022
+#define IPC_HOST_2_SMAC_TRIGGER_RESET 0x00000000
+
+static inline void ipc_host_2_smac_trigger_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IPC_HOST_2_SMAC_TRIGGER_ADDR, value);
+}
+
+#define REG_LCU_COMMON_BASE_ADDR 0x007CF000
+
+/*
+ * @brief LCU_COMMON_SW_RST register definition
+ * Software reset register description
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 00 SW_RST 0
+ * </pre>
+ */
+#define LCU_COMMON_SW_RST_ADDR (REG_LCU_COMMON_BASE_ADDR + 0x00000048)
+#define LCU_COMMON_SW_RST_OFFSET 0x00000048
+#define LCU_COMMON_SW_RST_INDEX 0x00000012
+#define LCU_COMMON_SW_RST_RESET 0x00000000
+
+static inline void lcu_common_sw_rst_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, LCU_COMMON_SW_RST_ADDR, value);
+}
+
+#define REG_LCU_PHY_BASE_ADDR 0x0048E000
+
+/*
+ * @brief LCU_CH_0_START register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 00 CH0_START 0
+ * </pre>
+ */
+#define LCU_PHY_LCU_CH_0_START_ADDR (REG_LCU_PHY_BASE_ADDR + 0x00000020)
+#define LCU_PHY_LCU_CH_0_START_OFFSET 0x00000020
+#define LCU_PHY_LCU_CH_0_START_INDEX 0x00000008
+#define LCU_PHY_LCU_CH_0_START_RESET 0x00000000
+
+/*
+ * @brief LCU_CH_0_STOP register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 00 CH0_STOP 0
+ * </pre>
+ */
+#define LCU_PHY_LCU_CH_0_STOP_ADDR (REG_LCU_PHY_BASE_ADDR + 0x00000070)
+#define LCU_PHY_LCU_CH_0_STOP_OFFSET 0x00000070
+#define LCU_PHY_LCU_CH_0_STOP_INDEX 0x0000001C
+#define LCU_PHY_LCU_CH_0_STOP_RESET 0x00000000
+
+static inline void lcu_phy_lcu_ch_0_stop_set(struct cl_hw *cl_hw, u32 value)
+{
+ cl_reg_write(cl_hw, LCU_PHY_LCU_CH_0_STOP_ADDR, value);
+}
+
+/*
+ * @brief LCU_CH_1_STOP register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 00 CH1_STOP 0
+ * </pre>
+ */
+#define LCU_PHY_LCU_CH_1_STOP_ADDR (REG_LCU_PHY_BASE_ADDR + 0x00000074)
+#define LCU_PHY_LCU_CH_1_STOP_OFFSET 0x00000074
+#define LCU_PHY_LCU_CH_1_STOP_INDEX 0x0000001D
+#define LCU_PHY_LCU_CH_1_STOP_RESET 0x00000000
+
+static inline void lcu_phy_lcu_ch_1_stop_set(struct cl_hw *cl_hw, u32 value)
+{
+ cl_reg_write(cl_hw, LCU_PHY_LCU_CH_1_STOP_ADDR, value);
+}
+
+/*
+ * @brief LCU_CH_0_STOP_EN register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 02 CH0_EXT_STOP_EN 0
+ * 01 CH0_FIC_STOP_EN 0
+ * 00 CH0_STOP_PTRN_EN 0
+ * </pre>
+ */
+#define LCU_PHY_LCU_CH_0_STOP_EN_ADDR (REG_LCU_PHY_BASE_ADDR + 0x00000078)
+#define LCU_PHY_LCU_CH_0_STOP_EN_OFFSET 0x00000078
+#define LCU_PHY_LCU_CH_0_STOP_EN_INDEX 0x0000001E
+#define LCU_PHY_LCU_CH_0_STOP_EN_RESET 0x00000000
+
+/*
+ * @brief LCU_SW_RST register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 00 SW_RST 0
+ * </pre>
+ */
+#define LCU_PHY_LCU_SW_RST_ADDR (REG_LCU_PHY_BASE_ADDR + 0x00000154)
+#define LCU_PHY_LCU_SW_RST_OFFSET 0x00000154
+#define LCU_PHY_LCU_SW_RST_INDEX 0x00000055
+#define LCU_PHY_LCU_SW_RST_RESET 0x00000000
+
+static inline void lcu_phy_lcu_sw_rst_set(struct cl_hw *cl_hw, u32 value)
+{
+ cl_reg_write(cl_hw, LCU_PHY_LCU_SW_RST_ADDR, value);
+}
+
+/*
+ * @brief CONFIG_SPACE register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31:26 ActiveAntennaSet 0x0
+ * 25:20 RxCckActiveChain 0x0
+ * 19:14 RxOfdmActiveChain 0x0
+ * 13:08 TxCckActiveChain 0x0
+ * 07:06 Band 0x0
+ * 05:04 ChannelBandwidth 0x0
+ * 03 OfdmOnly 0
+ * 02 RxSensingMode 0
+ * 01 UpdateSync 0
+ * 00 StartupSync 0
+ * </pre>
+ */
+#define MACDSP_API_CONFIG_SPACE_ADDR (REG_MACDSP_API_BASE_ADDR + 0x00000010)
+#define MACDSP_API_CONFIG_SPACE_OFFSET 0x00000010
+#define MACDSP_API_CONFIG_SPACE_INDEX 0x00000004
+#define MACDSP_API_CONFIG_SPACE_RESET 0x00000000
+
+static inline void macdsp_api_config_space_set(struct cl_hw *cl_hw, u32 value)
+{
+ cl_reg_write(cl_hw, MACDSP_API_CONFIG_SPACE_ADDR, value);
+}
+
+/*
+ * @brief STATE_CNTRL register definition
+ * This register controls the core's state transitions. register description
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 07:04 NEXT_STATE 0x0
+ * 03:00 CURRENT_STATE 0x0
+ * </pre>
+ */
+#define MAC_HW_STATE_CNTRL_ADDR (REG_MAC_HW_BASE_ADDR + 0x00000038)
+#define MAC_HW_STATE_CNTRL_OFFSET 0x00000038
+#define MAC_HW_STATE_CNTRL_INDEX 0x0000000E
+#define MAC_HW_STATE_CNTRL_RESET 0x00000000
+
+static inline void mac_hw_state_cntrl_next_state_setf(struct cl_hw *cl_hw, u8 nextstate)
+{
+ ASSERT_ERR((((u32)nextstate << 4) & ~((u32)0x000000F0)) == 0);
+ cl_reg_write(cl_hw, MAC_HW_STATE_CNTRL_ADDR,
+ (cl_reg_read(cl_hw, MAC_HW_STATE_CNTRL_ADDR) & ~((u32)0x000000F0)) |
+ ((u32)nextstate << 4));
+}
+
+/*
+ * @brief MAC_CNTRL_1 register definition
+ * Contains various settings for controlling the operation of the core. register description
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31 EOF_PAD_FOR_HE 1
+ * 30 EOF_PAD_FOR_VHT 0
+ * 29:28 IMPLICIT_BF_INT_CONF 0x0
+ * 27 DISABLE_BFR_RESP 0
+ * 26 RX_RIFS_EN 0
+ * 25 TSF_MGT_DISABLE 0
+ * 24 TSF_UPDATED_BY_SW 0
+ * 22 MAC_DETECT_UNDERRUN_EN 0
+ * 21 DISABLE_MU_CTS_RESP 0
+ * 20 BQRP_RESP_BY_FW 0
+ * 19 BSRP_RESP_BY_FW 0
+ * 18 ENABLE_NORMAL_ACK_RESP_IN_HE_MU_W_TRIG 0
+ * 17 DISABLE_NORMAL_ACK_RESP_IN_HE_MU_WO_TRIG 0
+ * 16:14 ABGN_MODE 0x3
+ * 13 KEY_STO_RAM_RESET 0
+ * 12 MIB_TABLE_RESET 0
+ * 11 RATE_CONTROLLER_MPIF 1
+ * 10 DISABLE_BA_RESP 0
+ * 09 DISABLE_CTS_RESP 0
+ * 08 DISABLE_ACK_RESP 0
+ * 07 ACTIVE_CLK_GATING 1
+ * 06 ENABLE_LP_CLK_SWITCH 0
+ * 05 FORCE_MSTA_BA 0
+ * 04 DISABLE_FAST_COMPARE 0
+ * 03 CFP_AWARE 0
+ * 02 PWR_MGT 0
+ * 01 AP 0
+ * 00 BSS_TYPE 1
+ * </pre>
+ */
+#define MAC_HW_MAC_CNTRL_1_ADDR (REG_MAC_HW_BASE_ADDR + 0x0000004C)
+#define MAC_HW_MAC_CNTRL_1_OFFSET 0x0000004C
+#define MAC_HW_MAC_CNTRL_1_INDEX 0x00000013
+#define MAC_HW_MAC_CNTRL_1_RESET 0x8000C881
+
+static inline void mac_hw_mac_cntrl_1_active_clk_gating_setf(struct cl_hw *cl_hw,
+ u8 activeclkgating)
+{
+ ASSERT_ERR((((u32)activeclkgating << 7) & ~((u32)0x00000080)) == 0);
+ cl_reg_write(cl_hw, MAC_HW_MAC_CNTRL_1_ADDR,
+ (cl_reg_read(cl_hw, MAC_HW_MAC_CNTRL_1_ADDR) & ~((u32)0x00000080)) |
+ ((u32)activeclkgating << 7));
+}
+
+/*
+ * @brief EDCA_CCA_BUSY register definition
+ * Indicates the CCA busy time. register description
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31:00 CCA_BUSY_DUR 0x0
+ * </pre>
+ */
+#define MAC_HW_EDCA_CCA_BUSY_ADDR (REG_MAC_HW_BASE_ADDR + 0x00000220)
+#define MAC_HW_EDCA_CCA_BUSY_OFFSET 0x00000220
+#define MAC_HW_EDCA_CCA_BUSY_INDEX 0x00000088
+#define MAC_HW_EDCA_CCA_BUSY_RESET 0x00000000
+
+static inline u32 mac_hw_edca_cca_busy_get(struct cl_hw *cl_hw)
+{
+ return cl_reg_read(cl_hw, MAC_HW_EDCA_CCA_BUSY_ADDR);
+}
+
+/*
+ * @brief TX_MINE_BUSY register definition
+ * TX busy time due to own TX frames. register description
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31:00 TX_MINE_TIME 0x0
+ * </pre>
+ */
+#define MAC_HW_TX_MINE_BUSY_ADDR (REG_MAC_HW_BASE_ADDR + 0x00000238)
+#define MAC_HW_TX_MINE_BUSY_OFFSET 0x00000238
+#define MAC_HW_TX_MINE_BUSY_INDEX 0x0000008E
+#define MAC_HW_TX_MINE_BUSY_RESET 0x00000000
+
+static inline u32 mac_hw_tx_mine_busy_get(struct cl_hw *cl_hw)
+{
+ return cl_reg_read(cl_hw, MAC_HW_TX_MINE_BUSY_ADDR);
+}
+
+/*
+ * @brief ADD_CCA_BUSY_SEC_20 register definition
+ * Indicates the CCA on Secondary 20MHz busy time. register description
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31:00 CCA_BUSY_DUR_SEC_20 0x0
+ * </pre>
+ */
+#define MAC_HW_ADD_CCA_BUSY_SEC_20_ADDR (REG_MAC_HW_BASE_ADDR + 0x00000290)
+#define MAC_HW_ADD_CCA_BUSY_SEC_20_OFFSET 0x00000290
+#define MAC_HW_ADD_CCA_BUSY_SEC_20_INDEX 0x000000A4
+#define MAC_HW_ADD_CCA_BUSY_SEC_20_RESET 0x00000000
+
+static inline u32 mac_hw_add_cca_busy_sec_20_get(struct cl_hw *cl_hw)
+{
+ return cl_reg_read(cl_hw, MAC_HW_ADD_CCA_BUSY_SEC_20_ADDR);
+}
+
+/*
+ * @brief ADD_CCA_BUSY_SEC_40 register definition
+ * Indicates the CCA on Secondary 40MHz busy time. register description
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31:00 CCA_BUSY_DUR_SEC_40 0x0
+ * </pre>
+ */
+#define MAC_HW_ADD_CCA_BUSY_SEC_40_ADDR (REG_MAC_HW_BASE_ADDR + 0x00000294)
+#define MAC_HW_ADD_CCA_BUSY_SEC_40_OFFSET 0x00000294
+#define MAC_HW_ADD_CCA_BUSY_SEC_40_INDEX 0x000000A5
+#define MAC_HW_ADD_CCA_BUSY_SEC_40_RESET 0x00000000
+
+static inline u32 mac_hw_add_cca_busy_sec_40_get(struct cl_hw *cl_hw)
+{
+ return cl_reg_read(cl_hw, MAC_HW_ADD_CCA_BUSY_SEC_40_ADDR);
+}
+
+/*
+ * @brief ADD_CCA_BUSY_SEC_80 register definition
+ * Indicates the CCA on Secondary 80MHz busy time. register description
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31:00 CCA_BUSY_DUR_SEC_80 0x0
+ * </pre>
+ */
+#define MAC_HW_ADD_CCA_BUSY_SEC_80_ADDR (REG_MAC_HW_BASE_ADDR + 0x00000298)
+#define MAC_HW_ADD_CCA_BUSY_SEC_80_OFFSET 0x00000298
+#define MAC_HW_ADD_CCA_BUSY_SEC_80_INDEX 0x000000A6
+#define MAC_HW_ADD_CCA_BUSY_SEC_80_RESET 0x00000000
+
+static inline u32 mac_hw_add_cca_busy_sec_80_get(struct cl_hw *cl_hw)
+{
+ return cl_reg_read(cl_hw, MAC_HW_ADD_CCA_BUSY_SEC_80_ADDR);
+}
+
+/*
+ * @brief INTRA_BSS_NAV_BUSY register definition
+ * Count intra BSS NAV busy period register description
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31:00 INTRA_BSS_NAV_BUSY_DUR 0x0
+ * </pre>
+ */
+#define MAC_HW_INTRA_BSS_NAV_BUSY_ADDR (REG_MAC_HW_BASE_ADDR + 0x00000408)
+#define MAC_HW_INTRA_BSS_NAV_BUSY_OFFSET 0x00000408
+#define MAC_HW_INTRA_BSS_NAV_BUSY_INDEX 0x00000102
+#define MAC_HW_INTRA_BSS_NAV_BUSY_RESET 0x00000000
+
+static inline u32 mac_hw_intra_bss_nav_busy_get(struct cl_hw *cl_hw)
+{
+ return cl_reg_read(cl_hw, MAC_HW_INTRA_BSS_NAV_BUSY_ADDR);
+}
+
+/*
+ * @brief INTER_BSS_NAV_BUSY register definition
+ * Count inter BSS NAV busy period register description
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31:00 INTER_BSS_NAV_BUSY_DUR 0x0
+ * </pre>
+ */
+#define MAC_HW_INTER_BSS_NAV_BUSY_ADDR (REG_MAC_HW_BASE_ADDR + 0x0000040C)
+#define MAC_HW_INTER_BSS_NAV_BUSY_OFFSET 0x0000040C
+#define MAC_HW_INTER_BSS_NAV_BUSY_INDEX 0x00000103
+#define MAC_HW_INTER_BSS_NAV_BUSY_RESET 0x00000000
+
+static inline u32 mac_hw_inter_bss_nav_busy_get(struct cl_hw *cl_hw)
+{
+ return cl_reg_read(cl_hw, MAC_HW_INTER_BSS_NAV_BUSY_ADDR);
+}
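+
+/*
+ * Illustrative sketch only: the busy-time counters above (EDCA CCA, own TX,
+ * secondary-channel CCA and intra/inter BSS NAV) could be sampled together
+ * as a channel-utilization snapshot. The counter units and measurement
+ * window are not described here, so any scaling is left to the caller.
+ *
+ *	u32 cca = mac_hw_edca_cca_busy_get(cl_hw);
+ *	u32 tx_mine = mac_hw_tx_mine_busy_get(cl_hw);
+ *	u32 nav = mac_hw_intra_bss_nav_busy_get(cl_hw) +
+ *		  mac_hw_inter_bss_nav_busy_get(cl_hw);
+ */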
+
+/*
+ * @brief DEBUG_PORT_SEL_A register definition
+ * Used to multiplex different sets of signals on the debug pins. register description
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 15:08 DEBUG_PORT_SEL_1 0x0
+ * 07:00 DEBUG_PORT_SEL_0 0x0
+ * </pre>
+ */
+#define MAC_HW_DEBUG_PORT_SEL_A_ADDR (REG_MAC_HW_BASE_ADDR + 0x00000510)
+#define MAC_HW_DEBUG_PORT_SEL_A_OFFSET 0x00000510
+#define MAC_HW_DEBUG_PORT_SEL_A_INDEX 0x00000144
+#define MAC_HW_DEBUG_PORT_SEL_A_RESET 0x00000000
+
+static inline u32 mac_hw_debug_port_sel_a_get(struct cl_hw *cl_hw)
+{
+ return cl_reg_read(cl_hw, MAC_HW_DEBUG_PORT_SEL_A_ADDR);
+}
+
+static inline void mac_hw_debug_port_sel_a_set(struct cl_hw *cl_hw, u32 value)
+{
+ cl_reg_write(cl_hw, MAC_HW_DEBUG_PORT_SEL_A_ADDR, value);
+}
+
+/*
+ * @brief DEBUG_PORT_SEL_B register definition
+ * Used to multiplex different sets of signals on the debug pins. register description
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 15:08 DEBUG_PORT_SEL_3 0x0
+ * 07:00 DEBUG_PORT_SEL_2 0x0
+ * </pre>
+ */
+#define MAC_HW_DEBUG_PORT_SEL_B_ADDR (REG_MAC_HW_BASE_ADDR + 0x00000530)
+#define MAC_HW_DEBUG_PORT_SEL_B_OFFSET 0x00000530
+#define MAC_HW_DEBUG_PORT_SEL_B_INDEX 0x0000014C
+#define MAC_HW_DEBUG_PORT_SEL_B_RESET 0x00000000
+
+static inline u32 mac_hw_debug_port_sel_b_get(struct cl_hw *cl_hw)
+{
+ return cl_reg_read(cl_hw, MAC_HW_DEBUG_PORT_SEL_B_ADDR);
+}
+
+static inline void mac_hw_debug_port_sel_b_set(struct cl_hw *cl_hw, u32 value)
+{
+ cl_reg_write(cl_hw, MAC_HW_DEBUG_PORT_SEL_B_ADDR, value);
+}
+
+/*
+ * @brief DEBUG_PORT_SEL_C register definition
+ * Used to multiplex different sets of signals on the debug pins. register description
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 15:08 DEBUG_PORT_SEL_5 0x0
+ * 07:00 DEBUG_PORT_SEL_4 0x0
+ * </pre>
+ */
+#define MAC_HW_DEBUG_PORT_SEL_C_ADDR (REG_MAC_HW_BASE_ADDR + 0x00000534)
+#define MAC_HW_DEBUG_PORT_SEL_C_OFFSET 0x00000534
+#define MAC_HW_DEBUG_PORT_SEL_C_INDEX 0x0000014D
+#define MAC_HW_DEBUG_PORT_SEL_C_RESET 0x00000000
+
+static inline u32 mac_hw_debug_port_sel_c_get(struct cl_hw *cl_hw)
+{
+ return cl_reg_read(cl_hw, MAC_HW_DEBUG_PORT_SEL_C_ADDR);
+}
+
+static inline void mac_hw_debug_port_sel_c_set(struct cl_hw *cl_hw, u32 value)
+{
+ cl_reg_write(cl_hw, MAC_HW_DEBUG_PORT_SEL_C_ADDR, value);
+}
+
+/*
+ * @brief DEBUG_PORT_EN register definition
+ * Used to determine which debug ports are enabled. register description
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 05 EN5 0
+ * 04 EN4 0
+ * 03 EN3 0
+ * 02 EN2 0
+ * 01 EN1 0
+ * 00 EN0 0
+ * </pre>
+ */
+#define MAC_HW_DEBUG_PORT_EN_ADDR (REG_MAC_HW_BASE_ADDR + 0x00000538)
+#define MAC_HW_DEBUG_PORT_EN_OFFSET 0x00000538
+#define MAC_HW_DEBUG_PORT_EN_INDEX 0x0000014E
+#define MAC_HW_DEBUG_PORT_EN_RESET 0x00000000
+
+static inline u32 mac_hw_debug_port_en_get(struct cl_hw *cl_hw)
+{
+ return cl_reg_read(cl_hw, MAC_HW_DEBUG_PORT_EN_ADDR);
+}
+
+static inline void mac_hw_debug_port_en_set(struct cl_hw *cl_hw, u32 value)
+{
+ cl_reg_write(cl_hw, MAC_HW_DEBUG_PORT_EN_ADDR, value);
+}
+
+/*
+ * @brief DOZE_CNTRL_2 register definition
+ * Contains settings for controlling DOZE state. register description
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31 WAKE_UP_FROM_DOZE 0
+ * 00 WAKE_UP_SW 1
+ * </pre>
+ */
+#define MAC_HW_DOZE_CNTRL_2_ADDR (REG_MAC_HW_BASE_ADDR + 0x00008048)
+#define MAC_HW_DOZE_CNTRL_2_OFFSET 0x00008048
+#define MAC_HW_DOZE_CNTRL_2_INDEX 0x00002012
+#define MAC_HW_DOZE_CNTRL_2_RESET 0x00000001
+
+static inline void mac_hw_doze_cntrl_2_wake_up_sw_setf(struct cl_hw *cl_hw, u8 wakeupsw)
+{
+ ASSERT_ERR((((u32)wakeupsw << 0) & ~((u32)0x00000001)) == 0);
+ cl_reg_write(cl_hw, MAC_HW_DOZE_CNTRL_2_ADDR,
+ (cl_reg_read(cl_hw, MAC_HW_DOZE_CNTRL_2_ADDR) & ~((u32)0x00000001)) |
+ ((u32)wakeupsw << 0));
+}
+
+#define MU_ADDR_OFFSET(i) ((i) << 16)
+#define MAX_MU_CNT 8
+
+/*
+ * @brief MAC_CNTRL_2 register definition
+ * Contains various settings for controlling the operation of the core. register description
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 00 SOFT_RESET 0
+ * </pre>
+ */
+#define MAC_HW_MU_MAC_CNTRL_2_ADDR (REG_MAC_HW_BASE_ADDR + 0x00008050)
+#define MAC_HW_MU_MAC_CNTRL_2_OFFSET 0x00008050
+#define MAC_HW_MU_MAC_CNTRL_2_INDEX 0x00002014
+#define MAC_HW_MU_MAC_CNTRL_2_RESET 0x00000000
+
+static inline void mac_hw_mu_mac_cntrl_2_set(struct cl_hw *cl_hw, u32 value, u8 mu_idx)
+{
+ ASSERT_ERR(mu_idx < MAX_MU_CNT);
+ cl_reg_write(cl_hw, (MAC_HW_MU_MAC_CNTRL_2_ADDR + MU_ADDR_OFFSET(mu_idx)), value);
+}
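+
+/*
+ * Illustrative sketch only: per the register map above, bit 0 of MAC_CNTRL_2
+ * is SOFT_RESET and each MU instance sits 0x10000 apart, so a per-MU soft
+ * reset could be issued as:
+ *
+ *	mac_hw_mu_mac_cntrl_2_set(cl_hw, 0x1, mu_idx);
+ *
+ * where mu_idx must be below MAX_MU_CNT.
+ */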
+
+/*
+ * @brief MIB_TABLE register definition
+ * MIB table register description
+ * <pre>
+ * 1024 memory size
+ * </pre>
+ */
+#define MAC_HW_MU_MIB_TABLE_ADDR (REG_MAC_HW_BASE_ADDR + 0x00000800)
+#define MAC_HW_MU_MIB_TABLE_OFFSET 0x00000800
+#define MAC_HW_MU_MIB_TABLE_SIZE 0x00000400
+#define MAC_HW_MU_MIB_TABLE_END_ADDR (MAC_HW_MU_MIB_TABLE_ADDR + MAC_HW_MU_MIB_TABLE_SIZE - 1)
+
+#define REG_MACSYS_GCU_BASE_ADDR 0x007C5000
+
+/*
+ * @brief CHIP_VERSION register definition
+ * Chip Version 8000B0 register description
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 23:08 PRODUCT_ID 0x8000
+ * 07:04 STEP_ID 0xB
+ * 03:00 REV_ID 0x0
+ * </pre>
+ */
+#define MACSYS_GCU_CHIP_VERSION_ADDR (REG_MACSYS_GCU_BASE_ADDR + 0x00000050)
+#define MACSYS_GCU_CHIP_VERSION_OFFSET 0x00000050
+#define MACSYS_GCU_CHIP_VERSION_INDEX 0x00000014
+#define MACSYS_GCU_CHIP_VERSION_RESET 0x008000B0
+
+static inline u8 macsys_gcu_chip_version_step_id_getf(struct cl_chip *chip)
+{
+ u32 local_val = cl_reg_read_chip(chip, MACSYS_GCU_CHIP_VERSION_ADDR);
+
+ return ((local_val & ((u32)0x000000F0)) >> 4);
+}
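+
+/*
+ * Illustrative sketch only: the value returned by the getter above is the
+ * STEP_ID nibble of CHIP_VERSION (0xB for the 8000B0 step documented in the
+ * reset value), so step-specific behaviour could be gated on it:
+ *
+ *	if (macsys_gcu_chip_version_step_id_getf(chip) == 0xB)
+ *		... apply B-step specific handling ...
+ */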
+
+/*
+ * @brief XT_CONTROL register definition
+ * Tensilica control register description
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 21 smac_debug_en 0
+ * 20 smac_break_in 0
+ * 19 smac_ocd_halt_on_reset 1
+ * 18 smac_run_stall 0
+ * 17 smac_dreset_n 1
+ * 16 smac_breset_n 0
+ * 13 umac_debug_en 0
+ * 12 umac_break_in 0
+ * 11 umac_ocd_halt_on_reset 1
+ * 10 umac_run_stall 0
+ * 09 umac_dreset_n 1
+ * 08 umac_breset_n 0
+ * 05 lmac_debug_en 0
+ * 04 lmac_break_in 0
+ * 03 lmac_ocd_halt_on_reset 1
+ * 02 lmac_run_stall 0
+ * 01 lmac_dreset_n 1
+ * 00 lmac_breset_n 0
+ * </pre>
+ */
+#define MACSYS_GCU_XT_CONTROL_ADDR (REG_MACSYS_GCU_BASE_ADDR + 0x000000F0)
+#define MACSYS_GCU_XT_CONTROL_OFFSET 0x000000F0
+#define MACSYS_GCU_XT_CONTROL_INDEX 0x0000003C
+#define MACSYS_GCU_XT_CONTROL_RESET 0x000A0A0A
+
+static inline u32 macsys_gcu_xt_control_get(struct cl_chip *chip)
+{
+ return cl_reg_read_chip(chip, MACSYS_GCU_XT_CONTROL_ADDR);
+}
+
+static inline void macsys_gcu_xt_control_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, MACSYS_GCU_XT_CONTROL_ADDR, value);
+}
+
+#define REG_MODEM_GCU_BASE_ADDR 0x00480000
+
+/*
+ * @brief MPU register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 21 MPU_CLK_F 0
+ * 20 MPU_REG_CLK_F 0
+ * 13 MPU_CLK_EN 0
+ * 12 MPU_REG_CLK_EN 0
+ * 01 MPU_RST_N 0
+ * 00 MPU_REG_RST_N 0
+ * </pre>
+ */
+#define MODEM_GCU_MPU_ADDR (REG_MODEM_GCU_BASE_ADDR + 0x00000004)
+#define MODEM_GCU_MPU_OFFSET 0x00000004
+#define MODEM_GCU_MPU_INDEX 0x00000001
+#define MODEM_GCU_MPU_RESET 0x00000000
+
+static inline void modem_gcu_mpu_set(struct cl_hw *cl_hw, u32 value)
+{
+ cl_reg_write(cl_hw, MODEM_GCU_MPU_ADDR, value);
+}
+
+/* Field definitions */
+#define MODEM_GCU_MPU_CLK_F_BIT ((u32)0x00200000)
+#define MODEM_GCU_MPU_CLK_F_POS 21
+#define MODEM_GCU_MPU_REG_CLK_F_BIT ((u32)0x00100000)
+#define MODEM_GCU_MPU_REG_CLK_F_POS 20
+#define MODEM_GCU_MPU_CLK_EN_BIT ((u32)0x00002000)
+#define MODEM_GCU_MPU_CLK_EN_POS 13
+#define MODEM_GCU_MPU_REG_CLK_EN_BIT ((u32)0x00001000)
+#define MODEM_GCU_MPU_REG_CLK_EN_POS 12
+#define MODEM_GCU_MPU_RST_N_BIT ((u32)0x00000002)
+#define MODEM_GCU_MPU_RST_N_POS 1
+#define MODEM_GCU_MPU_REG_RST_N_BIT ((u32)0x00000001)
+#define MODEM_GCU_MPU_REG_RST_N_POS 0
+
+/*
+ * @brief BPU register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 24 BPUL_RX_CLK_F 0
+ * 23 BPU_CLK_F 0
+ * 22 BPU_RX_CLK_F 0
+ * 21 BPU_TX_CLK_F 0
+ * 20 BPU_REG_CLK_F 0
+ * 16 BPUL_RX_CLK_EN 0
+ * 15 BPU_CLK_EN 0
+ * 14 BPU_RX_CLK_EN 0
+ * 13 BPU_TX_CLK_EN 0
+ * 12 BPU_REG_CLK_EN 0
+ * 03 BPU_RST_N 0
+ * 02 BPU_RX_RST_N 0
+ * 01 BPU_TX_RST_N 0
+ * 00 BPU_REG_RST_N 0
+ * </pre>
+ */
+#define MODEM_GCU_BPU_ADDR (REG_MODEM_GCU_BASE_ADDR + 0x00000008)
+#define MODEM_GCU_BPU_OFFSET 0x00000008
+#define MODEM_GCU_BPU_INDEX 0x00000002
+#define MODEM_GCU_BPU_RESET 0x00000000
+
+static inline void modem_gcu_bpu_set(struct cl_hw *cl_hw, u32 value)
+{
+ cl_reg_write(cl_hw, MODEM_GCU_BPU_ADDR, value);
+}
+
+/* Field definitions */
+#define MODEM_GCU_BPU_BPUL_RX_CLK_F_BIT ((u32)0x01000000)
+#define MODEM_GCU_BPU_BPUL_RX_CLK_F_POS 24
+#define MODEM_GCU_BPU_CLK_F_BIT ((u32)0x00800000)
+#define MODEM_GCU_BPU_CLK_F_POS 23
+#define MODEM_GCU_BPU_RX_CLK_F_BIT ((u32)0x00400000)
+#define MODEM_GCU_BPU_RX_CLK_F_POS 22
+#define MODEM_GCU_BPU_TX_CLK_F_BIT ((u32)0x00200000)
+#define MODEM_GCU_BPU_TX_CLK_F_POS 21
+#define MODEM_GCU_BPU_REG_CLK_F_BIT ((u32)0x00100000)
+#define MODEM_GCU_BPU_REG_CLK_F_POS 20
+#define MODEM_GCU_BPU_BPUL_RX_CLK_EN_BIT ((u32)0x00010000)
+#define MODEM_GCU_BPU_BPUL_RX_CLK_EN_POS 16
+#define MODEM_GCU_BPU_CLK_EN_BIT ((u32)0x00008000)
+#define MODEM_GCU_BPU_CLK_EN_POS 15
+#define MODEM_GCU_BPU_RX_CLK_EN_BIT ((u32)0x00004000)
+#define MODEM_GCU_BPU_RX_CLK_EN_POS 14
+#define MODEM_GCU_BPU_TX_CLK_EN_BIT ((u32)0x00002000)
+#define MODEM_GCU_BPU_TX_CLK_EN_POS 13
+#define MODEM_GCU_BPU_REG_CLK_EN_BIT ((u32)0x00001000)
+#define MODEM_GCU_BPU_REG_CLK_EN_POS 12
+#define MODEM_GCU_BPU_RST_N_BIT ((u32)0x00000008)
+#define MODEM_GCU_BPU_RST_N_POS 3
+#define MODEM_GCU_BPU_RX_RST_N_BIT ((u32)0x00000004)
+#define MODEM_GCU_BPU_RX_RST_N_POS 2
+#define MODEM_GCU_BPU_TX_RST_N_BIT ((u32)0x00000002)
+#define MODEM_GCU_BPU_TX_RST_N_POS 1
+#define MODEM_GCU_BPU_REG_RST_N_BIT ((u32)0x00000001)
+#define MODEM_GCU_BPU_REG_RST_N_POS 0
+
+/*
+ * @brief TFU register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 21 TFU_CLK_F 0
+ * 20 TFU_REG_CLK_F 0
+ * 13 TFU_CLK_EN 0
+ * 12 TFU_REG_CLK_EN 0
+ * 01 TFU_RST_N 0
+ * 00 TFU_REG_RST_N 0
+ * </pre>
+ */
+#define MODEM_GCU_TFU_ADDR (REG_MODEM_GCU_BASE_ADDR + 0x0000000C)
+#define MODEM_GCU_TFU_OFFSET 0x0000000C
+#define MODEM_GCU_TFU_INDEX 0x00000003
+#define MODEM_GCU_TFU_RESET 0x00000000
+
+static inline void modem_gcu_tfu_set(struct cl_hw *cl_hw, u32 value)
+{
+ cl_reg_write(cl_hw, MODEM_GCU_TFU_ADDR, value);
+}
+
+/* Field definitions */
+#define MODEM_GCU_TFU_CLK_F_BIT ((u32)0x00200000)
+#define MODEM_GCU_TFU_CLK_F_POS 21
+#define MODEM_GCU_TFU_REG_CLK_F_BIT ((u32)0x00100000)
+#define MODEM_GCU_TFU_REG_CLK_F_POS 20
+#define MODEM_GCU_TFU_CLK_EN_BIT ((u32)0x00002000)
+#define MODEM_GCU_TFU_CLK_EN_POS 13
+#define MODEM_GCU_TFU_REG_CLK_EN_BIT ((u32)0x00001000)
+#define MODEM_GCU_TFU_REG_CLK_EN_POS 12
+#define MODEM_GCU_TFU_RST_N_BIT ((u32)0x00000002)
+#define MODEM_GCU_TFU_RST_N_POS 1
+#define MODEM_GCU_TFU_REG_RST_N_BIT ((u32)0x00000001)
+#define MODEM_GCU_TFU_REG_RST_N_POS 0
+
+/*
+ * @brief SMU register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 21 SMU_CLK_F 0
+ * 20 SMU_REG_CLK_F 0
+ * 13 SMU_CLK_EN 0
+ * 12 SMU_REG_CLK_EN 0
+ * 01 SMU_RST_N 0
+ * 00 SMU_REG_RST_N 0
+ * </pre>
+ */
+#define MODEM_GCU_SMU_ADDR (REG_MODEM_GCU_BASE_ADDR + 0x00000010)
+#define MODEM_GCU_SMU_OFFSET 0x00000010
+#define MODEM_GCU_SMU_INDEX 0x00000004
+#define MODEM_GCU_SMU_RESET 0x00000000
+
+static inline void modem_gcu_smu_set(struct cl_hw *cl_hw, u32 value)
+{
+ cl_reg_write(cl_hw, MODEM_GCU_SMU_ADDR, value);
+}
+
+/* Field definitions */
+#define MODEM_GCU_SMU_CLK_F_BIT ((u32)0x00200000)
+#define MODEM_GCU_SMU_CLK_F_POS 21
+#define MODEM_GCU_SMU_REG_CLK_F_BIT ((u32)0x00100000)
+#define MODEM_GCU_SMU_REG_CLK_F_POS 20
+#define MODEM_GCU_SMU_CLK_EN_BIT ((u32)0x00002000)
+#define MODEM_GCU_SMU_CLK_EN_POS 13
+#define MODEM_GCU_SMU_REG_CLK_EN_BIT ((u32)0x00001000)
+#define MODEM_GCU_SMU_REG_CLK_EN_POS 12
+#define MODEM_GCU_SMU_RST_N_BIT ((u32)0x00000002)
+#define MODEM_GCU_SMU_RST_N_POS 1
+#define MODEM_GCU_SMU_REG_RST_N_BIT ((u32)0x00000001)
+#define MODEM_GCU_SMU_REG_RST_N_POS 0
+
+/*
+ * @brief MUX_FIC register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 20 MUX_FIC_CLK_F 0
+ * 12 MUX_FIC_CLK_EN 0
+ * 01 FIC_MUX_SOFT_RST_N 1
+ * 00 MUX_FIC_RST_N 0
+ * </pre>
+ */
+#define MODEM_GCU_MUX_FIC_ADDR (REG_MODEM_GCU_BASE_ADDR + 0x00000014)
+#define MODEM_GCU_MUX_FIC_OFFSET 0x00000014
+#define MODEM_GCU_MUX_FIC_INDEX 0x00000005
+#define MODEM_GCU_MUX_FIC_RESET 0x00000002
+
+static inline void modem_gcu_mux_fic_set(struct cl_hw *cl_hw, u32 value)
+{
+ cl_reg_write(cl_hw, MODEM_GCU_MUX_FIC_ADDR, value);
+}
+
+/* Field definitions */
+#define MODEM_GCU_MUX_FIC_CLK_F_BIT ((u32)0x00100000)
+#define MODEM_GCU_MUX_FIC_CLK_F_POS 20
+#define MODEM_GCU_MUX_FIC_CLK_EN_BIT ((u32)0x00001000)
+#define MODEM_GCU_MUX_FIC_CLK_EN_POS 12
+#define MODEM_GCU_MUX_FIC_SOFT_RST_N_BIT ((u32)0x00000002)
+#define MODEM_GCU_MUX_FIC_SOFT_RST_N_POS 1
+#define MODEM_GCU_MUX_FIC_RST_N_BIT ((u32)0x00000001)
+#define MODEM_GCU_MUX_FIC_RST_N_POS 0
+
+/*
+ * @brief MUX_FIC_CONFIG register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31 FIC_ISOLATED 0
+ * 17 FIC_ISOLATE 0
+ * 16 DISABLE_FIC_MESS 0
+ * 15:08 MUX_FIC_CONFLICT_DELAY_WRITE 0x0
+ * 07:00 MUX_FIC_CONFLICT_DELAY_READ 0x0
+ * </pre>
+ */
+#define MODEM_GCU_MUX_FIC_CONFIG_ADDR (REG_MODEM_GCU_BASE_ADDR + 0x0000001C)
+#define MODEM_GCU_MUX_FIC_CONFIG_OFFSET 0x0000001C
+#define MODEM_GCU_MUX_FIC_CONFIG_INDEX 0x00000007
+#define MODEM_GCU_MUX_FIC_CONFIG_RESET 0x00000000
+
+static inline void modem_gcu_mux_fic_config_set(struct cl_hw *cl_hw, u32 value)
+{
+ cl_reg_write(cl_hw, MODEM_GCU_MUX_FIC_CONFIG_ADDR, value);
+}
+
+static inline u8 modem_gcu_mux_fic_config_fic_isolated_getf(struct cl_hw *cl_hw)
+{
+ u32 local_val = cl_reg_read(cl_hw, MODEM_GCU_MUX_FIC_CONFIG_ADDR);
+
+ return ((local_val & ((u32)0x80000000)) >> 31);
+}
+
+static inline void modem_gcu_mux_fic_config_fic_isolate_setf(struct cl_hw *cl_hw, u8 ficisolate)
+{
+ ASSERT_ERR((((u32)ficisolate << 17) & ~((u32)0x00020000)) == 0);
+ cl_reg_write(cl_hw, MODEM_GCU_MUX_FIC_CONFIG_ADDR,
+ (cl_reg_read(cl_hw, MODEM_GCU_MUX_FIC_CONFIG_ADDR) & ~((u32)0x00020000)) |
+ ((u32)ficisolate << 17));
+}
+
+/*
+ * @brief RIU_RST register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 07 RIUFE_RST_N 0
+ * 06 RIUAGC_RST_N 0
+ * 05 RIU_MDM_B_RST_N 0
+ * 04 RIULB_RST_N 0
+ * 03 RIURC_RST_N 0
+ * 02 RIU_RADAR_RST_N 0
+ * 01 RIU_RST_N 0
+ * 00 RIU_REG_RST_N 0
+ * </pre>
+ */
+#define MODEM_GCU_RIU_RST_ADDR (REG_MODEM_GCU_BASE_ADDR + 0x00000020)
+#define MODEM_GCU_RIU_RST_OFFSET 0x00000020
+#define MODEM_GCU_RIU_RST_INDEX 0x00000008
+#define MODEM_GCU_RIU_RST_RESET 0x00000000
+
+static inline void modem_gcu_riu_rst_set(struct cl_hw *cl_hw, u32 value)
+{
+ cl_reg_write(cl_hw, MODEM_GCU_RIU_RST_ADDR, value);
+}
+
+/* Field definitions */
+#define MODEM_GCU_RIU_FE_RST_N_BIT ((u32)0x00000080)
+#define MODEM_GCU_RIU_FE_RST_N_POS 7
+#define MODEM_GCU_RIU_AGC_RST_N_BIT ((u32)0x00000040)
+#define MODEM_GCU_RIU_AGC_RST_N_POS 6
+#define MODEM_GCU_RIU_MDM_B_RST_N_BIT ((u32)0x00000020)
+#define MODEM_GCU_RIU_MDM_B_RST_N_POS 5
+#define MODEM_GCU_RIU_LB_RST_N_BIT ((u32)0x00000010)
+#define MODEM_GCU_RIU_LB_RST_N_POS 4
+#define MODEM_GCU_RIU_RC_RST_N_BIT ((u32)0x00000008)
+#define MODEM_GCU_RIU_RC_RST_N_POS 3
+#define MODEM_GCU_RIU_RADAR_RST_N_BIT ((u32)0x00000004)
+#define MODEM_GCU_RIU_RADAR_RST_N_POS 2
+#define MODEM_GCU_RIU_RST_N_BIT ((u32)0x00000002)
+#define MODEM_GCU_RIU_RST_N_POS 1
+#define MODEM_GCU_RIU_REG_RST_N_BIT ((u32)0x00000001)
+#define MODEM_GCU_RIU_REG_RST_N_POS 0
+
+/*
+ * @brief RIU_CLK register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31 RIUADC_PWR_CLK_F 0
+ * 30 RIUFEA_5_CLK_F 0
+ * 29 RIUFEA_4_CLK_F 0
+ * 28 RIUFEA_3_CLK_F 0
+ * 27 RIUFEA_2_CLK_F 0
+ * 26 RIUFEA_1_CLK_F 0
+ * 25 RIUFEA_0_CLK_F 0
+ * 24 RIU_MDM_B_TX_CLK_F 0
+ * 23 RIU_MDM_B_RX_CLK_F 0
+ * 22 RIU_MDM_B_CLK_F 0
+ * 21 RIULB_CLK_F 0
+ * 20 RIURC_CLK_F 0
+ * 19 RIU_RADAR_CLK_F 0
+ * 18 RIUAGC_CLK_F 0
+ * 17 RIU_CLK_F 0
+ * 16 RIU_REG_CLK_F 0
+ * 15 RIUADC_PWR_CLK_EN 0
+ * 14 RIUFEA_5_CLK_EN 0
+ * 13 RIUFEA_4_CLK_EN 0
+ * 12 RIUFEA_3_CLK_EN 0
+ * 11 RIUFEA_2_CLK_EN 0
+ * 10 RIUFEA_1_CLK_EN 0
+ * 09 RIUFEA_0_CLK_EN 0
+ * 08 RIU_MDM_B_TX_CLK_EN 0
+ * 07 RIU_MDM_B_RX_CLK_EN 0
+ * 06 RIU_MDM_B_CLK_EN 0
+ * 05 RIULB_CLK_EN 0
+ * 04 RIURCR_CLK_EN 0
+ * 03 RIU_RADAR_CLK_EN 0
+ * 02 RIUAGC_CLK_EN 0
+ * 01 RIU_CLK_EN 0
+ * 00 RIU_REG_CLK_EN 0
+ * </pre>
+ */
+#define MODEM_GCU_RIU_CLK_ADDR (REG_MODEM_GCU_BASE_ADDR + 0x00000024)
+#define MODEM_GCU_RIU_CLK_OFFSET 0x00000024
+#define MODEM_GCU_RIU_CLK_INDEX 0x00000009
+#define MODEM_GCU_RIU_CLK_RESET 0x00000000
+
+static inline void modem_gcu_riu_clk_set(struct cl_hw *cl_hw, u32 value)
+{
+ cl_reg_write(cl_hw, MODEM_GCU_RIU_CLK_ADDR, value);
+}
+
+/*
+ * @brief SPU register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 21 SPU_CLK_F 0
+ * 20 SPU_REG_CLK_F 0
+ * 13 SPU_CLK_EN 0
+ * 12 SPU_REG_CLK_EN 0
+ * 01 SPU_RST_N 0
+ * 00 SPU_REG_RST_N 0
+ * </pre>
+ */
+#define MODEM_GCU_SPU_ADDR (REG_MODEM_GCU_BASE_ADDR + 0x00000030)
+#define MODEM_GCU_SPU_OFFSET 0x00000030
+#define MODEM_GCU_SPU_INDEX 0x0000000C
+#define MODEM_GCU_SPU_RESET 0x00000000
+
+static inline void modem_gcu_spu_set(struct cl_hw *cl_hw, u32 value)
+{
+ cl_reg_write(cl_hw, MODEM_GCU_SPU_ADDR, value);
+}
+
+/* Field definitions */
+#define MODEM_GCU_SPU_CLK_F_BIT ((u32)0x00200000)
+#define MODEM_GCU_SPU_CLK_F_POS 21
+#define MODEM_GCU_SPU_REG_CLK_F_BIT ((u32)0x00100000)
+#define MODEM_GCU_SPU_REG_CLK_F_POS 20
+#define MODEM_GCU_SPU_CLK_EN_BIT ((u32)0x00002000)
+#define MODEM_GCU_SPU_CLK_EN_POS 13
+#define MODEM_GCU_SPU_REG_CLK_EN_BIT ((u32)0x00001000)
+#define MODEM_GCU_SPU_REG_CLK_EN_POS 12
+#define MODEM_GCU_SPU_RST_N_BIT ((u32)0x00000002)
+#define MODEM_GCU_SPU_RST_N_POS 1
+#define MODEM_GCU_SPU_REG_RST_N_BIT ((u32)0x00000001)
+#define MODEM_GCU_SPU_REG_RST_N_POS 0
+
+/*
+ * @brief LCU register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 14 LCU_HLF_CLK_EN 0
+ * 13 LCU_CLK_EN 0
+ * 12 LCU_REG_CLK_EN 0
+ * 02 LCU_HLF_RST_N 0
+ * 01 LCU_RST_N 0
+ * 00 LCU_REG_RST_N 0
+ * </pre>
+ */
+#define MODEM_GCU_LCU_ADDR (REG_MODEM_GCU_BASE_ADDR + 0x00000034)
+#define MODEM_GCU_LCU_OFFSET 0x00000034
+#define MODEM_GCU_LCU_INDEX 0x0000000D
+#define MODEM_GCU_LCU_RESET 0x00000000
+
+static inline void modem_gcu_lcu_set(struct cl_hw *cl_hw, u32 value)
+{
+ cl_reg_write(cl_hw, MODEM_GCU_LCU_ADDR, value);
+}
+
+/* Field definitions */
+#define MODEM_GCU_LCU_HLF_CLK_EN_BIT ((u32)0x00004000)
+#define MODEM_GCU_LCU_HLF_CLK_EN_POS 14
+#define MODEM_GCU_LCU_CLK_EN_BIT ((u32)0x00002000)
+#define MODEM_GCU_LCU_CLK_EN_POS 13
+#define MODEM_GCU_LCU_REG_CLK_EN_BIT ((u32)0x00001000)
+#define MODEM_GCU_LCU_REG_CLK_EN_POS 12
+#define MODEM_GCU_LCU_HLF_RST_N_BIT ((u32)0x00000004)
+#define MODEM_GCU_LCU_HLF_RST_N_POS 2
+#define MODEM_GCU_LCU_RST_N_BIT ((u32)0x00000002)
+#define MODEM_GCU_LCU_RST_N_POS 1
+#define MODEM_GCU_LCU_REG_RST_N_BIT ((u32)0x00000001)
+#define MODEM_GCU_LCU_REG_RST_N_POS 0
+
+/*
+ * @brief EPA register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 EPA_CLK_EN 0
+ * 12 EPA_REG_CLK_EN 0
+ * 01 EPA_RST_N 0
+ * 00 EPA_REG_RST_N 0
+ * </pre>
+ */
+#define MODEM_GCU_EPA_ADDR (REG_MODEM_GCU_BASE_ADDR + 0x00000038)
+#define MODEM_GCU_EPA_OFFSET 0x00000038
+#define MODEM_GCU_EPA_INDEX 0x0000000E
+#define MODEM_GCU_EPA_RESET 0x00000000
+
+static inline void modem_gcu_epa_set(struct cl_hw *cl_hw, u32 value)
+{
+ cl_reg_write(cl_hw, MODEM_GCU_EPA_ADDR, value);
+}
+
+/* Field definitions */
+#define MODEM_GCU_EPA_CLK_EN_BIT ((u32)0x00002000)
+#define MODEM_GCU_EPA_CLK_EN_POS 13
+#define MODEM_GCU_EPA_REG_CLK_EN_BIT ((u32)0x00001000)
+#define MODEM_GCU_EPA_REG_CLK_EN_POS 12
+#define MODEM_GCU_EPA_RST_N_BIT ((u32)0x00000002)
+#define MODEM_GCU_EPA_RST_N_POS 1
+#define MODEM_GCU_EPA_REG_RST_N_BIT ((u32)0x00000001)
+#define MODEM_GCU_EPA_REG_RST_N_POS 0
+
+/*
+ * @brief BF register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 BF_CLK_EN 0
+ * 12 BF_REG_CLK_EN 0
+ * 01 BF_RST_N 0
+ * 00 BF_REG_RST_N 0
+ * </pre>
+ */
+#define MODEM_GCU_BF_ADDR (REG_MODEM_GCU_BASE_ADDR + 0x0000003C)
+#define MODEM_GCU_BF_OFFSET 0x0000003C
+#define MODEM_GCU_BF_INDEX 0x0000000F
+#define MODEM_GCU_BF_RESET 0x00000000
+
+static inline void modem_gcu_bf_set(struct cl_hw *cl_hw, u32 value)
+{
+ cl_reg_write(cl_hw, MODEM_GCU_BF_ADDR, value);
+}
+
+/* Field definitions */
+#define MODEM_GCU_BF_CLK_EN_BIT ((u32)0x00002000)
+#define MODEM_GCU_BF_CLK_EN_POS 13
+#define MODEM_GCU_BF_REG_CLK_EN_BIT ((u32)0x00001000)
+#define MODEM_GCU_BF_REG_CLK_EN_POS 12
+#define MODEM_GCU_BF_RST_N_BIT ((u32)0x00000002)
+#define MODEM_GCU_BF_RST_N_POS 1
+#define MODEM_GCU_BF_REG_RST_N_BIT ((u32)0x00000001)
+#define MODEM_GCU_BF_REG_RST_N_POS 0
+
+/*
+ * @brief RIU_CLK_1 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 17 RIUFE_EXT_CLK_F 0
+ * 16 RIUFRC_CLK_F 0
+ * 01 RIUFE_EXT_CLK_EN 0
+ * 00 RIUFRC_CLK_EN 0
+ * </pre>
+ */
+#define MODEM_GCU_RIU_CLK_1_ADDR (REG_MODEM_GCU_BASE_ADDR + 0x00000124)
+#define MODEM_GCU_RIU_CLK_1_OFFSET 0x00000124
+#define MODEM_GCU_RIU_CLK_1_INDEX 0x00000049
+#define MODEM_GCU_RIU_CLK_1_RESET 0x00000000
+
+static inline void modem_gcu_riu_clk_1_set(struct cl_hw *cl_hw, u32 value)
+{
+ cl_reg_write(cl_hw, MODEM_GCU_RIU_CLK_1_ADDR, value);
+}
+
+/*
+ * @brief CEVA_CTRL register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 16 MCCI_ADDR_BASE 0
+ * 14 VINTC 0
+ * 12 NMI 0
+ * 10:09 EXT_VOM 0x0
+ * 08 EXT_PV 0
+ * 07:06 UIA 0x0
+ * 05 STOP_SD 0
+ * 04 MON_STAT 0
+ * 02 EXTERNAL_WAIT 1
+ * 00 BOOT 0
+ * </pre>
+ */
+#define MODEM_GCU_CEVA_CTRL_ADDR (REG_MODEM_GCU_BASE_ADDR + 0x00001004)
+#define MODEM_GCU_CEVA_CTRL_OFFSET 0x00001004
+#define MODEM_GCU_CEVA_CTRL_INDEX 0x00000401
+#define MODEM_GCU_CEVA_CTRL_RESET 0x00000004
+
+static inline void modem_gcu_ceva_ctrl_external_wait_setf(struct cl_hw *cl_hw, u8 externalwait)
+{
+ ASSERT_ERR((((u32)externalwait << 2) & ~((u32)0x00000004)) == 0);
+ cl_reg_write(cl_hw, MODEM_GCU_CEVA_CTRL_ADDR,
+ (cl_reg_read(cl_hw, MODEM_GCU_CEVA_CTRL_ADDR) & ~((u32)0x00000004)) |
+ ((u32)externalwait << 2));
+}
+
+static inline void modem_gcu_ceva_ctrl_boot_setf(struct cl_hw *cl_hw, u8 boot)
+{
+ ASSERT_ERR((((u32)boot << 0) & ~((u32)0x00000001)) == 0);
+ cl_reg_write(cl_hw, MODEM_GCU_CEVA_CTRL_ADDR,
+ (cl_reg_read(cl_hw, MODEM_GCU_CEVA_CTRL_ADDR) & ~((u32)0x00000001)) |
+ ((u32)boot << 0));
+}
+
+/*
+ * @brief CEVA_VEC register definition
+ * Ceva Vector register description
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31:00 VECTOR 0x0
+ * </pre>
+ */
+#define MODEM_GCU_CEVA_VEC_ADDR (REG_MODEM_GCU_BASE_ADDR + 0x00001008)
+#define MODEM_GCU_CEVA_VEC_OFFSET 0x00001008
+#define MODEM_GCU_CEVA_VEC_INDEX 0x00000402
+#define MODEM_GCU_CEVA_VEC_RESET 0x00000000
+
+static inline void modem_gcu_ceva_vec_set(struct cl_hw *cl_hw, u32 value)
+{
+ cl_reg_write(cl_hw, MODEM_GCU_CEVA_VEC_ADDR, value);
+}
+
+/*
+ * @brief RIU_CLK_BW register definition
+ * RIU clocks BW. register description
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 agc_clk_bw 0
+ * 12:10 lb_mem_clk_bw 0x2
+ * 09:08 agc_mem_clk_bw 0x1
+ * 07:06 riu_afe_clk_bw 0x2
+ * 05:04 phyfesync_bw 0x2
+ * 03:02 adcpowclk_bw 0x2
+ * 01:00 riulbgclk_bw 0x2
+ * </pre>
+ */
+#define MODEM_GCU_RIU_CLK_BW_ADDR (REG_MODEM_GCU_BASE_ADDR + 0x00001240)
+#define MODEM_GCU_RIU_CLK_BW_OFFSET 0x00001240
+#define MODEM_GCU_RIU_CLK_BW_INDEX 0x00000490
+#define MODEM_GCU_RIU_CLK_BW_RESET 0x000009AA
+
+static inline void modem_gcu_riu_clk_bw_set(struct cl_hw *cl_hw, u32 value)
+{
+ cl_reg_write(cl_hw, MODEM_GCU_RIU_CLK_BW_ADDR, value);
+}
+
+static inline void modem_gcu_riu_clk_bw_agc_mem_clk_bw_setf(struct cl_hw *cl_hw, u8 agcmemclkbw)
+{
+ ASSERT_ERR((((u32)agcmemclkbw << 8) & ~((u32)0x00000300)) == 0);
+ cl_reg_write(cl_hw, MODEM_GCU_RIU_CLK_BW_ADDR,
+ (cl_reg_read(cl_hw, MODEM_GCU_RIU_CLK_BW_ADDR) & ~((u32)0x00000300)) |
+ ((u32)agcmemclkbw << 8));
+}
+
+/*
+ * @brief STATIC_CONF_0 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 30 ARB_ONESHOT_BYPASS 1
+ * 28 BTC_SEL 0
+ * 27 CLK_SAVE_MODE 0
+ * 26 RF_RST_N_DEFAULT 0
+ * 25 RF_RST_N_REQ 0
+ * 24 FORCE_RSSI_ON 0
+ * 23:20 RSSI_M 0x2
+ * 19:16 RSSI_N 0x6
+ * 03:00 CDB_MODE_MAJ 0x0
+ * </pre>
+ */
+#define RICU_STATIC_CONF_0_ADDR (REG_RICU_BASE_ADDR + 0x00000004)
+#define RICU_STATIC_CONF_0_OFFSET 0x00000004
+#define RICU_STATIC_CONF_0_INDEX 0x00000001
+#define RICU_STATIC_CONF_0_RESET 0x40260000
+
+static inline void ricu_static_conf_0_btc_sel_setf(struct cl_chip *chip, u8 btcsel)
+{
+ ASSERT_ERR_CHIP((((u32)btcsel << 28) & ~((u32)0x10000000)) == 0);
+ cl_reg_write_chip(chip, RICU_STATIC_CONF_0_ADDR,
+ (cl_reg_read_chip(chip, RICU_STATIC_CONF_0_ADDR) & ~((u32)0x10000000)) |
+ ((u32)btcsel << 28));
+}
+
+static inline void ricu_static_conf_0_clk_save_mode_setf(struct cl_chip *chip, u8 clksavemode)
+{
+ ASSERT_ERR_CHIP((((u32)clksavemode << 27) & ~((u32)0x08000000)) == 0);
+ cl_reg_write_chip(chip, RICU_STATIC_CONF_0_ADDR,
+ (cl_reg_read_chip(chip, RICU_STATIC_CONF_0_ADDR) & ~((u32)0x08000000)) |
+ ((u32)clksavemode << 27));
+}
+
+static inline void ricu_static_conf_0_rf_rst_n_req_setf(struct cl_chip *chip, u8 rfrstnreq)
+{
+ ASSERT_ERR_CHIP((((u32)rfrstnreq << 25) & ~((u32)0x02000000)) == 0);
+ cl_reg_write_chip(chip, RICU_STATIC_CONF_0_ADDR,
+ (cl_reg_read_chip(chip, RICU_STATIC_CONF_0_ADDR) & ~((u32)0x02000000)) |
+ ((u32)rfrstnreq << 25));
+}
+
+static inline u8 ricu_static_conf_0_cdb_mode_maj_getf(struct cl_chip *chip)
+{
+ u32 local_val = cl_reg_read_chip(chip, RICU_STATIC_CONF_0_ADDR);
+
+ return ((local_val & ((u32)0x0000000F)) >> 0);
+}
+
+static inline void ricu_static_conf_0_cdb_mode_maj_setf(struct cl_chip *chip, u8 cdbmodemaj)
+{
+ ASSERT_ERR_CHIP((((u32)cdbmodemaj << 0) & ~((u32)0x0000000F)) == 0);
+ cl_reg_write_chip(chip, RICU_STATIC_CONF_0_ADDR,
+ (cl_reg_read_chip(chip, RICU_STATIC_CONF_0_ADDR) & ~((u32)0x0000000F)) |
+ ((u32)cdbmodemaj << 0));
+}
+
+/*
+ * @brief AFE_CTL_0 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31 PBIAS_CTRL_EN_LC 0
+ * 30 PBIAS_CTRL_EN 0
+ * 29 LRD_EN_LC 0
+ * 28 LRD_EN 0
+ * 27 LOCK_EN_LC 0
+ * 26 LOCK_EN 1
+ * 25 EN_GPADC_CLK 0
+ * 24 EN_GPADC 0
+ * 23 FEED_EN_LC 0
+ * 22 FEED_EN 0
+ * 21 EN_CS 1
+ * 20 EN_CML_GEN 1
+ * 18 EN_AFE_LDO 1
+ * 17 EN_ADC_CLK 1
+ * 15 AFC_ENB_LC 0
+ * 14 AFC_ENB 0
+ * 13 CP_MODE_LC 1
+ * 12 BYPASS_LC 0
+ * 11 BYPASS 0
+ * 10 AFCINIT_SEL_LC 1
+ * 09 AFCINIT_SEL 1
+ * 08 EN_CLK_MON 0
+ * 07 EN_DAC_CLK 1
+ * 06 EN_CDB_DAC_CLK 0
+ * 05 EN_CDB_ADC_CLK 0
+ * 03 EN_CDB_GEN 0
+ * 02 DACCLK_PHASESEL 0
+ * 01 ADCCLK_PHASESEL 0
+ * 00 CDB_CLK_RESETB 0
+ * </pre>
+ */
+#define RICU_AFE_CTL_0_ADDR (REG_RICU_BASE_ADDR + 0x00000010)
+#define RICU_AFE_CTL_0_OFFSET 0x00000010
+#define RICU_AFE_CTL_0_INDEX 0x00000004
+#define RICU_AFE_CTL_0_RESET 0x04362680
+
+static inline u32 ricu_afe_ctl_0_get(struct cl_chip *chip)
+{
+ return cl_reg_read_chip(chip, RICU_AFE_CTL_0_ADDR);
+}
+
+static inline void ricu_afe_ctl_0_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, RICU_AFE_CTL_0_ADDR, value);
+}
+
+static inline void ricu_afe_ctl_0_lock_en_lc_setf(struct cl_chip *chip, u8 lockenlc)
+{
+ ASSERT_ERR_CHIP((((u32)lockenlc << 27) & ~((u32)0x08000000)) == 0);
+ cl_reg_write_chip(chip, RICU_AFE_CTL_0_ADDR,
+ (cl_reg_read_chip(chip, RICU_AFE_CTL_0_ADDR) & ~((u32)0x08000000)) |
+ ((u32)lockenlc << 27));
+}
+
+/* Field definitions */
+#define RICU_AFE_CTL_0_PBIAS_CTRL_EN_LC_BIT ((u32)0x80000000)
+#define RICU_AFE_CTL_0_PBIAS_CTRL_EN_LC_POS 31
+#define RICU_AFE_CTL_0_PBIAS_CTRL_EN_BIT ((u32)0x40000000)
+#define RICU_AFE_CTL_0_PBIAS_CTRL_EN_POS 30
+#define RICU_AFE_CTL_0_LRD_EN_LC_BIT ((u32)0x20000000)
+#define RICU_AFE_CTL_0_LRD_EN_LC_POS 29
+#define RICU_AFE_CTL_0_LRD_EN_BIT ((u32)0x10000000)
+#define RICU_AFE_CTL_0_LRD_EN_POS 28
+#define RICU_AFE_CTL_0_LOCK_EN_LC_BIT ((u32)0x08000000)
+#define RICU_AFE_CTL_0_LOCK_EN_LC_POS 27
+#define RICU_AFE_CTL_0_LOCK_EN_BIT ((u32)0x04000000)
+#define RICU_AFE_CTL_0_LOCK_EN_POS 26
+#define RICU_AFE_CTL_0_EN_GPADC_CLK_BIT ((u32)0x02000000)
+#define RICU_AFE_CTL_0_EN_GPADC_CLK_POS 25
+#define RICU_AFE_CTL_0_EN_GPADC_BIT ((u32)0x01000000)
+#define RICU_AFE_CTL_0_EN_GPADC_POS 24
+#define RICU_AFE_CTL_0_FEED_EN_LC_BIT ((u32)0x00800000)
+#define RICU_AFE_CTL_0_FEED_EN_LC_POS 23
+#define RICU_AFE_CTL_0_FEED_EN_BIT ((u32)0x00400000)
+#define RICU_AFE_CTL_0_FEED_EN_POS 22
+#define RICU_AFE_CTL_0_EN_CS_BIT ((u32)0x00200000)
+#define RICU_AFE_CTL_0_EN_CS_POS 21
+#define RICU_AFE_CTL_0_EN_CML_GEN_BIT ((u32)0x00100000)
+#define RICU_AFE_CTL_0_EN_CML_GEN_POS 20
+#define RICU_AFE_CTL_0_EN_AFE_LDO_BIT ((u32)0x00040000)
+#define RICU_AFE_CTL_0_EN_AFE_LDO_POS 18
+#define RICU_AFE_CTL_0_EN_ADC_CLK_BIT ((u32)0x00020000)
+#define RICU_AFE_CTL_0_EN_ADC_CLK_POS 17
+#define RICU_AFE_CTL_0_AFC_ENB_LC_BIT ((u32)0x00008000)
+#define RICU_AFE_CTL_0_AFC_ENB_LC_POS 15
+#define RICU_AFE_CTL_0_AFC_ENB_BIT ((u32)0x00004000)
+#define RICU_AFE_CTL_0_AFC_ENB_POS 14
+#define RICU_AFE_CTL_0_CP_MODE_LC_BIT ((u32)0x00002000)
+#define RICU_AFE_CTL_0_CP_MODE_LC_POS 13
+#define RICU_AFE_CTL_0_BYPASS_LC_BIT ((u32)0x00001000)
+#define RICU_AFE_CTL_0_BYPASS_LC_POS 12
+#define RICU_AFE_CTL_0_BYPASS_BIT ((u32)0x00000800)
+#define RICU_AFE_CTL_0_BYPASS_POS 11
+#define RICU_AFE_CTL_0_AFCINIT_SEL_LC_BIT ((u32)0x00000400)
+#define RICU_AFE_CTL_0_AFCINIT_SEL_LC_POS 10
+#define RICU_AFE_CTL_0_AFCINIT_SEL_BIT ((u32)0x00000200)
+#define RICU_AFE_CTL_0_AFCINIT_SEL_POS 9
+#define RICU_AFE_CTL_0_EN_CLK_MON_BIT ((u32)0x00000100)
+#define RICU_AFE_CTL_0_EN_CLK_MON_POS 8
+#define RICU_AFE_CTL_0_EN_DAC_CLK_BIT ((u32)0x00000080)
+#define RICU_AFE_CTL_0_EN_DAC_CLK_POS 7
+#define RICU_AFE_CTL_0_EN_CDB_DAC_CLK_BIT ((u32)0x00000040)
+#define RICU_AFE_CTL_0_EN_CDB_DAC_CLK_POS 6
+#define RICU_AFE_CTL_0_EN_CDB_ADC_CLK_BIT ((u32)0x00000020)
+#define RICU_AFE_CTL_0_EN_CDB_ADC_CLK_POS 5
+#define RICU_AFE_CTL_0_EN_CDB_GEN_BIT ((u32)0x00000008)
+#define RICU_AFE_CTL_0_EN_CDB_GEN_POS 3
+#define RICU_AFE_CTL_0_DACCLK_PHASESEL_BIT ((u32)0x00000004)
+#define RICU_AFE_CTL_0_DACCLK_PHASESEL_POS 2
+#define RICU_AFE_CTL_0_ADCCLK_PHASESEL_BIT ((u32)0x00000002)
+#define RICU_AFE_CTL_0_ADCCLK_PHASESEL_POS 1
+#define RICU_AFE_CTL_0_CDB_CLK_RESETB_BIT ((u32)0x00000001)
+#define RICU_AFE_CTL_0_CDB_CLK_RESETB_POS 0
+
+static inline void ricu_afe_ctl_0_pbias_ctrl_en_lc_setf(struct cl_chip *chip, u8 pbiasctrlenlc)
+{
+ cl_reg_write_chip(chip, RICU_AFE_CTL_0_ADDR,
+ (cl_reg_read_chip(chip, RICU_AFE_CTL_0_ADDR) & ~((u32)0x80000000)) |
+ ((u32)pbiasctrlenlc << 31));
+}
+
+static inline void ricu_afe_ctl_0_cdb_clk_resetb_setf(struct cl_chip *chip, u8 cdbclkresetb)
+{
+ ASSERT_ERR_CHIP((((u32)cdbclkresetb << 0) & ~((u32)0x00000001)) == 0);
+ cl_reg_write_chip(chip, RICU_AFE_CTL_0_ADDR,
+ (cl_reg_read_chip(chip, RICU_AFE_CTL_0_ADDR) & ~((u32)0x00000001)) |
+ ((u32)cdbclkresetb << 0));
+}
+
+/*
+ * @brief AFE_CTL_1 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 14 VCO_BOOST 0
+ * 13 SYS_ADCCLK_SEL 0
+ * 12 SOC_PHASE_SEL 1
+ * 11 SOC_CLK_SEL 1
+ * 10 RESETB_LC 0
+ * 09 RESETB 1
+ * 08 PBIAS_CTRL_LC 0
+ * 07 PBIAS_CTRL 0
+ * 06 GP_CLK_PHASESEL 0
+ * 05 FSEL_LC 0
+ * 04 FSEL 0
+ * 03 FOUT_MASK_LC 0
+ * 02 FOUT_MASK 0
+ * 01 EXTCLK_SEL 0
+ * 00 EN_PLL_LDO 0
+ * </pre>
+ */
+#define RICU_AFE_CTL_1_ADDR (REG_RICU_BASE_ADDR + 0x00000014)
+#define RICU_AFE_CTL_1_OFFSET 0x00000014
+#define RICU_AFE_CTL_1_INDEX 0x00000005
+#define RICU_AFE_CTL_1_RESET 0x00001A00
+
+static inline void ricu_afe_ctl_1_resetb_lc_setf(struct cl_chip *chip, u8 resetblc)
+{
+ ASSERT_ERR_CHIP((((u32)resetblc << 10) & ~((u32)0x00000400)) == 0);
+ cl_reg_write_chip(chip, RICU_AFE_CTL_1_ADDR,
+ (cl_reg_read_chip(chip, RICU_AFE_CTL_1_ADDR) & ~((u32)0x00000400)) |
+ ((u32)resetblc << 10));
+}
+
+static inline void ricu_afe_ctl_1_en_pll_ldo_setf(struct cl_chip *chip, u8 enpllldo)
+{
+ ASSERT_ERR_CHIP((((u32)enpllldo << 0) & ~((u32)0x00000001)) == 0);
+ cl_reg_write_chip(chip, RICU_AFE_CTL_1_ADDR,
+ (cl_reg_read_chip(chip, RICU_AFE_CTL_1_ADDR) & ~((u32)0x00000001)) |
+ ((u32)enpllldo << 0));
+}
+
+/*
+ * @brief AFE_CTL_2 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 23:22 LOCK_CON_REV_LC 0x0
+ * 21:20 LOCK_CON_REV 0x0
+ * 19:18 LOCK_CON_OUT_LC 0x3
+ * 17:16 LOCK_CON_OUT 0x3
+ * 15:14 LOCK_CON_IN_LC 0x3
+ * 13:12 LOCK_CON_IN 0x3
+ * 11:10 LOCK_CON_DLY_LC 0x3
+ * 09:08 LOCK_CON_DLY 0x3
+ * 07:06 ICP 0x1
+ * 03:02 CTRL_IB 0x2
+ * 01:00 CLK_MON_SEL 0x0
+ * </pre>
+ */
+#define RICU_AFE_CTL_2_ADDR (REG_RICU_BASE_ADDR + 0x00000018)
+#define RICU_AFE_CTL_2_OFFSET 0x00000018
+#define RICU_AFE_CTL_2_INDEX 0x00000006
+#define RICU_AFE_CTL_2_RESET 0x000FFF48
+
+static inline void ricu_afe_ctl_2_lock_con_rev_lc_setf(struct cl_chip *chip, u8 lockconrevlc)
+{
+ ASSERT_ERR_CHIP((((u32)lockconrevlc << 22) & ~((u32)0x00C00000)) == 0);
+ cl_reg_write_chip(chip, RICU_AFE_CTL_2_ADDR,
+ (cl_reg_read_chip(chip, RICU_AFE_CTL_2_ADDR) & ~((u32)0x00C00000)) |
+ ((u32)lockconrevlc << 22));
+}
+
+/*
+ * @brief AFE_CTL_3 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31:28 RSEL 0x0
+ * 27:24 I_CSEL_LC 0xc
+ * 23:20 GM_LC 0xf
+ * 19:16 CSEL_LC 0x3
+ * 15:12 CML_SEL 0x9
+ * 11:09 S_LC 0x0
+ * 08:06 S 0x0
+ * 05:03 LBW_LC 0x7
+ * 02:00 ICP_LC 0x7
+ * </pre>
+ */
+#define RICU_AFE_CTL_3_ADDR (REG_RICU_BASE_ADDR + 0x0000001C)
+#define RICU_AFE_CTL_3_OFFSET 0x0000001C
+#define RICU_AFE_CTL_3_INDEX 0x00000007
+#define RICU_AFE_CTL_3_RESET 0x0CF3903F
+
+static inline void ricu_afe_ctl_3_cml_sel_setf(struct cl_chip *chip, u8 cmlsel)
+{
+ ASSERT_ERR_CHIP((((u32)cmlsel << 12) & ~((u32)0x0000F000)) == 0);
+ cl_reg_write_chip(chip, RICU_AFE_CTL_3_ADDR,
+ (cl_reg_read_chip(chip, RICU_AFE_CTL_3_ADDR) & ~((u32)0x0000F000)) |
+ ((u32)cmlsel << 12));
+}
+
+/*
+ * @brief AFE_CTL_5 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 23:18 MAIN_SEL_7_2 0x0
+ * 17:12 P_LC 0x1
+ * 11:06 P 0xA
+ * 05:00 CAP_BIAS_CODE_LC 0x4
+ * </pre>
+ */
+#define RICU_AFE_CTL_5_ADDR (REG_RICU_BASE_ADDR + 0x00000024)
+#define RICU_AFE_CTL_5_OFFSET 0x00000024
+#define RICU_AFE_CTL_5_INDEX 0x00000009
+#define RICU_AFE_CTL_5_RESET 0x00001284
+
+static inline void ricu_afe_ctl_5_main_sel_7_2_setf(struct cl_chip *chip, u8 mainsel72)
+{
+ ASSERT_ERR_CHIP((((u32)mainsel72 << 18) & ~((u32)0x00FC0000)) == 0);
+ cl_reg_write_chip(chip, RICU_AFE_CTL_5_ADDR,
+ (cl_reg_read_chip(chip, RICU_AFE_CTL_5_ADDR) & ~((u32)0x00FC0000)) |
+ ((u32)mainsel72 << 18));
+}
+
+/*
+ * @brief AFE_CTL_8 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31 EN_REF7 0
+ * 30 EN_REF6 0
+ * 29 EN_REF5 0
+ * 28 EN_REF4 0
+ * 27 EN_REF3 0
+ * 26 EN_REF2 0
+ * 25 EN_REF1 0
+ * 24 EN_REF0 0
+ * 23 EN_EXT_LOAD7 0
+ * 22 EN_EXT_LOAD6 0
+ * 21 EN_EXT_LOAD5 0
+ * 20 EN_EXT_LOAD4 0
+ * 19 EN_EXT_LOAD3 0
+ * 18 EN_EXT_LOAD2 0
+ * 17 EN_EXT_LOAD1 0
+ * 16 EN_EXT_LOAD0 0
+ * 15 CH_CML_SEL7 0
+ * 14 CH_CML_SEL6 0
+ * 13 CH_CML_SEL5 0
+ * 12 CH_CML_SEL4 0
+ * 11 CH_CML_SEL3 0
+ * 10 CH_CML_SEL2 0
+ * 09 CH_CML_SEL1 0
+ * 08 CH_CML_SEL0 0
+ * 07 EN_BGR7 0
+ * 06 EN_BGR6 0
+ * 05 EN_BGR5 0
+ * 04 EN_BGR4 0
+ * 03 EN_BGR3 0
+ * 02 EN_BGR2 0
+ * 01 EN_BGR1 0
+ * 00 EN_BGR0 0
+ * </pre>
+ */
+#define RICU_AFE_CTL_8_ADDR (REG_RICU_BASE_ADDR + 0x00000030)
+#define RICU_AFE_CTL_8_OFFSET 0x00000030
+#define RICU_AFE_CTL_8_INDEX 0x0000000C
+#define RICU_AFE_CTL_8_RESET 0x00000000
+
+static inline u32 ricu_afe_ctl_8_get(struct cl_chip *chip)
+{
+ return cl_reg_read_chip(chip, RICU_AFE_CTL_8_ADDR);
+}
+
+static inline void ricu_afe_ctl_8_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, RICU_AFE_CTL_8_ADDR, value);
+}
+
+/* Field definitions */
+#define RICU_AFE_CTL_8_EN_REF_7_BIT ((u32)0x80000000)
+#define RICU_AFE_CTL_8_EN_REF_7_POS 31
+#define RICU_AFE_CTL_8_EN_REF_6_BIT ((u32)0x40000000)
+#define RICU_AFE_CTL_8_EN_REF_6_POS 30
+#define RICU_AFE_CTL_8_EN_REF_5_BIT ((u32)0x20000000)
+#define RICU_AFE_CTL_8_EN_REF_5_POS 29
+#define RICU_AFE_CTL_8_EN_REF_4_BIT ((u32)0x10000000)
+#define RICU_AFE_CTL_8_EN_REF_4_POS 28
+#define RICU_AFE_CTL_8_EN_REF_3_BIT ((u32)0x08000000)
+#define RICU_AFE_CTL_8_EN_REF_3_POS 27
+#define RICU_AFE_CTL_8_EN_REF_2_BIT ((u32)0x04000000)
+#define RICU_AFE_CTL_8_EN_REF_2_POS 26
+#define RICU_AFE_CTL_8_EN_REF_1_BIT ((u32)0x02000000)
+#define RICU_AFE_CTL_8_EN_REF_1_POS 25
+#define RICU_AFE_CTL_8_EN_REF_0_BIT ((u32)0x01000000)
+#define RICU_AFE_CTL_8_EN_REF_0_POS 24
+#define RICU_AFE_CTL_8_EN_EXT_LOAD_7_BIT ((u32)0x00800000)
+#define RICU_AFE_CTL_8_EN_EXT_LOAD_7_POS 23
+#define RICU_AFE_CTL_8_EN_EXT_LOAD_6_BIT ((u32)0x00400000)
+#define RICU_AFE_CTL_8_EN_EXT_LOAD_6_POS 22
+#define RICU_AFE_CTL_8_EN_EXT_LOAD_5_BIT ((u32)0x00200000)
+#define RICU_AFE_CTL_8_EN_EXT_LOAD_5_POS 21
+#define RICU_AFE_CTL_8_EN_EXT_LOAD_4_BIT ((u32)0x00100000)
+#define RICU_AFE_CTL_8_EN_EXT_LOAD_4_POS 20
+#define RICU_AFE_CTL_8_EN_EXT_LOAD_3_BIT ((u32)0x00080000)
+#define RICU_AFE_CTL_8_EN_EXT_LOAD_3_POS 19
+#define RICU_AFE_CTL_8_EN_EXT_LOAD_2_BIT ((u32)0x00040000)
+#define RICU_AFE_CTL_8_EN_EXT_LOAD_2_POS 18
+#define RICU_AFE_CTL_8_EN_EXT_LOAD_1_BIT ((u32)0x00020000)
+#define RICU_AFE_CTL_8_EN_EXT_LOAD_1_POS 17
+#define RICU_AFE_CTL_8_EN_EXT_LOAD_0_BIT ((u32)0x00010000)
+#define RICU_AFE_CTL_8_EN_EXT_LOAD_0_POS 16
+#define RICU_AFE_CTL_8_CH_CML_SEL_7_BIT ((u32)0x00008000)
+#define RICU_AFE_CTL_8_CH_CML_SEL_7_POS 15
+#define RICU_AFE_CTL_8_CH_CML_SEL_6_BIT ((u32)0x00004000)
+#define RICU_AFE_CTL_8_CH_CML_SEL_6_POS 14
+#define RICU_AFE_CTL_8_CH_CML_SEL_5_BIT ((u32)0x00002000)
+#define RICU_AFE_CTL_8_CH_CML_SEL_5_POS 13
+#define RICU_AFE_CTL_8_CH_CML_SEL_4_BIT ((u32)0x00001000)
+#define RICU_AFE_CTL_8_CH_CML_SEL_4_POS 12
+#define RICU_AFE_CTL_8_CH_CML_SEL_3_BIT ((u32)0x00000800)
+#define RICU_AFE_CTL_8_CH_CML_SEL_3_POS 11
+#define RICU_AFE_CTL_8_CH_CML_SEL_2_BIT ((u32)0x00000400)
+#define RICU_AFE_CTL_8_CH_CML_SEL_2_POS 10
+#define RICU_AFE_CTL_8_CH_CML_SEL_1_BIT ((u32)0x00000200)
+#define RICU_AFE_CTL_8_CH_CML_SEL_1_POS 9
+#define RICU_AFE_CTL_8_CH_CML_SEL_0_BIT ((u32)0x00000100)
+#define RICU_AFE_CTL_8_CH_CML_SEL_0_POS 8
+#define RICU_AFE_CTL_8_EN_BGR_7_BIT ((u32)0x00000080)
+#define RICU_AFE_CTL_8_EN_BGR_7_POS 7
+#define RICU_AFE_CTL_8_EN_BGR_6_BIT ((u32)0x00000040)
+#define RICU_AFE_CTL_8_EN_BGR_6_POS 6
+#define RICU_AFE_CTL_8_EN_BGR_5_BIT ((u32)0x00000020)
+#define RICU_AFE_CTL_8_EN_BGR_5_POS 5
+#define RICU_AFE_CTL_8_EN_BGR_4_BIT ((u32)0x00000010)
+#define RICU_AFE_CTL_8_EN_BGR_4_POS 4
+#define RICU_AFE_CTL_8_EN_BGR_3_BIT ((u32)0x00000008)
+#define RICU_AFE_CTL_8_EN_BGR_3_POS 3
+#define RICU_AFE_CTL_8_EN_BGR_2_BIT ((u32)0x00000004)
+#define RICU_AFE_CTL_8_EN_BGR_2_POS 2
+#define RICU_AFE_CTL_8_EN_BGR_1_BIT ((u32)0x00000002)
+#define RICU_AFE_CTL_8_EN_BGR_1_POS 1
+#define RICU_AFE_CTL_8_EN_BGR_0_BIT ((u32)0x00000001)
+#define RICU_AFE_CTL_8_EN_BGR_0_POS 0
+
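+/*
+ * Usage sketch (illustrative only, not part of the generated register map):
+ * the _BIT/_POS pairs above are intended for read-modify-write sequences
+ * through the plain get/set accessors, e.g. setting EN_BGR0 and clearing
+ * EN_REF0 while leaving the other fields untouched. "chip" is assumed to be
+ * a valid struct cl_chip pointer.
+ *
+ *	u32 val = ricu_afe_ctl_8_get(chip);
+ *
+ *	val |= RICU_AFE_CTL_8_EN_BGR_0_BIT;
+ *	val &= ~RICU_AFE_CTL_8_EN_REF_0_BIT;
+ *	ricu_afe_ctl_8_set(chip, val);
+ */
+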
+/*
+ * @brief AFE_CTL_9 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31 EN_SIN2_BIAS7 1
+ * 30 EN_SIN2_BIAS6 1
+ * 29 EN_SIN2_BIAS5 1
+ * 28 EN_SIN2_BIAS4 1
+ * 27 EN_SIN2_BIAS3 1
+ * 26 EN_SIN2_BIAS2 1
+ * 25 EN_SIN2_BIAS1 1
+ * 24 EN_SIN2_BIAS0 1
+ * 23 EN_DAC_REF7 0
+ * 22 EN_DAC_REF6 0
+ * 21 EN_DAC_REF5 0
+ * 20 EN_DAC_REF4 0
+ * 19 EN_DAC_REF3 0
+ * 18 EN_DAC_REF2 0
+ * 17 EN_DAC_REF1 0
+ * 16 EN_DAC_REF0 0
+ * </pre>
+ */
+#define RICU_AFE_CTL_9_ADDR (REG_RICU_BASE_ADDR + 0x00000034)
+#define RICU_AFE_CTL_9_OFFSET 0x00000034
+#define RICU_AFE_CTL_9_INDEX 0x0000000D
+#define RICU_AFE_CTL_9_RESET 0xFF000000
+
+static inline u32 ricu_afe_ctl_9_get(struct cl_chip *chip)
+{
+ return cl_reg_read_chip(chip, RICU_AFE_CTL_9_ADDR);
+}
+
+static inline void ricu_afe_ctl_9_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, RICU_AFE_CTL_9_ADDR, value);
+}
+
+/* Field definitions */
+#define RICU_AFE_CTL_9_EN_SIN_2_BIAS_7_BIT ((u32)0x80000000)
+#define RICU_AFE_CTL_9_EN_SIN_2_BIAS_7_POS 31
+#define RICU_AFE_CTL_9_EN_SIN_2_BIAS_6_BIT ((u32)0x40000000)
+#define RICU_AFE_CTL_9_EN_SIN_2_BIAS_6_POS 30
+#define RICU_AFE_CTL_9_EN_SIN_2_BIAS_5_BIT ((u32)0x20000000)
+#define RICU_AFE_CTL_9_EN_SIN_2_BIAS_5_POS 29
+#define RICU_AFE_CTL_9_EN_SIN_2_BIAS_4_BIT ((u32)0x10000000)
+#define RICU_AFE_CTL_9_EN_SIN_2_BIAS_4_POS 28
+#define RICU_AFE_CTL_9_EN_SIN_2_BIAS_3_BIT ((u32)0x08000000)
+#define RICU_AFE_CTL_9_EN_SIN_2_BIAS_3_POS 27
+#define RICU_AFE_CTL_9_EN_SIN_2_BIAS_2_BIT ((u32)0x04000000)
+#define RICU_AFE_CTL_9_EN_SIN_2_BIAS_2_POS 26
+#define RICU_AFE_CTL_9_EN_SIN_2_BIAS_1_BIT ((u32)0x02000000)
+#define RICU_AFE_CTL_9_EN_SIN_2_BIAS_1_POS 25
+#define RICU_AFE_CTL_9_EN_SIN_2_BIAS_0_BIT ((u32)0x01000000)
+#define RICU_AFE_CTL_9_EN_SIN_2_BIAS_0_POS 24
+#define RICU_AFE_CTL_9_EN_DAC_REF_7_BIT ((u32)0x00800000)
+#define RICU_AFE_CTL_9_EN_DAC_REF_7_POS 23
+#define RICU_AFE_CTL_9_EN_DAC_REF_6_BIT ((u32)0x00400000)
+#define RICU_AFE_CTL_9_EN_DAC_REF_6_POS 22
+#define RICU_AFE_CTL_9_EN_DAC_REF_5_BIT ((u32)0x00200000)
+#define RICU_AFE_CTL_9_EN_DAC_REF_5_POS 21
+#define RICU_AFE_CTL_9_EN_DAC_REF_4_BIT ((u32)0x00100000)
+#define RICU_AFE_CTL_9_EN_DAC_REF_4_POS 20
+#define RICU_AFE_CTL_9_EN_DAC_REF_3_BIT ((u32)0x00080000)
+#define RICU_AFE_CTL_9_EN_DAC_REF_3_POS 19
+#define RICU_AFE_CTL_9_EN_DAC_REF_2_BIT ((u32)0x00040000)
+#define RICU_AFE_CTL_9_EN_DAC_REF_2_POS 18
+#define RICU_AFE_CTL_9_EN_DAC_REF_1_BIT ((u32)0x00020000)
+#define RICU_AFE_CTL_9_EN_DAC_REF_1_POS 17
+#define RICU_AFE_CTL_9_EN_DAC_REF_0_BIT ((u32)0x00010000)
+#define RICU_AFE_CTL_9_EN_DAC_REF_0_POS 16
+
+/*
+ * @brief AFE_CTL_10 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31 VC_LD7 0
+ * 30 VC_LD6 0
+ * 29 VC_LD5 0
+ * 28 VC_LD4 0
+ * 27 VC_LD3 0
+ * 26 VC_LD2 0
+ * 25 VC_LD1 0
+ * 24 VC_LD0 0
+ * 23 TWOS7 0
+ * 22 TWOS6 0
+ * 21 TWOS5 0
+ * 20 TWOS4 0
+ * 19 TWOS3 0
+ * 18 TWOS2 0
+ * 17 TWOS1 0
+ * 16 TWOS0 0
+ * 07 MINV7 1
+ * 06 MINV6 1
+ * 05 MINV5 1
+ * 04 MINV4 1
+ * 03 MINV3 1
+ * 02 MINV2 1
+ * 01 MINV1 1
+ * 00 MINV0 1
+ * </pre>
+ */
+#define RICU_AFE_CTL_10_ADDR (REG_RICU_BASE_ADDR + 0x00000038)
+#define RICU_AFE_CTL_10_OFFSET 0x00000038
+#define RICU_AFE_CTL_10_INDEX 0x0000000E
+#define RICU_AFE_CTL_10_RESET 0x000000FF
+
+static inline void ricu_afe_ctl_10_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, RICU_AFE_CTL_10_ADDR, value);
+}
+
+/*
+ * @brief AFE_CTL_12 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31:30 EOC_CTRL7 0x0
+ * 29:28 EOC_CTRL6 0x0
+ * 27:26 EOC_CTRL5 0x0
+ * 25:24 EOC_CTRL4 0x0
+ * 23:22 EOC_CTRL3 0x0
+ * 21:20 EOC_CTRL2 0x0
+ * 19:18 EOC_CTRL1 0x0
+ * 17:16 EOC_CTRL0 0x0
+ * 15:14 IC_REFSSF7 0x1
+ * 13:12 IC_REFSSF6 0x1
+ * 11:10 IC_REFSSF5 0x1
+ * 09:08 IC_REFSSF4 0x1
+ * 07:06 IC_REFSSF3 0x1
+ * 05:04 IC_REFSSF2 0x1
+ * 03:02 IC_REFSSF1 0x1
+ * 01:00 IC_REFSSF0 0x1
+ * </pre>
+ */
+#define RICU_AFE_CTL_12_ADDR (REG_RICU_BASE_ADDR + 0x00000040)
+#define RICU_AFE_CTL_12_OFFSET 0x00000040
+#define RICU_AFE_CTL_12_INDEX 0x00000010
+#define RICU_AFE_CTL_12_RESET 0x00005555
+
+static inline void ricu_afe_ctl_12_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, RICU_AFE_CTL_12_ADDR, value);
+}
+
+/* Field definitions */
+#define RICU_AFE_CTL_12_EOC_CTRL_7_MASK ((u32)0xC0000000)
+#define RICU_AFE_CTL_12_EOC_CTRL_7_LSB 30
+#define RICU_AFE_CTL_12_EOC_CTRL_7_WIDTH ((u32)0x00000002)
+#define RICU_AFE_CTL_12_EOC_CTRL_6_MASK ((u32)0x30000000)
+#define RICU_AFE_CTL_12_EOC_CTRL_6_LSB 28
+#define RICU_AFE_CTL_12_EOC_CTRL_6_WIDTH ((u32)0x00000002)
+#define RICU_AFE_CTL_12_EOC_CTRL_5_MASK ((u32)0x0C000000)
+#define RICU_AFE_CTL_12_EOC_CTRL_5_LSB 26
+#define RICU_AFE_CTL_12_EOC_CTRL_5_WIDTH ((u32)0x00000002)
+#define RICU_AFE_CTL_12_EOC_CTRL_4_MASK ((u32)0x03000000)
+#define RICU_AFE_CTL_12_EOC_CTRL_4_LSB 24
+#define RICU_AFE_CTL_12_EOC_CTRL_4_WIDTH ((u32)0x00000002)
+#define RICU_AFE_CTL_12_EOC_CTRL_3_MASK ((u32)0x00C00000)
+#define RICU_AFE_CTL_12_EOC_CTRL_3_LSB 22
+#define RICU_AFE_CTL_12_EOC_CTRL_3_WIDTH ((u32)0x00000002)
+#define RICU_AFE_CTL_12_EOC_CTRL_2_MASK ((u32)0x00300000)
+#define RICU_AFE_CTL_12_EOC_CTRL_2_LSB 20
+#define RICU_AFE_CTL_12_EOC_CTRL_2_WIDTH ((u32)0x00000002)
+#define RICU_AFE_CTL_12_EOC_CTRL_1_MASK ((u32)0x000C0000)
+#define RICU_AFE_CTL_12_EOC_CTRL_1_LSB 18
+#define RICU_AFE_CTL_12_EOC_CTRL_1_WIDTH ((u32)0x00000002)
+#define RICU_AFE_CTL_12_EOC_CTRL_0_MASK ((u32)0x00030000)
+#define RICU_AFE_CTL_12_EOC_CTRL_0_LSB 16
+#define RICU_AFE_CTL_12_EOC_CTRL_0_WIDTH ((u32)0x00000002)
+#define RICU_AFE_CTL_12_IC_REFSSF_7_MASK ((u32)0x0000C000)
+#define RICU_AFE_CTL_12_IC_REFSSF_7_LSB 14
+#define RICU_AFE_CTL_12_IC_REFSSF_7_WIDTH ((u32)0x00000002)
+#define RICU_AFE_CTL_12_IC_REFSSF_6_MASK ((u32)0x00003000)
+#define RICU_AFE_CTL_12_IC_REFSSF_6_LSB 12
+#define RICU_AFE_CTL_12_IC_REFSSF_6_WIDTH ((u32)0x00000002)
+#define RICU_AFE_CTL_12_IC_REFSSF_5_MASK ((u32)0x00000C00)
+#define RICU_AFE_CTL_12_IC_REFSSF_5_LSB 10
+#define RICU_AFE_CTL_12_IC_REFSSF_5_WIDTH ((u32)0x00000002)
+#define RICU_AFE_CTL_12_IC_REFSSF_4_MASK ((u32)0x00000300)
+#define RICU_AFE_CTL_12_IC_REFSSF_4_LSB 8
+#define RICU_AFE_CTL_12_IC_REFSSF_4_WIDTH ((u32)0x00000002)
+#define RICU_AFE_CTL_12_IC_REFSSF_3_MASK ((u32)0x000000C0)
+#define RICU_AFE_CTL_12_IC_REFSSF_3_LSB 6
+#define RICU_AFE_CTL_12_IC_REFSSF_3_WIDTH ((u32)0x00000002)
+#define RICU_AFE_CTL_12_IC_REFSSF_2_MASK ((u32)0x00000030)
+#define RICU_AFE_CTL_12_IC_REFSSF_2_LSB 4
+#define RICU_AFE_CTL_12_IC_REFSSF_2_WIDTH ((u32)0x00000002)
+#define RICU_AFE_CTL_12_IC_REFSSF_1_MASK ((u32)0x0000000C)
+#define RICU_AFE_CTL_12_IC_REFSSF_1_LSB 2
+#define RICU_AFE_CTL_12_IC_REFSSF_1_WIDTH ((u32)0x00000002)
+#define RICU_AFE_CTL_12_IC_REFSSF_0_MASK ((u32)0x00000003)
+#define RICU_AFE_CTL_12_IC_REFSSF_0_LSB 0
+#define RICU_AFE_CTL_12_IC_REFSSF_0_WIDTH ((u32)0x00000002)
+
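+/*
+ * Usage sketch (illustrative only): multi-bit fields expose _MASK/_LSB/_WIDTH
+ * macros instead of _BIT/_POS. Only a set accessor is generated for
+ * AFE_CTL_12, so the value is composed from the reset default rather than
+ * read back; "chip" is assumed to be a valid struct cl_chip pointer and
+ * "eoc_ctrl" a caller-provided 2-bit value.
+ *
+ *	u32 val = RICU_AFE_CTL_12_RESET;
+ *
+ *	val &= ~RICU_AFE_CTL_12_EOC_CTRL_0_MASK;
+ *	val |= (u32)eoc_ctrl << RICU_AFE_CTL_12_EOC_CTRL_0_LSB;
+ *	ricu_afe_ctl_12_set(chip, val);
+ */
+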
+/*
+ * @brief AFE_CTL_13 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31 FORCE_ADC_ON_PHY1 0
+ * 30 FORCE_ADC_ON_PHY0 0
+ * 16 EN_LB8_AUX 0
+ * 15 EN_LB7 0
+ * 14 EN_LB6 0
+ * 13 EN_LB5 0
+ * 12 EN_LB4 0
+ * 11 EN_LB3 0
+ * 10 EN_LB2 0
+ * 09 EN_LB1 0
+ * 08 EN_LB0 0
+ * </pre>
+ */
+#define RICU_AFE_CTL_13_ADDR (REG_RICU_BASE_ADDR + 0x00000044)
+#define RICU_AFE_CTL_13_OFFSET 0x00000044
+#define RICU_AFE_CTL_13_INDEX 0x00000011
+#define RICU_AFE_CTL_13_RESET 0x00000000
+
+static inline void ricu_afe_ctl_13_pack(struct cl_chip *chip, u8 forceadconphy1, u8 forceadconphy0,
+ u8 enlb8aux, u8 enlb7, u8 enlb6, u8 enlb5, u8 enlb4,
+ u8 enlb3, u8 enlb2, u8 enlb1, u8 enlb0)
+{
+ ASSERT_ERR_CHIP((((u32)forceadconphy0 << 30) & ~((u32)0x40000000)) == 0);
+ ASSERT_ERR_CHIP((((u32)enlb8aux << 16) & ~((u32)0x00010000)) == 0);
+ ASSERT_ERR_CHIP((((u32)enlb7 << 15) & ~((u32)0x00008000)) == 0);
+ ASSERT_ERR_CHIP((((u32)enlb6 << 14) & ~((u32)0x00004000)) == 0);
+ ASSERT_ERR_CHIP((((u32)enlb5 << 13) & ~((u32)0x00002000)) == 0);
+ ASSERT_ERR_CHIP((((u32)enlb4 << 12) & ~((u32)0x00001000)) == 0);
+ ASSERT_ERR_CHIP((((u32)enlb3 << 11) & ~((u32)0x00000800)) == 0);
+ ASSERT_ERR_CHIP((((u32)enlb2 << 10) & ~((u32)0x00000400)) == 0);
+ ASSERT_ERR_CHIP((((u32)enlb1 << 9) & ~((u32)0x00000200)) == 0);
+ ASSERT_ERR_CHIP((((u32)enlb0 << 8) & ~((u32)0x00000100)) == 0);
+ cl_reg_write_chip(chip, RICU_AFE_CTL_13_ADDR,
+ ((u32)forceadconphy1 << 31) | ((u32)forceadconphy0 << 30) |
+ ((u32)enlb8aux << 16) | ((u32)enlb7 << 15) | ((u32)enlb6 << 14) |
+ ((u32)enlb5 << 13) | ((u32)enlb4 << 12) | ((u32)enlb3 << 11) |
+ ((u32)enlb2 << 10) | ((u32)enlb1 << 9) | ((u32)enlb0 << 8));
+}
+
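+/*
+ * Usage sketch (illustrative only): the _pack helpers validate the field
+ * arguments with ASSERT_ERR_CHIP and then write the whole register in a
+ * single access. Setting FORCE_ADC_ON_PHY1 and EN_LB0 while clearing every
+ * other field would look like this (arguments follow the bit layout above,
+ * MSB first; "chip" is assumed to be a valid struct cl_chip pointer):
+ *
+ *	ricu_afe_ctl_13_pack(chip, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1);
+ */
+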
+/*
+ * @brief AFE_CTL_15 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 30:28 EN_OUT_CM7 0x0
+ * 26:24 EN_OUT_CM6 0x0
+ * 22:20 EN_OUT_CM5 0x0
+ * 18:16 EN_OUT_CM4 0x0
+ * 14:12 EN_OUT_CM3 0x0
+ * 10:08 EN_OUT_CM2 0x0
+ * 06:04 EN_OUT_CM1 0x0
+ * 02:00 EN_OUT_CM0 0x0
+ * </pre>
+ */
+#define RICU_AFE_CTL_15_ADDR (REG_RICU_BASE_ADDR + 0x0000004C)
+#define RICU_AFE_CTL_15_OFFSET 0x0000004C
+#define RICU_AFE_CTL_15_INDEX 0x00000013
+#define RICU_AFE_CTL_15_RESET 0x00000000
+
+static inline void ricu_afe_ctl_15_pack(struct cl_chip *chip, u8 enoutcm7, u8 enoutcm6,
+ u8 enoutcm5, u8 enoutcm4, u8 enoutcm3,
+ u8 enoutcm2, u8 enoutcm1, u8 enoutcm0)
+{
+ ASSERT_ERR_CHIP((((u32)enoutcm7 << 28) & ~((u32)0x70000000)) == 0);
+ ASSERT_ERR_CHIP((((u32)enoutcm6 << 24) & ~((u32)0x07000000)) == 0);
+ ASSERT_ERR_CHIP((((u32)enoutcm5 << 20) & ~((u32)0x00700000)) == 0);
+ ASSERT_ERR_CHIP((((u32)enoutcm4 << 16) & ~((u32)0x00070000)) == 0);
+ ASSERT_ERR_CHIP((((u32)enoutcm3 << 12) & ~((u32)0x00007000)) == 0);
+ ASSERT_ERR_CHIP((((u32)enoutcm2 << 8) & ~((u32)0x00000700)) == 0);
+ ASSERT_ERR_CHIP((((u32)enoutcm1 << 4) & ~((u32)0x00000070)) == 0);
+ ASSERT_ERR_CHIP((((u32)enoutcm0 << 0) & ~((u32)0x00000007)) == 0);
+ cl_reg_write_chip(chip, RICU_AFE_CTL_15_ADDR,
+ ((u32)enoutcm7 << 28) | ((u32)enoutcm6 << 24) | ((u32)enoutcm5 << 20) |
+ ((u32)enoutcm4 << 16) | ((u32)enoutcm3 << 12) | ((u32)enoutcm2 << 8) |
+ ((u32)enoutcm1 << 4) | ((u32)enoutcm0 << 0));
+}
+
+/*
+ * @brief AFE_CTL_17 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 30:28 VC_REF7 0x1
+ * 26:24 VC_REF6 0x1
+ * 22:20 VC_REF5 0x1
+ * 18:16 VC_REF4 0x1
+ * 14:12 VC_REF3 0x1
+ * 10:08 VC_REF2 0x1
+ * 06:04 VC_REF1 0x1
+ * 02:00 VC_REF0 0x1
+ * </pre>
+ */
+#define RICU_AFE_CTL_17_ADDR (REG_RICU_BASE_ADDR + 0x00000054)
+#define RICU_AFE_CTL_17_OFFSET 0x00000054
+#define RICU_AFE_CTL_17_INDEX 0x00000015
+#define RICU_AFE_CTL_17_RESET 0x11111111
+
+static inline void ricu_afe_ctl_17_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, RICU_AFE_CTL_17_ADDR, value);
+}
+
+/*
+ * @brief AFE_CTL_19 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31:28 COMP_CTRL7 0x4
+ * 27:24 COMP_CTRL6 0x4
+ * 23:20 COMP_CTRL5 0x4
+ * 19:16 COMP_CTRL4 0x4
+ * 15:12 COMP_CTRL3 0x4
+ * 11:08 COMP_CTRL2 0x4
+ * 07:04 COMP_CTRL1 0x4
+ * 03:00 COMP_CTRL0 0x4
+ * </pre>
+ */
+#define RICU_AFE_CTL_19_ADDR (REG_RICU_BASE_ADDR + 0x0000005C)
+#define RICU_AFE_CTL_19_OFFSET 0x0000005C
+#define RICU_AFE_CTL_19_INDEX 0x00000017
+#define RICU_AFE_CTL_19_RESET 0x44444444
+
+static inline void ricu_afe_ctl_19_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, RICU_AFE_CTL_19_ADDR, value);
+}
+
+/*
+ * @brief AFE_CTL_23 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 30:28 VC_LD_AVDI7 0x3
+ * 26:24 VC_LD_AVDI6 0x3
+ * 22:20 VC_LD_AVDI5 0x3
+ * 18:16 VC_LD_AVDI4 0x3
+ * 14:12 VC_LD_AVDI3 0x3
+ * 10:08 VC_LD_AVDI2 0x3
+ * 06:04 VC_LD_AVDI1 0x3
+ * 02:00 VC_LD_AVDI0 0x3
+ * </pre>
+ */
+#define RICU_AFE_CTL_23_ADDR (REG_RICU_BASE_ADDR + 0x0000006C)
+#define RICU_AFE_CTL_23_OFFSET 0x0000006C
+#define RICU_AFE_CTL_23_INDEX 0x0000001B
+#define RICU_AFE_CTL_23_RESET 0x33333333
+
+static inline void ricu_afe_ctl_23_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, RICU_AFE_CTL_23_ADDR, value);
+}
+
+/*
+ * @brief AFE_CTL_24 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 30:28 VC_LD_AVDQ7 0x3
+ * 26:24 VC_LD_AVDQ6 0x3
+ * 22:20 VC_LD_AVDQ5 0x3
+ * 18:16 VC_LD_AVDQ4 0x3
+ * 14:12 VC_LD_AVDQ3 0x3
+ * 10:08 VC_LD_AVDQ2 0x3
+ * 06:04 VC_LD_AVDQ1 0x3
+ * 02:00 VC_LD_AVDQ0 0x3
+ * </pre>
+ */
+#define RICU_AFE_CTL_24_ADDR (REG_RICU_BASE_ADDR + 0x00000070)
+#define RICU_AFE_CTL_24_OFFSET 0x00000070
+#define RICU_AFE_CTL_24_INDEX 0x0000001C
+#define RICU_AFE_CTL_24_RESET 0x33333333
+
+static inline void ricu_afe_ctl_24_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, RICU_AFE_CTL_24_ADDR, value);
+}
+
+/*
+ * @brief AFE_CTL_25 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 16 ROSEL0 0
+ * 14:08 RO_CTRLQ0 0x7
+ * 06:00 RO_CTRLI0 0x7
+ * </pre>
+ */
+#define RICU_AFE_CTL_25_ADDR (REG_RICU_BASE_ADDR + 0x00000074)
+#define RICU_AFE_CTL_25_OFFSET 0x00000074
+#define RICU_AFE_CTL_25_INDEX 0x0000001D
+#define RICU_AFE_CTL_25_RESET 0x00000707
+
+static inline void ricu_afe_ctl_25_pack(struct cl_chip *chip, u8 rosel0, u8 roctrlq0, u8 roctrli0)
+{
+ ASSERT_ERR_CHIP((((u32)rosel0 << 16) & ~((u32)0x00010000)) == 0);
+ ASSERT_ERR_CHIP((((u32)roctrlq0 << 8) & ~((u32)0x00007F00)) == 0);
+ ASSERT_ERR_CHIP((((u32)roctrli0 << 0) & ~((u32)0x0000007F)) == 0);
+ cl_reg_write_chip(chip, RICU_AFE_CTL_25_ADDR,
+ ((u32)rosel0 << 16) | ((u32)roctrlq0 << 8) | ((u32)roctrli0 << 0));
+}
+
+/*
+ * @brief AFE_CTL_26 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 16 ROSEL1 0
+ * 14:08 RO_CTRLQ1 0x7
+ * 06:00 RO_CTRLI1 0x7
+ * </pre>
+ */
+#define RICU_AFE_CTL_26_ADDR (REG_RICU_BASE_ADDR + 0x00000078)
+#define RICU_AFE_CTL_26_OFFSET 0x00000078
+#define RICU_AFE_CTL_26_INDEX 0x0000001E
+#define RICU_AFE_CTL_26_RESET 0x00000707
+
+static inline void ricu_afe_ctl_26_pack(struct cl_chip *chip, u8 rosel1, u8 roctrlq1, u8 roctrli1)
+{
+ ASSERT_ERR_CHIP((((u32)rosel1 << 16) & ~((u32)0x00010000)) == 0);
+ ASSERT_ERR_CHIP((((u32)roctrlq1 << 8) & ~((u32)0x00007F00)) == 0);
+ ASSERT_ERR_CHIP((((u32)roctrli1 << 0) & ~((u32)0x0000007F)) == 0);
+ cl_reg_write_chip(chip, RICU_AFE_CTL_26_ADDR,
+ ((u32)rosel1 << 16) | ((u32)roctrlq1 << 8) | ((u32)roctrli1 << 0));
+}
+
+/*
+ * @brief AFE_CTL_27 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 16 ROSEL2 0
+ * 14:08 RO_CTRLQ2 0x7
+ * 06:00 RO_CTRLI2 0x7
+ * </pre>
+ */
+#define RICU_AFE_CTL_27_ADDR (REG_RICU_BASE_ADDR + 0x0000007C)
+#define RICU_AFE_CTL_27_OFFSET 0x0000007C
+#define RICU_AFE_CTL_27_INDEX 0x0000001F
+#define RICU_AFE_CTL_27_RESET 0x00000707
+
+static inline void ricu_afe_ctl_27_pack(struct cl_chip *chip, u8 rosel2, u8 roctrlq2, u8 roctrli2)
+{
+ ASSERT_ERR_CHIP((((u32)rosel2 << 16) & ~((u32)0x00010000)) == 0);
+ ASSERT_ERR_CHIP((((u32)roctrlq2 << 8) & ~((u32)0x00007F00)) == 0);
+ ASSERT_ERR_CHIP((((u32)roctrli2 << 0) & ~((u32)0x0000007F)) == 0);
+ cl_reg_write_chip(chip, RICU_AFE_CTL_27_ADDR,
+ ((u32)rosel2 << 16) | ((u32)roctrlq2 << 8) | ((u32)roctrli2 << 0));
+}
+
+/*
+ * @brief AFE_CTL_29 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 30:28 VC_CML7_I 0x5
+ * 26:24 VC_CML6_I 0x5
+ * 22:20 VC_CML5_I 0x5
+ * 18:16 VC_CML4_I 0x5
+ * 14:12 VC_CML3_I 0x5
+ * 10:08 VC_CML2_I 0x5
+ * 06:04 VC_CML1_I 0x5
+ * 02:00 VC_CML0_I 0x5
+ * </pre>
+ */
+#define RICU_AFE_CTL_29_ADDR (REG_RICU_BASE_ADDR + 0x00000084)
+#define RICU_AFE_CTL_29_OFFSET 0x00000084
+#define RICU_AFE_CTL_29_INDEX 0x00000021
+#define RICU_AFE_CTL_29_RESET 0x55555555
+
+static inline void ricu_afe_ctl_29_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, RICU_AFE_CTL_29_ADDR, value);
+}
+
+/*
+ * @brief AFE_CTL_30 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 30:28 VC_CML7_Q 0x5
+ * 26:24 VC_CML6_Q 0x5
+ * 22:20 VC_CML5_Q 0x5
+ * 18:16 VC_CML4_Q 0x5
+ * 14:12 VC_CML3_Q 0x5
+ * 10:08 VC_CML2_Q 0x5
+ * 06:04 VC_CML1_Q 0x5
+ * 02:00 VC_CML0_Q 0x5
+ * </pre>
+ */
+#define RICU_AFE_CTL_30_ADDR (REG_RICU_BASE_ADDR + 0x00000088)
+#define RICU_AFE_CTL_30_OFFSET 0x00000088
+#define RICU_AFE_CTL_30_INDEX 0x00000022
+#define RICU_AFE_CTL_30_RESET 0x55555555
+
+static inline void ricu_afe_ctl_30_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, RICU_AFE_CTL_30_ADDR, value);
+}
+
+/*
+ * @brief AFE_CTL_33 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 16 ROSEL3 0
+ * 14:08 RO_CTRL3_Q 0x7
+ * 06:00 RO_CTRL3_I 0x7
+ * </pre>
+ */
+#define RICU_AFE_CTL_33_ADDR (REG_RICU_BASE_ADDR + 0x00000094)
+#define RICU_AFE_CTL_33_OFFSET 0x00000094
+#define RICU_AFE_CTL_33_INDEX 0x00000025
+#define RICU_AFE_CTL_33_RESET 0x00000707
+
+static inline void ricu_afe_ctl_33_pack(struct cl_chip *chip, u8 rosel3, u8 roctrl3q, u8 roctrl3i)
+{
+ ASSERT_ERR_CHIP((((u32)rosel3 << 16) & ~((u32)0x00010000)) == 0);
+ ASSERT_ERR_CHIP((((u32)roctrl3q << 8) & ~((u32)0x00007F00)) == 0);
+ ASSERT_ERR_CHIP((((u32)roctrl3i << 0) & ~((u32)0x0000007F)) == 0);
+ cl_reg_write_chip(chip, RICU_AFE_CTL_33_ADDR,
+ ((u32)rosel3 << 16) | ((u32)roctrl3q << 8) | ((u32)roctrl3i << 0));
+}
+
+/*
+ * @brief AFE_CTRL_34_PHY_0 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 06 PHY0_ADC_SB_IGNORE_FIFO_INDICATION 0
+ * 05:02 PHY0_ADC_SB_RD_DELAY 0x4
+ * 01:00 PHY0_ADC_SB_MODE 0x0
+ * </pre>
+ */
+#define RICU_AFE_CTRL_34_PHY_0_ADDR (REG_RICU_BASE_ADDR + 0x0000009C)
+#define RICU_AFE_CTRL_34_PHY_0_OFFSET 0x0000009C
+#define RICU_AFE_CTRL_34_PHY_0_INDEX 0x00000027
+#define RICU_AFE_CTRL_34_PHY_0_RESET 0x00000010
+
+static inline
+void ricu_afe_ctrl_34_phy_0_adc_sb_ignore_fifo_indication_setf(struct cl_chip *chip,
+ u8 phy0adcsbignorefifoindication)
+{
+ ASSERT_ERR_CHIP((((u32)phy0adcsbignorefifoindication << 6) & ~((u32)0x00000040)) == 0);
+ cl_reg_write_chip(chip, RICU_AFE_CTRL_34_PHY_0_ADDR,
+ (cl_reg_read_chip(chip, RICU_AFE_CTRL_34_PHY_0_ADDR) &
+ ~((u32)0x00000040)) | ((u32)phy0adcsbignorefifoindication << 6));
+}
+
+static inline void ricu_afe_ctrl_34_phy_0_adc_sb_rd_delay_setf(struct cl_chip *chip,
+ u8 phy0adcsbrddelay)
+{
+ ASSERT_ERR_CHIP((((u32)phy0adcsbrddelay << 2) & ~((u32)0x0000003C)) == 0);
+ cl_reg_write_chip(chip, RICU_AFE_CTRL_34_PHY_0_ADDR,
+ (cl_reg_read_chip(chip, RICU_AFE_CTRL_34_PHY_0_ADDR) &
+ ~((u32)0x0000003C)) | ((u32)phy0adcsbrddelay << 2));
+}
+
+/*
+ * @brief AFE_CTRL_36_PHY_0 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 07 PHY0_ADC_ALWAYS_EN_LD_IR 0
+ * 06 PHY0_ADC_ALWAYS_EN_LD_AVDQ 0
+ * 05 PHY0_ADC_ALWAYS_EN_LD_AVDI 0
+ * 04 PHY0_ADC_ALWAYS_EN_ADCQ 0
+ * 03 PHY0_ADC_ALWAYS_EN_ADCI 0
+ * 01 PHY0_HW_MODE_DAC 0
+ * 00 PHY0_HW_MODE_ADC 0
+ * </pre>
+ */
+#define RICU_AFE_CTRL_36_PHY_0_ADDR (REG_RICU_BASE_ADDR + 0x000000A0)
+#define RICU_AFE_CTRL_36_PHY_0_OFFSET 0x000000A0
+#define RICU_AFE_CTRL_36_PHY_0_INDEX 0x00000028
+#define RICU_AFE_CTRL_36_PHY_0_RESET 0x00000000
+
+static inline u32 ricu_afe_ctrl_36_phy_0_get(struct cl_chip *chip)
+{
+ return cl_reg_read_chip(chip, RICU_AFE_CTRL_36_PHY_0_ADDR);
+}
+
+static inline void ricu_afe_ctrl_36_phy_0_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, RICU_AFE_CTRL_36_PHY_0_ADDR, value);
+}
+
+/* Field definitions */
+#define RICU_AFE_CTRL_36_PHY_0_ADC_ALWAYS_EN_LD_IR_BIT ((u32)0x00000080)
+#define RICU_AFE_CTRL_36_PHY_0_ADC_ALWAYS_EN_LD_IR_POS 7
+#define RICU_AFE_CTRL_36_PHY_0_ADC_ALWAYS_EN_LD_AVDQ_BIT ((u32)0x00000040)
+#define RICU_AFE_CTRL_36_PHY_0_ADC_ALWAYS_EN_LD_AVDQ_POS 6
+#define RICU_AFE_CTRL_36_PHY_0_ADC_ALWAYS_EN_LD_AVDI_BIT ((u32)0x00000020)
+#define RICU_AFE_CTRL_36_PHY_0_ADC_ALWAYS_EN_LD_AVDI_POS 5
+#define RICU_AFE_CTRL_36_PHY_0_ADC_ALWAYS_EN_ADCQ_BIT ((u32)0x00000010)
+#define RICU_AFE_CTRL_36_PHY_0_ADC_ALWAYS_EN_ADCQ_POS 4
+#define RICU_AFE_CTRL_36_PHY_0_ADC_ALWAYS_EN_ADCI_BIT ((u32)0x00000008)
+#define RICU_AFE_CTRL_36_PHY_0_ADC_ALWAYS_EN_ADCI_POS 3
+#define RICU_AFE_CTRL_36_PHY_0_HW_MODE_DAC_BIT ((u32)0x00000002)
+#define RICU_AFE_CTRL_36_PHY_0_HW_MODE_DAC_POS 1
+#define RICU_AFE_CTRL_36_PHY_0_HW_MODE_ADC_BIT ((u32)0x00000001)
+#define RICU_AFE_CTRL_36_PHY_0_HW_MODE_ADC_POS 0
+
+static inline void ricu_afe_ctrl_36_phy_0_hw_mode_dac_setf(struct cl_chip *chip, u8 phy0hwmodedac)
+{
+ ASSERT_ERR_CHIP((((u32)phy0hwmodedac << 1) & ~((u32)0x00000002)) == 0);
+ cl_reg_write_chip(chip, RICU_AFE_CTRL_36_PHY_0_ADDR,
+ (cl_reg_read_chip(chip, RICU_AFE_CTRL_36_PHY_0_ADDR) &
+ ~((u32)0x00000002)) | ((u32)phy0hwmodedac << 1));
+}
+
+static inline void ricu_afe_ctrl_36_phy_0_hw_mode_adc_setf(struct cl_chip *chip, u8 phy0hwmodeadc)
+{
+ ASSERT_ERR_CHIP((((u32)phy0hwmodeadc << 0) & ~((u32)0x00000001)) == 0);
+ cl_reg_write_chip(chip, RICU_AFE_CTRL_36_PHY_0_ADDR,
+ (cl_reg_read_chip(chip, RICU_AFE_CTRL_36_PHY_0_ADDR) &
+ ~((u32)0x00000001)) | ((u32)phy0hwmodeadc << 0));
+}
+
+/*
+ * @brief AFE_CTRL_34_PHY_1 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 06 PHY1_ADC_SB_IGNORE_FIFO_INDICATION 0
+ * 05:02 PHY1_ADC_SB_RD_DELAY 0x4
+ * 01:00 PHY1_ADC_SB_MODE 0x0
+ * </pre>
+ */
+#define RICU_AFE_CTRL_34_PHY_1_ADDR (REG_RICU_BASE_ADDR + 0x000000A4)
+#define RICU_AFE_CTRL_34_PHY_1_OFFSET 0x000000A4
+#define RICU_AFE_CTRL_34_PHY_1_INDEX 0x00000029
+#define RICU_AFE_CTRL_34_PHY_1_RESET 0x00000010
+
+static inline
+void ricu_afe_ctrl_34_phy_1_adc_sb_ignore_fifo_indication_setf(struct cl_chip *chip,
+ u8 phy1adcsbignorefifoindication)
+{
+ ASSERT_ERR_CHIP((((u32)phy1adcsbignorefifoindication << 6) & ~((u32)0x00000040)) == 0);
+ cl_reg_write_chip(chip, RICU_AFE_CTRL_34_PHY_1_ADDR,
+ (cl_reg_read_chip(chip, RICU_AFE_CTRL_34_PHY_1_ADDR) &
+ ~((u32)0x00000040)) | ((u32)phy1adcsbignorefifoindication << 6));
+}
+
+static inline void ricu_afe_ctrl_34_phy_1_adc_sb_rd_delay_setf(struct cl_chip *chip,
+ u8 phy1adcsbrddelay)
+{
+ ASSERT_ERR_CHIP((((u32)phy1adcsbrddelay << 2) & ~((u32)0x0000003C)) == 0);
+ cl_reg_write_chip(chip, RICU_AFE_CTRL_34_PHY_1_ADDR,
+ (cl_reg_read_chip(chip, RICU_AFE_CTRL_34_PHY_1_ADDR) &
+ ~((u32)0x0000003C)) | ((u32)phy1adcsbrddelay << 2));
+}
+
+/*
+ * @brief AFE_CTRL_35_PHY_0 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 06 PHY0_DAC_SB_IGNORE_FIFO_INDICATION 0
+ * 05:02 PHY0_DAC_SB_RD_DELAY 0x1
+ * 01:00 PHY0_DAC_SB_MODE 0x0
+ * </pre>
+ */
+#define RICU_AFE_CTRL_35_PHY_0_ADDR (REG_RICU_BASE_ADDR + 0x000000A8)
+#define RICU_AFE_CTRL_35_PHY_0_OFFSET 0x000000A8
+#define RICU_AFE_CTRL_35_PHY_0_INDEX 0x0000002A
+#define RICU_AFE_CTRL_35_PHY_0_RESET 0x00000004
+
+static inline
+void ricu_afe_ctrl_35_phy_0_dac_sb_ignore_fifo_indication_setf(struct cl_chip *chip,
+ u8 phy0dacsbignorefifoindication)
+{
+ ASSERT_ERR_CHIP((((u32)phy0dacsbignorefifoindication << 6) & ~((u32)0x00000040)) == 0);
+ cl_reg_write_chip(chip, RICU_AFE_CTRL_35_PHY_0_ADDR,
+ (cl_reg_read_chip(chip, RICU_AFE_CTRL_35_PHY_0_ADDR) &
+ ~((u32)0x00000040)) | ((u32)phy0dacsbignorefifoindication << 6));
+}
+
+static inline void ricu_afe_ctrl_35_phy_0_dac_sb_rd_delay_setf(struct cl_chip *chip,
+ u8 phy0dacsbrddelay)
+{
+ ASSERT_ERR_CHIP((((u32)phy0dacsbrddelay << 2) & ~((u32)0x0000003C)) == 0);
+ cl_reg_write_chip(chip, RICU_AFE_CTRL_35_PHY_0_ADDR,
+ (cl_reg_read_chip(chip, RICU_AFE_CTRL_35_PHY_0_ADDR) &
+ ~((u32)0x0000003C)) | ((u32)phy0dacsbrddelay << 2));
+}
+
+/*
+ * @brief AFE_CTRL_35_PHY_1 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 06 PHY1_DAC_SB_IGNORE_FIFO_INDICATION 0
+ * 05:02 PHY1_DAC_SB_RD_DELAY 0x1
+ * 01:00 PHY1_DAC_SB_MODE 0x0
+ * </pre>
+ */
+#define RICU_AFE_CTRL_35_PHY_1_ADDR (REG_RICU_BASE_ADDR + 0x000000AC)
+#define RICU_AFE_CTRL_35_PHY_1_OFFSET 0x000000AC
+#define RICU_AFE_CTRL_35_PHY_1_INDEX 0x0000002B
+#define RICU_AFE_CTRL_35_PHY_1_RESET 0x00000004
+
+static inline
+void ricu_afe_ctrl_35_phy_1_dac_sb_ignore_fifo_indication_setf(struct cl_chip *chip,
+ u8 phy1dacsbignorefifoindication)
+{
+ ASSERT_ERR_CHIP((((u32)phy1dacsbignorefifoindication << 6) & ~((u32)0x00000040)) == 0);
+ cl_reg_write_chip(chip, RICU_AFE_CTRL_35_PHY_1_ADDR,
+ (cl_reg_read_chip(chip, RICU_AFE_CTRL_35_PHY_1_ADDR) &
+ ~((u32)0x00000040)) | ((u32)phy1dacsbignorefifoindication << 6));
+}
+
+static inline void ricu_afe_ctrl_35_phy_1_dac_sb_rd_delay_setf(struct cl_chip *chip,
+ u8 phy1dacsbrddelay)
+{
+ ASSERT_ERR_CHIP((((u32)phy1dacsbrddelay << 2) & ~((u32)0x0000003C)) == 0);
+ cl_reg_write_chip(chip, RICU_AFE_CTRL_35_PHY_1_ADDR,
+ (cl_reg_read_chip(chip, RICU_AFE_CTRL_35_PHY_1_ADDR) &
+ ~((u32)0x0000003C)) | ((u32)phy1dacsbrddelay << 2));
+}
+
+/*
+ * @brief AFE_CTRL_37_PHY_0 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 05 PHY0_EN_DAC5 0
+ * 04 PHY0_EN_DAC4 0
+ * 03 PHY0_EN_DAC3 0
+ * 02 PHY0_EN_DAC2 0
+ * 01 PHY0_EN_DAC1 0
+ * 00 PHY0_EN_DAC0 0
+ * </pre>
+ */
+#define RICU_AFE_CTRL_37_PHY_0_ADDR (REG_RICU_BASE_ADDR + 0x000000BC)
+#define RICU_AFE_CTRL_37_PHY_0_OFFSET 0x000000BC
+#define RICU_AFE_CTRL_37_PHY_0_INDEX 0x0000002F
+#define RICU_AFE_CTRL_37_PHY_0_RESET 0x00000000
+
+static inline u32 ricu_afe_ctrl_37_phy_0_get(struct cl_chip *chip)
+{
+ return cl_reg_read_chip(chip, RICU_AFE_CTRL_37_PHY_0_ADDR);
+}
+
+static inline void ricu_afe_ctrl_37_phy_0_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, RICU_AFE_CTRL_37_PHY_0_ADDR, value);
+}
+
+/* Field definitions */
+#define RICU_AFE_CTRL_37_PHY_0_EN_DAC_5_BIT ((u32)0x00000020)
+#define RICU_AFE_CTRL_37_PHY_0_EN_DAC_5_POS 5
+#define RICU_AFE_CTRL_37_PHY_0_EN_DAC_4_BIT ((u32)0x00000010)
+#define RICU_AFE_CTRL_37_PHY_0_EN_DAC_4_POS 4
+#define RICU_AFE_CTRL_37_PHY_0_EN_DAC_3_BIT ((u32)0x00000008)
+#define RICU_AFE_CTRL_37_PHY_0_EN_DAC_3_POS 3
+#define RICU_AFE_CTRL_37_PHY_0_EN_DAC_2_BIT ((u32)0x00000004)
+#define RICU_AFE_CTRL_37_PHY_0_EN_DAC_2_POS 2
+#define RICU_AFE_CTRL_37_PHY_0_EN_DAC_1_BIT ((u32)0x00000002)
+#define RICU_AFE_CTRL_37_PHY_0_EN_DAC_1_POS 1
+#define RICU_AFE_CTRL_37_PHY_0_EN_DAC_0_BIT ((u32)0x00000001)
+#define RICU_AFE_CTRL_37_PHY_0_EN_DAC_0_POS 0
+
+/*
+ * @brief AFE_CTRL_37_PHY_1 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 05 PHY1_EN_DAC5 0
+ * 04 PHY1_EN_DAC4 0
+ * 03 PHY1_EN_DAC3 0
+ * 02 PHY1_EN_DAC2 0
+ * 01 PHY1_EN_DAC1 0
+ * 00 PHY1_EN_DAC0 0
+ * </pre>
+ */
+#define RICU_AFE_CTRL_37_PHY_1_ADDR (REG_RICU_BASE_ADDR + 0x000000C0)
+#define RICU_AFE_CTRL_37_PHY_1_OFFSET 0x000000C0
+#define RICU_AFE_CTRL_37_PHY_1_INDEX 0x00000030
+#define RICU_AFE_CTRL_37_PHY_1_RESET 0x00000000
+
+static inline u32 ricu_afe_ctrl_37_phy_1_get(struct cl_chip *chip)
+{
+ return cl_reg_read_chip(chip, RICU_AFE_CTRL_37_PHY_1_ADDR);
+}
+
+static inline void ricu_afe_ctrl_37_phy_1_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, RICU_AFE_CTRL_37_PHY_1_ADDR, value);
+}
+
+/* Field definitions */
+#define RICU_AFE_CTRL_37_PHY_1_EN_DAC_5_BIT ((u32)0x00000020)
+#define RICU_AFE_CTRL_37_PHY_1_EN_DAC_5_POS 5
+#define RICU_AFE_CTRL_37_PHY_1_EN_DAC_4_BIT ((u32)0x00000010)
+#define RICU_AFE_CTRL_37_PHY_1_EN_DAC_4_POS 4
+#define RICU_AFE_CTRL_37_PHY_1_EN_DAC_3_BIT ((u32)0x00000008)
+#define RICU_AFE_CTRL_37_PHY_1_EN_DAC_3_POS 3
+#define RICU_AFE_CTRL_37_PHY_1_EN_DAC_2_BIT ((u32)0x00000004)
+#define RICU_AFE_CTRL_37_PHY_1_EN_DAC_2_POS 2
+#define RICU_AFE_CTRL_37_PHY_1_EN_DAC_1_BIT ((u32)0x00000002)
+#define RICU_AFE_CTRL_37_PHY_1_EN_DAC_1_POS 1
+#define RICU_AFE_CTRL_37_PHY_1_EN_DAC_0_BIT ((u32)0x00000001)
+#define RICU_AFE_CTRL_37_PHY_1_EN_DAC_0_POS 0
+
+/*
+ * @brief AFE_CTRL_39 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 16 ROSEL4 0
+ * 14:08 RO_CTRLQ4 0x7
+ * 06:00 RO_CTRLI4 0x7
+ * </pre>
+ */
+#define RICU_AFE_CTRL_39_ADDR (REG_RICU_BASE_ADDR + 0x000000CC)
+#define RICU_AFE_CTRL_39_OFFSET 0x000000CC
+#define RICU_AFE_CTRL_39_INDEX 0x00000033
+#define RICU_AFE_CTRL_39_RESET 0x00000707
+
+static inline void ricu_afe_ctrl_39_pack(struct cl_chip *chip, u8 rosel4, u8 roctrlq4, u8 roctrli4)
+{
+ ASSERT_ERR_CHIP((((u32)rosel4 << 16) & ~((u32)0x00010000)) == 0);
+ ASSERT_ERR_CHIP((((u32)roctrlq4 << 8) & ~((u32)0x00007F00)) == 0);
+ ASSERT_ERR_CHIP((((u32)roctrli4 << 0) & ~((u32)0x0000007F)) == 0);
+ cl_reg_write_chip(chip, RICU_AFE_CTRL_39_ADDR,
+ ((u32)rosel4 << 16) | ((u32)roctrlq4 << 8) | ((u32)roctrli4 << 0));
+}
+
+/*
+ * @brief AFE_CTRL_40 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 16 ROSEL5 0
+ * 14:08 RO_CTRLQ5 0x7
+ * 06:00 RO_CTRLI5 0x7
+ * </pre>
+ */
+#define RICU_AFE_CTRL_40_ADDR (REG_RICU_BASE_ADDR + 0x000000D0)
+#define RICU_AFE_CTRL_40_OFFSET 0x000000D0
+#define RICU_AFE_CTRL_40_INDEX 0x00000034
+#define RICU_AFE_CTRL_40_RESET 0x00000707
+
+static inline void ricu_afe_ctrl_40_pack(struct cl_chip *chip, u8 rosel5, u8 roctrlq5, u8 roctrli5)
+{
+ ASSERT_ERR_CHIP((((u32)rosel5 << 16) & ~((u32)0x00010000)) == 0);
+ ASSERT_ERR_CHIP((((u32)roctrlq5 << 8) & ~((u32)0x00007F00)) == 0);
+ ASSERT_ERR_CHIP((((u32)roctrli5 << 0) & ~((u32)0x0000007F)) == 0);
+ cl_reg_write_chip(chip, RICU_AFE_CTRL_40_ADDR,
+ ((u32)rosel5 << 16) | ((u32)roctrlq5 << 8) | ((u32)roctrli5 << 0));
+}
+
+/*
+ * @brief AFE_CTRL_41 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 16 ROSEL6 0
+ * 14:08 RO_CTRLQ6 0x7
+ * 06:00 RO_CTRLI6 0x7
+ * </pre>
+ */
+#define RICU_AFE_CTRL_41_ADDR (REG_RICU_BASE_ADDR + 0x000000D4)
+#define RICU_AFE_CTRL_41_OFFSET 0x000000D4
+#define RICU_AFE_CTRL_41_INDEX 0x00000035
+#define RICU_AFE_CTRL_41_RESET 0x00000707
+
+static inline void ricu_afe_ctrl_41_pack(struct cl_chip *chip, u8 rosel6, u8 roctrlq6, u8 roctrli6)
+{
+ ASSERT_ERR_CHIP((((u32)rosel6 << 16) & ~((u32)0x00010000)) == 0);
+ ASSERT_ERR_CHIP((((u32)roctrlq6 << 8) & ~((u32)0x00007F00)) == 0);
+ ASSERT_ERR_CHIP((((u32)roctrli6 << 0) & ~((u32)0x0000007F)) == 0);
+ cl_reg_write_chip(chip, RICU_AFE_CTRL_41_ADDR,
+ ((u32)rosel6 << 16) | ((u32)roctrlq6 << 8) | ((u32)roctrli6 << 0));
+}
+
+/*
+ * @brief AFE_CTRL_42 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 16 ROSEL7 0
+ * 14:08 RO_CTRLQ7 0x7
+ * 06:00 RO_CTRLI7 0x7
+ * </pre>
+ */
+#define RICU_AFE_CTRL_42_ADDR (REG_RICU_BASE_ADDR + 0x000000D8)
+#define RICU_AFE_CTRL_42_OFFSET 0x000000D8
+#define RICU_AFE_CTRL_42_INDEX 0x00000036
+#define RICU_AFE_CTRL_42_RESET 0x00000707
+
+static inline void ricu_afe_ctrl_42_pack(struct cl_chip *chip, u8 rosel7, u8 roctrlq7, u8 roctrli7)
+{
+ ASSERT_ERR_CHIP((((u32)rosel7 << 16) & ~((u32)0x00010000)) == 0);
+ ASSERT_ERR_CHIP((((u32)roctrlq7 << 8) & ~((u32)0x00007F00)) == 0);
+ ASSERT_ERR_CHIP((((u32)roctrli7 << 0) & ~((u32)0x0000007F)) == 0);
+ cl_reg_write_chip(chip, RICU_AFE_CTRL_42_ADDR,
+ ((u32)rosel7 << 16) | ((u32)roctrlq7 << 8) | ((u32)roctrli7 << 0));
+}
+
+/*
+ * @brief AFE_CTRL_43 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 01:00 FREQ_SEL 0x3
+ * </pre>
+ */
+#define RICU_AFE_CTRL_43_ADDR (REG_RICU_BASE_ADDR + 0x000000DC)
+#define RICU_AFE_CTRL_43_OFFSET 0x000000DC
+#define RICU_AFE_CTRL_43_INDEX 0x00000037
+#define RICU_AFE_CTRL_43_RESET 0x00000003
+
+static inline void ricu_afe_ctrl_43_freq_sel_setf(struct cl_chip *chip, u8 freqsel)
+{
+ ASSERT_ERR_CHIP((((u32)freqsel << 0) & ~((u32)0x00000003)) == 0);
+ cl_reg_write_chip(chip, RICU_AFE_CTRL_43_ADDR, (u32)freqsel << 0);
+}
+
+/*
+ * @brief AFE_CTRL_44 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 01:00 CDB_FREQ_SEL 0x3
+ * </pre>
+ */
+#define RICU_AFE_CTRL_44_ADDR (REG_RICU_BASE_ADDR + 0x000000E0)
+#define RICU_AFE_CTRL_44_OFFSET 0x000000E0
+#define RICU_AFE_CTRL_44_INDEX 0x00000038
+#define RICU_AFE_CTRL_44_RESET 0x00000003
+
+static inline void ricu_afe_ctrl_44_cdb_freq_sel_setf(struct cl_chip *chip, u8 cdbfreqsel)
+{
+ ASSERT_ERR_CHIP((((u32)cdbfreqsel << 0) & ~((u32)0x00000003)) == 0);
+ cl_reg_write_chip(chip, RICU_AFE_CTRL_44_ADDR, (u32)cdbfreqsel << 0);
+}
+
+/*
+ * @brief SPI_CLK_CTRL register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 05:00 SPI_CLK_BITMAP 0xE
+ * </pre>
+ */
+#define RICU_SPI_CLK_CTRL_ADDR (REG_RICU_BASE_ADDR + 0x000000E4)
+#define RICU_SPI_CLK_CTRL_OFFSET 0x000000E4
+#define RICU_SPI_CLK_CTRL_INDEX 0x00000039
+#define RICU_SPI_CLK_CTRL_RESET 0x0000000E
+
+static inline void ricu_spi_clk_ctrl_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, RICU_SPI_CLK_CTRL_ADDR, value);
+}
+
+/*
+ * @brief FEM_CONF_0 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 23:20 FEM5_CTL_SEL 0x5
+ * 19:16 FEM4_CTL_SEL 0x4
+ * 15:12 FEM3_CTL_SEL 0x3
+ * 11:08 FEM2_CTL_SEL 0x2
+ * 07:04 FEM1_CTL_SEL 0x1
+ * 03:00 FEM0_CTL_SEL 0x0
+ * </pre>
+ */
+#define RICU_FEM_CONF_0_ADDR (REG_RICU_BASE_ADDR + 0x000000F0)
+#define RICU_FEM_CONF_0_OFFSET 0x000000F0
+#define RICU_FEM_CONF_0_INDEX 0x0000003C
+#define RICU_FEM_CONF_0_RESET 0x00543210
+
+static inline void ricu_fem_conf_0_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, RICU_FEM_CONF_0_ADDR, value);
+}
+
+/*
+ * @brief FEM_CONF_1 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 23:20 FEM11_CTL_SEL 0xd
+ * 19:16 FEM10_CTL_SEL 0xc
+ * 15:12 FEM9_CTL_SEL 0xb
+ * 11:08 FEM8_CTL_SEL 0xa
+ * 07:04 FEM7_CTL_SEL 0x9
+ * 03:00 FEM6_CTL_SEL 0x8
+ * </pre>
+ */
+#define RICU_FEM_CONF_1_ADDR (REG_RICU_BASE_ADDR + 0x000000F4)
+#define RICU_FEM_CONF_1_OFFSET 0x000000F4
+#define RICU_FEM_CONF_1_INDEX 0x0000003D
+#define RICU_FEM_CONF_1_RESET 0x00DCBA98
+
+static inline void ricu_fem_conf_1_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, RICU_FEM_CONF_1_ADDR, value);
+}
+
+/*
+ * @brief AFE_CTRL_36_PHY_1 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 07 PHY1_ADC_ALWAYS_EN_LD_IR 0
+ * 06 PHY1_ADC_ALWAYS_EN_LD_AVDQ 0
+ * 05 PHY1_ADC_ALWAYS_EN_LD_AVDI 0
+ * 04 PHY1_ADC_ALWAYS_EN_ADCQ 0
+ * 03 PHY1_ADC_ALWAYS_EN_ADCI 0
+ * 01 PHY1_HW_MODE_DAC 0
+ * 00 PHY1_HW_MODE_ADC 0
+ * </pre>
+ */
+#define RICU_AFE_CTRL_36_PHY_1_ADDR (REG_RICU_BASE_ADDR + 0x000000F8)
+#define RICU_AFE_CTRL_36_PHY_1_OFFSET 0x000000F8
+#define RICU_AFE_CTRL_36_PHY_1_INDEX 0x0000003E
+#define RICU_AFE_CTRL_36_PHY_1_RESET 0x00000000
+
+static inline u32 ricu_afe_ctrl_36_phy_1_get(struct cl_chip *chip)
+{
+ return cl_reg_read_chip(chip, RICU_AFE_CTRL_36_PHY_1_ADDR);
+}
+
+static inline void ricu_afe_ctrl_36_phy_1_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, RICU_AFE_CTRL_36_PHY_1_ADDR, value);
+}
+
+/* Field definitions */
+#define RICU_AFE_CTRL_36_PHY_1_ADC_ALWAYS_EN_LD_IR_BIT ((u32)0x00000080)
+#define RICU_AFE_CTRL_36_PHY_1_ADC_ALWAYS_EN_LD_IR_POS 7
+#define RICU_AFE_CTRL_36_PHY_1_ADC_ALWAYS_EN_LD_AVDQ_BIT ((u32)0x00000040)
+#define RICU_AFE_CTRL_36_PHY_1_ADC_ALWAYS_EN_LD_AVDQ_POS 6
+#define RICU_AFE_CTRL_36_PHY_1_ADC_ALWAYS_EN_LD_AVDI_BIT ((u32)0x00000020)
+#define RICU_AFE_CTRL_36_PHY_1_ADC_ALWAYS_EN_LD_AVDI_POS 5
+#define RICU_AFE_CTRL_36_PHY_1_ADC_ALWAYS_EN_ADCQ_BIT ((u32)0x00000010)
+#define RICU_AFE_CTRL_36_PHY_1_ADC_ALWAYS_EN_ADCQ_POS 4
+#define RICU_AFE_CTRL_36_PHY_1_ADC_ALWAYS_EN_ADCI_BIT ((u32)0x00000008)
+#define RICU_AFE_CTRL_36_PHY_1_ADC_ALWAYS_EN_ADCI_POS 3
+#define RICU_AFE_CTRL_36_PHY_1_HW_MODE_DAC_BIT ((u32)0x00000002)
+#define RICU_AFE_CTRL_36_PHY_1_HW_MODE_DAC_POS 1
+#define RICU_AFE_CTRL_36_PHY_1_HW_MODE_ADC_BIT ((u32)0x00000001)
+#define RICU_AFE_CTRL_36_PHY_1_HW_MODE_ADC_POS 0
+
+static inline void ricu_afe_ctrl_36_phy_1_hw_mode_dac_setf(struct cl_chip *chip, u8 phy1hwmodedac)
+{
+ ASSERT_ERR_CHIP((((u32)phy1hwmodedac << 1) & ~((u32)0x00000002)) == 0);
+ cl_reg_write_chip(chip, RICU_AFE_CTRL_36_PHY_1_ADDR,
+ (cl_reg_read_chip(chip, RICU_AFE_CTRL_36_PHY_1_ADDR) &
+ ~((u32)0x00000002)) | ((u32)phy1hwmodedac << 1));
+}
+
+static inline void ricu_afe_ctrl_36_phy_1_hw_mode_adc_setf(struct cl_chip *chip, u8 phy1hwmodeadc)
+{
+ ASSERT_ERR_CHIP((((u32)phy1hwmodeadc << 0) & ~((u32)0x00000001)) == 0);
+ cl_reg_write_chip(chip, RICU_AFE_CTRL_36_PHY_1_ADDR,
+ (cl_reg_read_chip(chip, RICU_AFE_CTRL_36_PHY_1_ADDR) &
+ ~((u32)0x00000001)) | ((u32)phy1hwmodeadc << 0));
+}
+
+/*
+ * @brief AFE_ADC_CH_ALLOC register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 07:00 AFE_ADC_CH_ALLOC 0xFF
+ * </pre>
+ */
+#define RICU_AFE_ADC_CH_ALLOC_ADDR (REG_RICU_BASE_ADDR + 0x000000FC)
+#define RICU_AFE_ADC_CH_ALLOC_OFFSET 0x000000FC
+#define RICU_AFE_ADC_CH_ALLOC_INDEX 0x0000003F
+#define RICU_AFE_ADC_CH_ALLOC_RESET 0x000000FF
+
+static inline u8 ricu_afe_adc_ch_alloc_afe_adc_ch_alloc_getf(struct cl_chip *chip)
+{
+ u32 local_val = cl_reg_read_chip(chip, RICU_AFE_ADC_CH_ALLOC_ADDR);
+
+	return ((local_val & ((u32)0x000000FF)) >> 0);
+}
+
+static inline void ricu_afe_adc_ch_alloc_afe_adc_ch_alloc_setf(struct cl_chip *chip,
+ u8 afeadcchalloc)
+{
+ cl_reg_write_chip(chip, RICU_AFE_ADC_CH_ALLOC_ADDR, (u32)afeadcchalloc << 0);
+}
+
+#define RIU_RSF_FILE_SIZE 0x60C
+
+/*
+ * @brief RSF_CONTROL register definition
+ * Resampling filter operation mode register.
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31 rsf_init_en 1
+ * 07 rsf_tx_bypass_type 0
+ * 06 rsf_tx_bypass_mode 1
+ * 05 rsf_rx_bypass_type 0
+ * 04 rsf_rx_bypass_mode 1
+ * 01 rsf_rx_ctl_from_reg 1
+ * </pre>
+ */
+#define RIU_RSF_CONTROL_ADDR (REG_RIU_BASE_ADDR + 0x000001A8)
+#define RIU_RSF_CONTROL_OFFSET 0x000001A8
+#define RIU_RSF_CONTROL_INDEX 0x0000006A
+#define RIU_RSF_CONTROL_RESET 0x80000053
+
+static inline void riu_rsf_control_rsf_init_en_setf(struct cl_hw *cl_hw, u8 rsfiniten)
+{
+ cl_reg_write(cl_hw, RIU_RSF_CONTROL_ADDR,
+ (cl_reg_read(cl_hw, RIU_RSF_CONTROL_ADDR) & ~((u32)0x80000000)) |
+ ((u32)rsfiniten << 31));
+}
+
+/*
+ * @brief RSF_INIT register definition
+ * Resampling filter initialization data register.
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31:00 RSF_INIT_DATA 0x0
+ * </pre>
+ */
+#define RIU_RSF_INIT_ADDR (REG_RIU_BASE_ADDR + 0x000001AC)
+#define RIU_RSF_INIT_OFFSET 0x000001AC
+#define RIU_RSF_INIT_INDEX 0x0000006B
+#define RIU_RSF_INIT_RESET 0x00000000
+
+static inline void riu_rsf_init_set(struct cl_hw *cl_hw, u32 value)
+{
+ cl_reg_write(cl_hw, RIU_RSF_INIT_ADDR, value);
+}
+
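+/*
+ * Load sketch (illustrative only; the exact protocol is an assumption
+ * inferred from the field names, not taken from a hardware spec): the
+ * resampling-filter coefficients appear to be streamed word by word through
+ * RSF_INIT while RSF_CONTROL.rsf_init_en is set. "cl_hw" is a valid
+ * struct cl_hw pointer and "rsf_data"/"rsf_len" describe a coefficient image
+ * of up to RIU_RSF_FILE_SIZE bytes.
+ *
+ *	int i;
+ *
+ *	riu_rsf_control_rsf_init_en_setf(cl_hw, 1);
+ *	for (i = 0; i < rsf_len / 4; i++)
+ *		riu_rsf_init_set(cl_hw, rsf_data[i]);
+ *	riu_rsf_control_rsf_init_en_setf(cl_hw, 0);
+ */
+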
+/*
+ * @brief AGCFSM_RAM_INIT_1 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31 AGC_FSM_RAM_INIT_EN 0
+ * 29 AGC_FSM_RAM_INIT_AINC2 0
+ * 28 AGC_FSM_RAM_INIT_AINC1 0
+ * 12 AGC_FSM_RAM_INIT_WPTR_SET 0
+ * 10:00 AGC_FSM_RAM_INIT_WPTR 0x0
+ * </pre>
+ */
+#define RIU_AGCFSM_RAM_INIT_1_ADDR (REG_RIU_BASE_ADDR + 0x000001B0)
+#define RIU_AGCFSM_RAM_INIT_1_OFFSET 0x000001B0
+#define RIU_AGCFSM_RAM_INIT_1_INDEX 0x0000006C
+#define RIU_AGCFSM_RAM_INIT_1_RESET 0x00000000
+
+static inline void riu_agcfsm_ram_init_1_agc_fsm_ram_init_en_setf(struct cl_hw *cl_hw,
+ u8 agcfsmraminiten)
+{
+ cl_reg_write(cl_hw, RIU_AGCFSM_RAM_INIT_1_ADDR,
+ (cl_reg_read(cl_hw, RIU_AGCFSM_RAM_INIT_1_ADDR) & ~((u32)0x80000000)) |
+ ((u32)agcfsmraminiten << 31));
+}
+
+static inline void riu_agcfsm_ram_init_1_agc_fsm_ram_init_ainc_1_setf(struct cl_hw *cl_hw,
+ u8 agcfsmraminitainc1)
+{
+ ASSERT_ERR((((u32)agcfsmraminitainc1 << 28) & ~((u32)0x10000000)) == 0);
+ cl_reg_write(cl_hw, RIU_AGCFSM_RAM_INIT_1_ADDR,
+ (cl_reg_read(cl_hw, RIU_AGCFSM_RAM_INIT_1_ADDR) & ~((u32)0x10000000)) |
+ ((u32)agcfsmraminitainc1 << 28));
+}
+
+static inline void riu_agcfsm_ram_init_1_agc_fsm_ram_init_wptr_set_setf(struct cl_hw *cl_hw,
+ u8 agcfsmraminitwptrset)
+{
+ ASSERT_ERR((((u32)agcfsmraminitwptrset << 12) & ~((u32)0x00001000)) == 0);
+ cl_reg_write(cl_hw, RIU_AGCFSM_RAM_INIT_1_ADDR,
+ (cl_reg_read(cl_hw, RIU_AGCFSM_RAM_INIT_1_ADDR) & ~((u32)0x00001000)) |
+ ((u32)agcfsmraminitwptrset << 12));
+}
+
+static inline void riu_agcfsm_ram_init_1_agc_fsm_ram_init_wptr_setf(struct cl_hw *cl_hw,
+ u16 agcfsmraminitwptr)
+{
+ ASSERT_ERR((((u32)agcfsmraminitwptr << 0) & ~((u32)0x000007FF)) == 0);
+ cl_reg_write(cl_hw, RIU_AGCFSM_RAM_INIT_1_ADDR,
+ (cl_reg_read(cl_hw, RIU_AGCFSM_RAM_INIT_1_ADDR) & ~((u32)0x000007FF)) |
+ ((u32)agcfsmraminitwptr << 0));
+}
+
+/*
+ * @brief AGCFSM_RAM_INIT_2 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31:00 AGC_FSM_RAM_INIT_WDATA 0x0
+ * </pre>
+ */
+#define RIU_AGCFSM_RAM_INIT_2_ADDR (REG_RIU_BASE_ADDR + 0x000001B4)
+#define RIU_AGCFSM_RAM_INIT_2_OFFSET 0x000001B4
+#define RIU_AGCFSM_RAM_INIT_2_INDEX 0x0000006D
+#define RIU_AGCFSM_RAM_INIT_2_RESET 0x00000000
+
+static inline void riu_agcfsm_ram_init_2_set(struct cl_hw *cl_hw, u32 value)
+{
+ cl_reg_write(cl_hw, RIU_AGCFSM_RAM_INIT_2_ADDR, value);
+}
+
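+/*
+ * Load sketch (illustrative only; the exact sequence is an assumption
+ * inferred from the field names, not taken from a hardware spec): the AGC
+ * FSM RAM appears to be loaded by enabling init mode, programming the write
+ * pointer, enabling auto-increment and then streaming 32-bit words through
+ * AGCFSM_RAM_INIT_2. "cl_hw", "fw_words" and "num_words" are assumed to be
+ * provided by the caller.
+ *
+ *	int i;
+ *
+ *	riu_agcfsm_ram_init_1_agc_fsm_ram_init_en_setf(cl_hw, 1);
+ *	riu_agcfsm_ram_init_1_agc_fsm_ram_init_wptr_setf(cl_hw, 0);
+ *	riu_agcfsm_ram_init_1_agc_fsm_ram_init_wptr_set_setf(cl_hw, 1);
+ *	riu_agcfsm_ram_init_1_agc_fsm_ram_init_ainc_1_setf(cl_hw, 1);
+ *	for (i = 0; i < num_words; i++)
+ *		riu_agcfsm_ram_init_2_set(cl_hw, fw_words[i]);
+ *	riu_agcfsm_ram_init_1_agc_fsm_ram_init_en_setf(cl_hw, 0);
+ */
+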
+/*
+ * @brief AGCINBDPOW_20_PNOISESTAT register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31:24 INBDPOW20_PNOISEDBM3 0x0
+ * 23:16 INBDPOW20_PNOISEDBM2 0x0
+ * 15:08 INBDPOW20_PNOISEDBM1 0x0
+ * 07:00 INBDPOW20_PNOISEDBM0 0x0
+ * </pre>
+ */
+#define RIU_AGCINBDPOW_20_PNOISESTAT_ADDR (REG_RIU_BASE_ADDR + 0x00000228)
+#define RIU_AGCINBDPOW_20_PNOISESTAT_OFFSET 0x00000228
+#define RIU_AGCINBDPOW_20_PNOISESTAT_INDEX 0x0000008A
+#define RIU_AGCINBDPOW_20_PNOISESTAT_RESET 0x00000000
+
+static inline u32 riu_agcinbdpow_20_pnoisestat_get(struct cl_hw *cl_hw)
+{
+ return cl_reg_read(cl_hw, RIU_AGCINBDPOW_20_PNOISESTAT_ADDR);
+}
+
+/*
+ * @brief AGCINBDPOWSECNOISESTAT register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 23:16 INBDPOW80_SNOISEDBM 0x0
+ * 15:08 INBDPOW40_SNOISEDBM 0x0
+ * 07:00 INBDPOW20_SNOISEDBM 0x0
+ * </pre>
+ */
+#define RIU_AGCINBDPOWSECNOISESTAT_ADDR (REG_RIU_BASE_ADDR + 0x00000230)
+#define RIU_AGCINBDPOWSECNOISESTAT_OFFSET 0x00000230
+#define RIU_AGCINBDPOWSECNOISESTAT_INDEX 0x0000008C
+#define RIU_AGCINBDPOWSECNOISESTAT_RESET 0x00000000
+
+static inline u32 riu_agcinbdpowsecnoisestat_get(struct cl_hw *cl_hw)
+{
+ return cl_reg_read(cl_hw, RIU_AGCINBDPOWSECNOISESTAT_ADDR);
+}
+
+/*
+ * @brief RWNXAGCCNTL register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31:26 COMBPATHSEL 0x3F
+ * 25:20 GAINKEEP 0x0
+ * 16 HTSTFGAINEN 1
+ * 15 NOISE_CAPTURE_DELAY_MODE 0
+ * 14 EST_PATH_SEL_2 0
+ * 13 CCA_MDM_ST_CLEAR 0
+ * 12 AGCFSMRESET 0
+ * 11 RADARDETEN 0
+ * 10 RIFSDETEN 1
+ * 09 DSSSONLY 0
+ * 08 OFDMONLY 0
+ * 07:04 GPSTATUS 0x0
+ * 03 EST_PATH_SEL 0
+ * 01 ADC_SEL_RADAR_DETECTOR 0
+ * 00 ADC_SEL_COMP_MODULE 0
+ * </pre>
+ */
+#define RIU_RWNXAGCCNTL_ADDR (REG_RIU_BASE_ADDR + 0x00000390)
+#define RIU_RWNXAGCCNTL_OFFSET 0x00000390
+#define RIU_RWNXAGCCNTL_INDEX 0x000000E4
+#define RIU_RWNXAGCCNTL_RESET 0xFC010400
+
+static inline void riu_rwnxagccntl_agcfsmreset_setf(struct cl_hw *cl_hw, u8 agcfsmreset)
+{
+ ASSERT_ERR((((u32)agcfsmreset << 12) & ~((u32)0x00001000)) == 0);
+ cl_reg_write(cl_hw, RIU_RWNXAGCCNTL_ADDR,
+ (cl_reg_read(cl_hw, RIU_RWNXAGCCNTL_ADDR) & ~((u32)0x00001000)) |
+ ((u32)agcfsmreset << 12));
+}
+
+/*
+ * @brief RWNXAGCDSP_3 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 27:20 INBDPOWMINDBV 0xBF
+ * 17:16 INBDRND 0x3
+ * 15:08 INBDPOWMINDBM_ANT1 0x9C
+ * 07:00 INBDPOWMINDBM_ANT0 0x9C
+ * </pre>
+ */
+#define RIU_RWNXAGCDSP_3_ADDR (REG_RIU_BASE_ADDR + 0x000003A0)
+#define RIU_RWNXAGCDSP_3_OFFSET 0x000003A0
+#define RIU_RWNXAGCDSP_3_INDEX 0x000000E8
+#define RIU_RWNXAGCDSP_3_RESET 0x0BF39C9C
+
+static inline u8 riu_rwnxagcdsp_3_inbdpowmindbm_ant_1_getf(struct cl_hw *cl_hw)
+{
+ u32 local_val = cl_reg_read(cl_hw, RIU_RWNXAGCDSP_3_ADDR);
+
+ return ((local_val & ((u32)0x0000FF00)) >> 8);
+}
+
+static inline void riu_rwnxagcdsp_3_inbdpowmindbm_ant_1_setf(struct cl_hw *cl_hw,
+ u8 inbdpowmindbmant1)
+{
+ cl_reg_write(cl_hw, RIU_RWNXAGCDSP_3_ADDR,
+ (cl_reg_read(cl_hw, RIU_RWNXAGCDSP_3_ADDR) & ~((u32)0x0000FF00)) |
+ ((u32)inbdpowmindbmant1 << 8));
+}
+
+static inline u8 riu_rwnxagcdsp_3_inbdpowmindbm_ant_0_getf(struct cl_hw *cl_hw)
+{
+ u32 local_val = cl_reg_read(cl_hw, RIU_RWNXAGCDSP_3_ADDR);
+
+ return ((local_val & ((u32)0x000000FF)) >> 0);
+}
+
+static inline void riu_rwnxagcdsp_3_inbdpowmindbm_ant_0_setf(struct cl_hw *cl_hw,
+ u8 inbdpowmindbmant0)
+{
+ cl_reg_write(cl_hw, RIU_RWNXAGCDSP_3_ADDR,
+ (cl_reg_read(cl_hw, RIU_RWNXAGCDSP_3_ADDR) & ~((u32)0x000000FF)) |
+ ((u32)inbdpowmindbmant0 << 0));
+}
+
+/*
+ * @brief RWNXAGCCCA_1 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31 CCA_CNT_CLEAR 0
+ * 30:29 CCA_CNT_RATE 0x0
+ * 28:20 INBDCCAPOWMINDBM 0x1B5
+ * 19:12 CCAFALLTHRDBM 0xBF
+ * 10 CCAEnergy_Reset_Type 0
+ * 09 DISCCAEN 1
+ * 08 SATCCAEN 1
+ * 07:00 CCARISETHRDBM 0xC2
+ * </pre>
+ */
+#define RIU_RWNXAGCCCA_1_ADDR (REG_RIU_BASE_ADDR + 0x000003AC)
+#define RIU_RWNXAGCCCA_1_OFFSET 0x000003AC
+#define RIU_RWNXAGCCCA_1_INDEX 0x000000EB
+#define RIU_RWNXAGCCCA_1_RESET 0x1B5BF3C2
+
+static inline void riu_rwnxagccca_1_cca_cnt_clear_setf(struct cl_hw *cl_hw, u8 ccacntclear)
+{
+ cl_reg_write(cl_hw, RIU_RWNXAGCCCA_1_ADDR,
+ (cl_reg_read(cl_hw, RIU_RWNXAGCCCA_1_ADDR) & ~((u32)0x80000000)) |
+ ((u32)ccacntclear << 31));
+}
+
+static inline void riu_rwnxagccca_1_cca_cnt_rate_setf(struct cl_hw *cl_hw, u8 ccacntrate)
+{
+ ASSERT_ERR((((u32)ccacntrate << 29) & ~((u32)0x60000000)) == 0);
+ cl_reg_write_direct(cl_hw, RIU_RWNXAGCCCA_1_ADDR,
+ (cl_reg_read(cl_hw, RIU_RWNXAGCCCA_1_ADDR) & ~((u32)0x60000000)) |
+ ((u32)ccacntrate << 29));
+}
+
+/*
+ * @brief RWNXAGCDSP_5 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31:24 INBDPOWMINDBM_ANT5 0x9C
+ * 23:16 INBDPOWMINDBM_ANT4 0x9C
+ * 15:08 INBDPOWMINDBM_ANT3 0x9C
+ * 07:00 INBDPOWMINDBM_ANT2 0x9C
+ * </pre>
+ */
+#define RIU_RWNXAGCDSP_5_ADDR (REG_RIU_BASE_ADDR + 0x000003EC)
+#define RIU_RWNXAGCDSP_5_OFFSET 0x000003EC
+#define RIU_RWNXAGCDSP_5_INDEX 0x000000FB
+#define RIU_RWNXAGCDSP_5_RESET 0x9C9C9C9C
+
+static inline u8 riu_rwnxagcdsp_5_inbdpowmindbm_ant_5_getf(struct cl_hw *cl_hw)
+{
+ u32 local_val = cl_reg_read(cl_hw, RIU_RWNXAGCDSP_5_ADDR);
+
+ return ((local_val & ((u32)0xFF000000)) >> 24);
+}
+
+static inline void riu_rwnxagcdsp_5_inbdpowmindbm_ant_5_setf(struct cl_hw *cl_hw,
+ u8 inbdpowmindbmant5)
+{
+ cl_reg_write(cl_hw, RIU_RWNXAGCDSP_5_ADDR,
+ (cl_reg_read(cl_hw, RIU_RWNXAGCDSP_5_ADDR) & ~((u32)0xFF000000)) |
+ ((u32)inbdpowmindbmant5 << 24));
+}
+
+static inline u8 riu_rwnxagcdsp_5_inbdpowmindbm_ant_4_getf(struct cl_hw *cl_hw)
+{
+ u32 local_val = cl_reg_read(cl_hw, RIU_RWNXAGCDSP_5_ADDR);
+
+ return ((local_val & ((u32)0x00FF0000)) >> 16);
+}
+
+static inline void riu_rwnxagcdsp_5_inbdpowmindbm_ant_4_setf(struct cl_hw *cl_hw,
+ u8 inbdpowmindbmant4)
+{
+ cl_reg_write(cl_hw, RIU_RWNXAGCDSP_5_ADDR,
+ (cl_reg_read(cl_hw, RIU_RWNXAGCDSP_5_ADDR) & ~((u32)0x00FF0000)) |
+ ((u32)inbdpowmindbmant4 << 16));
+}
+
+static inline u8 riu_rwnxagcdsp_5_inbdpowmindbm_ant_3_getf(struct cl_hw *cl_hw)
+{
+ u32 local_val = cl_reg_read(cl_hw, RIU_RWNXAGCDSP_5_ADDR);
+
+ return ((local_val & ((u32)0x0000FF00)) >> 8);
+}
+
+static inline void riu_rwnxagcdsp_5_inbdpowmindbm_ant_3_setf(struct cl_hw *cl_hw,
+ u8 inbdpowmindbmant3)
+{
+ cl_reg_write(cl_hw, RIU_RWNXAGCDSP_5_ADDR,
+ (cl_reg_read(cl_hw, RIU_RWNXAGCDSP_5_ADDR) & ~((u32)0x0000FF00)) |
+ ((u32)inbdpowmindbmant3 << 8));
+}
+
+static inline u8 riu_rwnxagcdsp_5_inbdpowmindbm_ant_2_getf(struct cl_hw *cl_hw)
+{
+ u32 local_val = cl_reg_read(cl_hw, RIU_RWNXAGCDSP_5_ADDR);
+
+ return ((local_val & ((u32)0x000000FF)) >> 0);
+}
+
+static inline void riu_rwnxagcdsp_5_inbdpowmindbm_ant_2_setf(struct cl_hw *cl_hw,
+ u8 inbdpowmindbmant2)
+{
+ cl_reg_write(cl_hw, RIU_RWNXAGCDSP_5_ADDR,
+ (cl_reg_read(cl_hw, RIU_RWNXAGCDSP_5_ADDR) & ~((u32)0x000000FF)) |
+ ((u32)inbdpowmindbmant2 << 0));
+}
+
+/*
+ * @brief AGCINBDPOWNOISEPER_20_STAT_0 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31:24 INBDPOWNOISEDBMPER20_3 0x0
+ * 23:16 INBDPOWNOISEDBMPER20_2 0x0
+ * 15:08 INBDPOWNOISEDBMPER20_1 0x0
+ * 07:00 INBDPOWNOISEDBMPER20_0 0x0
+ * </pre>
+ */
+#define RIU_AGCINBDPOWNOISEPER_20_STAT_0_ADDR (REG_RIU_BASE_ADDR + 0x00000478)
+#define RIU_AGCINBDPOWNOISEPER_20_STAT_0_OFFSET 0x00000478
+#define RIU_AGCINBDPOWNOISEPER_20_STAT_0_INDEX 0x0000011E
+#define RIU_AGCINBDPOWNOISEPER_20_STAT_0_RESET 0x00000000
+
+static inline u32 riu_agcinbdpownoiseper_20_stat_0_get(struct cl_hw *cl_hw)
+{
+ return cl_reg_read(cl_hw, RIU_AGCINBDPOWNOISEPER_20_STAT_0_ADDR);
+}
+
+/*
+ * @brief AGCINBDPOWNOISEPER_20_STAT_1 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31:24 INBDPOWNOISEDBMPER20_7 0x0
+ * 23:16 INBDPOWNOISEDBMPER20_6 0x0
+ * 15:08 INBDPOWNOISEDBMPER20_5 0x0
+ * 07:00 INBDPOWNOISEDBMPER20_4 0x0
+ * </pre>
+ */
+#define RIU_AGCINBDPOWNOISEPER_20_STAT_1_ADDR (REG_RIU_BASE_ADDR + 0x0000047C)
+#define RIU_AGCINBDPOWNOISEPER_20_STAT_1_OFFSET 0x0000047C
+#define RIU_AGCINBDPOWNOISEPER_20_STAT_1_INDEX 0x0000011F
+#define RIU_AGCINBDPOWNOISEPER_20_STAT_1_RESET 0x00000000
+
+static inline u32 riu_agcinbdpownoiseper_20_stat_1_get(struct cl_hw *cl_hw)
+{
+ return cl_reg_read(cl_hw, RIU_AGCINBDPOWNOISEPER_20_STAT_1_ADDR);
+}
+
+/*
+ * @brief INBDPOWFORMAC_0 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31:24 INBDPOW20_PDBMA3_MAC 0x0
+ * 23:16 INBDPOW20_PDBMA2_MAC 0x0
+ * 15:08 INBDPOW20_PDBMA1_MAC 0x0
+ * 07:00 INBDPOW20_PDBMA0_MAC 0x0
+ * </pre>
+ */
+#define RIU_INBDPOWFORMAC_0_ADDR (REG_RIU_BASE_ADDR + 0x00000480)
+#define RIU_INBDPOWFORMAC_0_OFFSET 0x00000480
+#define RIU_INBDPOWFORMAC_0_INDEX 0x00000120
+#define RIU_INBDPOWFORMAC_0_RESET 0x00000000
+
+static inline u32 riu_inbdpowformac_0_get(struct cl_hw *cl_hw)
+{
+ return cl_reg_read(cl_hw, RIU_INBDPOWFORMAC_0_ADDR);
+}
+
+/*
+ * @brief INBDPOWFORMAC_1 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 15:08 INBDPOW20_PDBMA5_MAC 0x0
+ * 07:00 INBDPOW20_PDBMA4_MAC 0x0
+ * </pre>
+ */
+#define RIU_INBDPOWFORMAC_1_ADDR (REG_RIU_BASE_ADDR + 0x00000484)
+#define RIU_INBDPOWFORMAC_1_OFFSET 0x00000484
+#define RIU_INBDPOWFORMAC_1_INDEX 0x00000121
+#define RIU_INBDPOWFORMAC_1_RESET 0x00000000
+
+static inline u32 riu_inbdpowformac_1_get(struct cl_hw *cl_hw)
+{
+ return cl_reg_read(cl_hw, RIU_INBDPOWFORMAC_1_ADDR);
+}
+
+/*
+ * @brief INBDPOWFORMAC_2 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 23:16 INBDPOW80_SDBM_MAC 0x0
+ * 15:08 INBDPOW40_SDBM_MAC 0x0
+ * 07:00 INBDPOW20_SDBM_MAC 0x0
+ * </pre>
+ */
+#define RIU_INBDPOWFORMAC_2_ADDR (REG_RIU_BASE_ADDR + 0x00000488)
+#define RIU_INBDPOWFORMAC_2_OFFSET 0x00000488
+#define RIU_INBDPOWFORMAC_2_INDEX 0x00000122
+#define RIU_INBDPOWFORMAC_2_RESET 0x00000000
+
+static inline u32 riu_inbdpowformac_2_get(struct cl_hw *cl_hw)
+{
+ return cl_reg_read(cl_hw, RIU_INBDPOWFORMAC_2_ADDR);
+}
+
+/*
+ * @brief INBDPOWFORMAC_3 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31:24 INBDPOWPER20_PDBM_3_MAC 0x0
+ * 23:16 INBDPOWPER20_PDBM_2_MAC 0x0
+ * 15:08 INBDPOWPER20_PDBM_1_MAC 0x0
+ * 07:00 INBDPOWPER20_PDBM_0_MAC 0x0
+ * </pre>
+ */
+#define RIU_INBDPOWFORMAC_3_ADDR (REG_RIU_BASE_ADDR + 0x0000048C)
+#define RIU_INBDPOWFORMAC_3_OFFSET 0x0000048C
+#define RIU_INBDPOWFORMAC_3_INDEX 0x00000123
+#define RIU_INBDPOWFORMAC_3_RESET 0x00000000
+
+static inline u32 riu_inbdpowformac_3_get(struct cl_hw *cl_hw)
+{
+ return cl_reg_read(cl_hw, RIU_INBDPOWFORMAC_3_ADDR);
+}
+
+/*
+ * @brief INBDPOWFORMAC_4 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31:24 INBDPOWPER20_PDBM_7_MAC 0x0
+ * 23:16 INBDPOWPER20_PDBM_6_MAC 0x0
+ * 15:08 INBDPOWPER20_PDBM_5_MAC 0x0
+ * 07:00 INBDPOWPER20_PDBM_4_MAC 0x0
+ * </pre>
+ */
+#define RIU_INBDPOWFORMAC_4_ADDR (REG_RIU_BASE_ADDR + 0x00000490)
+#define RIU_INBDPOWFORMAC_4_OFFSET 0x00000490
+#define RIU_INBDPOWFORMAC_4_INDEX 0x00000124
+#define RIU_INBDPOWFORMAC_4_RESET 0x00000000
+
+static inline u32 riu_inbdpowformac_4_get(struct cl_hw *cl_hw)
+{
+ return cl_reg_read(cl_hw, RIU_INBDPOWFORMAC_4_ADDR);
+}
+
+/*
+ * @brief AGCINBDPOW_20_PNOISESTAT_2 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31:24 INBDPOW20_PNOISEDBM5 0x0
+ * 23:16 INBDPOW20_PNOISEDBM4 0x0
+ * 15:08 ADCPOWDBM5 0x0
+ * 07:00 ADCPOWDBM4 0x0
+ * </pre>
+ */
+#define RIU_AGCINBDPOW_20_PNOISESTAT_2_ADDR (REG_RIU_BASE_ADDR + 0x0000067C)
+#define RIU_AGCINBDPOW_20_PNOISESTAT_2_OFFSET 0x0000067C
+#define RIU_AGCINBDPOW_20_PNOISESTAT_2_INDEX 0x0000019F
+#define RIU_AGCINBDPOW_20_PNOISESTAT_2_RESET 0x00000000
+
+static inline u32 riu_agcinbdpow_20_pnoisestat_2_get(struct cl_hw *cl_hw)
+{
+ return cl_reg_read(cl_hw, RIU_AGCINBDPOW_20_PNOISESTAT_2_ADDR);
+}
+
+#define REG_RIU_RC_BASE_ADDR 0x00485000
+
+/*
+ * @brief SW_CTRL register definition
+ * This register provides write access to the radio SPI interface.
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 31 START_DONE 0
+ * 30 MORE 0
+ * 29 FASTWR_SPD 0
+ * 28 FASTWR_FORCE 0
+ * 27 FWR_HW_ENABLE 1
+ * 26 FWR_SW_ENABLE 1
+ * 25 FWR_ENABLE 1
+ * 24 RF_RESET_B 0
+ * 23:19 PRESCALER 0x1
+ * 16 READNOTWRITE 0
+ * 14:08 ADDRESS 0x0
+ * 07:00 DATA 0x0
+ * </pre>
+ */
+#define RIU_RC_SW_CTRL_ADDR (REG_RIU_RC_BASE_ADDR + 0x00000000)
+#define RIU_RC_SW_CTRL_OFFSET 0x00000000
+#define RIU_RC_SW_CTRL_INDEX 0x00000000
+#define RIU_RC_SW_CTRL_RESET 0x0E080000
+
+static inline void riu_rc_sw_ctrl_pack(struct cl_hw *cl_hw, u8 startdone, u8 more,
+ u8 fastwrspd, u8 fastwrforce, u8 fwrhwenable,
+ u8 fwrswenable, u8 fwrenable, u8 rfresetb,
+ u8 prescaler, u8 readnotwrite, u8 address, u8 data)
+{
+ ASSERT_ERR((((u32)more << 30) & ~((u32)0x40000000)) == 0);
+ ASSERT_ERR((((u32)fastwrspd << 29) & ~((u32)0x20000000)) == 0);
+ ASSERT_ERR((((u32)fastwrforce << 28) & ~((u32)0x10000000)) == 0);
+ ASSERT_ERR((((u32)fwrhwenable << 27) & ~((u32)0x08000000)) == 0);
+ ASSERT_ERR((((u32)fwrswenable << 26) & ~((u32)0x04000000)) == 0);
+ ASSERT_ERR((((u32)fwrenable << 25) & ~((u32)0x02000000)) == 0);
+ ASSERT_ERR((((u32)rfresetb << 24) & ~((u32)0x01000000)) == 0);
+ ASSERT_ERR((((u32)prescaler << 19) & ~((u32)0x00F80000)) == 0);
+ ASSERT_ERR((((u32)readnotwrite << 16) & ~((u32)0x00010000)) == 0);
+ ASSERT_ERR((((u32)address << 8) & ~((u32)0x00007F00)) == 0);
+ cl_reg_write(cl_hw, RIU_RC_SW_CTRL_ADDR,
+ ((u32)startdone << 31) | ((u32)more << 30) |
+ ((u32)fastwrspd << 29) | ((u32)fastwrforce << 28) |
+ ((u32)fwrhwenable << 27) | ((u32)fwrswenable << 26) |
+ ((u32)fwrenable << 25) | ((u32)rfresetb << 24) |
+ ((u32)prescaler << 19) | ((u32)readnotwrite << 16) |
+ ((u32)address << 8) | ((u32)data << 0));
+}
+
+static inline u8 riu_rc_sw_ctrl_start_done_getf(struct cl_hw *cl_hw)
+{
+ u32 local_val = cl_reg_read(cl_hw, RIU_RC_SW_CTRL_ADDR);
+
+ return ((local_val & ((u32)0x80000000)) >> 31);
+}
+
+static inline u8 riu_rc_sw_ctrl_data_getf(struct cl_hw *cl_hw)
+{
+ u32 local_val = cl_reg_read(cl_hw, RIU_RC_SW_CTRL_ADDR);
+
+ return ((local_val & ((u32)0x000000FF)) >> 0);
+}
+
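+/*
+ * Illustrative sketch only (not part of the generated register map): one way
+ * the SW_CTRL accessors above could be combined to read a radio register over
+ * the SPI interface.  The handshake assumed here (software sets START_DONE
+ * together with READNOTWRITE and the address, hardware clears START_DONE when
+ * the transfer completes) and the chosen enable/prescaler placeholder values
+ * are assumptions, not taken from the RIU_RC documentation:
+ *
+ *	riu_rc_sw_ctrl_pack(cl_hw, 1, 0, 0, 0, 1, 1, 1, 1, prescaler, 1, addr, 0);
+ *	while (riu_rc_sw_ctrl_start_done_getf(cl_hw))
+ *		cpu_relax();
+ *	val = riu_rc_sw_ctrl_data_getf(cl_hw);
+ */
+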
+/*
+ * @brief RF_LNA_LUT register definition
+ * These registers provide control of the RF LNA assertion by decoding each possible value
+ * of the AGC LNA gain setting, from minimum LNA gain to maximum LNA gain.
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 26:24 RFLNALUT6 0x6
+ * 22:20 RFLNALUT5 0x5
+ * 18:16 RFLNALUT4 0x4
+ * 14:12 RFLNALUT3 0x3
+ * 10:08 RFLNALUT2 0x2
+ * 06:04 RFLNALUT1 0x1
+ * 02:00 RFLNALUT0 0x0
+ * </pre>
+ */
+
+/* Field definitions */
+#define RIU_RC_RF_LNA_LUT_RFLNALUT_6_MASK ((u32)0x07000000)
+#define RIU_RC_RF_LNA_LUT_RFLNALUT_6_LSB 24
+#define RIU_RC_RF_LNA_LUT_RFLNALUT_6_WIDTH ((u32)0x00000003)
+#define RIU_RC_RF_LNA_LUT_RFLNALUT_5_MASK ((u32)0x00700000)
+#define RIU_RC_RF_LNA_LUT_RFLNALUT_5_LSB 20
+#define RIU_RC_RF_LNA_LUT_RFLNALUT_5_WIDTH ((u32)0x00000003)
+#define RIU_RC_RF_LNA_LUT_RFLNALUT_4_MASK ((u32)0x00070000)
+#define RIU_RC_RF_LNA_LUT_RFLNALUT_4_LSB 16
+#define RIU_RC_RF_LNA_LUT_RFLNALUT_4_WIDTH ((u32)0x00000003)
+#define RIU_RC_RF_LNA_LUT_RFLNALUT_3_MASK ((u32)0x00007000)
+#define RIU_RC_RF_LNA_LUT_RFLNALUT_3_LSB 12
+#define RIU_RC_RF_LNA_LUT_RFLNALUT_3_WIDTH ((u32)0x00000003)
+#define RIU_RC_RF_LNA_LUT_RFLNALUT_2_MASK ((u32)0x00000700)
+#define RIU_RC_RF_LNA_LUT_RFLNALUT_2_LSB 8
+#define RIU_RC_RF_LNA_LUT_RFLNALUT_2_WIDTH ((u32)0x00000003)
+#define RIU_RC_RF_LNA_LUT_RFLNALUT_1_MASK ((u32)0x00000070)
+#define RIU_RC_RF_LNA_LUT_RFLNALUT_1_LSB 4
+#define RIU_RC_RF_LNA_LUT_RFLNALUT_1_WIDTH ((u32)0x00000003)
+#define RIU_RC_RF_LNA_LUT_RFLNALUT_0_MASK ((u32)0x00000007)
+#define RIU_RC_RF_LNA_LUT_RFLNALUT_0_LSB 0
+#define RIU_RC_RF_LNA_LUT_RFLNALUT_0_WIDTH ((u32)0x00000003)
+
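+/*
+ * Illustrative sketch only: the MASK/LSB pairs above follow the usual
+ * convention of this file, so a single LUT entry would be extracted as:
+ *
+ *	lut3 = (regval & RIU_RC_RF_LNA_LUT_RFLNALUT_3_MASK) >>
+ *	       RIU_RC_RF_LNA_LUT_RFLNALUT_3_LSB;
+ *
+ * The RF_LNA_LUT address and its get/set accessors are assumed to be
+ * generated elsewhere and are not part of this excerpt.
+ */
+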
+#define REG_IO_CTRL_BASE_ADDR 0x007C7000
+
+/*
+ * @brief RX_ACTIVE_0 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 1
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 1
+ * 09 input_enable 1
+ * 08 SLEW_RATE 0
+ * 07:06 DRIVER_PULL_STATE 0x3
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_RX_ACTIVE_0_ADDR (REG_IO_CTRL_BASE_ADDR + 0x0000005C)
+#define IO_CTRL_RX_ACTIVE_0_OFFSET 0x0000005C
+#define IO_CTRL_RX_ACTIVE_0_INDEX 0x00000017
+#define IO_CTRL_RX_ACTIVE_0_RESET 0x000026D8
+
+static inline void io_ctrl_rx_active_0_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_RX_ACTIVE_0_ADDR, value);
+}
+
+/* Field definitions */
+#define IO_CTRL_RX_ACTIVE_0_GPIO_IN_BIT ((u32)0x00002000)
+#define IO_CTRL_RX_ACTIVE_0_GPIO_IN_POS 13
+#define IO_CTRL_RX_ACTIVE_0_GPIO_OUT_BIT ((u32)0x00001000)
+#define IO_CTRL_RX_ACTIVE_0_GPIO_OUT_POS 12
+#define IO_CTRL_RX_ACTIVE_0_GPIO_OE_BIT ((u32)0x00000800)
+#define IO_CTRL_RX_ACTIVE_0_GPIO_OE_POS 11
+#define IO_CTRL_RX_ACTIVE_0_GPIO_ENABLE_BIT ((u32)0x00000400)
+#define IO_CTRL_RX_ACTIVE_0_GPIO_ENABLE_POS 10
+#define IO_CTRL_RX_ACTIVE_0_INPUT_ENABLE_BIT ((u32)0x00000200)
+#define IO_CTRL_RX_ACTIVE_0_INPUT_ENABLE_POS 9
+#define IO_CTRL_RX_ACTIVE_0_SLEW_RATE_BIT ((u32)0x00000100)
+#define IO_CTRL_RX_ACTIVE_0_SLEW_RATE_POS 8
+#define IO_CTRL_RX_ACTIVE_0_DRIVER_PULL_STATE_MASK ((u32)0x000000C0)
+#define IO_CTRL_RX_ACTIVE_0_DRIVER_PULL_STATE_LSB 6
+#define IO_CTRL_RX_ACTIVE_0_DRIVER_PULL_STATE_WIDTH ((u32)0x00000002)
+#define IO_CTRL_RX_ACTIVE_0_OUTPUT_PAD_STRENGTH_MASK ((u32)0x00000030)
+#define IO_CTRL_RX_ACTIVE_0_OUTPUT_PAD_STRENGTH_LSB 4
+#define IO_CTRL_RX_ACTIVE_0_OUTPUT_PAD_STRENGTH_WIDTH ((u32)0x00000002)
+#define IO_CTRL_RX_ACTIVE_0_SCHMIT_TRIGER_BIT ((u32)0x00000008)
+#define IO_CTRL_RX_ACTIVE_0_SCHMIT_TRIGER_POS 3
+#define IO_CTRL_RX_ACTIVE_0_MUX_SELECT_MASK ((u32)0x00000007)
+#define IO_CTRL_RX_ACTIVE_0_MUX_SELECT_LSB 0
+#define IO_CTRL_RX_ACTIVE_0_MUX_SELECT_WIDTH ((u32)0x00000003)
+
+static inline void io_ctrl_rx_active_0_gpio_enable_setf(struct cl_chip *chip, u8 gpioenable)
+{
+ ASSERT_ERR_CHIP((((u32)gpioenable << 10) & ~((u32)0x00000400)) == 0);
+ cl_reg_write_chip(chip, IO_CTRL_RX_ACTIVE_0_ADDR,
+ (cl_reg_read_chip(chip, IO_CTRL_RX_ACTIVE_0_ADDR) & ~((u32)0x00000400)) |
+ ((u32)gpioenable << 10));
+}
+
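+/*
+ * Illustrative sketch only: fields of the pad-control registers that have no
+ * generated setf helper (MUX_SELECT, for example) would follow the same
+ * read-modify-write pattern used by io_ctrl_rx_active_0_gpio_enable_setf()
+ * above:
+ *
+ *	u32 v = cl_reg_read_chip(chip, IO_CTRL_RX_ACTIVE_0_ADDR);
+ *
+ *	v &= ~IO_CTRL_RX_ACTIVE_0_MUX_SELECT_MASK;
+ *	v |= ((u32)mux << IO_CTRL_RX_ACTIVE_0_MUX_SELECT_LSB) &
+ *	     IO_CTRL_RX_ACTIVE_0_MUX_SELECT_MASK;
+ *	cl_reg_write_chip(chip, IO_CTRL_RX_ACTIVE_0_ADDR, v);
+ */
+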
+/*
+ * @brief RX_ACTIVE_1 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 1
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 1
+ * 09 input_enable 1
+ * 08 SLEW_RATE 0
+ * 07:06 DRIVER_PULL_STATE 0x3
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_RX_ACTIVE_1_ADDR (REG_IO_CTRL_BASE_ADDR + 0x00000060)
+#define IO_CTRL_RX_ACTIVE_1_OFFSET 0x00000060
+#define IO_CTRL_RX_ACTIVE_1_INDEX 0x00000018
+#define IO_CTRL_RX_ACTIVE_1_RESET 0x000026D8
+
+static inline void io_ctrl_rx_active_1_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_RX_ACTIVE_1_ADDR, value);
+}
+
+static inline void io_ctrl_rx_active_1_gpio_enable_setf(struct cl_chip *chip, u8 gpioenable)
+{
+ ASSERT_ERR_CHIP((((u32)gpioenable << 10) & ~((u32)0x00000400)) == 0);
+ cl_reg_write_chip(chip, IO_CTRL_RX_ACTIVE_1_ADDR,
+ (cl_reg_read_chip(chip, IO_CTRL_RX_ACTIVE_1_ADDR) & ~((u32)0x00000400)) |
+ ((u32)gpioenable << 10));
+}
+
+/*
+ * @brief RX_ACTIVE_2 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 1
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 1
+ * 09 input_enable 1
+ * 08 SLEW_RATE 0
+ * 07:06 DRIVER_PULL_STATE 0x3
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_RX_ACTIVE_2_ADDR (REG_IO_CTRL_BASE_ADDR + 0x00000064)
+#define IO_CTRL_RX_ACTIVE_2_OFFSET 0x00000064
+#define IO_CTRL_RX_ACTIVE_2_INDEX 0x00000019
+#define IO_CTRL_RX_ACTIVE_2_RESET 0x000026D8
+
+static inline void io_ctrl_rx_active_2_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_RX_ACTIVE_2_ADDR, value);
+}
+
+static inline void io_ctrl_rx_active_2_gpio_enable_setf(struct cl_chip *chip, u8 gpioenable)
+{
+ ASSERT_ERR_CHIP((((u32)gpioenable << 10) & ~((u32)0x00000400)) == 0);
+ cl_reg_write_chip(chip, IO_CTRL_RX_ACTIVE_2_ADDR,
+ (cl_reg_read_chip(chip, IO_CTRL_RX_ACTIVE_2_ADDR) & ~((u32)0x00000400)) |
+ ((u32)gpioenable << 10));
+}
+
+/*
+ * @brief RX_ACTIVE_3 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 1
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 1
+ * 09 input_enable 1
+ * 08 SLEW_RATE 0
+ * 07:06 DRIVER_PULL_STATE 0x3
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_RX_ACTIVE_3_ADDR (REG_IO_CTRL_BASE_ADDR + 0x00000068)
+#define IO_CTRL_RX_ACTIVE_3_OFFSET 0x00000068
+#define IO_CTRL_RX_ACTIVE_3_INDEX 0x0000001A
+#define IO_CTRL_RX_ACTIVE_3_RESET 0x000026D8
+
+static inline void io_ctrl_rx_active_3_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_RX_ACTIVE_3_ADDR, value);
+}
+
+static inline void io_ctrl_rx_active_3_gpio_enable_setf(struct cl_chip *chip, u8 gpioenable)
+{
+ ASSERT_ERR_CHIP((((u32)gpioenable << 10) & ~((u32)0x00000400)) == 0);
+ cl_reg_write_chip(chip, IO_CTRL_RX_ACTIVE_3_ADDR,
+ (cl_reg_read_chip(chip, IO_CTRL_RX_ACTIVE_3_ADDR) & ~((u32)0x00000400)) |
+ ((u32)gpioenable << 10));
+}
+
+/*
+ * @brief RX_ACTIVE_4 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 1
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 1
+ * 09 input_enable 1
+ * 08 SLEW_RATE 0
+ * 07:06 DRIVER_PULL_STATE 0x3
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_RX_ACTIVE_4_ADDR (REG_IO_CTRL_BASE_ADDR + 0x0000006C)
+#define IO_CTRL_RX_ACTIVE_4_OFFSET 0x0000006C
+#define IO_CTRL_RX_ACTIVE_4_INDEX 0x0000001B
+#define IO_CTRL_RX_ACTIVE_4_RESET 0x000026D8
+
+static inline void io_ctrl_rx_active_4_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_RX_ACTIVE_4_ADDR, value);
+}
+
+static inline void io_ctrl_rx_active_4_gpio_enable_setf(struct cl_chip *chip, u8 gpioenable)
+{
+ ASSERT_ERR_CHIP((((u32)gpioenable << 10) & ~((u32)0x00000400)) == 0);
+ cl_reg_write_chip(chip, IO_CTRL_RX_ACTIVE_4_ADDR,
+ (cl_reg_read_chip(chip, IO_CTRL_RX_ACTIVE_4_ADDR) & ~((u32)0x00000400)) |
+ ((u32)gpioenable << 10));
+}
+
+/*
+ * @brief RX_ACTIVE_5 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 1
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 1
+ * 09 input_enable 1
+ * 08 SLEW_RATE 0
+ * 07:06 DRIVER_PULL_STATE 0x3
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_RX_ACTIVE_5_ADDR (REG_IO_CTRL_BASE_ADDR + 0x00000070)
+#define IO_CTRL_RX_ACTIVE_5_OFFSET 0x00000070
+#define IO_CTRL_RX_ACTIVE_5_INDEX 0x0000001C
+#define IO_CTRL_RX_ACTIVE_5_RESET 0x000026D8
+
+static inline void io_ctrl_rx_active_5_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_RX_ACTIVE_5_ADDR, value);
+}
+
+static inline void io_ctrl_rx_active_5_gpio_enable_setf(struct cl_chip *chip, u8 gpioenable)
+{
+ ASSERT_ERR_CHIP((((u32)gpioenable << 10) & ~((u32)0x00000400)) == 0);
+ cl_reg_write_chip(chip, IO_CTRL_RX_ACTIVE_5_ADDR,
+ (cl_reg_read_chip(chip, IO_CTRL_RX_ACTIVE_5_ADDR) & ~((u32)0x00000400)) |
+ ((u32)gpioenable << 10));
+}
+
+/*
+ * @brief RX_ACTIVE_6 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 1
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 1
+ * 09 input_enable 1
+ * 08 SLEW_RATE 0
+ * 07:06 DRIVER_PULL_STATE 0x3
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_RX_ACTIVE_6_ADDR (REG_IO_CTRL_BASE_ADDR + 0x00000074)
+#define IO_CTRL_RX_ACTIVE_6_OFFSET 0x00000074
+#define IO_CTRL_RX_ACTIVE_6_INDEX 0x0000001D
+#define IO_CTRL_RX_ACTIVE_6_RESET 0x000026D8
+
+static inline void io_ctrl_rx_active_6_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_RX_ACTIVE_6_ADDR, value);
+}
+
+static inline void io_ctrl_rx_active_6_gpio_enable_setf(struct cl_chip *chip, u8 gpioenable)
+{
+ ASSERT_ERR_CHIP((((u32)gpioenable << 10) & ~((u32)0x00000400)) == 0);
+ cl_reg_write_chip(chip, IO_CTRL_RX_ACTIVE_6_ADDR,
+ (cl_reg_read_chip(chip, IO_CTRL_RX_ACTIVE_6_ADDR) & ~((u32)0x00000400)) |
+ ((u32)gpioenable << 10));
+}
+
+/*
+ * @brief RX_ACTIVE_7 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 1
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 1
+ * 09 input_enable 1
+ * 08 SLEW_RATE 0
+ * 07:06 DRIVER_PULL_STATE 0x3
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_RX_ACTIVE_7_ADDR (REG_IO_CTRL_BASE_ADDR + 0x00000078)
+#define IO_CTRL_RX_ACTIVE_7_OFFSET 0x00000078
+#define IO_CTRL_RX_ACTIVE_7_INDEX 0x0000001E
+#define IO_CTRL_RX_ACTIVE_7_RESET 0x000026D8
+
+static inline void io_ctrl_rx_active_7_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_RX_ACTIVE_7_ADDR, value);
+}
+
+static inline void io_ctrl_rx_active_7_gpio_enable_setf(struct cl_chip *chip, u8 gpioenable)
+{
+ ASSERT_ERR_CHIP((((u32)gpioenable << 10) & ~((u32)0x00000400)) == 0);
+ cl_reg_write_chip(chip, IO_CTRL_RX_ACTIVE_7_ADDR,
+ (cl_reg_read_chip(chip, IO_CTRL_RX_ACTIVE_7_ADDR) & ~((u32)0x00000400)) |
+ ((u32)gpioenable << 10));
+}
+
+/*
+ * @brief LNA_ENABLE_0 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 0
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 1
+ * 09 input_enable 1
+ * 08 SLEW_RATE 0
+ * 07:06 DRIVER_PULL_STATE 0x2
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_LNA_ENABLE_0_ADDR (REG_IO_CTRL_BASE_ADDR + 0x0000007C)
+#define IO_CTRL_LNA_ENABLE_0_OFFSET 0x0000007C
+#define IO_CTRL_LNA_ENABLE_0_INDEX 0x0000001F
+#define IO_CTRL_LNA_ENABLE_0_RESET 0x00000698
+
+static inline void io_ctrl_lna_enable_0_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_LNA_ENABLE_0_ADDR, value);
+}
+
+/* Field definitions */
+#define IO_CTRL_LNA_ENABLE_0_GPIO_IN_BIT ((u32)0x00002000)
+#define IO_CTRL_LNA_ENABLE_0_GPIO_IN_POS 13
+#define IO_CTRL_LNA_ENABLE_0_GPIO_OUT_BIT ((u32)0x00001000)
+#define IO_CTRL_LNA_ENABLE_0_GPIO_OUT_POS 12
+#define IO_CTRL_LNA_ENABLE_0_GPIO_OE_BIT ((u32)0x00000800)
+#define IO_CTRL_LNA_ENABLE_0_GPIO_OE_POS 11
+#define IO_CTRL_LNA_ENABLE_0_GPIO_ENABLE_BIT ((u32)0x00000400)
+#define IO_CTRL_LNA_ENABLE_0_GPIO_ENABLE_POS 10
+#define IO_CTRL_LNA_ENABLE_0_INPUT_ENABLE_BIT ((u32)0x00000200)
+#define IO_CTRL_LNA_ENABLE_0_INPUT_ENABLE_POS 9
+#define IO_CTRL_LNA_ENABLE_0_SLEW_RATE_BIT ((u32)0x00000100)
+#define IO_CTRL_LNA_ENABLE_0_SLEW_RATE_POS 8
+#define IO_CTRL_LNA_ENABLE_0_DRIVER_PULL_STATE_MASK ((u32)0x000000C0)
+#define IO_CTRL_LNA_ENABLE_0_DRIVER_PULL_STATE_LSB 6
+#define IO_CTRL_LNA_ENABLE_0_DRIVER_PULL_STATE_WIDTH ((u32)0x00000002)
+#define IO_CTRL_LNA_ENABLE_0_OUTPUT_PAD_STRENGTH_MASK ((u32)0x00000030)
+#define IO_CTRL_LNA_ENABLE_0_OUTPUT_PAD_STRENGTH_LSB 4
+#define IO_CTRL_LNA_ENABLE_0_OUTPUT_PAD_STRENGTH_WIDTH ((u32)0x00000002)
+#define IO_CTRL_LNA_ENABLE_0_SCHMIT_TRIGER_BIT ((u32)0x00000008)
+#define IO_CTRL_LNA_ENABLE_0_SCHMIT_TRIGER_POS 3
+#define IO_CTRL_LNA_ENABLE_0_MUX_SELECT_MASK ((u32)0x00000007)
+#define IO_CTRL_LNA_ENABLE_0_MUX_SELECT_LSB 0
+#define IO_CTRL_LNA_ENABLE_0_MUX_SELECT_WIDTH ((u32)0x00000003)
+
+static inline void io_ctrl_lna_enable_0_gpio_enable_setf(struct cl_chip *chip, u8 gpioenable)
+{
+ ASSERT_ERR_CHIP((((u32)gpioenable << 10) & ~((u32)0x00000400)) == 0);
+ cl_reg_write_chip(chip, IO_CTRL_LNA_ENABLE_0_ADDR,
+ (cl_reg_read_chip(chip, IO_CTRL_LNA_ENABLE_0_ADDR) & ~((u32)0x00000400)) |
+ ((u32)gpioenable << 10));
+}
+
+/*
+ * @brief LNA_ENABLE_1 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 0
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 1
+ * 09 input_enable 1
+ * 08 SLEW_RATE 0
+ * 07:06 DRIVER_PULL_STATE 0x2
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_LNA_ENABLE_1_ADDR (REG_IO_CTRL_BASE_ADDR + 0x00000080)
+#define IO_CTRL_LNA_ENABLE_1_OFFSET 0x00000080
+#define IO_CTRL_LNA_ENABLE_1_INDEX 0x00000020
+#define IO_CTRL_LNA_ENABLE_1_RESET 0x00000698
+
+static inline void io_ctrl_lna_enable_1_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_LNA_ENABLE_1_ADDR, value);
+}
+
+static inline void io_ctrl_lna_enable_1_gpio_enable_setf(struct cl_chip *chip, u8 gpioenable)
+{
+ ASSERT_ERR_CHIP((((u32)gpioenable << 10) & ~((u32)0x00000400)) == 0);
+ cl_reg_write_chip(chip, IO_CTRL_LNA_ENABLE_1_ADDR,
+ (cl_reg_read_chip(chip, IO_CTRL_LNA_ENABLE_1_ADDR) & ~((u32)0x00000400)) |
+ ((u32)gpioenable << 10));
+}
+
+/*
+ * @brief LNA_ENABLE_2 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 0
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 1
+ * 09 input_enable 1
+ * 08 SLEW_RATE 0
+ * 07:06 DRIVER_PULL_STATE 0x2
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_LNA_ENABLE_2_ADDR (REG_IO_CTRL_BASE_ADDR + 0x00000084)
+#define IO_CTRL_LNA_ENABLE_2_OFFSET 0x00000084
+#define IO_CTRL_LNA_ENABLE_2_INDEX 0x00000021
+#define IO_CTRL_LNA_ENABLE_2_RESET 0x00000698
+
+static inline void io_ctrl_lna_enable_2_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_LNA_ENABLE_2_ADDR, value);
+}
+
+static inline void io_ctrl_lna_enable_2_gpio_enable_setf(struct cl_chip *chip, u8 gpioenable)
+{
+ ASSERT_ERR_CHIP((((u32)gpioenable << 10) & ~((u32)0x00000400)) == 0);
+ cl_reg_write_chip(chip, IO_CTRL_LNA_ENABLE_2_ADDR,
+ (cl_reg_read_chip(chip, IO_CTRL_LNA_ENABLE_2_ADDR) & ~((u32)0x00000400)) |
+ ((u32)gpioenable << 10));
+}
+
+/*
+ * @brief LNA_ENABLE_3 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 0
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 1
+ * 09 input_enable 1
+ * 08 SLEW_RATE 0
+ * 07:06 DRIVER_PULL_STATE 0x2
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_LNA_ENABLE_3_ADDR (REG_IO_CTRL_BASE_ADDR + 0x00000088)
+#define IO_CTRL_LNA_ENABLE_3_OFFSET 0x00000088
+#define IO_CTRL_LNA_ENABLE_3_INDEX 0x00000022
+#define IO_CTRL_LNA_ENABLE_3_RESET 0x00000698
+
+static inline void io_ctrl_lna_enable_3_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_LNA_ENABLE_3_ADDR, value);
+}
+
+static inline void io_ctrl_lna_enable_3_gpio_enable_setf(struct cl_chip *chip, u8 gpioenable)
+{
+ ASSERT_ERR_CHIP((((u32)gpioenable << 10) & ~((u32)0x00000400)) == 0);
+ cl_reg_write_chip(chip, IO_CTRL_LNA_ENABLE_3_ADDR,
+ (cl_reg_read_chip(chip, IO_CTRL_LNA_ENABLE_3_ADDR) & ~((u32)0x00000400)) |
+ ((u32)gpioenable << 10));
+}
+
+/*
+ * @brief LNA_ENABLE_4 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 0
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 1
+ * 09 input_enable 1
+ * 08 SLEW_RATE 0
+ * 07:06 DRIVER_PULL_STATE 0x2
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_LNA_ENABLE_4_ADDR (REG_IO_CTRL_BASE_ADDR + 0x0000008C)
+#define IO_CTRL_LNA_ENABLE_4_OFFSET 0x0000008C
+#define IO_CTRL_LNA_ENABLE_4_INDEX 0x00000023
+#define IO_CTRL_LNA_ENABLE_4_RESET 0x00000698
+
+static inline void io_ctrl_lna_enable_4_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_LNA_ENABLE_4_ADDR, value);
+}
+
+static inline void io_ctrl_lna_enable_4_gpio_enable_setf(struct cl_chip *chip, u8 gpioenable)
+{
+ ASSERT_ERR_CHIP((((u32)gpioenable << 10) & ~((u32)0x00000400)) == 0);
+ cl_reg_write_chip(chip, IO_CTRL_LNA_ENABLE_4_ADDR,
+ (cl_reg_read_chip(chip, IO_CTRL_LNA_ENABLE_4_ADDR) & ~((u32)0x00000400)) |
+ ((u32)gpioenable << 10));
+}
+
+/*
+ * @brief LNA_ENABLE_5 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 0
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 1
+ * 09 input_enable 1
+ * 08 SLEW_RATE 0
+ * 07:06 DRIVER_PULL_STATE 0x2
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_LNA_ENABLE_5_ADDR (REG_IO_CTRL_BASE_ADDR + 0x00000090)
+#define IO_CTRL_LNA_ENABLE_5_OFFSET 0x00000090
+#define IO_CTRL_LNA_ENABLE_5_INDEX 0x00000024
+#define IO_CTRL_LNA_ENABLE_5_RESET 0x00000698
+
+static inline void io_ctrl_lna_enable_5_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_LNA_ENABLE_5_ADDR, value);
+}
+
+static inline void io_ctrl_lna_enable_5_gpio_enable_setf(struct cl_chip *chip, u8 gpioenable)
+{
+ ASSERT_ERR_CHIP((((u32)gpioenable << 10) & ~((u32)0x00000400)) == 0);
+ cl_reg_write_chip(chip, IO_CTRL_LNA_ENABLE_5_ADDR,
+ (cl_reg_read_chip(chip, IO_CTRL_LNA_ENABLE_5_ADDR) & ~((u32)0x00000400)) |
+ ((u32)gpioenable << 10));
+}
+
+/*
+ * @brief LNA_ENABLE_6 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 0
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 1
+ * 09 input_enable 1
+ * 08 SLEW_RATE 0
+ * 07:06 DRIVER_PULL_STATE 0x2
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_LNA_ENABLE_6_ADDR (REG_IO_CTRL_BASE_ADDR + 0x00000094)
+#define IO_CTRL_LNA_ENABLE_6_OFFSET 0x00000094
+#define IO_CTRL_LNA_ENABLE_6_INDEX 0x00000025
+#define IO_CTRL_LNA_ENABLE_6_RESET 0x00000698
+
+static inline void io_ctrl_lna_enable_6_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_LNA_ENABLE_6_ADDR, value);
+}
+
+static inline void io_ctrl_lna_enable_6_gpio_enable_setf(struct cl_chip *chip, u8 gpioenable)
+{
+ ASSERT_ERR_CHIP((((u32)gpioenable << 10) & ~((u32)0x00000400)) == 0);
+ cl_reg_write_chip(chip, IO_CTRL_LNA_ENABLE_6_ADDR,
+ (cl_reg_read_chip(chip, IO_CTRL_LNA_ENABLE_6_ADDR) & ~((u32)0x00000400)) |
+ ((u32)gpioenable << 10));
+}
+
+/*
+ * @brief LNA_ENABLE_7 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 0
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 1
+ * 09 input_enable 1
+ * 08 SLEW_RATE 0
+ * 07:06 DRIVER_PULL_STATE 0x2
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_LNA_ENABLE_7_ADDR (REG_IO_CTRL_BASE_ADDR + 0x00000098)
+#define IO_CTRL_LNA_ENABLE_7_OFFSET 0x00000098
+#define IO_CTRL_LNA_ENABLE_7_INDEX 0x00000026
+#define IO_CTRL_LNA_ENABLE_7_RESET 0x00000698
+
+static inline void io_ctrl_lna_enable_7_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_LNA_ENABLE_7_ADDR, value);
+}
+
+static inline void io_ctrl_lna_enable_7_gpio_enable_setf(struct cl_chip *chip, u8 gpioenable)
+{
+ ASSERT_ERR_CHIP((((u32)gpioenable << 10) & ~((u32)0x00000400)) == 0);
+ cl_reg_write_chip(chip, IO_CTRL_LNA_ENABLE_7_ADDR,
+ (cl_reg_read_chip(chip, IO_CTRL_LNA_ENABLE_7_ADDR) & ~((u32)0x00000400)) |
+ ((u32)gpioenable << 10));
+}
+
+/*
+ * @brief LNA_ENABLE_8 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 0
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 1
+ * 09 input_enable 1
+ * 08 SLEW_RATE 0
+ * 07:06 DRIVER_PULL_STATE 0x2
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_LNA_ENABLE_8_ADDR (REG_IO_CTRL_BASE_ADDR + 0x0000009C)
+#define IO_CTRL_LNA_ENABLE_8_OFFSET 0x0000009C
+#define IO_CTRL_LNA_ENABLE_8_INDEX 0x00000027
+#define IO_CTRL_LNA_ENABLE_8_RESET 0x00000698
+
+static inline void io_ctrl_lna_enable_8_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_LNA_ENABLE_8_ADDR, value);
+}
+
+static inline void io_ctrl_lna_enable_8_gpio_enable_setf(struct cl_chip *chip, u8 gpioenable)
+{
+ ASSERT_ERR_CHIP((((u32)gpioenable << 10) & ~((u32)0x00000400)) == 0);
+ cl_reg_write_chip(chip, IO_CTRL_LNA_ENABLE_8_ADDR,
+ (cl_reg_read_chip(chip, IO_CTRL_LNA_ENABLE_8_ADDR) & ~((u32)0x00000400)) |
+ ((u32)gpioenable << 10));
+}
+
+/*
+ * @brief LNA_ENABLE_9 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 0
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 1
+ * 09 input_enable 1
+ * 08 SLEW_RATE 0
+ * 07:06 DRIVER_PULL_STATE 0x2
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_LNA_ENABLE_9_ADDR (REG_IO_CTRL_BASE_ADDR + 0x000000A0)
+#define IO_CTRL_LNA_ENABLE_9_OFFSET 0x000000A0
+#define IO_CTRL_LNA_ENABLE_9_INDEX 0x00000028
+#define IO_CTRL_LNA_ENABLE_9_RESET 0x00000698
+
+static inline void io_ctrl_lna_enable_9_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_LNA_ENABLE_9_ADDR, value);
+}
+
+static inline void io_ctrl_lna_enable_9_gpio_enable_setf(struct cl_chip *chip, u8 gpioenable)
+{
+ ASSERT_ERR_CHIP((((u32)gpioenable << 10) & ~((u32)0x00000400)) == 0);
+ cl_reg_write_chip(chip, IO_CTRL_LNA_ENABLE_9_ADDR,
+ (cl_reg_read_chip(chip, IO_CTRL_LNA_ENABLE_9_ADDR) & ~((u32)0x00000400)) |
+ ((u32)gpioenable << 10));
+}
+
+/*
+ * @brief LNA_ENABLE_10 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 0
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 1
+ * 09 input_enable 1
+ * 08 SLEW_RATE 0
+ * 07:06 DRIVER_PULL_STATE 0x2
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_LNA_ENABLE_10_ADDR (REG_IO_CTRL_BASE_ADDR + 0x000000A4)
+#define IO_CTRL_LNA_ENABLE_10_OFFSET 0x000000A4
+#define IO_CTRL_LNA_ENABLE_10_INDEX 0x00000029
+#define IO_CTRL_LNA_ENABLE_10_RESET 0x00000698
+
+static inline void io_ctrl_lna_enable_10_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_LNA_ENABLE_10_ADDR, value);
+}
+
+static inline void io_ctrl_lna_enable_10_gpio_enable_setf(struct cl_chip *chip, u8 gpioenable)
+{
+ ASSERT_ERR_CHIP((((u32)gpioenable << 10) & ~((u32)0x00000400)) == 0);
+ cl_reg_write_chip(chip, IO_CTRL_LNA_ENABLE_10_ADDR,
+ (cl_reg_read_chip(chip, IO_CTRL_LNA_ENABLE_10_ADDR) &
+ ~((u32)0x00000400)) | ((u32)gpioenable << 10));
+}
+
+/*
+ * @brief LNA_ENABLE_11 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 0
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 1
+ * 09 input_enable 1
+ * 08 SLEW_RATE 0
+ * 07:06 DRIVER_PULL_STATE 0x2
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_LNA_ENABLE_11_ADDR (REG_IO_CTRL_BASE_ADDR + 0x000000A8)
+#define IO_CTRL_LNA_ENABLE_11_OFFSET 0x000000A8
+#define IO_CTRL_LNA_ENABLE_11_INDEX 0x0000002A
+#define IO_CTRL_LNA_ENABLE_11_RESET 0x00000698
+
+static inline void io_ctrl_lna_enable_11_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_LNA_ENABLE_11_ADDR, value);
+}
+
+static inline void io_ctrl_lna_enable_11_gpio_enable_setf(struct cl_chip *chip, u8 gpioenable)
+{
+ ASSERT_ERR_CHIP((((u32)gpioenable << 10) & ~((u32)0x00000400)) == 0);
+ cl_reg_write_chip(chip, IO_CTRL_LNA_ENABLE_11_ADDR,
+ (cl_reg_read_chip(chip, IO_CTRL_LNA_ENABLE_11_ADDR) &
+ ~((u32)0x00000400)) | ((u32)gpioenable << 10));
+}
+
+/*
+ * @brief PA_ENABLE_0 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 0
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 1
+ * 09 input_enable 1
+ * 08 SLEW_RATE 0
+ * 07:06 DRIVER_PULL_STATE 0x2
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_PA_ENABLE_0_ADDR (REG_IO_CTRL_BASE_ADDR + 0x000000AC)
+#define IO_CTRL_PA_ENABLE_0_OFFSET 0x000000AC
+#define IO_CTRL_PA_ENABLE_0_INDEX 0x0000002B
+#define IO_CTRL_PA_ENABLE_0_RESET 0x00000698
+
+static inline void io_ctrl_pa_enable_0_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_PA_ENABLE_0_ADDR, value);
+}
+
+/* Field definitions */
+#define IO_CTRL_PA_ENABLE_0_GPIO_IN_BIT ((u32)0x00002000)
+#define IO_CTRL_PA_ENABLE_0_GPIO_IN_POS 13
+#define IO_CTRL_PA_ENABLE_0_GPIO_OUT_BIT ((u32)0x00001000)
+#define IO_CTRL_PA_ENABLE_0_GPIO_OUT_POS 12
+#define IO_CTRL_PA_ENABLE_0_GPIO_OE_BIT ((u32)0x00000800)
+#define IO_CTRL_PA_ENABLE_0_GPIO_OE_POS 11
+#define IO_CTRL_PA_ENABLE_0_GPIO_ENABLE_BIT ((u32)0x00000400)
+#define IO_CTRL_PA_ENABLE_0_GPIO_ENABLE_POS 10
+#define IO_CTRL_PA_ENABLE_0_INPUT_ENABLE_BIT ((u32)0x00000200)
+#define IO_CTRL_PA_ENABLE_0_INPUT_ENABLE_POS 9
+#define IO_CTRL_PA_ENABLE_0_SLEW_RATE_BIT ((u32)0x00000100)
+#define IO_CTRL_PA_ENABLE_0_SLEW_RATE_POS 8
+#define IO_CTRL_PA_ENABLE_0_DRIVER_PULL_STATE_MASK ((u32)0x000000C0)
+#define IO_CTRL_PA_ENABLE_0_DRIVER_PULL_STATE_LSB 6
+#define IO_CTRL_PA_ENABLE_0_DRIVER_PULL_STATE_WIDTH ((u32)0x00000002)
+#define IO_CTRL_PA_ENABLE_0_OUTPUT_PAD_STRENGTH_MASK ((u32)0x00000030)
+#define IO_CTRL_PA_ENABLE_0_OUTPUT_PAD_STRENGTH_LSB 4
+#define IO_CTRL_PA_ENABLE_0_OUTPUT_PAD_STRENGTH_WIDTH ((u32)0x00000002)
+#define IO_CTRL_PA_ENABLE_0_SCHMIT_TRIGER_BIT ((u32)0x00000008)
+#define IO_CTRL_PA_ENABLE_0_SCHMIT_TRIGER_POS 3
+#define IO_CTRL_PA_ENABLE_0_MUX_SELECT_MASK ((u32)0x00000007)
+#define IO_CTRL_PA_ENABLE_0_MUX_SELECT_LSB 0
+#define IO_CTRL_PA_ENABLE_0_MUX_SELECT_WIDTH ((u32)0x00000003)
+
+static inline void io_ctrl_pa_enable_0_gpio_enable_setf(struct cl_chip *chip, u8 gpioenable)
+{
+ ASSERT_ERR_CHIP((((u32)gpioenable << 10) & ~((u32)0x00000400)) == 0);
+ cl_reg_write_chip(chip, IO_CTRL_PA_ENABLE_0_ADDR,
+ (cl_reg_read_chip(chip, IO_CTRL_PA_ENABLE_0_ADDR) & ~((u32)0x00000400)) |
+ ((u32)gpioenable << 10));
+}
+
+/*
+ * @brief PA_ENABLE_1 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 0
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 1
+ * 09 input_enable 1
+ * 08 SLEW_RATE 0
+ * 07:06 DRIVER_PULL_STATE 0x2
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_PA_ENABLE_1_ADDR (REG_IO_CTRL_BASE_ADDR + 0x000000B0)
+#define IO_CTRL_PA_ENABLE_1_OFFSET 0x000000B0
+#define IO_CTRL_PA_ENABLE_1_INDEX 0x0000002C
+#define IO_CTRL_PA_ENABLE_1_RESET 0x00000698
+
+static inline void io_ctrl_pa_enable_1_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_PA_ENABLE_1_ADDR, value);
+}
+
+static inline void io_ctrl_pa_enable_1_gpio_enable_setf(struct cl_chip *chip, u8 gpioenable)
+{
+ ASSERT_ERR_CHIP((((u32)gpioenable << 10) & ~((u32)0x00000400)) == 0);
+ cl_reg_write_chip(chip, IO_CTRL_PA_ENABLE_1_ADDR,
+ (cl_reg_read_chip(chip, IO_CTRL_PA_ENABLE_1_ADDR) & ~((u32)0x00000400)) |
+ ((u32)gpioenable << 10));
+}
+
+/*
+ * @brief PA_ENABLE_2 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 0
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 1
+ * 09 input_enable 1
+ * 08 SLEW_RATE 0
+ * 07:06 DRIVER_PULL_STATE 0x2
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_PA_ENABLE_2_ADDR (REG_IO_CTRL_BASE_ADDR + 0x000000B4)
+#define IO_CTRL_PA_ENABLE_2_OFFSET 0x000000B4
+#define IO_CTRL_PA_ENABLE_2_INDEX 0x0000002D
+#define IO_CTRL_PA_ENABLE_2_RESET 0x00000698
+
+static inline void io_ctrl_pa_enable_2_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_PA_ENABLE_2_ADDR, value);
+}
+
+static inline void io_ctrl_pa_enable_2_gpio_enable_setf(struct cl_chip *chip, u8 gpioenable)
+{
+ ASSERT_ERR_CHIP((((u32)gpioenable << 10) & ~((u32)0x00000400)) == 0);
+ cl_reg_write_chip(chip, IO_CTRL_PA_ENABLE_2_ADDR,
+ (cl_reg_read_chip(chip, IO_CTRL_PA_ENABLE_2_ADDR) & ~((u32)0x00000400)) |
+ ((u32)gpioenable << 10));
+}
+
+/*
+ * @brief PA_ENABLE_3 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 0
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 1
+ * 09 input_enable 1
+ * 08 SLEW_RATE 0
+ * 07:06 DRIVER_PULL_STATE 0x2
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_PA_ENABLE_3_ADDR (REG_IO_CTRL_BASE_ADDR + 0x000000B8)
+#define IO_CTRL_PA_ENABLE_3_OFFSET 0x000000B8
+#define IO_CTRL_PA_ENABLE_3_INDEX 0x0000002E
+#define IO_CTRL_PA_ENABLE_3_RESET 0x00000698
+
+static inline void io_ctrl_pa_enable_3_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_PA_ENABLE_3_ADDR, value);
+}
+
+static inline void io_ctrl_pa_enable_3_gpio_enable_setf(struct cl_chip *chip, u8 gpioenable)
+{
+ ASSERT_ERR_CHIP((((u32)gpioenable << 10) & ~((u32)0x00000400)) == 0);
+ cl_reg_write_chip(chip, IO_CTRL_PA_ENABLE_3_ADDR,
+ (cl_reg_read_chip(chip, IO_CTRL_PA_ENABLE_3_ADDR) & ~((u32)0x00000400)) |
+ ((u32)gpioenable << 10));
+}
+
+/*
+ * @brief PA_ENABLE_4 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 0
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 1
+ * 09 input_enable 1
+ * 08 SLEW_RATE 0
+ * 07:06 DRIVER_PULL_STATE 0x2
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_PA_ENABLE_4_ADDR (REG_IO_CTRL_BASE_ADDR + 0x000000BC)
+#define IO_CTRL_PA_ENABLE_4_OFFSET 0x000000BC
+#define IO_CTRL_PA_ENABLE_4_INDEX 0x0000002F
+#define IO_CTRL_PA_ENABLE_4_RESET 0x00000698
+
+static inline void io_ctrl_pa_enable_4_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_PA_ENABLE_4_ADDR, value);
+}
+
+static inline void io_ctrl_pa_enable_4_gpio_enable_setf(struct cl_chip *chip, u8 gpioenable)
+{
+ ASSERT_ERR_CHIP((((u32)gpioenable << 10) & ~((u32)0x00000400)) == 0);
+ cl_reg_write_chip(chip, IO_CTRL_PA_ENABLE_4_ADDR,
+ (cl_reg_read_chip(chip, IO_CTRL_PA_ENABLE_4_ADDR) & ~((u32)0x00000400)) |
+ ((u32)gpioenable << 10));
+}
+
+/*
+ * @brief PA_ENABLE_5 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 0
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 1
+ * 09 input_enable 1
+ * 08 SLEW_RATE 0
+ * 07:06 DRIVER_PULL_STATE 0x2
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_PA_ENABLE_5_ADDR (REG_IO_CTRL_BASE_ADDR + 0x000000C0)
+#define IO_CTRL_PA_ENABLE_5_OFFSET 0x000000C0
+#define IO_CTRL_PA_ENABLE_5_INDEX 0x00000030
+#define IO_CTRL_PA_ENABLE_5_RESET 0x00000698
+
+static inline void io_ctrl_pa_enable_5_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_PA_ENABLE_5_ADDR, value);
+}
+
+static inline void io_ctrl_pa_enable_5_gpio_enable_setf(struct cl_chip *chip, u8 gpioenable)
+{
+ ASSERT_ERR_CHIP((((u32)gpioenable << 10) & ~((u32)0x00000400)) == 0);
+ cl_reg_write_chip(chip, IO_CTRL_PA_ENABLE_5_ADDR,
+ (cl_reg_read_chip(chip, IO_CTRL_PA_ENABLE_5_ADDR) & ~((u32)0x00000400)) |
+ ((u32)gpioenable << 10));
+}
+
+/*
+ * @brief PA_ENABLE_6 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 0
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 1
+ * 09 input_enable 1
+ * 08 SLEW_RATE 0
+ * 07:06 DRIVER_PULL_STATE 0x2
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_PA_ENABLE_6_ADDR (REG_IO_CTRL_BASE_ADDR + 0x000000C4)
+#define IO_CTRL_PA_ENABLE_6_OFFSET 0x000000C4
+#define IO_CTRL_PA_ENABLE_6_INDEX 0x00000031
+#define IO_CTRL_PA_ENABLE_6_RESET 0x00000698
+
+static inline void io_ctrl_pa_enable_6_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_PA_ENABLE_6_ADDR, value);
+}
+
+static inline void io_ctrl_pa_enable_6_gpio_enable_setf(struct cl_chip *chip, u8 gpioenable)
+{
+ ASSERT_ERR_CHIP((((u32)gpioenable << 10) & ~((u32)0x00000400)) == 0);
+ cl_reg_write_chip(chip, IO_CTRL_PA_ENABLE_6_ADDR,
+ (cl_reg_read_chip(chip, IO_CTRL_PA_ENABLE_6_ADDR) & ~((u32)0x00000400)) |
+ ((u32)gpioenable << 10));
+}
+
+/*
+ * @brief PA_ENABLE_7 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 0
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 1
+ * 09 input_enable 1
+ * 08 SLEW_RATE 0
+ * 07:06 DRIVER_PULL_STATE 0x2
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_PA_ENABLE_7_ADDR (REG_IO_CTRL_BASE_ADDR + 0x000000C8)
+#define IO_CTRL_PA_ENABLE_7_OFFSET 0x000000C8
+#define IO_CTRL_PA_ENABLE_7_INDEX 0x00000032
+#define IO_CTRL_PA_ENABLE_7_RESET 0x00000698
+
+static inline void io_ctrl_pa_enable_7_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_PA_ENABLE_7_ADDR, value);
+}
+
+static inline void io_ctrl_pa_enable_7_gpio_enable_setf(struct cl_chip *chip, u8 gpioenable)
+{
+ ASSERT_ERR_CHIP((((u32)gpioenable << 10) & ~((u32)0x00000400)) == 0);
+ cl_reg_write_chip(chip, IO_CTRL_PA_ENABLE_7_ADDR,
+ (cl_reg_read_chip(chip, IO_CTRL_PA_ENABLE_7_ADDR) & ~((u32)0x00000400)) |
+ ((u32)gpioenable << 10));
+}
+
+/*
+ * @brief PA_ENABLE_8 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 0
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 1
+ * 09 input_enable 1
+ * 08 SLEW_RATE 0
+ * 07:06 DRIVER_PULL_STATE 0x2
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_PA_ENABLE_8_ADDR (REG_IO_CTRL_BASE_ADDR + 0x000000CC)
+#define IO_CTRL_PA_ENABLE_8_OFFSET 0x000000CC
+#define IO_CTRL_PA_ENABLE_8_INDEX 0x00000033
+#define IO_CTRL_PA_ENABLE_8_RESET 0x00000698
+
+static inline void io_ctrl_pa_enable_8_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_PA_ENABLE_8_ADDR, value);
+}
+
+static inline void io_ctrl_pa_enable_8_gpio_enable_setf(struct cl_chip *chip, u8 gpioenable)
+{
+ ASSERT_ERR_CHIP((((u32)gpioenable << 10) & ~((u32)0x00000400)) == 0);
+ cl_reg_write_chip(chip, IO_CTRL_PA_ENABLE_8_ADDR,
+ (cl_reg_read_chip(chip, IO_CTRL_PA_ENABLE_8_ADDR) & ~((u32)0x00000400)) |
+ ((u32)gpioenable << 10));
+}
+
+/*
+ * @brief PA_ENABLE_9 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 0
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 1
+ * 09 input_enable 1
+ * 08 SLEW_RATE 0
+ * 07:06 DRIVER_PULL_STATE 0x2
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_PA_ENABLE_9_ADDR (REG_IO_CTRL_BASE_ADDR + 0x000000D0)
+#define IO_CTRL_PA_ENABLE_9_OFFSET 0x000000D0
+#define IO_CTRL_PA_ENABLE_9_INDEX 0x00000034
+#define IO_CTRL_PA_ENABLE_9_RESET 0x00000698
+
+static inline void io_ctrl_pa_enable_9_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_PA_ENABLE_9_ADDR, value);
+}
+
+static inline void io_ctrl_pa_enable_9_gpio_enable_setf(struct cl_chip *chip, u8 gpioenable)
+{
+ ASSERT_ERR_CHIP((((u32)gpioenable << 10) & ~((u32)0x00000400)) == 0);
+ cl_reg_write_chip(chip, IO_CTRL_PA_ENABLE_9_ADDR,
+ (cl_reg_read_chip(chip, IO_CTRL_PA_ENABLE_9_ADDR) & ~((u32)0x00000400)) |
+ ((u32)gpioenable << 10));
+}
+
+/*
+ * @brief PA_ENABLE_10 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 0
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 1
+ * 09 input_enable 1
+ * 08 SLEW_RATE 0
+ * 07:06 DRIVER_PULL_STATE 0x2
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_PA_ENABLE_10_ADDR (REG_IO_CTRL_BASE_ADDR + 0x000000D4)
+#define IO_CTRL_PA_ENABLE_10_OFFSET 0x000000D4
+#define IO_CTRL_PA_ENABLE_10_INDEX 0x00000035
+#define IO_CTRL_PA_ENABLE_10_RESET 0x00000698
+
+static inline void io_ctrl_pa_enable_10_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_PA_ENABLE_10_ADDR, value);
+}
+
+static inline void io_ctrl_pa_enable_10_gpio_enable_setf(struct cl_chip *chip, u8 gpioenable)
+{
+ ASSERT_ERR_CHIP((((u32)gpioenable << 10) & ~((u32)0x00000400)) == 0);
+ cl_reg_write_chip(chip, IO_CTRL_PA_ENABLE_10_ADDR,
+ (cl_reg_read_chip(chip, IO_CTRL_PA_ENABLE_10_ADDR) & ~((u32)0x00000400)) |
+ ((u32)gpioenable << 10));
+}
+
+/*
+ * @brief PA_ENABLE_11 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 0
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 1
+ * 09 input_enable 1
+ * 08 SLEW_RATE 0
+ * 07:06 DRIVER_PULL_STATE 0x2
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_PA_ENABLE_11_ADDR (REG_IO_CTRL_BASE_ADDR + 0x000000D8)
+#define IO_CTRL_PA_ENABLE_11_OFFSET 0x000000D8
+#define IO_CTRL_PA_ENABLE_11_INDEX 0x00000036
+#define IO_CTRL_PA_ENABLE_11_RESET 0x00000698
+
+static inline void io_ctrl_pa_enable_11_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_PA_ENABLE_11_ADDR, value);
+}
+
+static inline void io_ctrl_pa_enable_11_gpio_enable_setf(struct cl_chip *chip, u8 gpioenable)
+{
+ ASSERT_ERR_CHIP((((u32)gpioenable << 10) & ~((u32)0x00000400)) == 0);
+ cl_reg_write_chip(chip, IO_CTRL_PA_ENABLE_11_ADDR,
+ (cl_reg_read_chip(chip, IO_CTRL_PA_ENABLE_11_ADDR) & ~((u32)0x00000400)) |
+ ((u32)gpioenable << 10));
+}
+
+/*
+ * @brief SPICLK register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 0
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 0
+ * 09 input_enable 1
+ * 08 SLEW_RATE 1
+ * 07:06 DRIVER_PULL_STATE 0x0
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_SPICLK_ADDR (REG_IO_CTRL_BASE_ADDR + 0x000000DC)
+#define IO_CTRL_SPICLK_OFFSET 0x000000DC
+#define IO_CTRL_SPICLK_INDEX 0x00000037
+#define IO_CTRL_SPICLK_RESET 0x00000318
+
+static inline void io_ctrl_spiclk_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_SPICLK_ADDR, value);
+}
+
+/*
+ * @brief FWR_EN_1 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 0
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 0
+ * 09 input_enable 1
+ * 08 SLEW_RATE 1
+ * 07:06 DRIVER_PULL_STATE 0x0
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_FWR_EN_1_ADDR (REG_IO_CTRL_BASE_ADDR + 0x000000F0)
+#define IO_CTRL_FWR_EN_1_OFFSET 0x000000F0
+#define IO_CTRL_FWR_EN_1_INDEX 0x0000003C
+#define IO_CTRL_FWR_EN_1_RESET 0x00000318
+
+static inline void io_ctrl_fwr_en_1_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_FWR_EN_1_ADDR, value);
+}
+
+/*
+ * @brief FASTWR_7 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 0
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 0
+ * 09 input_enable 1
+ * 08 SLEW_RATE 1
+ * 07:06 DRIVER_PULL_STATE 0x0
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_FASTWR_7_ADDR (REG_IO_CTRL_BASE_ADDR + 0x000000F8)
+#define IO_CTRL_FASTWR_7_OFFSET 0x000000F8
+#define IO_CTRL_FASTWR_7_INDEX 0x0000003E
+#define IO_CTRL_FASTWR_7_RESET 0x00000318
+
+static inline void io_ctrl_fastwr_7_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_FASTWR_7_ADDR, value);
+}
+
+/*
+ * @brief FASTWR_6 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 0
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 0
+ * 09 input_enable 1
+ * 08 SLEW_RATE 1
+ * 07:06 DRIVER_PULL_STATE 0x0
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_FASTWR_6_ADDR (REG_IO_CTRL_BASE_ADDR + 0x000000FC)
+#define IO_CTRL_FASTWR_6_OFFSET 0x000000FC
+#define IO_CTRL_FASTWR_6_INDEX 0x0000003F
+#define IO_CTRL_FASTWR_6_RESET 0x00000318
+
+static inline void io_ctrl_fastwr_6_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_FASTWR_6_ADDR, value);
+}
+
+/*
+ * @brief FASTWR_5 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 0
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 0
+ * 09 input_enable 1
+ * 08 SLEW_RATE 1
+ * 07:06 DRIVER_PULL_STATE 0x0
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_FASTWR_5_ADDR (REG_IO_CTRL_BASE_ADDR + 0x00000100)
+#define IO_CTRL_FASTWR_5_OFFSET 0x00000100
+#define IO_CTRL_FASTWR_5_INDEX 0x00000040
+#define IO_CTRL_FASTWR_5_RESET 0x00000318
+
+static inline void io_ctrl_fastwr_5_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_FASTWR_5_ADDR, value);
+}
+
+/*
+ * @brief FASTWR_4 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 0
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 0
+ * 09 input_enable 1
+ * 08 SLEW_RATE 1
+ * 07:06 DRIVER_PULL_STATE 0x0
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_FASTWR_4_ADDR (REG_IO_CTRL_BASE_ADDR + 0x00000104)
+#define IO_CTRL_FASTWR_4_OFFSET 0x00000104
+#define IO_CTRL_FASTWR_4_INDEX 0x00000041
+#define IO_CTRL_FASTWR_4_RESET 0x00000318
+
+static inline void io_ctrl_fastwr_4_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_FASTWR_4_ADDR, value);
+}
+
+/*
+ * @brief FASTWR_3 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 0
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 0
+ * 09 input_enable 1
+ * 08 SLEW_RATE 1
+ * 07:06 DRIVER_PULL_STATE 0x0
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_FASTWR_3_ADDR (REG_IO_CTRL_BASE_ADDR + 0x00000108)
+#define IO_CTRL_FASTWR_3_OFFSET 0x00000108
+#define IO_CTRL_FASTWR_3_INDEX 0x00000042
+#define IO_CTRL_FASTWR_3_RESET 0x00000318
+
+static inline void io_ctrl_fastwr_3_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_FASTWR_3_ADDR, value);
+}
+
+/*
+ * @brief FASTWR_2 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 0
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 0
+ * 09 input_enable 1
+ * 08 SLEW_RATE 1
+ * 07:06 DRIVER_PULL_STATE 0x0
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_FASTWR_2_ADDR (REG_IO_CTRL_BASE_ADDR + 0x0000010C)
+#define IO_CTRL_FASTWR_2_OFFSET 0x0000010C
+#define IO_CTRL_FASTWR_2_INDEX 0x00000043
+#define IO_CTRL_FASTWR_2_RESET 0x00000318
+
+static inline void io_ctrl_fastwr_2_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_FASTWR_2_ADDR, value);
+}
+
+/*
+ * @brief FASTWR_1 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 0
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 0
+ * 09 input_enable 1
+ * 08 SLEW_RATE 1
+ * 07:06 DRIVER_PULL_STATE 0x0
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_FASTWR_1_ADDR (REG_IO_CTRL_BASE_ADDR + 0x00000110)
+#define IO_CTRL_FASTWR_1_OFFSET 0x00000110
+#define IO_CTRL_FASTWR_1_INDEX 0x00000044
+#define IO_CTRL_FASTWR_1_RESET 0x00000318
+
+static inline void io_ctrl_fastwr_1_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_FASTWR_1_ADDR, value);
+}
+
+/*
+ * @brief FASTWR_0 register definition
+ * <pre>
+ * Bits Field Name Reset Value
+ * ----- ------------------ -----------
+ * 13 GPIO_IN 0
+ * 12 GPIO_OUT 0
+ * 11 GPIO_OE 0
+ * 10 GPIO_ENABLE 0
+ * 09 input_enable 1
+ * 08 SLEW_RATE 1
+ * 07:06 DRIVER_PULL_STATE 0x0
+ * 05:04 OUTPUT_PAD_STRENGTH 0x1
+ * 03 SCHMIT_TRIGER 1
+ * 02:00 MUX_SELECT 0x0
+ * </pre>
+ */
+#define IO_CTRL_FASTWR_0_ADDR (REG_IO_CTRL_BASE_ADDR + 0x00000114)
+#define IO_CTRL_FASTWR_0_OFFSET 0x00000114
+#define IO_CTRL_FASTWR_0_INDEX 0x00000045
+#define IO_CTRL_FASTWR_0_RESET 0x00000318
+
+static inline void io_ctrl_fastwr_0_set(struct cl_chip *chip, u32 value)
+{
+ cl_reg_write_chip(chip, IO_CTRL_FASTWR_0_ADDR, value);
+}
+
+#endif /*CL_REG_DEFS_H */
--
2.36.1


2022-05-24 16:41:53

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 42/96] cl8k: add main.c

From: Viktor Barna <[email protected]>

(Part of the split. Please take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/main.c | 603 ++++++++++++++++++++++++
1 file changed, 603 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/main.c

diff --git a/drivers/net/wireless/celeno/cl8k/main.c b/drivers/net/wireless/celeno/cl8k/main.c
new file mode 100644
index 000000000000..08abb16987ef
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/main.c
@@ -0,0 +1,603 @@
+// SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#include "tx.h"
+#include "reg/reg_access.h"
+#include "reg/reg_defs.h"
+#include "stats.h"
+#include "maintenance.h"
+#include "vns.h"
+#include "traffic.h"
+#include "sounding.h"
+#include "recovery.h"
+#include "rates.h"
+#include "utils.h"
+#include "phy.h"
+#include "radio.h"
+#include "dsp.h"
+#include "dfs.h"
+#include "tcv.h"
+#include "mac_addr.h"
+#include "bf.h"
+#include "rfic.h"
+#include "e2p.h"
+#include "chip.h"
+#include "regdom.h"
+#include "platform.h"
+#include "mac80211.h"
+#include "main.h"
+
+MODULE_DESCRIPTION("Celeno 11ax driver for Linux");
+MODULE_VERSION(CONFIG_CL8K_VERSION);
+MODULE_AUTHOR("Copyright(c) 2022 Celeno Communications Ltd");
+MODULE_LICENSE("Dual BSD/GPL");
+
+static struct ieee80211_ops cl_ops = {
+ .tx = cl_ops_tx,
+ .start = cl_ops_start,
+ .stop = cl_ops_stop,
+ .add_interface = cl_ops_add_interface,
+ .remove_interface = cl_ops_remove_interface,
+ .config = cl_ops_config,
+ .bss_info_changed = cl_ops_bss_info_changed,
+ .start_ap = cl_ops_start_ap,
+ .stop_ap = cl_ops_stop_ap,
+ .prepare_multicast = cl_ops_prepare_multicast,
+ .configure_filter = cl_ops_configure_filter,
+ .set_key = cl_ops_set_key,
+ .sw_scan_start = cl_ops_sw_scan_start,
+ .sw_scan_complete = cl_ops_sw_scan_complete,
+ .sta_state = cl_ops_sta_state,
+ .sta_notify = cl_ops_sta_notify,
+ .conf_tx = cl_ops_conf_tx,
+ .sta_rc_update = cl_ops_sta_rc_update,
+ .ampdu_action = cl_ops_ampdu_action,
+ .post_channel_switch = cl_ops_post_channel_switch,
+ .flush = cl_ops_flush,
+ .tx_frames_pending = cl_ops_tx_frames_pending,
+ .reconfig_complete = cl_ops_reconfig_complete,
+ .get_txpower = cl_ops_get_txpower,
+ .set_rts_threshold = cl_ops_set_rts_threshold,
+ .event_callback = cl_ops_event_callback,
+ .set_tim = cl_ops_set_tim,
+ .get_antenna = cl_ops_get_antenna,
+ .get_expected_throughput = cl_ops_get_expected_throughput,
+ .sta_statistics = cl_ops_sta_statistics,
+ .get_survey = cl_ops_get_survey,
+ .hw_scan = cl_ops_hw_scan,
+ .cancel_hw_scan = cl_ops_cancel_hw_scan
+};
+
+static void cl_drv_workqueue_create(struct cl_hw *cl_hw)
+{
+ if (!cl_hw->drv_workqueue)
+ cl_hw->drv_workqueue = create_singlethread_workqueue("drv_workqueue");
+}
+
+static void cl_drv_workqueue_destroy(struct cl_hw *cl_hw)
+{
+ if (cl_hw->drv_workqueue) {
+ destroy_workqueue(cl_hw->drv_workqueue);
+ cl_hw->drv_workqueue = NULL;
+ }
+}
+
+static int cl_main_alloc(struct cl_hw *cl_hw)
+{
+ int ret = 0;
+
+ ret = cl_phy_data_alloc(cl_hw);
+ if (ret)
+ return ret;
+
+ ret = cl_calib_common_tables_alloc(cl_hw);
+ if (ret)
+ return ret;
+
+ ret = cl_power_table_alloc(cl_hw);
+ if (ret)
+ return ret;
+
+ return ret;
+}
+
+static void cl_main_free(struct cl_hw *cl_hw)
+{
+ cl_phy_data_free(cl_hw);
+ cl_calib_common_tables_free(cl_hw);
+ cl_power_table_free(cl_hw);
+}
+
+static void cl_free_hw(struct cl_hw *cl_hw)
+{
+ if (!cl_hw)
+ return;
+
+ cl_temperature_wait_for_measurement(cl_hw->chip, cl_hw->tcv_idx);
+
+ cl_tcv_config_free(cl_hw);
+
+ if (cl_hw->hw->wiphy->registered)
+ ieee80211_unregister_hw(cl_hw->hw);
+
+ cl_chip_unset_hw(cl_hw->chip, cl_hw);
+ ieee80211_free_hw(cl_hw->hw);
+}
+
+static void cl_free_chip(struct cl_chip *chip)
+{
+ cl_free_hw(chip->cl_hw_tcv0);
+ cl_free_hw(chip->cl_hw_tcv1);
+}
+
+static int cl_prepare_hw(struct cl_chip *chip, u8 tcv_idx,
+ const struct cl_driver_ops *drv_ops)
+{
+ struct cl_hw *cl_hw = NULL;
+ struct ieee80211_hw *hw;
+ int ret = 0;
+
+ hw = ieee80211_alloc_hw(sizeof(struct cl_hw), &cl_ops);
+ if (!hw) {
+ cl_dbg_chip_err(chip, "ieee80211_alloc_hw failed\n");
+ return -ENOMEM;
+ }
+
+ cl_hw_init(chip, hw->priv, tcv_idx);
+
+ cl_hw = hw->priv;
+ cl_hw->hw = hw;
+ cl_hw->tcv_idx = tcv_idx;
+ cl_hw->sx_idx = chip->conf->ci_tcv1_chains_sx0 ? 0 : tcv_idx;
+ cl_hw->chip = chip;
+
+ /*
+ * chip0, tcv0 --> 0
+ * chip0, tcv1 --> 1
+ * chip1, tcv0 --> 2
+ * chip1, tcv1 --> 3
+ */
+ cl_hw->idx = chip->idx * CHIP_MAX + tcv_idx;
+
+ cl_hw->drv_ops = drv_ops;
+
+ SET_IEEE80211_DEV(hw, chip->dev);
+
+ ret = cl_tcv_config_alloc(cl_hw);
+ if (ret)
+ goto out_free_hw;
+
+ ret = cl_tcv_config_read(cl_hw);
+ if (ret)
+ goto out_free_hw;
+
+ cl_chip_set_hw(chip, cl_hw);
+
+ ret = cl_mac_addr_set(cl_hw);
+ if (ret) {
+ cl_dbg_err(cl_hw, "cl_mac_addr_set failed\n");
+ goto out_free_hw;
+ }
+
+ if (cl_band_is_6g(cl_hw))
+ cl_hw->nl_band = NL80211_BAND_6GHZ;
+ else if (cl_band_is_5g(cl_hw))
+ cl_hw->nl_band = NL80211_BAND_5GHZ;
+ else
+ cl_hw->nl_band = NL80211_BAND_2GHZ;
+
+ if (cl_band_is_24g(cl_hw))
+ cl_hw->hw_mode = HW_MODE_BG;
+ else
+ cl_hw->hw_mode = HW_MODE_A;
+
+ cl_hw->wireless_mode = WIRELESS_MODE_HT_VHT_HE;
+
+ cl_cap_dyn_params(cl_hw);
+
+ cl_dbg_verbose(cl_hw, "cl_hw created\n");
+
+ return 0;
+
+out_free_hw:
+ cl_free_hw(cl_hw);
+
+ return ret;
+}
+
+void cl_main_off(struct cl_hw *cl_hw)
+{
+ cl_irq_disable(cl_hw, cl_hw->ipc_e2a_irq.all);
+ cl_ipc_stop(cl_hw);
+
+ if (!test_bit(CL_DEV_INIT, &cl_hw->drv_flags)) {
+ cl_tx_off(cl_hw);
+ cl_rx_off(cl_hw);
+ cl_msg_rx_flush_all(cl_hw);
+ }
+
+ cl_fw_file_cleanup(cl_hw);
+}
+
+static void _cl_main_deinit(struct cl_hw *cl_hw)
+{
+ if (!cl_hw)
+ return;
+
+ ieee80211_unregister_hw(cl_hw->hw);
+
+ /* Send reset message to firmware */
+ cl_msg_tx_reset(cl_hw);
+
+ cl_hw->is_stop_context = true;
+
+ cl_drv_workqueue_destroy(cl_hw);
+
+ cl_scanner_deinit(cl_hw);
+
+ cl_noise_close(cl_hw);
+ cl_maintenance_close(cl_hw);
+ cl_vns_close(cl_hw);
+ cl_rssi_assoc_exit(cl_hw);
+ cl_radar_close(cl_hw);
+ cl_sounding_close(cl_hw);
+ cl_chan_info_deinit(cl_hw);
+ cl_wrs_api_close(cl_hw);
+ cl_main_off(cl_hw);
+ /* These 2 must be called after cl_tx_off() (which is called from cl_main_off) */
+ cl_tx_amsdu_txhdr_deinit(cl_hw);
+ cl_sw_txhdr_deinit(cl_hw);
+ cl_fw_dbg_trigger_based_deinit(cl_hw);
+ cl_stats_deinit(cl_hw);
+ cl_main_free(cl_hw);
+ cl_fw_file_release(cl_hw);
+
+ cl_ipc_deinit(cl_hw);
+ cl_hw_deinit(cl_hw, cl_hw->tcv_idx);
+ vfree(cl_hw->tx_queues);
+}
+
+void cl_main_deinit(struct cl_chip *chip)
+{
+ struct cl_chip_conf *conf = chip->conf;
+ struct cl_hw *cl_hw_tcv0 = chip->cl_hw_tcv0;
+ struct cl_hw *cl_hw_tcv1 = chip->cl_hw_tcv1;
+
+ if (cl_chip_is_tcv1_enabled(chip) && cl_hw_tcv1)
+ _cl_main_deinit(cl_hw_tcv1);
+
+ if (cl_chip_is_tcv0_enabled(chip) && cl_hw_tcv0)
+ _cl_main_deinit(cl_hw_tcv0);
+
+ if (conf->ci_phy_dev != PHY_DEV_DUMMY) {
+ if (!conf->ci_phy_load_bootdrv)
+ cl_phy_off(cl_hw_tcv1);
+
+ cl_phy_off(cl_hw_tcv0);
+ }
+
+ cl_platform_dealloc(chip);
+
+ cl_free_chip(chip);
+}
+
+static struct cl_controller_reg all_controller_reg = {
+ .breset = XMAC_BRESET,
+ .debug_enable = XMAC_DEBUG_ENABLE,
+ .dreset = XMAC_DRESET,
+ .ocd_halt_on_reset = XMAC_OCD_HALT_ON_RESET,
+ .run_stall = XMAC_RUN_STALL
+};
+
+void cl_main_reset(struct cl_chip *chip, struct cl_controller_reg *controller_reg)
+{
+ /* Release TRST & BReset to enable JTAG connection to FPGA A */
+ u32 regval;
+
+ /* 1. return to reset value */
+ regval = macsys_gcu_xt_control_get(chip);
+ regval |= controller_reg->ocd_halt_on_reset;
+ regval &= ~(controller_reg->dreset | controller_reg->run_stall | controller_reg->breset);
+ macsys_gcu_xt_control_set(chip, regval);
+
+ regval = macsys_gcu_xt_control_get(chip);
+ regval |= controller_reg->dreset;
+ macsys_gcu_xt_control_set(chip, regval);
+
+ /* 2. stall xtensa & release ocd */
+ regval = macsys_gcu_xt_control_get(chip);
+ regval |= controller_reg->run_stall;
+ regval &= ~controller_reg->ocd_halt_on_reset;
+ macsys_gcu_xt_control_set(chip, regval);
+
+ /* 3. breset release & debug enable */
+ regval = macsys_gcu_xt_control_get(chip);
+ regval |= (controller_reg->debug_enable | controller_reg->breset);
+ macsys_gcu_xt_control_set(chip, regval);
+
+ msleep(100);
+}
+
+int cl_main_on(struct cl_hw *cl_hw)
+{
+ struct cl_chip *chip = cl_hw->chip;
+ int ret;
+ u32 regval;
+
+ cl_hw->fw_active = false;
+
+ cl_txq_init(cl_hw);
+
+ cl_hw_assert_info_init(cl_hw);
+
+ if (cl_recovery_in_progress(cl_hw))
+ cl_main_reset(chip, &cl_hw->controller_reg);
+
+ ret = cl_fw_file_load(cl_hw);
+ if (ret) {
+ cl_dbg_err(cl_hw, "cl_fw_file_load failed %d\n", ret);
+ return ret;
+ }
+
+	/* Clear CL_DEV_FW_ERROR after the firmware has been loaded */
+ clear_bit(CL_DEV_FW_ERROR, &cl_hw->drv_flags);
+
+ if (cl_recovery_in_progress(cl_hw))
+ cl_ipc_recovery(cl_hw);
+
+ regval = macsys_gcu_xt_control_get(chip);
+
+ /* Set fw to run */
+ if (cl_hw->fw_active)
+ regval &= ~cl_hw->controller_reg.run_stall;
+
+ /* Set umac to run */
+ if (chip->umac_active)
+ regval &= ~UMAC_RUN_STALL;
+
+ /* Ack all possibly pending IRQs */
+ ipc_xmac_2_host_ack_set(chip, cl_hw->ipc_e2a_irq.all);
+ macsys_gcu_xt_control_set(chip, regval);
+ cl_irq_enable(cl_hw, cl_hw->ipc_e2a_irq.all);
+ /*
+ * cl_irq_status_sync will set CL_DEV_FW_SYNC when fw raises IPC_IRQ_E2A_SYNC
+	 * (indicating it is ready to accept interrupts)
+ */
+ ret = wait_event_interruptible_timeout(cl_hw->fw_sync_wq,
+ test_and_clear_bit(CL_DEV_FW_SYNC,
+ &cl_hw->drv_flags),
+ msecs_to_jiffies(5000));
+
+ if (ret == 0) {
+ pr_err("[%s]: FW synchronization timeout.\n", __func__);
+ cl_hw_assert_check(cl_hw);
+ ret = -ETIMEDOUT;
+ goto out_free_cached_fw;
+ } else if (ret == -ERESTARTSYS) {
+ goto out_free_cached_fw;
+ }
+
+ return 0;
+
+out_free_cached_fw:
+ cl_irq_disable(cl_hw, cl_hw->ipc_e2a_irq.all);
+ cl_fw_file_release(cl_hw);
+ return ret;
+}
+
+static int __cl_main_init(struct cl_hw *cl_hw)
+{
+ int ret;
+
+ if (!cl_hw)
+ return 0;
+
+ if (cl_regd_init(cl_hw, cl_hw->hw->wiphy))
+ cl_dbg_err(cl_hw, "regulatory failed\n");
+
+ /*
+ * ieee80211_register_hw() will take care of calling wiphy_register() and
+ * also ieee80211_if_add() (because IFTYPE_STATION is supported)
+ * which will internally call register_netdev()
+ */
+ ret = ieee80211_register_hw(cl_hw->hw);
+ if (ret) {
+ cl_dbg_err(cl_hw, "ieee80211_register_hw failed\n");
+ cl_main_deinit(cl_hw->chip);
+
+ return ret;
+ }
+
+ return ret;
+}
+
+static int _cl_main_init(struct cl_hw *cl_hw)
+{
+ int ret = 0;
+
+ if (!cl_hw)
+ return 0;
+
+ set_bit(CL_DEV_INIT, &cl_hw->drv_flags);
+
+ /* By default, set FEM mode to operational. */
+ cl_hw->fem_mode = FEM_MODE_OPERETIONAL;
+
+ cl_vif_init(cl_hw);
+
+ cl_drv_workqueue_create(cl_hw);
+
+ init_waitqueue_head(&cl_hw->wait_queue);
+ init_waitqueue_head(&cl_hw->fw_sync_wq);
+ init_waitqueue_head(&cl_hw->radio_wait_queue);
+
+ mutex_init(&cl_hw->dbginfo.mutex);
+ mutex_init(&cl_hw->msg_tx_mutex);
+ mutex_init(&cl_hw->set_channel_mutex);
+
+ spin_lock_init(&cl_hw->tx_lock_agg);
+ spin_lock_init(&cl_hw->tx_lock_cfm_agg);
+ spin_lock_init(&cl_hw->tx_lock_single);
+ spin_lock_init(&cl_hw->tx_lock_bcmc);
+ spin_lock_init(&cl_hw->channel_info_lock);
+
+ ret = cl_ipc_init(cl_hw);
+ if (ret) {
+ cl_dbg_err(cl_hw, "cl_ipc_init failed %d\n", ret);
+ return ret;
+ }
+
+ cl_chip_set_rfic_version(cl_hw);
+
+ /* Calib params validation must be done after setting the RFIC version */
+ cl_tcv_config_validate_calib_params(cl_hw);
+
+ cl_hw->tx_queues = vzalloc(sizeof(*cl_hw->tx_queues));
+ if (!cl_hw->tx_queues) {
+ cl_ipc_deinit(cl_hw);
+ return -ENOMEM;
+ }
+
+ ret = cl_main_on(cl_hw);
+ if (ret) {
+ cl_dbg_err(cl_hw, "cl_main_on failed %d\n", ret);
+ cl_ipc_deinit(cl_hw);
+ vfree(cl_hw->tx_queues);
+
+ return ret;
+ }
+
+ ret = cl_main_alloc(cl_hw);
+ if (ret)
+ goto out_free;
+
+ /* Reset firmware */
+ ret = cl_msg_tx_reset(cl_hw);
+ if (ret)
+ goto out_free;
+
+ cl_calib_power_read(cl_hw);
+ cl_sta_init(cl_hw);
+ cl_sw_txhdr_init(cl_hw);
+ cl_tx_amsdu_txhdr_init(cl_hw);
+ cl_rx_init(cl_hw);
+ cl_prot_mode_init(cl_hw);
+ cl_radar_init(cl_hw);
+ cl_sounding_init(cl_hw);
+ cl_traffic_init(cl_hw);
+ ret = cl_vns_init(cl_hw);
+ if (ret)
+ goto out_free;
+
+ cl_maintenance_init(cl_hw);
+ cl_rssi_assoc_init(cl_hw);
+ cl_agg_cfm_init(cl_hw);
+ cl_single_cfm_init(cl_hw);
+ cl_bcmc_cfm_init(cl_hw);
+#ifdef CONFIG_CL8K_DYN_MCAST_RATE
+ cl_dyn_mcast_rate_init(cl_hw);
+#endif /* CONFIG_CL8K_DYN_MCAST_RATE */
+#ifdef CONFIG_CL8K_DYN_BCAST_RATE
+ cl_dyn_bcast_rate_init(cl_hw);
+#endif /* CONFIG_CL8K_DYN_BCAST_RATE */
+ cl_wrs_api_init(cl_hw);
+ cl_dfs_init(cl_hw);
+ cl_noise_init(cl_hw);
+ ret = cl_fw_dbg_trigger_based_init(cl_hw);
+ if (ret)
+ goto out_free;
+
+ cl_stats_init(cl_hw);
+ cl_cca_init(cl_hw);
+ cl_bf_init(cl_hw);
+
+ ret = cl_scanner_init(cl_hw);
+ if (ret)
+ goto out_free;
+
+ /* Start firmware */
+ ret = cl_msg_tx_start(cl_hw);
+ if (ret)
+ goto out_free;
+
+ return 0;
+
+out_free:
+ cl_main_free(cl_hw);
+ vfree(cl_hw->tx_queues);
+
+ return ret;
+}
+
+int cl_main_init(struct cl_chip *chip, const struct cl_driver_ops *drv_ops)
+{
+ int ret = 0;
+ struct cl_chip_conf *conf = chip->conf;
+
+ /* All cores need to be reset first (once per chip) */
+ cl_main_reset(chip, &all_controller_reg);
+
+ /* Prepare HW for TCV0 */
+ if (cl_chip_is_tcv0_enabled(chip)) {
+ ret = cl_prepare_hw(chip, TCV0, drv_ops);
+
+ if (ret) {
+ cl_dbg_chip_err(chip, "Prepare HW for TCV0 failed %d\n", ret);
+ return ret;
+ }
+ }
+
+ /* Prepare HW for TCV1 */
+ if (cl_chip_is_tcv1_enabled(chip)) {
+ ret = cl_prepare_hw(chip, TCV1, drv_ops);
+
+ if (ret) {
+ cl_dbg_chip_err(chip, "Prepare HW for TCV1 failed %d\n", ret);
+ cl_free_hw(chip->cl_hw_tcv0);
+ return ret;
+ }
+ }
+
+ if (!conf->ci_phy_load_bootdrv &&
+ conf->ci_phy_dev != PHY_DEV_DUMMY) {
+ ret = cl_radio_boot(chip);
+ if (ret) {
+ cl_dbg_chip_err(chip, "RF boot failed %d\n", ret);
+ return ret;
+ }
+
+ ret = cl_dsp_load_regular(chip);
+ if (ret) {
+ cl_dbg_chip_err(chip, "DSP load failed %d\n", ret);
+ return ret;
+ }
+ }
+
+ ret = _cl_main_init(chip->cl_hw_tcv0);
+ if (ret) {
+ cl_free_chip(chip);
+ return ret;
+ }
+
+ ret = _cl_main_init(chip->cl_hw_tcv1);
+ if (ret) {
+ _cl_main_deinit(chip->cl_hw_tcv0);
+ cl_free_chip(chip);
+ return ret;
+ }
+
+ ret = __cl_main_init(chip->cl_hw_tcv0);
+ if (ret)
+ return ret;
+
+ ret = __cl_main_init(chip->cl_hw_tcv1);
+ if (ret)
+ return ret;
+
+#ifdef CONFIG_CL8K_EEPROM_STM24256
+ if (conf->ci_calib_eeprom_en && conf->ce_production_mode && conf->ce_calib_runtime_en)
+ cl_e2p_read_eeprom_start_work(chip);
+#endif /* CONFIG_CL8K_EEPROM_STM24256 */
+
+ return ret;
+}
--
2.36.1


2022-05-24 16:55:30

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 84/96] cl8k: add tx.c

From: Viktor Barna <[email protected]>

(Part of the split. Please take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/tx.c | 3397 +++++++++++++++++++++++++
1 file changed, 3397 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/tx.c

diff --git a/drivers/net/wireless/celeno/cl8k/tx.c b/drivers/net/wireless/celeno/cl8k/tx.c
new file mode 100644
index 000000000000..52a8d558f3f2
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/tx.c
@@ -0,0 +1,3397 @@
+// SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#include "mac_addr.h"
+#include "sta.h"
+#include "hw.h"
+#include "utils.h"
+#include "fw.h"
+#include "key.h"
+#include "dfs.h"
+#include "radio.h"
+#include "enhanced_tim.h"
+#include "rates.h"
+#include "stats.h"
+#include "debug.h"
+#include "tx.h"
+
+/* Expected Acknowledgment */
+#define EXPECTED_NO_ACK 0
+#define EXPECTED_ACK 1
+
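+/*
+ * Bucket the time (in ms) elapsed since skb->tstamp into the given delay
+ * histogram; delays beyond the last bucket are clamped to it. Optionally
+ * restamp the skb so the next stage measures only its own latency.
+ */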
+void cl_tx_update_hist_tstamp(struct cl_tx_queue *tx_queue, struct sk_buff *skb,
+ u32 tstamp_hist[], bool update_skb_ktime)
+{
+ s64 diff_ms;
+ ktime_t ktime = ktime_get();
+
+ diff_ms = ktime_ms_delta(ktime, skb->tstamp);
+
+ if (diff_ms >= DELAY_HIST_SIZE)
+ diff_ms = DELAY_HIST_SIZE - 1;
+
+ tstamp_hist[diff_ms]++;
+
+ if (update_skb_ktime)
+ skb->tstamp = ktime;
+}
+
+static void cl_tx_cpu_single(struct cl_hw *cl_hw)
+{
+ u32 processor_id = smp_processor_id();
+
+ if (processor_id < CPU_MAX_NUM)
+ cl_hw->cpu_cntr.tx_single[processor_id]++;
+}
+
+static void cl_tx_cpu_agg(struct cl_hw *cl_hw)
+{
+ u32 processor_id = smp_processor_id();
+
+ if (processor_id < CPU_MAX_NUM)
+ cl_hw->cpu_cntr.tx_agg[processor_id]++;
+}
+
+static char cl_tx_ctrl_single_frame_type(__le16 fc)
+{
+ if (ieee80211_is_data_qos(fc))
+ return CL_TX_SINGLE_FRAME_TYPE_QOS_DATA;
+ else if (ieee80211_is_qos_nullfunc(fc))
+ return CL_TX_SINGLE_FRAME_TYPE_QOS_NULL;
+ else if (ieee80211_is_mgmt(fc))
+ return CL_TX_SINGLE_FRAME_TYPE_MANAGEMENT;
+ else
+ return CL_TX_SINGLE_FRAME_TYPE_OTHER;
+}
+
+static void cl_tx_single_prep(struct cl_hw *cl_hw, struct cl_sw_txhdr *sw_txhdr,
+ u16 frame_len, u8 hdr_pads, bool is_vns)
+{
+ struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(sw_txhdr->skb);
+ struct ieee80211_key_conf *key_conf = tx_info->control.hw_key;
+ struct txdesc *txdesc = &sw_txhdr->txdesc;
+ struct tx_host_info *host_info = &txdesc->host_info;
+ struct cl_vif *cl_vif = NULL;
+
+ /* Reset txdesc */
+ memset(txdesc, 0, sizeof(struct txdesc));
+
+ /* Vif_index must be filled in even without header conversion */
+ cl_vif = (struct cl_vif *)tx_info->control.vif->drv_priv;
+ host_info->vif_index = cl_vif->vif_index;
+
+ if (hdr_pads)
+ host_info->host_padding |= BIT(0);
+
+ host_info->is_bcn = sw_txhdr->is_bcn;
+ host_info->expected_ack = (tx_info->flags & IEEE80211_TX_CTL_NO_ACK) ?
+ EXPECTED_NO_ACK : EXPECTED_ACK;
+
+ /* Beware when prot and sta are unknown */
+ if (key_conf) {
+ frame_len += key_conf->icv_len;
+ host_info->is_protected = true;
+ host_info->hw_key_idx = key_conf->hw_key_idx;
+ }
+
+ host_info->packet_cnt = 1;
+
+ txdesc->umacdesc.packet_len[0] = cpu_to_le16(frame_len);
+ txdesc->e2w_txhdr_param.frame_ctrl = sw_txhdr->fc;
+ txdesc->e2w_result.bcmc = (sw_txhdr->sta_idx == STA_IDX_INVALID);
+ txdesc->e2w_result.tid = sw_txhdr->tid;
+ txdesc->e2w_result.is_vns = is_vns;
+ txdesc->e2w_result.is_txinject = false;
+ txdesc->e2w_result.single_type = cl_tx_ctrl_single_frame_type(sw_txhdr->fc);
+ txdesc->e2w_result.single_valid_sta__agg_e2w_tx_done = !!sw_txhdr->cl_sta;
+ txdesc->e2w_natt_param.sta_index = sw_txhdr->sta_idx;
+
+ /* Set rate control */
+ cl_rate_ctrl_update_desc_single(cl_hw, host_info, sw_txhdr);
+}
+
+static void cl_tx_sub_frame_set(struct cl_sta *cl_sta, u8 tid)
+{
+ struct cl_tx_queue *tx_queue = cl_sta->agg_tx_queues[tid];
+
+ if (tx_queue)
+ tx_queue->total_packets++;
+}
+
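+/*
+ * Forwarding decision: buffer the frame in the driver queue when the
+ * firmware queue is full, the station is paused, a SW A-MSDU is being
+ * built or the driver queue is not empty; otherwise push it directly
+ * to firmware.
+ */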
+static void cl_tx_send(struct cl_hw *cl_hw, struct cl_sw_txhdr *sw_txhdr,
+ struct cl_amsdu_ctrl *amsdu_anchor)
+{
+ struct cl_tx_queue *tx_queue = sw_txhdr->tx_queue;
+ struct cl_sta *cl_sta = sw_txhdr->cl_sta;
+
+ tx_queue->total_packets++;
+
+ if (cl_txq_is_fw_full(tx_queue) || (cl_sta && cl_sta->pause_tx)) {
+ /* If firmware is full push the packet to the queue */
+ cl_txq_push(cl_hw, sw_txhdr);
+ } else if (amsdu_anchor && amsdu_anchor->is_sw_amsdu) {
+ cl_txq_push(cl_hw, sw_txhdr);
+ tasklet_schedule(&cl_hw->tx_task);
+ } else if (!list_empty(&tx_queue->hdrs) || cl_hw->tx_db.force_amsdu) {
+ /*
+ * If queue in driver is not empty push the packet to the queue,
+ * and call cl_txq_sched() to transfer packets from the queue to firmware
+ */
+ cl_txq_push(cl_hw, sw_txhdr);
+ cl_txq_sched(cl_hw, tx_queue);
+ } else {
+ /* Push the packet directly to firmware */
+ cl_tx_push(cl_hw, sw_txhdr, tx_queue);
+ }
+}
+
+void cl_tx_push(struct cl_hw *cl_hw, struct cl_sw_txhdr *sw_txhdr, struct cl_tx_queue *tx_queue)
+{
+ struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(sw_txhdr->skb);
+ struct ieee80211_key_conf *keyconf = tx_info->control.hw_key;
+ struct cl_sta *cl_sta = sw_txhdr->cl_sta;
+ struct cl_vif *cl_vif = sw_txhdr->cl_vif;
+ u8 tid = sw_txhdr->tid;
+ struct txdesc *txdesc = &sw_txhdr->txdesc;
+ struct tx_host_info *host_info = &txdesc->host_info;
+ struct cl_e2w_txhdr_param *e2w_txhdr_param = &txdesc->e2w_txhdr_param;
+ struct ieee80211_hdr *hdr80211 = sw_txhdr->hdr80211;
+ u8 queue_type = tx_queue->type;
+ bool is_mgmt = ieee80211_is_mgmt(sw_txhdr->fc);
+
+ if (cl_key_is_cipher_ccmp_gcmp(keyconf)) {
+ /*
+ * In case of CCMP or GCMP encryption we need to inc pn.
+ * In case of amsdu/header_conversion we need to pass it to firmware as well
+ */
+ u64 pn = atomic64_inc_return(&keyconf->tx_pn);
+
+ if (txdesc->e2w_natt_param.hdr_conv_enable) {
+ memcpy(&e2w_txhdr_param->encrypt_pn, &pn, CL_CCMP_GCMP_PN_SIZE);
+ } else {
+ u8 hdrlen = ieee80211_hdrlen(sw_txhdr->fc);
+
+ cl_key_ccmp_gcmp_pn_to_hdr((u8 *)hdr80211 + hdrlen, pn, keyconf->keyidx);
+ }
+ }
+
+ if (queue_type == QUEUE_TYPE_AGG) {
+ struct cl_baw *baw = &cl_sta->baws[tid];
+ bool is_amsdu = cl_tx_ctrl_is_amsdu(tx_info);
+
+ if (is_amsdu) {
+ struct cl_amsdu_ctrl *amsdu_anchor = &cl_sta->amsdu_anchor[tid];
+
+ if (sw_txhdr->is_sw_amsdu) {
+ u8 pkt_cnt = sw_txhdr->sw_amsdu_packet_cnt;
+
+ if (pkt_cnt == 1)
+ cl_tx_amsdu_unset(sw_txhdr); /* Clear AMSDU bit. */
+
+ if (hdr80211)
+ hdr80211->seq_ctrl = cpu_to_le16(baw->tid_seq);
+
+ tx_queue->stats_sw_amsdu_cnt[pkt_cnt - 1]++;
+ } else {
+ u8 pkt_cnt = host_info->packet_cnt;
+
+ if (pkt_cnt == 1)
+ cl_tx_amsdu_unset(sw_txhdr); /* Clear AMSDU bit. */
+
+ tx_queue->stats_hw_amsdu_cnt[pkt_cnt - 1]++;
+ }
+
+ /* Reset anchor if needed */
+ if (amsdu_anchor->sw_txhdr == sw_txhdr)
+ cl_tx_amsdu_anchor_init(amsdu_anchor);
+ }
+
+ if (hdr80211)
+ hdr80211->seq_ctrl = cpu_to_le16(baw->tid_seq);
+
+ /* Update sequence number and increase it */
+ e2w_txhdr_param->seq_ctrl = cpu_to_le16(baw->tid_seq);
+ baw->tid_seq = INC_SN(baw->tid_seq);
+
+ } else {
+ /*
+ * Update sequence number and increase it
+ * Management sequence number is set by firmware.
+ */
+ if (!is_mgmt) {
+ hdr80211->seq_ctrl |= cpu_to_le16(cl_vif->sequence_number);
+ cl_vif->sequence_number = INC_SN(cl_vif->sequence_number);
+ } else {
+ if (ieee80211_vif_is_mesh(cl_vif->vif)) {
+ struct ieee80211_mgmt *mgmt = (void *)sw_txhdr->skb->data;
+ u16 capab;
+
+ if (mgmt->u.action.u.addba_req.action_code ==
+ WLAN_ACTION_ADDBA_RESP) {
+ capab = le16_to_cpu(mgmt->u.action.u.addba_resp.capab);
+ capab &= ~IEEE80211_ADDBA_PARAM_AMSDU_MASK;
+ mgmt->u.action.u.addba_resp.capab = cpu_to_le16(capab);
+ }
+ }
+ }
+ }
+
+ cl_drv_ops_pkt_fw_send(cl_hw, sw_txhdr, tx_queue);
+}
+
+void cl_tx_single_free_skb(struct cl_hw *cl_hw, struct sk_buff *skb)
+{
+ if (IEEE80211_SKB_CB(skb)->ack_frame_id)
+ ieee80211_tx_status(cl_hw->hw, skb);
+ else
+ dev_kfree_skb_any(skb);
+}
+
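+/*
+ * Transmit a frame through the single (non-aggregated) path: validate
+ * driver/firmware state, build the sw_txhdr and TX descriptor, and hand
+ * the frame to the matching driver queue (BCMC or single), taking the
+ * corresponding lock when requested.
+ */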
+void cl_tx_single(struct cl_hw *cl_hw, struct cl_sta *cl_sta,
+ struct sk_buff *skb, bool is_vns, bool lock)
+{
+ struct cl_tx_queue *tx_queue;
+ struct cl_sw_txhdr *sw_txhdr;
+ struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(skb);
+ struct cl_vif *cl_vif = TX_INFO_TO_CL_VIF(cl_hw, tx_info);
+ struct ieee80211_hdr *hdr80211 = (struct ieee80211_hdr *)skb->data;
+ u8 hdr_pads = CL_SKB_DATA_ALIGN_PADS(hdr80211);
+ __le16 fc = hdr80211->frame_control;
+ u16 frame_len = (u16)skb->len;
+ u8 tid = ieee80211_is_data_qos(fc) ? ieee80211_get_tid(hdr80211) : 0;
+ u8 ac = tid_to_ac[tid];
+ bool is_beacon = ieee80211_is_beacon(fc);
+
+ cl_tx_cpu_single(cl_hw);
+
+ if (unlikely(!test_bit(CL_DEV_STARTED, &cl_hw->drv_flags) ||
+ test_bit(CL_DEV_FW_ERROR, &cl_hw->drv_flags))) {
+ cl_tx_single_free_skb(cl_hw, skb);
+ cl_hw->tx_packet_cntr.drop.dev_flags++;
+ cl_vif->trfc_cntrs[ac].tx_dropped++;
+ return;
+ }
+
+ if (unlikely(!cl_vif->tx_en || cl_hw->tx_disable_flags)) {
+ cl_tx_single_free_skb(cl_hw, skb);
+ cl_hw->tx_packet_cntr.drop.tx_disable++;
+ cl_vif->trfc_cntrs[ac].tx_dropped++;
+ return;
+ }
+
+ /* Check if packet length exceeds max size */
+ if (unlikely(frame_len > CL_TX_MAX_FRAME_LEN_SINGLE)) {
+ cl_tx_single_free_skb(cl_hw, skb);
+ cl_dbg_err(cl_hw, "frame_len (%u) exceeds max size\n", frame_len);
+ cl_hw->tx_packet_cntr.drop.length_limit++;
+ cl_vif->trfc_cntrs[ac].tx_errors++;
+ return;
+ }
+
+ if (cl_sta && cl_sta->key_disable) {
+ cl_tx_single_free_skb(cl_hw, skb);
+ cl_hw->tx_packet_cntr.drop.key_disable++;
+ cl_vif->trfc_cntrs[ac].tx_dropped++;
+ return;
+ }
+
+ /* Allocate sw_txhdr */
+ sw_txhdr = cl_sw_txhdr_alloc(cl_hw);
+
+ if (unlikely(!sw_txhdr)) {
+ cl_tx_single_free_skb(cl_hw, skb);
+ cl_dbg_verbose(cl_hw, "sw_txhdr alloc failed\n");
+ cl_hw->tx_packet_cntr.drop.txhdr_alloc_fail++;
+ cl_vif->trfc_cntrs[ac].tx_errors++;
+ return;
+ }
+
+ /* Prepare sw_txhdr */
+ sw_txhdr->hdr80211 = hdr80211;
+ sw_txhdr->hw_queue = tx_info->hw_queue;
+ sw_txhdr->is_bcn = is_beacon;
+ sw_txhdr->skb = skb;
+ sw_txhdr->map_len = frame_len + hdr_pads;
+ sw_txhdr->fc = fc;
+ sw_txhdr->cl_vif = cl_vif;
+ sw_txhdr->tid = tid;
+ sw_txhdr->ac = ac;
+
+ if (cl_sta) {
+ sw_txhdr->cl_sta = cl_sta;
+ sw_txhdr->sta_idx = cl_sta->sta_idx;
+ } else {
+ sw_txhdr->cl_sta = NULL;
+ sw_txhdr->sta_idx = STA_IDX_INVALID;
+ }
+
+ /* Prepare txdesc */
+ cl_tx_single_prep(cl_hw, sw_txhdr, frame_len, hdr_pads, is_vns);
+
+ /*
+ * Fetch the driver queue.
+ * IEEE80211_TX_CTL_AMPDU is not set in tx_info->flags, otherwise cl_tx_agg()
+ * would have been called and not cl_tx_single().
+ * Therefore the queue type cannot be QUEUE_TYPE_AGG; the NULL check below
+ * is purely defensive.
+ */
+ tx_queue = cl_txq_get(cl_hw, sw_txhdr);
+ if (!tx_queue) {
+ cl_tx_single_free_skb(cl_hw, skb);
+ cl_dbg_verbose(cl_hw, "tx_queue is NULL\n");
+ cl_vif->trfc_cntrs[ac].tx_errors++;
+ cl_sw_txhdr_free(cl_hw, sw_txhdr);
+ return;
+ }
+
+ sw_txhdr->tx_queue = tx_queue;
+
+ if (lock) {
+ if (tx_queue->type == QUEUE_TYPE_BCMC) {
+ /*
+ * All other broadcast/multicast packets are buffered in
+ * ieee80211_tx_h_multicast_ps_buf() and will follow the beacon.
+ */
+ spin_lock_bh(&cl_hw->tx_lock_bcmc);
+ cl_tx_send(cl_hw, sw_txhdr, NULL);
+ spin_unlock_bh(&cl_hw->tx_lock_bcmc);
+ } else {
+ spin_lock_bh(&cl_hw->tx_lock_single);
+ cl_tx_send(cl_hw, sw_txhdr, NULL);
+ spin_unlock_bh(&cl_hw->tx_lock_single);
+ }
+ } else {
+ cl_tx_send(cl_hw, sw_txhdr, NULL);
+ }
+}
+
+void cl_tx_fast_single(struct cl_hw *cl_hw, struct cl_sta *cl_sta,
+ struct sk_buff *skb, bool lock)
+{
+ struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(skb);
+ u8 tid = skb->priority & IEEE80211_QOS_CTL_TID_MASK;
+
+ /* hw_key must be set before calling cl_tx_8023_to_wlan() */
+ tx_info->control.hw_key = cl_key_get(cl_sta);
+
+ /* Convert 802.3 to 802.11 header */
+ if (cl_tx_8023_to_wlan(cl_hw, skb, cl_sta, tid) == 0) {
+ bool is_vns = cl_vns_is_very_near(cl_hw, cl_sta, skb);
+ u8 ac = tid_to_ac[tid];
+
+ tx_info->hw_queue = ac;
+ tx_info->control.vif = cl_sta->cl_vif->vif;
+
+ cl_hw->tx_packet_cntr.forward.drv_fast_single++;
+
+ cl_tx_single(cl_hw, cl_sta, skb, is_vns, lock);
+ }
+}
+
+void cl_tx_agg_prep(struct cl_hw *cl_hw, struct cl_sw_txhdr *sw_txhdr,
+ u16 frame_len, u8 hdr_pads, bool hdr_conv)
+{
+ struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(sw_txhdr->skb);
+ struct ieee80211_key_conf *key_conf = tx_info->control.hw_key;
+ struct txdesc *txdesc = &sw_txhdr->txdesc;
+ struct lmacapi *umacdesc = &txdesc->umacdesc;
+ struct tx_host_info *host_info = &txdesc->host_info;
+ u16 qos_ctrl = sw_txhdr->tid;
+
+ /* Reset txdesc */
+ memset(txdesc, 0, sizeof(struct txdesc));
+
+ txdesc->e2w_result.tid = sw_txhdr->tid;
+ txdesc->e2w_result.is_txinject = false;
+ txdesc->e2w_natt_param.sta_index = sw_txhdr->sta_idx;
+ txdesc->e2w_natt_param.ampdu = true;
+ txdesc->e2w_natt_param.hdr_conv_enable = hdr_conv;
+
+ if (hdr_conv) {
+ if (cl_tx_ctrl_is_amsdu(tx_info))
+ qos_ctrl |= IEEE80211_QOS_CTL_A_MSDU_PRESENT;
+
+ txdesc->e2w_txhdr_param.frame_ctrl = sw_txhdr->fc;
+ txdesc->e2w_txhdr_param.qos_ctrl = cpu_to_le16(qos_ctrl);
+ }
+
+ if (hdr_pads)
+ host_info->host_padding |= BIT(0);
+
+ /* Vif_index must be filled in even without header conversion */
+ host_info->vif_index = sw_txhdr->cl_sta->cl_vif->vif_index;
+
+ /* Set the expected_ack flag */
+ host_info->expected_ack = (tx_info->flags & IEEE80211_TX_CTL_NO_ACK) ?
+ EXPECTED_NO_ACK : EXPECTED_ACK;
+
+ if (key_conf) {
+ host_info->is_protected = true;
+ host_info->hw_key_idx = key_conf->hw_key_idx;
+
+ if (!hdr_conv)
+ frame_len += key_conf->icv_len;
+ }
+
+ host_info->packet_cnt = 1;
+ umacdesc->packet_len[0] = cpu_to_le16(frame_len);
+
+ /* Set rate control */
+ cl_rate_ctrl_update_desc_agg(cl_hw, host_info);
+}
+
+static __le16 cl_tx_agg_frame_control(struct cl_vif *cl_vif,
+ struct ieee80211_key_conf *key_conf,
+ u8 *hdrlen)
+{
+ struct ieee80211_vif *vif = cl_vif->vif;
+ struct ieee80211_sub_if_data *sdata = vif_to_sdata(vif);
+ enum nl80211_iftype type = vif->type;
+ __le16 fc = cpu_to_le16(IEEE80211_FTYPE_DATA | IEEE80211_STYPE_QOS_DATA);
+
+ if (type == NL80211_IFTYPE_AP) {
+ fc |= cpu_to_le16(IEEE80211_FCTL_FROMDS);
+ *hdrlen = 26;
+ } else if (type == NL80211_IFTYPE_STATION) {
+ fc |= cpu_to_le16(IEEE80211_FCTL_TODS);
+
+ if (sdata->u.mgd.use_4addr) {
+ fc |= cpu_to_le16(IEEE80211_FCTL_FROMDS);
+ *hdrlen = 32;
+ } else {
+ *hdrlen = 26;
+ }
+ }
+
+ if (key_conf)
+ fc |= cpu_to_le16(IEEE80211_FCTL_PROTECTED);
+
+ return fc;
+}
+
+static void _cl_tx_agg(struct cl_hw *cl_hw, struct cl_sta *cl_sta,
+ struct sk_buff *skb, bool hdr_conv)
+{
+ struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(skb);
+ struct ieee80211_key_conf *key_conf = tx_info->control.hw_key;
+ struct cl_sw_txhdr *sw_txhdr = NULL;
+ struct cl_tx_queue *tx_queue = NULL;
+ struct cl_vif *cl_vif = cl_sta->cl_vif;
+ u16 frame_len = (u16)skb->len;
+ u16 total_frame_len = 0;
+ u8 hdr_pads = CL_SKB_DATA_ALIGN_PADS(skb->data);
+ u8 is_amsdu = cl_tx_ctrl_is_amsdu(tx_info);
+ u8 tid = skb->priority & IEEE80211_QOS_CTL_TAG1D_MASK;
+ u8 ac = tid_to_ac[tid];
+ u8 hdrlen = 0;
+
+ cl_tx_cpu_agg(cl_hw);
+
+ if (unlikely(!test_bit(CL_DEV_STARTED, &cl_hw->drv_flags) ||
+ test_bit(CL_DEV_FW_ERROR, &cl_hw->drv_flags))) {
+ kfree_skb(skb);
+ cl_hw->tx_packet_cntr.drop.dev_flags++;
+ cl_vif->trfc_cntrs[ac].tx_dropped++;
+ return;
+ }
+
+ if (unlikely(!cl_vif->tx_en || cl_hw->tx_disable_flags)) {
+ kfree_skb(skb);
+ cl_hw->tx_packet_cntr.drop.tx_disable++;
+ cl_vif->trfc_cntrs[ac].tx_dropped++;
+ return;
+ }
+
+ /* Check if packet length exceeds max size */
+ if (unlikely(frame_len > CL_TX_MAX_FRAME_LEN_AGG)) {
+ kfree_skb(skb);
+ cl_dbg_err(cl_hw, "frame_len exceeds max size %d\n", frame_len);
+ cl_hw->tx_packet_cntr.drop.length_limit++;
+ cl_vif->trfc_cntrs[ac].tx_errors++;
+ return;
+ }
+
+ if (cl_sta->key_disable) {
+ kfree_skb(skb);
+ cl_hw->tx_packet_cntr.drop.key_disable++;
+ cl_vif->trfc_cntrs[ac].tx_dropped++;
+ return;
+ }
+
+ /* Check if amsdu is enabled for the current skb */
+ if (is_amsdu) {
+ enum cl_amsdu_result amsdu_res = cl_tx_amsdu_set(cl_hw, cl_sta, skb, tid);
+
+ switch (amsdu_res) {
+ case CL_AMSDU_SKIP:
+ is_amsdu = false;
+ tx_info->control.flags &= ~IEEE80211_TX_CTRL_AMSDU;
+ fallthrough;
+ case CL_AMSDU_ANCHOR_SET:
+ /*
+ * If new anchor was set, or AMSDU is
+ * skipped continue building sw_txhdr
+ */
+ break;
+ case CL_AMSDU_SUB_FRAME_SET:
+ cl_tx_sub_frame_set(cl_sta, tid);
+ fallthrough;
+ case CL_AMSDU_FAILED:
+ default:
+ return;
+ }
+ } else {
+ /*
+ * If this is not an A-MSDU and an anchor exists, reset the
+ * current anchor in order to avoid reordering packets.
+ */
+ if (cl_sta->amsdu_anchor[tid].sw_txhdr)
+ cl_tx_amsdu_anchor_init(&cl_sta->amsdu_anchor[tid]);
+ }
+
+ /* Allocate sw_txhdr */
+ sw_txhdr = cl_sw_txhdr_alloc(cl_hw);
+ if (unlikely(!sw_txhdr)) {
+ kfree_skb(skb);
+ cl_dbg_err(cl_hw, "sw_txhdr alloc failed\n");
+ cl_hw->tx_packet_cntr.drop.txhdr_alloc_fail++;
+ cl_vif->trfc_cntrs[ac].tx_errors++;
+ return;
+ }
+
+ if (cl_vif->vif->type == NL80211_IFTYPE_MESH_POINT)
+ tx_info->hw_queue = ac;
+
+ /* Fill sw_txhdr */
+ sw_txhdr->tid = tid;
+ sw_txhdr->ac = ac;
+ sw_txhdr->hw_queue = tx_info->hw_queue;
+ sw_txhdr->cl_sta = cl_sta;
+ sw_txhdr->sta_idx = cl_sta->sta_idx;
+ sw_txhdr->is_bcn = 0;
+ sw_txhdr->skb = skb;
+ sw_txhdr->map_len = frame_len + hdr_pads;
+ sw_txhdr->cl_vif = cl_vif;
+
+ if (cl_sta->amsdu_anchor[tid].is_sw_amsdu) {
+ sw_txhdr->is_sw_amsdu = true;
+ sw_txhdr->sw_amsdu_packet_cnt = 1;
+ } else {
+ sw_txhdr->is_sw_amsdu = false;
+ }
+
+ if (hdr_conv) {
+ sw_txhdr->hdr80211 = NULL;
+ sw_txhdr->fc = cl_tx_agg_frame_control(cl_vif, key_conf, &hdrlen);
+ } else {
+ struct ieee80211_hdr *hdr80211 = (struct ieee80211_hdr *)skb->data;
+ __le16 fc = hdr80211->frame_control;
+
+ sw_txhdr->hdr80211 = hdr80211;
+ sw_txhdr->fc = fc;
+ hdrlen = ieee80211_hdrlen(fc);
+ }
+
+ /* Fetch the relevant agg queue */
+ tx_queue = cl_sta->agg_tx_queues[tid];
+
+ if (unlikely(!tx_queue)) {
+ kfree_skb(skb);
+ cl_sw_txhdr_free(cl_hw, sw_txhdr);
+ cl_dbg_err(cl_hw, "tx_queue is NULL [sta_idx = %u] [tid = %u]\n",
+ cl_sta->sta_idx, tid);
+ cl_hw->tx_packet_cntr.drop.queue_null++;
+ cl_vif->trfc_cntrs[ac].tx_dropped++;
+ return;
+ }
+
+ sw_txhdr->tx_queue = tx_queue;
+
+ total_frame_len = frame_len + hdrlen - sizeof(struct ethhdr);
+
+ if (key_conf)
+ total_frame_len += key_conf->icv_len;
+
+ /* Prepare txdesc */
+ cl_tx_agg_prep(cl_hw, sw_txhdr, frame_len, hdr_pads, hdr_conv);
+
+ /*
+ * AMSDU - first sub frame
+ * !!! Must be done after calling cl_tx_agg_prep() !!!
+ */
+ if (is_amsdu)
+ cl_tx_amsdu_first_sub_frame(sw_txhdr, cl_sta, skb, tid);
+
+ cl_tx_send(cl_hw, sw_txhdr, &cl_sta->amsdu_anchor[tid]);
+}
+
+void cl_tx_agg(struct cl_hw *cl_hw, struct cl_sta *cl_sta,
+ struct sk_buff *skb, bool hdr_conv, bool lock)
+{
+ if (lock) {
+ spin_lock_bh(&cl_hw->tx_lock_agg);
+ _cl_tx_agg(cl_hw, cl_sta, skb, hdr_conv);
+ spin_unlock_bh(&cl_hw->tx_lock_agg);
+ } else {
+ _cl_tx_agg(cl_hw, cl_sta, skb, hdr_conv);
+ }
+}
+
+static void cl_tx_reset_session_timer(struct ieee80211_sta *sta, u8 tid)
+{
+ struct tid_ampdu_tx *tid_tx = NULL;
+ struct sta_info *stainfo = IEEE80211_STA_TO_STAINFO(sta);
+
+ tid_tx = rcu_dereference(stainfo->ampdu_mlme.tid_tx[tid]);
+
+ if (tid_tx && tid_tx->timeout)
+ tid_tx->last_tx = jiffies;
+}
+
+void cl_tx_fast_agg(struct cl_hw *cl_hw, struct cl_sta *cl_sta,
+ struct sk_buff *skb, bool lock)
+{
+ struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(skb);
+ struct ieee80211_vif *vif = cl_sta->cl_vif->vif;
+ u16 ac = skb_get_queue_mapping(skb);
+ u8 tid = skb->priority & IEEE80211_QOS_CTL_TAG1D_MASK;
+
+ tx_info->control.vif = vif;
+ tx_info->control.hw_key = cl_key_get(cl_sta);
+ tx_info->hw_queue = vif->hw_queue[ac];
+ tx_info->flags |= IEEE80211_TX_CTL_AMPDU;
+
+ if (cl_sta->baws[tid].amsdu &&
+ (cl_wrs_api_get_tx_sta_data_rate(cl_sta) > cl_hw->conf->ci_tx_amsdu_min_data_rate))
+ tx_info->control.flags |= IEEE80211_TX_CTRL_AMSDU;
+
+ cl_tx_agg(cl_hw, cl_sta, skb, true, lock);
+ cl_tx_reset_session_timer(cl_sta->sta, tid);
+ cl_hw->tx_packet_cntr.forward.drv_fast_agg++;
+}
+
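+/*
+ * Convert an 802.11 data frame back to 802.3 in place: drop the WLAN
+ * (and cipher) header, strip a RFC1042/Bridge-Tunnel SNAP header when
+ * present and rebuild the Ethernet header from the 802.11 DA/SA.
+ */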
+void cl_tx_wlan_to_8023(struct sk_buff *skb)
+{
+ struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+ struct ethhdr tmp_eth;
+ struct ethhdr *ehdr;
+ struct {
u8 hdr[ETH_ALEN] __aligned(2);
+ __be16 proto;
+ } payload;
+ u16 hdrlen = ieee80211_hdrlen(hdr->frame_control);
+ u8 enc_len = cl_key_get_cipher_len(skb);
+
+ cl_mac_addr_copy(tmp_eth.h_dest, ieee80211_get_DA(hdr));
+ cl_mac_addr_copy(tmp_eth.h_source, ieee80211_get_SA(hdr));
+ skb_copy_bits(skb, hdrlen, &payload, sizeof(payload));
+ tmp_eth.h_proto = payload.proto;
+
+ if (enc_len) {
+ memcpy(skb->data + hdrlen,
+ skb->data + hdrlen + enc_len,
+ skb->len - hdrlen - enc_len);
+ skb_trim(skb, skb->len - enc_len);
+ }
+
+ if (likely((ether_addr_equal(payload.hdr, rfc1042_header) &&
+ tmp_eth.h_proto != htons(ETH_P_AARP) &&
+ tmp_eth.h_proto != htons(ETH_P_IPX)) ||
+ ether_addr_equal(payload.hdr, bridge_tunnel_header)))
+ /* Remove RFC1042 or Bridge-Tunnel encapsulation and replace ether_type */
+ hdrlen += ETH_ALEN + 2;
+ else
+ tmp_eth.h_proto = htons(skb->len - hdrlen);
+
+ skb_pull(skb, hdrlen);
+ ehdr = skb_push(skb, sizeof(struct ethhdr));
+ memcpy(ehdr, &tmp_eth, sizeof(tmp_eth));
+}
+
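+/*
+ * Build an 802.11 data header from the 802.3 addresses in the skb,
+ * according to the interface type (AP, or STA with optional 4-address
+ * mode). Returns the header length, or 0 for unsupported vif types.
+ */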
+u16 cl_tx_prepare_wlan_hdr(struct cl_hw *cl_hw, struct cl_sta *cl_sta,
+ struct sk_buff *skb, struct ieee80211_hdr *hdr)
+{
+ struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(skb);
+ u16 hdrlen = 0;
+ __le16 fc = cpu_to_le16(IEEE80211_FTYPE_DATA | IEEE80211_STYPE_DATA);
+ struct ieee80211_vif *vif = cl_sta->cl_vif->vif;
+
+ if (tx_info->control.hw_key)
+ fc |= cpu_to_le16(IEEE80211_FCTL_PROTECTED);
+
+ switch (vif->type) {
+ case NL80211_IFTYPE_AP:
+ fc |= cpu_to_le16(IEEE80211_FCTL_FROMDS);
+ /* DA BSSID SA */
+ memcpy(hdr->addr1, skb->data, ETH_ALEN);
+ memcpy(hdr->addr2, vif->addr, ETH_ALEN);
+ memcpy(hdr->addr3, skb->data + ETH_ALEN, ETH_ALEN);
+ hdrlen = 24;
+ break;
+ case NL80211_IFTYPE_STATION:
+ {
+ struct wireless_dev *wdev = skb->dev->ieee80211_ptr;
+
+ if (wdev && wdev->use_4addr) {
+ fc |= cpu_to_le16(IEEE80211_FCTL_FROMDS |
+ IEEE80211_FCTL_TODS);
+ /* RA TA DA SA */
+ memcpy(hdr->addr1, vif->bss_conf.bssid, ETH_ALEN);
+ memcpy(hdr->addr2, vif->addr, ETH_ALEN);
+ memcpy(hdr->addr3, skb->data, ETH_ALEN);
+ memcpy(hdr->addr4, skb->data + ETH_ALEN, ETH_ALEN);
+ hdrlen = 30;
+ } else {
+ fc |= cpu_to_le16(IEEE80211_FCTL_TODS);
+ /* BSSID SA DA */
+ memcpy(hdr->addr1, vif->bss_conf.bssid, ETH_ALEN);
+ memcpy(hdr->addr2, skb->data + ETH_ALEN, ETH_ALEN);
+ memcpy(hdr->addr3, skb->data, ETH_ALEN);
+ hdrlen = 24;
+ }
+ }
+ break;
+ case NL80211_IFTYPE_MESH_POINT:
+ cl_dbg_trace(cl_hw, "vif type mesh_point, invalid tx path\n");
+ return 0;
+ default:
+ cl_dbg_err(cl_hw, "Unknown vif type %d !!!\n", vif->type);
+ return 0;
+ }
+
+ if (cl_sta->sta->wme) {
+ fc |= cpu_to_le16(IEEE80211_STYPE_QOS_DATA);
+ hdrlen += 2;
+ }
+
+ hdr->frame_control = fc;
+ hdr->duration_id = 0;
+ hdr->seq_ctrl = 0;
+
+ return hdrlen;
+}
+
+int cl_tx_8023_to_wlan(struct cl_hw *cl_hw, struct sk_buff *skb, struct cl_sta *cl_sta, u8 tid)
+{
+ struct ieee80211_hdr hdr;
+ int head_need, ret = 0;
+ u16 ethertype, hdrlen;
+ const u8 *encaps_data = NULL;
+ int encaps_len = 0, skip_header_bytes = ETH_HLEN;
+ u8 enc_len = cl_key_get_cipher_len(skb);
+
+ /* Convert Ethernet header to proper 802.11 header */
+ ethertype = (skb->data[12] << 8) | skb->data[13];
+
+ hdrlen = cl_tx_prepare_wlan_hdr(cl_hw, cl_sta, skb, &hdr);
+ if (!hdrlen) {
+ ret = -EINVAL;
+ goto free;
+ }
+
+ if (ethertype >= ETH_P_802_3_MIN) {
+ encaps_data = rfc1042_header;
+ encaps_len = sizeof(rfc1042_header);
+ skip_header_bytes -= 2;
+ }
+
+ skb_pull(skb, skip_header_bytes);
+ head_need = hdrlen + enc_len + encaps_len - skb_headroom(skb);
+
+ if (head_need > 0) {
+ head_need = ((head_need + 3) & ~3);
+ if (pskb_expand_head(skb, head_need, 0, GFP_ATOMIC)) {
+ ret = -ENOMEM;
+ goto free;
+ }
+ }
+
+ if (encaps_data)
+ memcpy(skb_push(skb, encaps_len), encaps_data, encaps_len);
+
+ skb_push(skb, hdrlen + enc_len);
+
+ if (cl_sta->sta->wme) {
+ u16 qos_ctrl = tid;
+
+ memcpy(skb->data, &hdr, hdrlen - 2);
+ memcpy(skb->data + hdrlen - 2, &qos_ctrl, 2);
+ } else {
+ memcpy(skb->data, &hdr, hdrlen);
+ }
+
+ skb_reset_mac_header(skb);
+
+ return ret;
+free:
+ cl_hw->tx_packet_cntr.drop.build_hdr_fail++;
+ cl_sta->cl_vif->trfc_cntrs[tid_to_ac[tid]].tx_errors++;
+ kfree_skb(skb);
+ skb = NULL;
+
+ return ret;
+}
+
+void cl_tx_check_start_ba_session(struct cl_hw *cl_hw,
+ struct ieee80211_sta *sta,
+ struct sk_buff *skb)
+{
+ struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+ struct sta_info *stainfo = IEEE80211_STA_TO_STAINFO(sta);
+ struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(skb);
+ u8 tid;
+
+ /* TODO: What about HE? */
+ if (!sta->ht_cap.ht_supported &&
+ !sta->vht_cap.vht_supported &&
+ !cl_band_is_6g(cl_hw))
+ return;
+
+ if (test_sta_flag(stainfo, WLAN_STA_PS_STA))
+ return;
+
+ if ((tx_info->flags & IEEE80211_TX_CTL_AMPDU) &&
+ !(tx_info->flags & IEEE80211_TX_STAT_AMPDU))
+ return;
+
+ if (cl_tx_ctrl_is_eapol(tx_info))
+ return;
+
+ if (unlikely(!ieee80211_is_data_qos(hdr->frame_control)))
+ return;
+
+ if (unlikely(skb->protocol == cpu_to_be16(ETH_P_PAE)))
+ return;
+
+ tid = ieee80211_get_tid(hdr);
+
+ if (likely(stainfo->ampdu_mlme.tid_tx[tid]))
+ return;
+
+ ieee80211_start_tx_ba_session(sta, tid, cl_hw->conf->ce_tx_ba_session_timeout);
+}
+
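+/*
+ * Set the TIM bit in the beacon for every station that currently has
+ * TX traffic (cl_traffic_is_sta_tx_exist()), so that such stations stay
+ * awake for the frames directed to them.
+ */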
+static void cl_tx_handle_beacon_tim(struct ieee80211_hw *hw, struct sk_buff *skb)
+{
+ struct cl_hw *cl_hw = (struct cl_hw *)hw->priv;
+ struct cl_sta *cl_sta = NULL;
+ struct ieee80211_mgmt *mgmt = (struct ieee80211_mgmt *)skb->data;
+ const u8 *tim_ie = cfg80211_find_ie(WLAN_EID_TIM, mgmt->u.beacon.variable, skb->len);
+ struct ieee80211_tim_ie *tim = NULL;
+
+ /* The beacon should always carry a TIM element; bail out if it does not */
+ if (!tim_ie)
+ return;
+
+ /* Offset of the element */
+ tim = (void *)(tim_ie + BCN_IE_TIM_BIT_OFFSET);
+
+ cl_sta_lock(cl_hw);
+
+ /* Loop through all STA's */
+ list_for_each_entry(cl_sta, &cl_hw->cl_sta_db.head, list) {
+ if (cl_traffic_is_sta_tx_exist(cl_hw, cl_sta)) {
+ u8 sta_aid = cl_sta->sta->aid;
+ u8 map_index = sta_aid / BITS_PER_BYTE;
+
+ /* Update STA's AID in TIM bit */
+ tim->virtual_map[map_index] |= BIT(sta_aid % BITS_PER_BYTE);
+ }
+ }
+
+ cl_sta_unlock(cl_hw);
+}
+
+static struct sk_buff *cl_tx_beacon_get(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif)
+{
+ struct sk_buff *skb = NULL;
+
+ skb = ieee80211_beacon_get(hw, vif);
+
+ /* Handle beacon TIM bitmap */
+ if (skb)
+ cl_tx_handle_beacon_tim(hw, skb);
+
+ return skb;
+}
+
+static void cl_tx_mc(struct cl_vif *cl_vif, int *mc_fw_free)
+{
+ struct cl_hw *cl_hw = cl_vif->cl_hw;
+ struct ieee80211_vif *vif = cl_vif->vif;
+ struct sk_buff *skb = NULL;
+ struct ieee80211_tx_info *tx_info;
+
+ if (unlikely(!vif))
+ return;
+
+ while (((*mc_fw_free) > 0) &&
+ (skb = ieee80211_get_buffered_bc(cl_hw->hw, vif))) {
+ /* Route this MCBC frame to the BCN ipc queue */
+ tx_info = IEEE80211_SKB_CB(skb);
+ tx_info->hw_queue = CL_HWQ_BCN;
+
+ (*mc_fw_free)--;
+
+ /* Clear more data bit if this is the last frame in this SP */
+ if (*mc_fw_free == 0) {
+ struct ieee80211_hdr *hdr =
+ (struct ieee80211_hdr *)skb->data;
+ hdr->frame_control &=
+ cpu_to_le16(~IEEE80211_FCTL_MOREDATA);
+ }
+
+ cl_tx_single(cl_hw, NULL, skb, false, true);
+ }
+}
+
+void cl_tx_bcn_mesh_task(unsigned long data)
+{
+ struct cl_vif *cl_vif = NULL;
+ struct cl_hw *cl_hw = NULL;
+ struct ieee80211_tx_info *tx_info;
+ struct sk_buff *skb;
+ int mc_fw_free;
+
+ cl_vif = (struct cl_vif *)data;
+ if (!cl_vif)
+ return;
+
+ cl_hw = cl_vif->cl_hw;
+
+ if (!cl_hw || !cl_vif->vif || cl_vif->vif->type != NL80211_IFTYPE_MESH_POINT ||
+ cl_radio_is_off(cl_hw) ||
+ cl_recovery_in_progress(cl_hw) ||
+ !test_bit(CL_DEV_STARTED, &cl_hw->drv_flags) ||
+ test_bit(CL_DEV_FW_ERROR, &cl_hw->drv_flags) ||
+ cl_hw->tx_disable_flags)
+ return;
+
+ skb = cl_tx_beacon_get(cl_hw->hw, cl_vif->vif);
+ if (!skb)
+ return;
+
+ /* Route this BCN to the BCN ipc queue */
+ tx_info = IEEE80211_SKB_CB(skb);
+ tx_info->hw_queue = CL_HWQ_BCN;
+
+ cl_tx_single(cl_hw, NULL, skb, false, true);
+
+ mc_fw_free = cl_hw->tx_queues->bcmc.fw_free_space;
+ cl_tx_mc(cl_vif, &mc_fw_free);
+}
+
+static void cl_tx_bcn(struct cl_vif *cl_vif)
+{
+ struct cl_hw *cl_hw = cl_vif->cl_hw;
+ struct ieee80211_vif *vif = cl_vif->vif;
+ struct ieee80211_tx_info *tx_info;
+ struct sk_buff *skb;
+
+ if (!vif || vif->type != NL80211_IFTYPE_AP)
+ return;
+
+ /*
+ * If we are in the middle of the CAC, allow a regular channel switch
+ * and retrigger the CAC if needed.
+ * If radar was detected, wait for all CSAs to be transmitted before
+ * allowing the channel switch.
+ */
+ if (cl_dfs_is_in_cac(cl_hw) && vif->csa_active) {
+ ieee80211_csa_finish(vif);
+ return;
+ }
+
+ skb = cl_tx_beacon_get(cl_hw->hw, vif);
+ if (!skb)
+ return;
+
+ /* Route this BCN to the BCN ipc queue */
+ tx_info = IEEE80211_SKB_CB(skb);
+ tx_info->hw_queue = CL_HWQ_BCN;
+
+ cl_tx_single(cl_hw, NULL, skb, false, true);
+}
+
+bool cl_is_tx_allowed(struct cl_hw *cl_hw)
+{
+ return !(cl_radio_is_off(cl_hw) ||
+ cl_hw->vif_db.num_iface_bcn == 0 ||
+ cl_recovery_in_progress(cl_hw) ||
+ cl_hw->tx_db.block_bcn ||
+ cl_hw->tx_disable_flags ||
+ !test_bit(CL_DEV_STARTED, &cl_hw->drv_flags) ||
+ test_bit(CL_DEV_FW_ERROR, &cl_hw->drv_flags));
+}
+
+/* cl_tx_bcns_tasklet - generate BCNs and TX buffered MC frames each BCN DTIM interval
+ *
+ * Beacons are sent first, followed by cyclic MC transmission for fairness
+ * between VIFs; the FW buffer is restricted to "IPC_TXDESC_CNT_BCMC" entries.
+ */
+void cl_tx_bcns_tasklet(unsigned long data)
+{
+ struct cl_hw *cl_hw = (struct cl_hw *)data;
+ struct cl_vif *cl_vif = NULL;
+ int mc_fw_free = 0;
+
+ read_lock(&cl_hw->vif_db.lock);
+
+ if (!cl_is_tx_allowed(cl_hw))
+ goto out;
+ list_for_each_entry(cl_vif, &cl_hw->vif_db.head, list)
+ cl_tx_bcn(cl_vif);
+
+ cl_vif = cl_hw->mc_vif;
+ mc_fw_free = cl_hw->tx_queues->bcmc.fw_free_space;
+
+ do {
+ cl_tx_mc(cl_vif, &mc_fw_free);
+ /* cl_vif_get_next() is cyclic */
+ cl_vif = cl_vif_get_next(cl_hw, cl_vif);
+ } while ((cl_vif != cl_hw->mc_vif) && mc_fw_free);
+
+ cl_hw->mc_vif = cl_vif_get_next(cl_hw, cl_hw->mc_vif);
+
+out:
+ read_unlock(&cl_hw->vif_db.lock);
+}
+
+void cl_tx_en(struct cl_hw *cl_hw, u8 reason, bool enable)
+{
+ unsigned long tx_disable_flags_prev = cl_hw->tx_disable_flags;
+
+ if (enable) {
+ clear_bit(reason, &cl_hw->tx_disable_flags);
+
+ if (tx_disable_flags_prev != 0 && cl_hw->tx_disable_flags == 0)
+ if (cl_hw->conf->ci_backup_bcn_en)
+ cl_msg_tx_backup_bcn_en(cl_hw, true);
+ } else {
+ set_bit(reason, &cl_hw->tx_disable_flags);
+
+ if (tx_disable_flags_prev == 0)
+ if (cl_hw->conf->ci_backup_bcn_en)
+ cl_msg_tx_backup_bcn_en(cl_hw, false);
+ }
+}
+
+static void cl_tx_flush(struct cl_hw *cl_hw)
+{
+ /* Flush bcmc */
+ spin_lock_bh(&cl_hw->tx_lock_bcmc);
+ cl_bcmc_cfm_flush_queue(cl_hw, NULL);
+ spin_unlock_bh(&cl_hw->tx_lock_bcmc);
+
+ /* Flush single */
+ spin_lock_bh(&cl_hw->tx_lock_single);
+ cl_txq_flush_all_single(cl_hw);
+ cl_single_cfm_flush_all(cl_hw);
+ spin_unlock_bh(&cl_hw->tx_lock_single);
+
+ /* Flush agg */
+ spin_lock_bh(&cl_hw->tx_lock_agg);
+ cl_txq_flush_all_agg(cl_hw);
+ cl_agg_cfm_flush_all(cl_hw);
+ spin_unlock_bh(&cl_hw->tx_lock_agg);
+}
+
+void cl_tx_off(struct cl_hw *cl_hw)
+{
+ cl_txq_stop(cl_hw);
+ cl_tx_flush(cl_hw);
+}
+
+void cl_tx_drop_skb(struct sk_buff *skb)
+{
+ skb->dev->stats.rx_dropped++;
+ kfree_skb(skb);
+}
+
+#define AGG_POLL_TIMEOUT 50
+
+/*
+ * cl_hw->agg_cfm_queues:
+ * These queues are used to keep pointers to skb's sent
+ * as aggregation and waiting for confirmation.
+ */
+
+void cl_agg_cfm_init(struct cl_hw *cl_hw)
+{
+ int i = 0;
+
+ for (i = 0; i < IPC_MAX_BA_SESSIONS; i++)
+ INIT_LIST_HEAD(&cl_hw->agg_cfm_queues[i].head);
+}
+
+void cl_agg_cfm_add(struct cl_hw *cl_hw, struct cl_sw_txhdr *sw_txhdr, u8 agg_idx)
+{
+ spin_lock(&cl_hw->tx_lock_cfm_agg);
+ list_add_tail(&sw_txhdr->cfm_list, &cl_hw->agg_cfm_queues[agg_idx].head);
+ spin_unlock(&cl_hw->tx_lock_cfm_agg);
+}
+
+static void cl_agg_cfm_amsdu_free(struct cl_hw *cl_hw, struct cl_sw_txhdr *sw_txhdr)
+{
+ struct cl_amsdu_txhdr *amsdu_txhdr = NULL;
+ struct cl_amsdu_txhdr *tmp = NULL;
+ struct sk_buff *sub_skb = NULL;
+ struct ieee80211_tx_info *tx_info_sub_skb = NULL;
+
+ list_for_each_entry_safe(amsdu_txhdr, tmp, &sw_txhdr->amsdu_txhdr.list, list) {
+ sub_skb = amsdu_txhdr->skb;
+ tx_info_sub_skb = IEEE80211_SKB_CB(sub_skb);
+
+ list_del(&amsdu_txhdr->list);
+ dma_unmap_single(cl_hw->chip->dev, amsdu_txhdr->dma_addr,
+ (size_t)sub_skb->len, DMA_TO_DEVICE);
+ kfree_skb(sub_skb);
+ cl_tx_amsdu_txhdr_free(cl_hw, amsdu_txhdr);
+ }
+}
+
+void cl_agg_cfm_free_head_skb(struct cl_hw *cl_hw,
+ struct cl_agg_cfm_queue *cfm_queue,
+ u8 ba_queue_idx)
+{
+ struct cl_sw_txhdr *sw_txhdr = list_first_entry(&cfm_queue->head,
+ struct cl_sw_txhdr,
+ cfm_list);
+ struct sk_buff *skb = sw_txhdr->skb;
+ struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(skb);
+ dma_addr_t dma_addr = le32_to_cpu(sw_txhdr->txdesc.umacdesc.packet_addr[0]);
+
+ if (cl_hw->conf->ci_tx_delay_tstamp_en)
+ cl_tx_update_hist_tstamp(cfm_queue->tx_queue, skb,
+ cfm_queue->tx_queue->hist_push_to_cfm, false);
+
+ dma_unmap_single(cl_hw->chip->dev, dma_addr, sw_txhdr->map_len, DMA_TO_DEVICE);
+
+ /* If amsdu list not empty free sub MSDU frames first, including amsdu_txhdr */
+ if (cl_tx_ctrl_is_amsdu(tx_info))
+ if (!list_empty(&sw_txhdr->amsdu_txhdr.list))
+ cl_agg_cfm_amsdu_free(cl_hw, sw_txhdr);
+
+ consume_skb(skb);
+ list_del(&sw_txhdr->cfm_list);
+ cl_sw_txhdr_free(cl_hw, sw_txhdr);
+}
+
+static void cl_agg_cfm_flush_queue(struct cl_hw *cl_hw, u8 agg_idx)
+{
+ struct cl_agg_cfm_queue *cfm_queue = &cl_hw->agg_cfm_queues[agg_idx];
+ struct cl_tx_queue *tx_queue = cfm_queue->tx_queue;
+ struct sk_buff *skb = NULL;
+ struct cl_sw_txhdr *sw_txhdr = NULL;
+ dma_addr_t dma_addr = 0;
+ struct ieee80211_tx_info *tx_info;
+
+ if (!tx_queue)
+ return;
+
+ if (list_empty(&cfm_queue->head))
+ return;
+
+ do {
+ sw_txhdr = list_first_entry(&cfm_queue->head, struct cl_sw_txhdr, cfm_list);
+ skb = sw_txhdr->skb;
+
+ dma_addr = le32_to_cpu(sw_txhdr->txdesc.umacdesc.packet_addr[0]);
+ dma_unmap_single(cl_hw->chip->dev, dma_addr, sw_txhdr->map_len, DMA_TO_DEVICE);
+
+ tx_info = IEEE80211_SKB_CB(skb);
+
+ /* If amsdu list not empty free sub MSDU frames first, including amsdu_txhdr */
+ if (cl_tx_ctrl_is_amsdu(tx_info))
+ if (!list_empty(&sw_txhdr->amsdu_txhdr.list))
+ cl_agg_cfm_amsdu_free(cl_hw, sw_txhdr);
+
+ tx_queue->total_fw_cfm++;
+
+ kfree_skb(skb);
+ list_del(&sw_txhdr->cfm_list);
+ cl_sw_txhdr_free(cl_hw, sw_txhdr);
+ } while (!list_empty(&cfm_queue->head));
+
+ /*
+ * Set fw_free_space back to maximum after flushing the queue
+ * and clear the enhanced TIM.
+ */
+ tx_queue->fw_free_space = tx_queue->fw_max_size;
+ cl_enhanced_tim_clear_tx_agg(cl_hw, agg_idx, tx_queue->hw_index,
+ tx_queue->cl_sta, tx_queue->tid);
+
+ cfm_queue->tx_queue = NULL;
+}
+
+void cl_agg_cfm_flush_all(struct cl_hw *cl_hw)
+{
+ int i = 0;
+
+ /* Don't use BH lock, because cl_agg_cfm_flush_all() is called with BH disabled */
+ spin_lock(&cl_hw->tx_lock_cfm_agg);
+
+ for (i = 0; i < IPC_MAX_BA_SESSIONS; i++)
+ cl_agg_cfm_flush_queue(cl_hw, i);
+
+ spin_unlock(&cl_hw->tx_lock_cfm_agg);
+}
+
+static void cl_agg_cfm_poll_timeout(struct cl_hw *cl_hw, struct cl_tx_queue *tx_queue,
+ u8 agg_idx, bool flush)
+{
+ /*
+ * When polling failed clear the enhanced TIM so that firmware will
+ * not try to transmit these packets.
+ * If flush is set cl_enhanced_tim_clear_tx_agg() is called inside
+ * cl_agg_cfm_flush_queue().
+ */
+ cl_dbg_err(cl_hw, "Polling timeout (queue_idx = %u)\n", agg_idx);
+
+ spin_lock_bh(&cl_hw->tx_lock_cfm_agg);
+
+ if (flush)
+ cl_agg_cfm_flush_queue(cl_hw, agg_idx);
+ else
+ cl_enhanced_tim_clear_tx_agg(cl_hw, agg_idx, tx_queue->hw_index,
+ tx_queue->cl_sta, tx_queue->tid);
+
+ spin_unlock_bh(&cl_hw->tx_lock_cfm_agg);
+}
+
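+/*
+ * Poll (up to AGG_POLL_TIMEOUT iterations, 20 ms apart) until all
+ * confirmations for this aggregation queue have returned from firmware;
+ * on timeout either flush the queue or just clear its enhanced TIM bit.
+ */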
+void cl_agg_cfm_poll_empty(struct cl_hw *cl_hw, u8 agg_idx, bool flush)
+{
+ struct cl_agg_cfm_queue *cfm_queue = &cl_hw->agg_cfm_queues[agg_idx];
+ bool empty = false;
+ int i = 0;
+
+ if (test_bit(CL_DEV_FW_ERROR, &cl_hw->drv_flags))
+ return;
+
+ while (true) {
+ spin_lock_bh(&cl_hw->tx_lock_cfm_agg);
+ empty = list_empty(&cfm_queue->head);
+ spin_unlock_bh(&cl_hw->tx_lock_cfm_agg);
+
+ if (empty)
+ return;
+
+ if (++i == AGG_POLL_TIMEOUT) {
+ cl_agg_cfm_poll_timeout(cl_hw, cfm_queue->tx_queue, agg_idx, flush);
+ return;
+ }
+
+ msleep(20);
+ }
+}
+
+void cl_agg_cfm_poll_empty_sta(struct cl_hw *cl_hw, struct cl_sta *cl_sta)
+{
+ int i = 0;
+ struct cl_tx_queue *tx_queue = NULL;
+
+ for (i = 0; i < IEEE80211_NUM_TIDS; i++) {
+ tx_queue = cl_sta->agg_tx_queues[i];
+
+ if (tx_queue)
+ cl_agg_cfm_poll_empty(cl_hw, tx_queue->index, false);
+ }
+}
+
+void cl_agg_cfm_clear_tim_bit_sta(struct cl_hw *cl_hw, struct cl_sta *cl_sta)
+{
+ int i = 0;
+ struct cl_tx_queue *tx_queue = NULL;
+
+ for (i = 0; i < IEEE80211_NUM_TIDS; i++) {
+ tx_queue = cl_sta->agg_tx_queues[i];
+
+ if (tx_queue) {
+ spin_lock_bh(&cl_hw->tx_lock_agg);
+ cl_enhanced_tim_clear_tx_agg(cl_hw, tx_queue->index, tx_queue->hw_index,
+ tx_queue->cl_sta, tx_queue->tid);
+ spin_unlock_bh(&cl_hw->tx_lock_agg);
+ }
+ }
+}
+
+void cl_agg_cfm_set_ssn(struct cl_hw *cl_hw, u16 ssn, u8 idx)
+{
+ spin_lock_bh(&cl_hw->tx_lock_cfm_agg);
+ cl_hw->agg_cfm_queues[idx].ssn = ssn;
+ spin_unlock_bh(&cl_hw->tx_lock_cfm_agg);
+}
+
+void cl_agg_cfm_set_tx_queue(struct cl_hw *cl_hw, struct cl_tx_queue *tx_queue, u8 idx)
+{
+ spin_lock_bh(&cl_hw->tx_lock_cfm_agg);
+ cl_hw->agg_cfm_queues[idx].tx_queue = tx_queue;
+ spin_unlock_bh(&cl_hw->tx_lock_cfm_agg);
+}
+
+static bool cl_is_same_rate(struct cl_agg_tx_report *agg_report,
+ struct cl_wrs_rate_params *rate_params)
+{
+ union cl_rate_ctrl_info rate_ctrl_info = {
+ .word = le32_to_cpu(agg_report->rate_cntrl_info)};
+ u8 mcs = U8_MAX, nss = U8_MAX;
+
+ if (agg_report->bw_requested != rate_params->bw)
+ return false;
+
+ cl_rate_ctrl_parse(&rate_ctrl_info, &nss, &mcs);
+
+ return ((mcs == rate_params->mcs) && (nss == rate_params->nss));
+}
+
+static void cl_sync_tx_rate(struct cl_hw *cl_hw, struct cl_sta *cl_sta,
+ struct cl_agg_tx_report *agg_report,
+ struct cl_wrs_info *wrs_info, struct cl_wrs_params *wrs_params)
+{
+ if (!agg_report->is_fallback && cl_is_same_rate(agg_report, &wrs_params->rate_params)) {
+ cl_wrs_api_rate_sync(cl_hw, cl_sta, wrs_params);
+
+ wrs_info->synced = true;
+ wrs_info->quick_rate_check = true;
+ wrs_info->quick_rate_agg_cntr = 0;
+ wrs_info->quick_rate_pkt_cntr = 0;
+ } else {
+ wrs_info->sync_attempts++;
+ }
+}
+
+static void cl_ba_not_received_handler(struct cl_hw *cl_hw, struct cl_wrs_info *wrs_info,
+ struct cl_agg_tx_report *agg_report)
+{
+ /* Ignore 'BA not received' if station is in power-save or if RTS limit was reached */
+ if (agg_report->is_sta_ps || agg_report->is_rts_retry_limit_reached)
+ return;
+
+ /* Count number of consecutive 'BA not received' */
+ wrs_info->ba_not_rcv_consecutive++;
+
+ /* Save longest sequence of consecutive 'BA not received' */
+ if (wrs_info->ba_not_rcv_consecutive > wrs_info->ba_not_rcv_consecutive_max)
+ wrs_info->ba_not_rcv_consecutive_max = wrs_info->ba_not_rcv_consecutive;
+
+ if (cl_hw->wrs_db.ba_not_rcv_collision_filter) {
+ /*
+ * First 'BA not received' - might just be a collision.
+ * Don't add fail to ba_not_rcv but keep aside.
+ * Second consecutive 'BA not received' - not likely to be a collision.
+ * Add fail to ba_not_rcv including previous fail that was kept aside.
+ * More than two consecutive 'BA not received' - very unlikely to be a collision.
+ * Add fail to ba_not_rcv.
+ */
+ if (wrs_info->ba_not_rcv_consecutive == 1)
+ wrs_info->fail_prev = agg_report->fail;
+ else if (wrs_info->ba_not_rcv_consecutive == 2)
+ wrs_info->ba_not_rcv += (agg_report->fail + wrs_info->fail_prev);
+ else
+ wrs_info->ba_not_rcv += agg_report->fail;
+ } else {
+ wrs_info->ba_not_rcv += agg_report->fail;
+ }
+}
+
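+/*
+ * Process a per-AMPDU TX report from firmware: sync the WRS rate if
+ * needed, apply the 'BA not received' collision filter, accumulate
+ * success/fail and effective-rate statistics and trigger the quick
+ * rate-down check when enough data has been collected.
+ */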
+void cl_agg_tx_report_handler(struct cl_hw *cl_hw, struct cl_sta *cl_sta,
+ struct cl_agg_tx_report *agg_report)
+{
+ struct cl_wrs_info *wrs_info = NULL;
+ struct cl_wrs_params *wrs_params = NULL;
+ u8 group_id;
+ bool skip_epr_update = false;
+ union cl_rate_ctrl_info rate_ctrl_info = {
+ .word = le32_to_cpu(agg_report->rate_cntrl_info)};
+
+ wrs_params = &cl_sta->wrs_sta.tx_su_params;
+ wrs_info = &cl_sta->wrs_info_tx_su;
+ group_id = 0;
+
+ /* Retry_count for cl_wlan */
+ cl_sta->retry_count += agg_report->success_after_retry;
+
+ /*
+ * In case of big packets (4300 in VHT and 5400 in HE) and low
+ * rate (BW 20, NSS 1, MCS 0), firmware will increase rate to MCS 1,
+ * and give an indication to driver (set rate_fix_mcs1 in cl_agg_tx_report).
+ * WRS should also move to MCS 1, and give the maximum penalty
+ * time from MCS 0 to MCS 1.
+ */
+ if (agg_report->rate_fix_mcs1 &&
+ !agg_report->is_fallback &&
+ cl_wrs_api_up_mcs1(cl_hw, cl_sta, wrs_params))
+ return;
+
+ /* WRS sync mechanism */
+ if (!wrs_info->synced)
+ cl_sync_tx_rate(cl_hw, cl_sta, agg_report, wrs_info, wrs_params);
+
+ if (agg_report->bf && cl_sta->bf_db.is_on && !cl_sta->bf_db.synced) {
+ cl_sta->bf_db.synced = true;
+ /* Resetting the WRS UP weights */
+ cl_wrs_api_beamforming_sync(cl_hw, cl_sta);
+ }
+
+ if (agg_report->ba_not_received) {
+ cl_ba_not_received_handler(cl_hw, wrs_info, agg_report);
+ } else {
+ if (!skip_epr_update)
+ wrs_info->fail += agg_report->fail;
+
+ wrs_info->ba_not_rcv_consecutive = 0;
+ }
+
+ if (!skip_epr_update) {
+ u8 mcs = 0, nss = 0, bw = 0;
+ u16 data_rate = 0;
+
+ switch (agg_report->bw_requested) {
+ case CHNL_BW_160:
+ bw = (cl_hw->wrs_db.adjacent_interference20 ||
+ cl_hw->wrs_db.adjacent_interference40 ||
+ cl_hw->wrs_db.adjacent_interference80) ?
+ rate_ctrl_info.field.bw : agg_report->bw_requested;
+ break;
+ case CHNL_BW_80:
+ bw = (cl_hw->wrs_db.adjacent_interference20 ||
+ cl_hw->wrs_db.adjacent_interference40) ?
+ rate_ctrl_info.field.bw : agg_report->bw_requested;
+ break;
+ case CHNL_BW_40:
+ bw = cl_hw->wrs_db.adjacent_interference20 ?
+ rate_ctrl_info.field.bw : agg_report->bw_requested;
+ break;
+ case CHNL_BW_20:
+ bw = agg_report->bw_requested;
+ break;
+ }
+
+ cl_rate_ctrl_parse(&rate_ctrl_info, &nss, &mcs);
+
+ data_rate = cl_data_rates_get_x10(rate_ctrl_info.field.format_mod,
+ bw,
+ nss,
+ mcs,
+ rate_ctrl_info.field.gi);
+
+ wrs_info->epr_acc += ((u64)agg_report->success * data_rate);
+ wrs_info->success += agg_report->success;
+ }
+
+ if (cl_hw->wrs_db.quick_down_en &&
+ wrs_info->quick_rate_check &&
+ cl_is_same_rate(agg_report, &wrs_params->rate_params)) {
+ wrs_info->quick_rate_agg_cntr++;
+ wrs_info->quick_rate_pkt_cntr += (agg_report->success + agg_report->fail);
+
+ if (wrs_info->quick_rate_agg_cntr >= cl_hw->wrs_db.quick_down_agg_thr &&
+ wrs_info->quick_rate_pkt_cntr > cl_hw->wrs_db.quick_down_pkt_thr) {
+ wrs_info->quick_rate_check = false;
+ cl_wrs_api_quick_down_check(cl_hw, cl_sta, wrs_params);
+ }
+ }
+}
+
+void cl_agg_tx_report_simulate_for_single(struct cl_hw *cl_hw, struct cl_sta *cl_sta,
+ struct cl_hw_tx_status *status)
+{
+ /* Assign statistics struct */
+ struct cl_agg_tx_report agg_report;
+ union cl_rate_ctrl_info rate_ctrl_info;
+
+ memset(&agg_report, 0, sizeof(struct cl_agg_tx_report));
+
+ agg_report.bf = status->bf;
+ agg_report.success = status->frm_successful;
+ agg_report.fail = status->num_mpdu_retries + (status->frm_successful ? 0 : 1);
+ agg_report.success_after_retry =
+ (status->frm_successful && status->num_mpdu_retries);
+ agg_report.retry_limit_reached = !status->frm_successful;
+ agg_report.success_more_one_retry =
+ (status->frm_successful && (status->num_mpdu_retries > 1));
+ agg_report.sta_idx = cl_sta->sta_idx;
+ agg_report.bw_requested = status->bw_requested;
+
+ rate_ctrl_info.field.bw = status->bw_transmitted;
+ rate_ctrl_info.field.gi = status->gi;
+ rate_ctrl_info.field.format_mod = status->format_mod;
+ rate_ctrl_info.field.mcs_index = status->mcs_index;
+
+ cl_rate_ctrl_convert(&rate_ctrl_info);
+
+ agg_report.rate_cntrl_info = cpu_to_le32(rate_ctrl_info.word);
+ cl_agg_tx_report_handler(cl_hw, cl_sta, &agg_report);
+ cl_stats_update_tx_single(cl_hw, cl_sta, &agg_report);
+}
+
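+/*
+ * cl_sta->baws[]:
+ * Per-TID block-ack window state. Frames queued on 'pending' while an
+ * ADDBA exchange is in flight are later moved to the aggregation or
+ * single path (cl_baw_pending_to_agg()/cl_baw_pending_to_single()), or
+ * purged when the session is torn down.
+ */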
+void cl_baw_init(struct cl_sta *cl_sta)
+{
+ u8 tid;
+
+ for (tid = 0; tid < IEEE80211_NUM_TIDS; tid++)
+ __skb_queue_head_init(&cl_sta->baws[tid].pending);
+}
+
+void cl_baw_start(struct cl_baw *baw, u16 ssn)
+{
+ baw->ssn = ssn;
+ baw->action_start = true;
+}
+
+void cl_baw_operational(struct cl_hw *cl_hw, struct cl_baw *baw,
+ u8 fw_agg_idx, bool amsdu_supported)
+{
+ baw->fw_agg_idx = fw_agg_idx;
+ baw->tid_seq = IEEE80211_SN_TO_SEQ(baw->ssn);
+ baw->action_start = false;
+ baw->amsdu = (cl_hw->txamsdu_en && amsdu_supported);
+}
+
+void cl_baw_stop(struct cl_baw *baw)
+{
+ baw->action_start = false;
+}
+
+void cl_baw_pending_to_agg(struct cl_hw *cl_hw,
+ struct cl_sta *cl_sta,
+ u8 tid)
+{
+ struct cl_baw *baw = &cl_sta->baws[tid];
+ struct sk_buff *skb;
+
+ while (!skb_queue_empty(&baw->pending)) {
+ skb = __skb_dequeue(&baw->pending);
+ cl_tx_fast_agg(cl_hw, cl_sta, skb, false);
+ }
+}
+
+void cl_baw_pending_to_single(struct cl_hw *cl_hw,
+ struct cl_sta *cl_sta,
+ struct cl_baw *baw)
+{
+ struct sk_buff *skb;
+
+ while (!skb_queue_empty(&baw->pending)) {
+ skb = __skb_dequeue(&baw->pending);
+ cl_tx_fast_single(cl_hw, cl_sta, skb, false);
+ }
+}
+
+void cl_baw_pending_purge(struct cl_baw *baw)
+{
+ if (!skb_queue_empty(&baw->pending))
+ __skb_queue_purge(&baw->pending);
+}
+
+/*
+ * cl_hw->bcmc_cfm_queue:
+ * This queue is used to keep pointers to already sent
+ * beacon skb's that are waiting for confirmation.
+ */
+
+static void cl_bcmc_free_sw_txhdr(struct cl_hw *cl_hw,
+ struct cl_sw_txhdr *sw_txhdr)
+{
+ dma_addr_t dma_addr;
+ struct sk_buff *skb = NULL;
+
+ dma_addr = le32_to_cpu(sw_txhdr->txdesc.umacdesc.packet_addr[0]);
+ skb = sw_txhdr->skb;
+
+ dma_unmap_single(cl_hw->chip->dev, dma_addr, sw_txhdr->map_len, DMA_TO_DEVICE);
+ dev_kfree_skb_irq(skb);
+ list_del(&sw_txhdr->cfm_list);
+ cl_sw_txhdr_free(cl_hw, sw_txhdr);
+}
+
+static bool cl_bcmc_is_list_empty_per_vif(struct cl_hw *cl_hw,
+ struct cl_vif *cl_vif)
+{
+ struct cl_single_cfm_queue *cfm_queue = &cl_hw->bcmc_cfm_queue;
+ struct cl_sw_txhdr *sw_txhdr = NULL;
+
+ list_for_each_entry(sw_txhdr, &cfm_queue->head, cfm_list)
+ if (sw_txhdr->cl_vif == cl_vif)
+ return false;
+
+ return true;
+}
+
+void cl_bcmc_cfm_init(struct cl_hw *cl_hw)
+{
+ INIT_LIST_HEAD(&cl_hw->bcmc_cfm_queue.head);
+}
+
+void cl_bcmc_cfm_add(struct cl_hw *cl_hw, struct cl_sw_txhdr *sw_txhdr)
+{
+ list_add_tail(&sw_txhdr->cfm_list, &cl_hw->bcmc_cfm_queue.head);
+}
+
+struct cl_sw_txhdr *cl_bcmc_cfm_find(struct cl_hw *cl_hw, dma_addr_t dma_addr, bool keep_in_list)
+{
+ struct cl_single_cfm_queue *cfm_queue = &cl_hw->bcmc_cfm_queue;
+ struct cl_sw_txhdr *sw_txhdr = NULL;
+ struct cl_sw_txhdr *tmp = NULL;
+
+ list_for_each_entry_safe(sw_txhdr, tmp, &cfm_queue->head, cfm_list) {
+ if (le32_to_cpu(sw_txhdr->txdesc.umacdesc.packet_addr[0]) == dma_addr) {
+ if (!keep_in_list)
+ list_del(&sw_txhdr->cfm_list);
+
+ return sw_txhdr;
+ }
+ }
+
+ return NULL;
+}
+
+void cl_bcmc_cfm_flush_queue(struct cl_hw *cl_hw,
+ struct cl_vif *cl_vif)
+{
+ struct cl_single_cfm_queue *cfm_queue = &cl_hw->bcmc_cfm_queue;
+ struct cl_sw_txhdr *sw_txhdr = NULL;
+ struct cl_sw_txhdr *tmp = NULL;
+
+ /* Only flush the specific cl_vif related confirmations */
+ if (cl_vif) {
+ list_for_each_entry_safe(sw_txhdr, tmp, &cfm_queue->head, cfm_list) {
+ if (sw_txhdr->cl_vif == cl_vif) {
+ cl_bcmc_free_sw_txhdr(cl_hw, sw_txhdr);
+ cl_hw->tx_queues->bcmc.fw_free_space++;
+ }
+ }
+
+ return;
+ }
+
+ while (!list_empty(&cfm_queue->head)) {
+ sw_txhdr = list_first_entry(&cfm_queue->head, struct cl_sw_txhdr, cfm_list);
+ cl_bcmc_free_sw_txhdr(cl_hw, sw_txhdr);
+ }
+
+ /* Set fw_free_space back to maximum after flushing the queue */
+ cl_hw->tx_queues->bcmc.fw_free_space = cl_hw->tx_queues->bcmc.fw_max_size;
+}
+
+void cl_bcmc_cfm_poll_empty_per_vif(struct cl_hw *cl_hw,
+ struct cl_vif *cl_vif)
+{
+ bool empty = false;
+ int i = 0;
+
+ if (test_bit(CL_DEV_FW_ERROR, &cl_hw->drv_flags))
+ return;
+
+ while (i++ < BCMC_POLL_TIMEOUT) {
+ spin_lock_bh(&cl_hw->tx_lock_bcmc);
+ empty = cl_bcmc_is_list_empty_per_vif(cl_hw, cl_vif);
+ spin_unlock_bh(&cl_hw->tx_lock_bcmc);
+
+ if (empty)
+ return;
+
+ msleep(20);
+ }
+
+ cl_dbg_err(cl_hw, "Polling timeout vif_index %d\n", cl_vif->vif_index);
+
+ spin_lock_bh(&cl_hw->tx_lock_bcmc);
+ cl_bcmc_cfm_flush_queue(cl_hw, cl_vif);
+ spin_unlock_bh(&cl_hw->tx_lock_bcmc);
+}
+
+/*
+ * cl_hw->single_cfm_queues:
+ * These queues are used to keep pointers to skb's sent
+ * as singles and waiting for confirmation.
+ */
+
+#define SINGLE_POLL_TIMEOUT 50
+
+void cl_single_cfm_init(struct cl_hw *cl_hw)
+{
+ int i = 0;
+
+ for (i = 0; i < MAX_SINGLE_QUEUES; i++)
+ INIT_LIST_HEAD(&cl_hw->single_cfm_queues[i].head);
+}
+
+void cl_single_cfm_add(struct cl_hw *cl_hw, struct cl_sw_txhdr *sw_txhdr, u32 queue_idx)
+{
+ list_add_tail(&sw_txhdr->cfm_list, &cl_hw->single_cfm_queues[queue_idx].head);
+}
+
+struct cl_sw_txhdr *cl_single_cfm_find(struct cl_hw *cl_hw, u32 queue_idx,
+ dma_addr_t dma_addr)
+{
+ struct cl_single_cfm_queue *cfm_queue = NULL;
+ struct cl_sw_txhdr *sw_txhdr = NULL;
+ struct cl_sw_txhdr *tmp = NULL;
+
+ if (queue_idx >= MAX_SINGLE_QUEUES)
+ return NULL;
+
+ cfm_queue = &cl_hw->single_cfm_queues[queue_idx];
+
+ list_for_each_entry_safe(sw_txhdr, tmp, &cfm_queue->head, cfm_list)
+ if (le32_to_cpu(sw_txhdr->txdesc.umacdesc.packet_addr[0]) == dma_addr) {
+ list_del(&sw_txhdr->cfm_list);
+
+ return sw_txhdr;
+ }
+
+ return NULL;
+}
+
+static void cl_single_cfm_flush_queue(struct cl_hw *cl_hw, u32 queue_idx)
+{
+ struct cl_single_cfm_queue *cfm_queue = &cl_hw->single_cfm_queues[queue_idx];
+ struct cl_tx_queue *tx_queue = NULL;
+ struct cl_sw_txhdr *sw_txhdr = NULL;
+ struct sk_buff *skb = NULL;
+ struct ieee80211_tx_info *tx_info = NULL;
+ dma_addr_t dma_addr;
+
+ if (list_empty(&cfm_queue->head))
+ return;
+
+ do {
+ sw_txhdr = list_first_entry(&cfm_queue->head, struct cl_sw_txhdr, cfm_list);
+ dma_addr = le32_to_cpu(sw_txhdr->txdesc.umacdesc.packet_addr[0]);
+ skb = sw_txhdr->skb;
+ tx_info = IEEE80211_SKB_CB(skb);
+
+ dma_unmap_single(cl_hw->chip->dev, dma_addr, sw_txhdr->map_len, DMA_TO_DEVICE);
+
+ cl_tx_single_free_skb(cl_hw, skb);
+ list_del(&sw_txhdr->cfm_list);
+ cl_sw_txhdr_free(cl_hw, sw_txhdr);
+ } while (!list_empty(&cfm_queue->head));
+
+ /*
+ * Set fw_free_space back to maximum after flushing the queue
+ * and clear the enhanced TIM.
+ */
+ tx_queue = &cl_hw->tx_queues->single[queue_idx];
+ tx_queue->fw_free_space = tx_queue->fw_max_size;
+ cl_enhanced_tim_clear_tx_single(cl_hw, queue_idx, tx_queue->hw_index,
+ false, tx_queue->cl_sta, tx_queue->tid);
+}
+
+void cl_single_cfm_flush_all(struct cl_hw *cl_hw)
+{
+ u32 i = 0;
+
+ for (i = 0; i < MAX_SINGLE_QUEUES; i++)
+ cl_single_cfm_flush_queue(cl_hw, i);
+}
+
+void cl_single_cfm_flush_sta(struct cl_hw *cl_hw, u8 sta_idx)
+{
+ /* Flush all single confirmation queues of this sta, and reset write index */
+ u8 ac;
+ u16 queue_idx;
+
+ spin_lock_bh(&cl_hw->tx_lock_single);
+
+ for (ac = 0; ac < AC_MAX; ac++) {
+ queue_idx = QUEUE_IDX(sta_idx, ac);
+ cl_single_cfm_flush_queue(cl_hw, queue_idx);
+
+ cl_hw->ipc_env->ring_indices_elem->indices->txdesc_write_idx.single[queue_idx] = 0;
+ }
+
+ spin_unlock_bh(&cl_hw->tx_lock_single);
+}
+
+static void cl_single_cfm_poll_timeout(struct cl_hw *cl_hw, u32 queue_idx)
+{
+ /*
+ * When polling failed clear the enhanced TIM so that firmware will
+ * not try to transmit these packets.
+ */
+ struct cl_tx_queue *tx_queue = &cl_hw->tx_queues->single[queue_idx];
+
+ cl_dbg_err(cl_hw, "Polling timeout (queue_idx = %u)\n", queue_idx);
+
+ spin_lock_bh(&cl_hw->tx_lock_single);
+ cl_enhanced_tim_clear_tx_single(cl_hw, queue_idx, tx_queue->hw_index,
+ false, tx_queue->cl_sta, tx_queue->tid);
+ spin_unlock_bh(&cl_hw->tx_lock_single);
+}
+
+void cl_single_cfm_poll_empty(struct cl_hw *cl_hw, u32 queue_idx)
+{
+ struct cl_single_cfm_queue *cfm_queue = &cl_hw->single_cfm_queues[queue_idx];
+ bool empty = false;
+ int i = 0;
+
+ if (test_bit(CL_DEV_FW_ERROR, &cl_hw->drv_flags))
+ return;
+
+ while (true) {
+ spin_lock_bh(&cl_hw->tx_lock_single);
+ empty = list_empty(&cfm_queue->head);
+ spin_unlock_bh(&cl_hw->tx_lock_single);
+
+ if (empty)
+ return;
+
+ if (++i == SINGLE_POLL_TIMEOUT) {
+ cl_single_cfm_poll_timeout(cl_hw, queue_idx);
+ return;
+ }
+
+ msleep(20);
+ }
+}
+
+static bool cl_list_hp_empty_sta(struct cl_hw *cl_hw, u8 sta_idx)
+{
+ struct cl_single_cfm_queue *hp_cfm_queue = &cl_hw->single_cfm_queues[HIGH_PRIORITY_QUEUE];
+ struct cl_sw_txhdr *sw_txhdr = NULL;
+
+ list_for_each_entry(sw_txhdr, &hp_cfm_queue->head, cfm_list)
+ if (sw_txhdr->sta_idx == sta_idx)
+ return false;
+
+ return true;
+}
+
+static void cl_single_cfm_poll_empty_hp(struct cl_hw *cl_hw, u8 sta_idx)
+{
+ bool empty = false;
+ int i = 0;
+
+ if (test_bit(CL_DEV_FW_ERROR, &cl_hw->drv_flags))
+ return;
+
+ while (true) {
+ spin_lock_bh(&cl_hw->tx_lock_single);
+ empty = cl_list_hp_empty_sta(cl_hw, sta_idx);
+ spin_unlock_bh(&cl_hw->tx_lock_single);
+
+ if (empty)
+ return;
+
+ if (++i == SINGLE_POLL_TIMEOUT) {
+ cl_single_cfm_poll_timeout(cl_hw, HIGH_PRIORITY_QUEUE);
+ return;
+ }
+
+ msleep(20);
+ }
+}
+
+void cl_single_cfm_poll_empty_sta(struct cl_hw *cl_hw, u8 sta_idx)
+{
+ /*
+ * Poll all single queues belonging to this station, and poll all
+ * packets belonging to this station in the high priority queue.
+ */
+ u8 ac;
+ u16 queue_idx;
+
+ for (ac = 0; ac < AC_MAX; ac++) {
+ queue_idx = QUEUE_IDX(sta_idx, ac);
+ cl_single_cfm_poll_empty(cl_hw, queue_idx);
+ }
+
+ cl_single_cfm_poll_empty_hp(cl_hw, sta_idx);
+}
+
+void cl_single_cfm_clear_tim_bit_sta(struct cl_hw *cl_hw, u8 sta_idx)
+{
+ u8 ac;
+ u16 queue_idx;
+ struct cl_tx_queue *tx_queue = NULL;
+
+ for (ac = 0; ac < AC_MAX; ac++) {
+ queue_idx = QUEUE_IDX(sta_idx, ac);
+ tx_queue = &cl_hw->tx_queues->single[queue_idx];
+
+ spin_lock_bh(&cl_hw->tx_lock_single);
+ cl_enhanced_tim_clear_tx_single(cl_hw, queue_idx, tx_queue->hw_index,
+ false, tx_queue->cl_sta, tx_queue->tid);
+ spin_unlock_bh(&cl_hw->tx_lock_single);
+ }
+
+ tx_queue = &cl_hw->tx_queues->single[HIGH_PRIORITY_QUEUE];
+
+ spin_lock_bh(&cl_hw->tx_lock_single);
+ cl_enhanced_tim_clear_tx_single(cl_hw, HIGH_PRIORITY_QUEUE, tx_queue->hw_index,
+ false, tx_queue->cl_sta, tx_queue->tid);
+ spin_unlock_bh(&cl_hw->tx_lock_single);
+}
+
+static int cl_sw_txhdr_init_pool(struct cl_hw *cl_hw, u16 sw_txhdr_pool)
+{
+ u16 i = 0;
+ u32 sw_txhdr_pool_size = sw_txhdr_pool * sizeof(struct cl_sw_txhdr);
+ struct cl_sw_txhdr *sw_txhdr;
+
+ INIT_LIST_HEAD(&cl_hw->head_sw_txhdr_pool);
+ spin_lock_init(&cl_hw->lock_sw_txhdr_pool);
+
+ for (i = 0; i < sw_txhdr_pool; i++) {
+ sw_txhdr = kzalloc(sizeof(*sw_txhdr), GFP_ATOMIC);
+
+ if (unlikely(!sw_txhdr)) {
+ cl_dbg_verbose(cl_hw, "sw_txhdr NULL\n");
+ return -1;
+ }
+
+ list_add(&sw_txhdr->list_pool, &cl_hw->head_sw_txhdr_pool);
+ }
+
+ cl_dbg_verbose(cl_hw, " - pool %u, size %u\n", sw_txhdr_pool, sw_txhdr_pool_size);
+
+ return 0;
+}
+
+static int cl_sw_txhdr_init_cache(struct cl_hw *cl_hw)
+{
+ char sw_txhdr_cache_name[MODULE_NAME_LEN + 32] = {0};
+
+ snprintf(sw_txhdr_cache_name, sizeof(sw_txhdr_cache_name),
+ "%s_sw_txhdr_cache", THIS_MODULE->name);
+
+ cl_hw->sw_txhdr_cache = kmem_cache_create(sw_txhdr_cache_name,
+ sizeof(struct cl_sw_txhdr),
+ 0,
+ (SLAB_HWCACHE_ALIGN | SLAB_PANIC),
+ NULL);
+
+ if (!cl_hw->sw_txhdr_cache) {
+ cl_dbg_verbose(cl_hw, "sw_txhdr_cache NULL\n");
+ return -1;
+ }
+
+ return 0;
+}
+
+int cl_sw_txhdr_init(struct cl_hw *cl_hw)
+{
+ u16 sw_txhdr_pool = cl_hw->conf->ci_sw_txhdr_pool;
+
+ if (sw_txhdr_pool)
+ return cl_sw_txhdr_init_pool(cl_hw, sw_txhdr_pool);
+ else
+ return cl_sw_txhdr_init_cache(cl_hw);
+}
+
+static void cl_sw_txhdr_deinit_pool(struct cl_hw *cl_hw)
+{
+ struct cl_sw_txhdr *sw_txhdr, *tmp;
+
+ list_for_each_entry_safe(sw_txhdr, tmp, &cl_hw->head_sw_txhdr_pool, list_pool) {
+ list_del(&sw_txhdr->list_pool);
+ kfree(sw_txhdr);
+ }
+}
+
+static void cl_sw_txhdr_deinit_cache(struct cl_hw *cl_hw)
+{
+ kmem_cache_destroy(cl_hw->sw_txhdr_cache);
+}
+
+void cl_sw_txhdr_deinit(struct cl_hw *cl_hw)
+{
+ if (cl_hw->conf->ci_sw_txhdr_pool)
+ cl_sw_txhdr_deinit_pool(cl_hw);
+ else
+ cl_sw_txhdr_deinit_cache(cl_hw);
+}
+
+static inline struct cl_sw_txhdr *cl_sw_txhdr_alloc_pool(struct cl_hw *cl_hw)
+{
+ struct cl_sw_txhdr *sw_txhdr = NULL;
+
+ spin_lock_bh(&cl_hw->lock_sw_txhdr_pool);
+ sw_txhdr = list_first_entry_or_null(&cl_hw->head_sw_txhdr_pool,
+ struct cl_sw_txhdr, list_pool);
+
+ if (sw_txhdr) {
+ list_del(&sw_txhdr->list_pool);
+ spin_unlock_bh(&cl_hw->lock_sw_txhdr_pool);
+ return sw_txhdr;
+ }
+
+ spin_unlock_bh(&cl_hw->lock_sw_txhdr_pool);
+ return NULL;
+}
+
+static inline struct cl_sw_txhdr *cl_sw_txhdr_alloc_cache(struct cl_hw *cl_hw)
+{
+ return kmem_cache_alloc(cl_hw->sw_txhdr_cache, GFP_ATOMIC);
+}
+
+struct cl_sw_txhdr *cl_sw_txhdr_alloc(struct cl_hw *cl_hw)
+{
+ if (cl_hw->conf->ci_sw_txhdr_pool)
+ return cl_sw_txhdr_alloc_pool(cl_hw);
+ else
+ return cl_sw_txhdr_alloc_cache(cl_hw);
+}
+
+static inline void cl_sw_txhdr_free_pool(struct cl_hw *cl_hw, struct cl_sw_txhdr *sw_txhdr)
+{
+ spin_lock_bh(&cl_hw->lock_sw_txhdr_pool);
+ list_add_tail(&sw_txhdr->list_pool, &cl_hw->head_sw_txhdr_pool);
+ spin_unlock_bh(&cl_hw->lock_sw_txhdr_pool);
+}
+
+static inline void cl_sw_txhdr_free_cache(struct cl_hw *cl_hw, struct cl_sw_txhdr *sw_txhdr)
+{
+ kmem_cache_free(cl_hw->sw_txhdr_cache, sw_txhdr);
+}
+
+void cl_sw_txhdr_free(struct cl_hw *cl_hw, struct cl_sw_txhdr *sw_txhdr)
+{
+ if (cl_hw->conf->ci_sw_txhdr_pool)
+ cl_sw_txhdr_free_pool(cl_hw, sw_txhdr);
+ else
+ cl_sw_txhdr_free_cache(cl_hw, sw_txhdr);
+}
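+
+/*
+ * Reviewer sketch (not part of the original patch): regardless of whether
+ * the pool or the kmem cache backend is configured, callers only see the
+ * cl_sw_txhdr_alloc()/cl_sw_txhdr_free() pair:
+ *
+ *   struct cl_sw_txhdr *sw_txhdr = cl_sw_txhdr_alloc(cl_hw);
+ *
+ *   if (!sw_txhdr)
+ *           return -ENOMEM;
+ *   ... fill the descriptor and push it to a TX queue ...
+ *   cl_sw_txhdr_free(cl_hw, sw_txhdr);
+ */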
+
+#define CL_AMSDU_HDR_LEN 14
+
+static bool cl_tx_amsdu_is_sw(struct cl_hw *cl_hw, struct cl_sta *cl_sta,
+ struct sk_buff *skb, u16 pkt_len)
+{
+ bool syn_rst_push = false;
+ bool tcp_ack = false;
+
+ if (cl_hw->conf->ci_tx_sw_amsdu_max_packets <= 1)
+ return false;
+
+ tcp_ack = cl_is_tcp_ack(skb, &syn_rst_push);
+
+ if (!tcp_ack || syn_rst_push)
+ return false;
+
+ if ((cl_wrs_api_get_tx_sta_data_rate(cl_sta) * cl_sta->ampdu_min_spacing) <=
+ (pkt_len << 3))
+ return false;
+
+ return true;
+}
+
+static int cl_tx_amsdu_anchor_set(struct cl_hw *cl_hw, struct cl_sta *cl_sta,
+ struct sk_buff *skb, u8 tid)
+{
+ /*
+ * Packet length calculation in HW -
+ * Add 802.11 header (maximum possible size) instead of 802.3
+ * Add AMSDU header
+ * Add RFC1042 header (according to ether-type)
+ * Add IV and ICV (if there is encryption)
+ */
+ struct cl_amsdu_ctrl *amsdu_anchor = &cl_sta->amsdu_anchor[tid];
+ struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(skb);
+ struct ieee80211_key_conf *key_conf = tx_info->control.hw_key;
+ u16 ethertype = (skb->data[12] << 8) | skb->data[13];
+ u16 pkt_len = skb->len + CL_WLAN_HEADER_MAX_SIZE;
+
+ if (key_conf)
+ pkt_len += (key_conf->iv_len + key_conf->icv_len);
+
+ if (ethertype >= ETH_P_802_3_MIN)
+ pkt_len += sizeof(rfc1042_header);
+
+ amsdu_anchor->rem_len = amsdu_anchor->max_len - pkt_len;
+ amsdu_anchor->packet_cnt = 1;
+ amsdu_anchor->is_sw_amsdu = cl_tx_amsdu_is_sw(cl_hw, cl_sta, skb, pkt_len);
+
+ return CL_AMSDU_ANCHOR_SET;
+}
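+
+/*
+ * Worked example (reviewer illustration, assuming CCMP with an 8-byte IV
+ * and an 8-byte ICV): a 1500-byte 802.3 frame with an IPv4 ethertype is
+ * accounted as 1500 + CL_WLAN_HEADER_MAX_SIZE + 8 + 8 +
+ * sizeof(rfc1042_header) bytes, and this total is what
+ * cl_tx_amsdu_anchor_set() subtracts from amsdu_anchor->max_len.
+ */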
+
+static void cl_tx_amsdu_anchor_umacdesc_update(struct txdesc *txdesc, u8 idx,
+ u16 len, dma_addr_t dma_addr,
+ bool is_padding)
+{
+ struct lmacapi *umacdesc = &txdesc->umacdesc;
+
+ umacdesc->packet_len[idx] = cpu_to_le16(len);
+ umacdesc->packet_addr[idx] = cpu_to_le32(dma_addr);
+ txdesc->host_info.packet_cnt++;
+
+ /* Update padding bit of current msdu sub-frame */
+ if (is_padding)
+ txdesc->host_info.host_padding |= BIT(idx);
+}
+
+static struct cl_amsdu_txhdr *cl_tx_amsdu_txhdr_alloc(struct cl_hw *cl_hw)
+{
+ if (cl_hw->conf->ci_amsdu_txhdr_pool) {
+ struct cl_amsdu_txhdr *amsdu_txhdr =
+ list_first_entry_or_null(&cl_hw->head_amsdu_txhdr_pool,
+ struct cl_amsdu_txhdr,
+ list_pool);
+
+ if (amsdu_txhdr) {
+ list_del(&amsdu_txhdr->list_pool);
+ return amsdu_txhdr;
+ }
+
+ return NULL;
+ } else {
+ return kmem_cache_alloc(cl_hw->amsdu_txhdr_cache, GFP_ATOMIC);
+ }
+}
+
+static void _cl_tx_amsdu_transfer_single(struct cl_hw *cl_hw,
+ struct sk_buff *skb,
+ struct cl_sta *cl_sta,
+ u8 tid)
+{
+ struct ieee80211_tx_info *tx_info;
+
+ tx_info = IEEE80211_SKB_CB(skb);
+ tx_info->flags &= ~IEEE80211_TX_CTL_AMPDU;
+ tx_info->control.flags &= ~IEEE80211_TX_CTRL_AMSDU;
+
+ if (cl_tx_8023_to_wlan(cl_hw, skb, cl_sta, tid) == 0) {
+ cl_hw->tx_packet_cntr.transfer.agg_to_single++;
+ cl_tx_single(cl_hw, cl_sta, skb, false, false);
+ }
+}
+
+static void cl_tx_amsdu_set_sw_sub_amsdu_hdr(struct sk_buff *skb)
+{
+ u16 ethertype = (skb->data[12] << 8) | skb->data[13];
+ int rfc1042_len = 0;
+ void *data;
+ struct ethhdr *amsdu_hdr;
+
+ if (ethertype >= ETH_P_802_3_MIN)
+ rfc1042_len = sizeof(rfc1042_header);
+
+ data = skb_push(skb, rfc1042_len + 2);
+ memmove(data, data + rfc1042_len + 2, 2 * ETH_ALEN);
+
+ amsdu_hdr = (struct ethhdr *)data;
+ amsdu_hdr->h_proto = cpu_to_be16(skb->len - ETH_HLEN);
+
+ memcpy(data + ETH_HLEN, rfc1042_header, rfc1042_len);
+}
+
+static int cl_tx_amsdu_add_sw_amsdu_hdr(struct cl_hw *cl_hw,
+ struct cl_amsdu_ctrl *amsdu_anchor)
+{
+ struct cl_sw_txhdr *anchor_sw_txhdr = amsdu_anchor->sw_txhdr;
+ struct sk_buff *skb = anchor_sw_txhdr->skb;
+ struct cl_sta *cl_sta = anchor_sw_txhdr->cl_sta;
+ struct ieee80211_hdr hdr;
+ u16 ethertype = (skb->data[12] << 8) | skb->data[13];
+ u16 hdrlen = cl_tx_prepare_wlan_hdr(cl_hw, cl_sta, skb, &hdr);
+ int rfc1042_len = 0;
+ int head_need = 0;
+ u8 enc_len = cl_key_get_cipher_len(skb);
+ u16 qos_ctrl = anchor_sw_txhdr->tid | IEEE80211_QOS_CTL_A_MSDU_PRESENT;
+
+ if (!hdrlen)
+ return -EINVAL;
+
+ if (ethertype >= ETH_P_802_3_MIN)
+ rfc1042_len = sizeof(rfc1042_header);
+
+ amsdu_anchor->hdrlen = hdrlen;
+ head_need = hdrlen + enc_len + rfc1042_len - skb_headroom(skb);
+ if (head_need > 0) {
+ head_need = ((head_need + 3) & ~3);
+ if (pskb_expand_head(skb, head_need, 0, GFP_ATOMIC))
+ return -ENOMEM;
+ }
+
+ cl_tx_amsdu_set_sw_sub_amsdu_hdr(skb);
+
+ skb_push(skb, hdrlen + enc_len);
+ memcpy(skb->data, &hdr, hdrlen - 2);
+ memcpy(skb->data + hdrlen - 2, &qos_ctrl, 2);
+ skb_reset_mac_header(skb);
+ anchor_sw_txhdr->txdesc.e2w_natt_param.hdr_conv_enable = false;
+ anchor_sw_txhdr->hdr80211 = (struct ieee80211_hdr *)skb->data;
+
+ return 0;
+}
+
+static int cl_tx_amsdu_sw_aggregate(struct cl_hw *cl_hw,
+ struct cl_amsdu_ctrl *amsdu_anchor,
+ struct sk_buff *skb)
+{
+ struct cl_sw_txhdr *anchor_sw_txhdr = amsdu_anchor->sw_txhdr;
+ struct sk_buff *anchor_skb = anchor_sw_txhdr->skb;
+ struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(anchor_skb);
+ struct ieee80211_key_conf *key_conf = tx_info->control.hw_key;
+ u16 total_frame_len = 0;
+ struct cl_tx_queue *tx_queue = anchor_sw_txhdr->tx_queue;
+ int head_pad = 0;
+ int sub_pad = 0;
+ bool syn_rst_push = false;
+ bool tcp_ack = cl_is_tcp_ack(skb, &syn_rst_push);
+
+ /* Worst case: rfc1042(6) + ET(2) + pad(2) = 10 */
+ if (!tcp_ack ||
+ (skb_tailroom(anchor_skb) < (skb->len + 10))) {
+ if (tx_queue->num_packets == 1)
+ cl_txq_sched(cl_hw, tx_queue);
+ cl_tx_amsdu_anchor_init(amsdu_anchor);
+ return cl_tx_amsdu_anchor_set(cl_hw, anchor_sw_txhdr->cl_sta,
+ skb, anchor_sw_txhdr->tid);
+ }
+
+ if (amsdu_anchor->packet_cnt == 1 &&
+ cl_tx_amsdu_add_sw_amsdu_hdr(cl_hw, amsdu_anchor))
+ return CL_AMSDU_FAILED;
+
+ cl_tx_amsdu_set_sw_sub_amsdu_hdr(skb);
+ sub_pad = CL_SKB_DATA_ALIGN_PADS(anchor_skb->len -
+ amsdu_anchor->hdrlen);
+ memset(skb_push(skb, sub_pad), 0, sub_pad);
+ memcpy(skb_put(anchor_skb, skb->len), skb->data, skb->len);
+
+ kfree_skb(skb);
+ amsdu_anchor->packet_cnt++;
+ anchor_sw_txhdr->sw_amsdu_packet_cnt++;
+ head_pad = CL_SKB_DATA_ALIGN_PADS(anchor_skb->data);
+
+ if (head_pad) {
+ anchor_sw_txhdr->map_len = anchor_skb->len + head_pad;
+ anchor_sw_txhdr->txdesc.host_info.host_padding |= BIT(0);
+ } else {
+ anchor_sw_txhdr->map_len = anchor_skb->len;
+ anchor_sw_txhdr->txdesc.host_info.host_padding = 0;
+ }
+
+ total_frame_len = anchor_skb->len;
+ if (key_conf)
+ total_frame_len += key_conf->icv_len;
+
+ anchor_sw_txhdr->txdesc.umacdesc.packet_len[0] = cpu_to_le16(total_frame_len);
+
+ if (amsdu_anchor->packet_cnt == cl_hw->conf->ci_tx_sw_amsdu_max_packets ||
+ syn_rst_push) {
+ if (tx_queue->num_packets == 1)
+ cl_txq_sched(cl_hw, tx_queue);
+ cl_tx_amsdu_anchor_init(amsdu_anchor);
+ }
+
+ return CL_AMSDU_SUB_FRAME_SET;
+}
+
+void cl_tx_amsdu_anchor_init(struct cl_amsdu_ctrl *amsdu_anchor)
+{
+ amsdu_anchor->rem_len = amsdu_anchor->max_len;
+ amsdu_anchor->sw_txhdr = NULL;
+ amsdu_anchor->packet_cnt = 0;
+ amsdu_anchor->is_sw_amsdu = false;
+}
+
+void cl_tx_amsdu_anchor_reset(struct cl_amsdu_ctrl *amsdu_anchor)
+{
+ amsdu_anchor->sw_txhdr = NULL;
+ amsdu_anchor->rem_len = 0;
+ amsdu_anchor->max_len = 0;
+ amsdu_anchor->packet_cnt = 0;
+ amsdu_anchor->is_sw_amsdu = false;
+}
+
+void cl_tx_amsdu_set_max_len(struct cl_hw *cl_hw, struct cl_sta *cl_sta, u8 tid)
+{
+ struct ieee80211_sta_vht_cap *vht_cap = &cl_sta->sta->vht_cap;
+ struct cl_amsdu_ctrl *amsdu_anchor = &cl_sta->amsdu_anchor[tid];
+ u32 length = U32_MAX;
+
+ amsdu_anchor->max_len = 3839;
+
+ if (cl_band_is_6g(cl_hw)) {
+ u16 capa = le16_to_cpu(cl_sta->sta->he_6ghz_capa.capa);
+
+ length = (capa & IEEE80211_HE_6GHZ_CAP_MAX_MPDU_LEN) >>
+ HE_6GHZ_CAP_MAX_MPDU_LEN_OFFSET;
+ } else if (vht_cap->vht_supported) {
+ length = vht_cap->cap & IEEE80211_VHT_CAP_MAX_MPDU_MASK;
+ }
+
+ switch (length) {
+ case IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_3895:
+ amsdu_anchor->max_len = 3895;
+ break;
+ case IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_7991:
+ amsdu_anchor->max_len = 7991;
+ break;
+ case IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_11454:
+ amsdu_anchor->max_len = 11454;
+ break;
+ default:
+ break;
+ }
+
+ amsdu_anchor->rem_len = amsdu_anchor->max_len;
+
+ cl_dbg_trace(cl_hw, "AMSDU supported - sta_idx=%u, max_len=%d\n",
+ cl_sta->sta_idx, amsdu_anchor->max_len);
+}
+
+void cl_tx_amsdu_first_sub_frame(struct cl_sw_txhdr *sw_txhdr, struct cl_sta *cl_sta,
+ struct sk_buff *skb, u8 tid)
+{
+ /* Set the anchor sw_txhdr */
+ cl_sta->amsdu_anchor[tid].sw_txhdr = sw_txhdr;
+
+ INIT_LIST_HEAD(&sw_txhdr->amsdu_txhdr.list);
+ sw_txhdr->amsdu_txhdr.skb = skb;
+}
+
+void cl_tx_amsdu_flush_sub_frames(struct cl_hw *cl_hw, struct cl_sw_txhdr *sw_txhdr)
+{
+ struct cl_amsdu_txhdr *amsdu_txhdr = NULL, *tmp = NULL;
+ struct sk_buff *sub_skb = NULL;
+
+ /* Free mid & last AMSDU sub frames */
+ list_for_each_entry_safe(amsdu_txhdr, tmp, &sw_txhdr->amsdu_txhdr.list, list) {
+ sub_skb = amsdu_txhdr->skb;
+ list_del(&amsdu_txhdr->list);
+
+ dma_unmap_single(cl_hw->chip->dev, amsdu_txhdr->dma_addr,
+ (size_t)sub_skb->len, DMA_TO_DEVICE);
+ kfree_skb(sub_skb);
+ cl_tx_amsdu_txhdr_free(cl_hw, amsdu_txhdr);
+ cl_hw->tx_packet_cntr.drop.queue_flush++;
+ sw_txhdr->cl_vif->trfc_cntrs[sw_txhdr->ac].tx_dropped++;
+ }
+
+ /* Free first AMSDU sub frame */
+ kfree_skb(sw_txhdr->skb);
+ cl_sw_txhdr_free(cl_hw, sw_txhdr);
+}
+
+void cl_tx_amsdu_transfer_single(struct cl_hw *cl_hw, struct cl_sw_txhdr *sw_txhdr)
+{
+ /*
+ * Transfer all skbs in sw_txhdr to a temporary list, free sw_txhdr,
+ * and then push the temporary list to the single path.
+ */
+ struct cl_amsdu_txhdr *amsdu_txhdr, *tmp;
+ struct sk_buff *skb;
+ struct cl_sta *cl_sta = sw_txhdr->cl_sta;
+ u8 tid = sw_txhdr->tid;
+
+ /* Transfer first AMSDU sub frame */
+ _cl_tx_amsdu_transfer_single(cl_hw, sw_txhdr->skb, cl_sta, tid);
+
+ /* Transfer mid & last AMSDU sub frames */
+ list_for_each_entry_safe(amsdu_txhdr, tmp, &sw_txhdr->amsdu_txhdr.list, list) {
+ skb = amsdu_txhdr->skb;
+
+ list_del(&amsdu_txhdr->list);
+ dma_unmap_single(cl_hw->chip->dev, amsdu_txhdr->dma_addr,
+ (size_t)skb->len, DMA_TO_DEVICE);
+ cl_tx_amsdu_txhdr_free(cl_hw, amsdu_txhdr);
+
+ _cl_tx_amsdu_transfer_single(cl_hw, skb, cl_sta, tid);
+ }
+}
+
+int cl_tx_amsdu_set(struct cl_hw *cl_hw, struct cl_sta *cl_sta, struct sk_buff *skb, u8 tid)
+{
+ struct cl_amsdu_ctrl *amsdu_anchor = &cl_sta->amsdu_anchor[tid];
+ struct cl_sw_txhdr *anchor_sw_txhdr = amsdu_anchor->sw_txhdr;
+ u16 packet_len = skb->len;
+ u8 packet_cnt;
+ bool is_mesh = ieee80211_vif_is_mesh(cl_sta->cl_vif->vif);
+ u8 packet_cnt_max = cl_hw->txamsdu_en;
+
+ /* Check if an anchor exists */
+ if (!anchor_sw_txhdr) {
+ /* Sanity check - skb len < amsdu_max_len */
+ if (unlikely(packet_len > amsdu_anchor->max_len) || is_mesh)
+ return CL_AMSDU_SKIP;
+ else
+ return cl_tx_amsdu_anchor_set(cl_hw, cl_sta, skb, tid);
+ }
+
+ if (amsdu_anchor->is_sw_amsdu)
+ return cl_tx_amsdu_sw_aggregate(cl_hw, amsdu_anchor, skb);
+
+ /*
+ * 1. Check if there is enough space left in the A-MSDU.
+ * 2. Check if the A-MSDU packet count is less than the maximum.
+ */
+ packet_cnt = amsdu_anchor->packet_cnt;
+
+ if (amsdu_anchor->rem_len > packet_len &&
+ packet_cnt < packet_cnt_max &&
+ !is_mesh) {
+ struct cl_amsdu_txhdr *amsdu_txhdr = NULL;
+ u8 hdr_pads = CL_SKB_DATA_ALIGN_PADS(skb->data);
+ u16 ethertype = (skb->data[12] << 8) | skb->data[13];
+ u16 total_packet_len = packet_len + hdr_pads;
+ u16 curr_amsdu_len = amsdu_anchor->max_len - amsdu_anchor->rem_len;
+ dma_addr_t dma_addr;
+
+ if (ethertype >= ETH_P_802_3_MIN)
+ total_packet_len += sizeof(rfc1042_header);
+
+ /*
+ * A high number of MSDUs in an A-MSDU can cause an underrun in the
+ * E2W module.
+ * Therefore, the host is required to limit the number of MSDUs per
+ * A-MSDU according to the following rules:
+ *
+ * AMSDU Length AMSDU agg size
+ * len < 4*256 3 or less
+ * len >= 4*256 4 or less
+ * len >= 5*256 5 or less
+ * len >= 6*256 6 or less
+ * len >= 7*256 7 or less
+ * len >= 8*256 8 or less
+ */
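+ /*
+ * Example (reviewer illustration, assuming CL_AMSDU_MIN_AGG_SIZE is 3
+ * and CL_AMSDU_CONST_LEN is 256, matching the table above): with three
+ * sub-frames already aggregated, a fourth is appended only if the
+ * resulting A-MSDU length is at least 4 * 256 bytes; otherwise the
+ * incoming skb becomes a new anchor.
+ */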
+ if (packet_cnt >= CL_AMSDU_MIN_AGG_SIZE) {
+ u16 new_amsdu_len = curr_amsdu_len + packet_len;
+
+ if (new_amsdu_len < ((packet_cnt + 1) * CL_AMSDU_CONST_LEN))
+ return cl_tx_amsdu_anchor_set(cl_hw, cl_sta, skb, tid);
+ }
+
+ amsdu_txhdr = cl_tx_amsdu_txhdr_alloc(cl_hw);
+ if (unlikely(!amsdu_txhdr)) {
+ kfree_skb(skb);
+ cl_dbg_err(cl_hw, "AMSDU FAILED to alloc amsdu txhdr\n");
+ cl_hw->tx_packet_cntr.drop.amsdu_alloc_fail++;
+ cl_sta->cl_vif->trfc_cntrs[anchor_sw_txhdr->ac].tx_errors++;
+ return CL_AMSDU_FAILED;
+ }
+
+ amsdu_txhdr->skb = skb;
+ list_add_tail(&amsdu_txhdr->list, &anchor_sw_txhdr->amsdu_txhdr.list);
+
+ /* Update anchor fields */
+ amsdu_anchor->rem_len -= total_packet_len;
+ amsdu_anchor->packet_cnt++;
+
+ /* Get DMA address for skb */
+ dma_addr = dma_map_single(cl_hw->chip->dev, (u8 *)skb->data - hdr_pads,
+ packet_len + hdr_pads, DMA_TO_DEVICE);
+ if (WARN_ON(dma_mapping_error(cl_hw->chip->dev, dma_addr))) {
+ kfree_skb(skb);
+ cl_tx_amsdu_txhdr_free(cl_hw, amsdu_txhdr);
+ cl_dbg_err(cl_hw, "dma_mapping_error\n");
+ cl_hw->tx_packet_cntr.drop.amsdu_dma_map_err++;
+ cl_sta->cl_vif->trfc_cntrs[anchor_sw_txhdr->ac].tx_errors++;
+ return CL_AMSDU_FAILED;
+ }
+
+ /* Add AMSDU HDR len of the first packet */
+ if (amsdu_anchor->packet_cnt == 2)
+ total_packet_len += CL_AMSDU_HDR_LEN;
+
+ amsdu_txhdr->dma_addr = dma_addr;
+
+ /* Update sw_txhdr packet_len, packet_addr, packet_cnt fields */
+ cl_tx_amsdu_anchor_umacdesc_update(&anchor_sw_txhdr->txdesc, packet_cnt,
+ packet_len, dma_addr, hdr_pads);
+
+ /* If we reached max AMSDU payload count, mark anchor as NULL */
+ if (amsdu_anchor->packet_cnt >= packet_cnt_max)
+ cl_tx_amsdu_anchor_init(amsdu_anchor);
+
+ return CL_AMSDU_SUB_FRAME_SET;
+ }
+ /* Not enough space remains; start a new anchor if the packet length is valid */
+ if (unlikely(packet_len > amsdu_anchor->max_len) || is_mesh) {
+ cl_tx_amsdu_anchor_init(amsdu_anchor);
+ return CL_AMSDU_SKIP;
+ } else {
+ return cl_tx_amsdu_anchor_set(cl_hw, cl_sta, skb, tid);
+ }
+}
+
+void cl_tx_amsdu_unset(struct cl_sw_txhdr *sw_txhdr)
+{
+ struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(sw_txhdr->skb);
+
+ tx_info->control.flags &= ~IEEE80211_TX_CTRL_AMSDU;
+
+ sw_txhdr->txdesc.e2w_txhdr_param.qos_ctrl &=
+ ~cpu_to_le16(IEEE80211_QOS_CTL_A_MSDU_PRESENT);
+}
+
+/*
+ * Two options for allocating cl_amsdu_txhdr:
+ * 1) pool
+ * 2) cache
+ */
+
+static int cl_tx_amsdu_txhdr_init_pool(struct cl_hw *cl_hw)
+{
+ u16 amsdu_txhdr_pool = cl_hw->conf->ci_amsdu_txhdr_pool;
+ u32 i = 0;
+ u32 amsdu_txhdr_pool_size = amsdu_txhdr_pool * sizeof(struct cl_amsdu_txhdr);
+ struct cl_amsdu_txhdr *amsdu_txhdr;
+
+ INIT_LIST_HEAD(&cl_hw->head_amsdu_txhdr_pool);
+
+ for (i = 0; i < amsdu_txhdr_pool; i++) {
+ amsdu_txhdr = kzalloc(sizeof(*amsdu_txhdr), GFP_ATOMIC);
+
+ if (unlikely(!amsdu_txhdr)) {
+ cl_dbg_err(cl_hw, "amsdu_txhdr NULL\n");
+ return -1;
+ }
+
+ list_add(&amsdu_txhdr->list_pool, &cl_hw->head_amsdu_txhdr_pool);
+ }
+
+ cl_dbg_verbose(cl_hw, " - pool %u, size %u\n",
+ amsdu_txhdr_pool, amsdu_txhdr_pool_size);
+
+ return 0;
+}
+
+static int cl_tx_amsdu_txhdr_init_cache(struct cl_hw *cl_hw)
+{
+ char amsdu_txhdr_cache_name[MODULE_NAME_LEN + 32] = {0};
+
+ snprintf(amsdu_txhdr_cache_name, sizeof(amsdu_txhdr_cache_name),
+ "%s_amsdu_txhdr_cache", THIS_MODULE->name);
+
+ cl_hw->amsdu_txhdr_cache = kmem_cache_create(amsdu_txhdr_cache_name,
+ sizeof(struct cl_amsdu_txhdr),
+ 0,
+ (SLAB_HWCACHE_ALIGN | SLAB_PANIC),
+ NULL);
+
+ if (!cl_hw->amsdu_txhdr_cache) {
+ cl_dbg_err(cl_hw, "amsdu_txhdr_cache NULL\n");
+ return -1;
+ }
+
+ return 0;
+}
+
+int cl_tx_amsdu_txhdr_init(struct cl_hw *cl_hw)
+{
+ if (cl_hw->conf->ci_amsdu_txhdr_pool)
+ return cl_tx_amsdu_txhdr_init_pool(cl_hw);
+ else
+ return cl_tx_amsdu_txhdr_init_cache(cl_hw);
+}
+
+static void cl_tx_amsdu_txhdr_deinit_pool(struct cl_hw *cl_hw)
+{
+ struct cl_amsdu_txhdr *amsdu_txhdr, *tmp;
+
+ list_for_each_entry_safe(amsdu_txhdr, tmp, &cl_hw->head_amsdu_txhdr_pool, list_pool) {
+ list_del(&amsdu_txhdr->list_pool);
+ kfree(amsdu_txhdr);
+ }
+}
+
+static void cl_tx_amsdu_txhdr_deinit_cache(struct cl_hw *cl_hw)
+{
+ kmem_cache_destroy(cl_hw->amsdu_txhdr_cache);
+}
+
+void cl_tx_amsdu_txhdr_deinit(struct cl_hw *cl_hw)
+{
+ if (cl_hw->conf->ci_amsdu_txhdr_pool)
+ cl_tx_amsdu_txhdr_deinit_pool(cl_hw);
+ else
+ cl_tx_amsdu_txhdr_deinit_cache(cl_hw);
+}
+
+void cl_tx_amsdu_txhdr_free(struct cl_hw *cl_hw, struct cl_amsdu_txhdr *amsdu_txhdr)
+{
+ if (cl_hw->conf->ci_amsdu_txhdr_pool)
+ list_add_tail(&amsdu_txhdr->list_pool, &cl_hw->head_amsdu_txhdr_pool);
+ else
+ kmem_cache_free(cl_hw->amsdu_txhdr_cache, amsdu_txhdr);
+}
+
+static const u8 cl_tid2hwq[IEEE80211_NUM_TIDS] = {
+ CL_HWQ_BE,
+ CL_HWQ_BK,
+ CL_HWQ_BK,
+ CL_HWQ_BE,
+ CL_HWQ_VI,
+ CL_HWQ_VI,
+ CL_HWQ_VO,
+ CL_HWQ_VO,
+ /* At the moment, all other TIDs are mapped to BE */
+ CL_HWQ_BE,
+ CL_HWQ_BE,
+ CL_HWQ_BE,
+ CL_HWQ_BE,
+ CL_HWQ_BE,
+ CL_HWQ_BE,
+ CL_HWQ_BE,
+ CL_HWQ_BE,
+};
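+
+/*
+ * Example (reviewer note): a QoS data frame with TID 5 is mapped to
+ * CL_HWQ_VI, TIDs 1 and 2 to CL_HWQ_BK, TIDs 6 and 7 to CL_HWQ_VO, and
+ * TIDs 8-15 currently fall back to CL_HWQ_BE.
+ */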
+
+static void cl_txq_sched_list_add(struct cl_tx_queue *tx_queue, struct cl_hw *cl_hw)
+{
+ /* Add to schedule queue */
+ if (tx_queue->sched)
+ return;
+
+ tx_queue->sched = true;
+ if (tx_queue->type == QUEUE_TYPE_AGG)
+ list_add_tail(&tx_queue->sched_list, &cl_hw->list_sched_q_agg);
+ else
+ list_add_tail(&tx_queue->sched_list, &cl_hw->list_sched_q_single);
+}
+
+static void cl_txq_sched_list_remove(struct cl_tx_queue *tx_queue)
+{
+ /* Remove from schedule queue */
+ if (tx_queue->sched) {
+ tx_queue->sched = false;
+ list_del(&tx_queue->sched_list);
+ }
+}
+
+static void cl_txq_sched_list_remove_if_empty(struct cl_tx_queue *tx_queue)
+{
+ /* If queue is empty remove it from schedule list */
+ if (list_empty(&tx_queue->hdrs))
+ cl_txq_sched_list_remove(tx_queue);
+}
+
+static void cl_txq_transfer_single_to_agg(struct cl_hw *cl_hw,
+ struct cl_tx_queue *single_queue,
+ struct cl_tx_queue *agg_queue, u8 tid)
+{
+ struct cl_sw_txhdr *sw_txhdr, *sw_txhdr_tmp;
+ struct ieee80211_tx_info *tx_info;
+ struct sk_buff *skb;
+ u8 hdr_pads;
+
+ spin_lock_bh(&cl_hw->tx_lock_single);
+
+ if (single_queue->num_packets == 0)
+ goto out;
+
+ list_for_each_entry_safe(sw_txhdr, sw_txhdr_tmp, &single_queue->hdrs, tx_queue_list) {
+ if (sw_txhdr->tid != tid)
+ continue;
+
+ if (!ieee80211_is_data_qos(sw_txhdr->fc))
+ continue;
+
+ cl_hw->tx_packet_cntr.transfer.single_to_agg++;
+
+ /* Remove from single queue */
+ list_del(&sw_txhdr->tx_queue_list);
+
+ /* Update single queue counters */
+ single_queue->num_packets--;
+ single_queue->total_packets--;
+
+ /* Turn on AMPDU flag */
+ skb = sw_txhdr->skb;
+ tx_info = IEEE80211_SKB_CB(skb);
+ tx_info->flags |= IEEE80211_TX_CTL_AMPDU;
+
+ /* Convert header back and Push skb to agg queue */
+ cl_tx_wlan_to_8023(skb);
+ hdr_pads = CL_SKB_DATA_ALIGN_PADS(skb->data);
+ cl_tx_agg_prep(cl_hw, sw_txhdr, skb->len, hdr_pads, true);
+ agg_queue->total_packets++;
+ sw_txhdr->hdr80211 = NULL;
+ sw_txhdr->tx_queue = agg_queue;
+ cl_txq_push(cl_hw, sw_txhdr);
+
+ /* Schedule tasklet to try and empty the queue */
+ tasklet_schedule(&cl_hw->tx_task);
+ }
+
+ /* If single queue is empty remove it from schedule list */
+ cl_txq_sched_list_remove_if_empty(single_queue);
+
+out:
+ spin_unlock_bh(&cl_hw->tx_lock_single);
+}
+
+static void cl_txq_delete_packets(struct cl_hw *cl_hw, struct cl_tx_queue *tx_queue, u8 sta_idx)
+{
+ struct cl_sw_txhdr *sw_txhdr, *sw_txhdr_tmp;
+
+ list_for_each_entry_safe(sw_txhdr, sw_txhdr_tmp, &tx_queue->hdrs, tx_queue_list) {
+ /*
+ * Broadcast frames do not have a cl_sta and should not be
+ * deleted during the station remove sequence.
+ */
+ if (!sw_txhdr->cl_sta)
+ continue;
+
+ if (sw_txhdr->sta_idx != sta_idx)
+ continue;
+
+ list_del(&sw_txhdr->tx_queue_list);
+ tx_queue->num_packets--;
+
+ cl_tx_single_free_skb(cl_hw, sw_txhdr->skb);
+ cl_sw_txhdr_free(cl_hw, sw_txhdr);
+ }
+
+ /* If queue is empty remove it from schedule list */
+ cl_txq_sched_list_remove_if_empty(tx_queue);
+}
+
+static void cl_txq_reset_counters(struct cl_tx_queue *tx_queue)
+{
+ tx_queue->total_fw_push_desc = 0;
+ tx_queue->total_fw_push_skb = 0;
+ tx_queue->total_fw_cfm = 0;
+ tx_queue->total_packets = 0;
+ tx_queue->dump_queue_full = 0;
+ tx_queue->dump_dma_map_fail = 0;
+
+ memset(tx_queue->stats_hw_amsdu_cnt, 0,
+ sizeof(tx_queue->stats_hw_amsdu_cnt));
+
+ memset(tx_queue->stats_sw_amsdu_cnt, 0,
+ sizeof(tx_queue->stats_sw_amsdu_cnt));
+}
+
+static void cl_txq_flush(struct cl_hw *cl_hw, struct cl_tx_queue *tx_queue)
+{
+ struct cl_sw_txhdr *sw_txhdr, *sw_txhdr_tmp;
+ struct ieee80211_tx_info *tx_info;
+
+ if (tx_queue->num_packets == 0)
+ return;
+
+ list_for_each_entry_safe(sw_txhdr, sw_txhdr_tmp, &tx_queue->hdrs, tx_queue_list) {
+ list_del(&sw_txhdr->tx_queue_list);
+ tx_queue->num_packets--;
+
+ /* Cannot send A-MSDU frames as singles */
+ tx_info = IEEE80211_SKB_CB(sw_txhdr->skb);
+
+ /* Free mid & last AMSDU sub frames */
+ if (cl_tx_ctrl_is_amsdu(tx_info)) {
+ cl_tx_amsdu_flush_sub_frames(cl_hw, sw_txhdr);
+ } else {
+ if (tx_queue->type == QUEUE_TYPE_SINGLE)
+ cl_tx_single_free_skb(cl_hw, sw_txhdr->skb);
+ else
+ kfree_skb(sw_txhdr->skb);
+
+ cl_hw->tx_packet_cntr.drop.queue_flush++;
+ sw_txhdr->cl_vif->trfc_cntrs[sw_txhdr->ac].tx_dropped++;
+ cl_sw_txhdr_free(cl_hw, sw_txhdr);
+ }
+ }
+
+ /* Remove from schedule queue */
+ cl_txq_sched_list_remove(tx_queue);
+
+ /* Sanity check that queue is empty */
+ WARN_ON(tx_queue->num_packets > 0);
+}
+
+static void cl_txq_agg_size_set(struct cl_hw *cl_hw)
+{
+ struct cl_tx_queue *tx_queue = NULL;
+ u16 new_size = 0;
+ u16 drv_max_size = 0;
+ int i = 0;
+ int j = 0;
+
+ if (!cl_hw->used_agg_queues || !cl_hw->conf->ci_tx_packet_limit)
+ return;
+
+ new_size = cl_hw->conf->ci_tx_packet_limit / cl_hw->used_agg_queues;
+ drv_max_size = max(new_size, cl_hw->conf->ci_tx_queue_size_agg);
+
+ for (i = 0; i < IPC_MAX_BA_SESSIONS; i++) {
+ tx_queue = &cl_hw->tx_queues->agg[i];
+
+ if (!tx_queue->cl_sta)
+ continue;
+
+ tx_queue->max_packets = drv_max_size;
+
+ j++;
+ if (j == cl_hw->used_agg_queues)
+ break;
+ }
+
+ cl_dbg_trace(cl_hw, "drv_max_size = %u\n", drv_max_size);
+}
+
+static int cl_txq_request_find(struct cl_hw *cl_hw, u8 sta_idx, u8 tid)
+{
+ int i = 0;
+ struct cl_req_agg_db *req_agg_db = NULL;
+ u8 req_agg_queues = 0;
+
+ for (i = 0; (i < IPC_MAX_BA_SESSIONS) && (req_agg_queues < cl_hw->req_agg_queues); i++) {
+ req_agg_db = &cl_hw->req_agg_db[i];
+
+ if (!req_agg_db->is_used)
+ continue;
+
+ req_agg_queues++;
+
+ if (sta_idx == req_agg_db->sta_idx && tid == req_agg_db->tid)
+ return i;
+ }
+
+ return -1;
+}
+
+static void cl_txq_push_cntrs_update(struct cl_hw *cl_hw, struct cl_tx_queue *tx_queue,
+ u32 orig_drv_cnt, u32 orig_fw_cnt)
+{
+ u32 total_pushed;
+ u32 idx;
+
+ if (!cl_hw->conf->ci_tx_push_cntrs_stat_en)
+ return;
+
+ total_pushed = orig_drv_cnt - tx_queue->num_packets;
+ idx = tx_queue->push_cntrs_db.tx_push_logger_idx % TX_PUSH_LOGGER_SIZE;
+ tx_queue->push_cntrs_db.tx_push_cntr_hist[total_pushed]++;
+ tx_queue->push_cntrs_db.tx_push_logger[idx][TX_PUSH_LOGGER_DRV_CNT] = orig_drv_cnt;
+ tx_queue->push_cntrs_db.tx_push_logger[idx][TX_PUSH_LOGGER_FW_CNT] = orig_fw_cnt;
+ tx_queue->push_cntrs_db.tx_push_logger[idx][TX_PUSH_LOGGER_PKT_PUSHED] = total_pushed;
+ ++tx_queue->push_cntrs_db.tx_push_logger_idx;
+}
+
+static void cl_txq_task_single(struct cl_hw *cl_hw)
+{
+ /* Schedule single queues */
+ struct cl_tx_queue *tx_queue, *tx_queue_tmp;
+
+ spin_lock(&cl_hw->tx_lock_single);
+
+ list_for_each_entry_safe(tx_queue, tx_queue_tmp, &cl_hw->list_sched_q_single, sched_list)
+ cl_txq_sched(cl_hw, tx_queue);
+
+ /* Rotate the queue so next schedule will start with a different queue */
+ list_rotate_left(&cl_hw->list_sched_q_single);
+
+ spin_unlock(&cl_hw->tx_lock_single);
+}
+
+static void cl_txq_task_agg(struct cl_hw *cl_hw)
+{
+ /* Schedule agg queues */
+ struct cl_tx_queue *tx_queue, *tx_queue_tmp;
+
+ spin_lock(&cl_hw->tx_lock_agg);
+
+ list_for_each_entry_safe(tx_queue, tx_queue_tmp, &cl_hw->list_sched_q_agg, sched_list)
+ cl_txq_sched(cl_hw, tx_queue);
+
+ /* Rotate the queue so next schedule will start with a different queue */
+ list_rotate_left(&cl_hw->list_sched_q_agg);
+
+ spin_unlock(&cl_hw->tx_lock_agg);
+}
+
+static void cl_txq_task(unsigned long data)
+{
+ struct cl_hw *cl_hw = (struct cl_hw *)data;
+
+ cl_txq_task_single(cl_hw);
+ cl_txq_task_agg(cl_hw);
+}
+
+static void cl_txq_agg_inc_usage_cntr(struct cl_hw *cl_hw)
+{
+ /* Should be called in cl_hw->tx_lock_agg context */
+ cl_hw->used_agg_queues++;
+ cl_txq_agg_size_set(cl_hw);
+}
+
+static void cl_txq_agg_dec_usage_cntr(struct cl_hw *cl_hw)
+{
+ /* Should be called in cl_hw->tx_lock_agg context */
+ WARN_ON_ONCE(cl_hw->used_agg_queues == 0);
+
+ cl_hw->used_agg_queues--;
+ cl_txq_agg_size_set(cl_hw);
+}
+
+static void cl_txq_init_single(struct cl_hw *cl_hw)
+{
+ struct cl_tx_queue *tx_queue;
+ u32 i;
+
+ spin_lock_bh(&cl_hw->tx_lock_single);
+
+ INIT_LIST_HEAD(&cl_hw->list_sched_q_single);
+
+ for (i = 0; i < MAX_SINGLE_QUEUES; i++) {
+ tx_queue = &cl_hw->tx_queues->single[i];
+ memset(tx_queue, 0, sizeof(struct cl_tx_queue));
+ INIT_LIST_HEAD(&tx_queue->hdrs);
+ tx_queue->hw_index = i / FW_MAX_NUM_STA;
+ tx_queue->fw_max_size = IPC_TXDESC_CNT_SINGLE;
+ tx_queue->fw_free_space = IPC_TXDESC_CNT_SINGLE;
+ tx_queue->index = i;
+ tx_queue->max_packets = cl_hw->conf->ci_tx_queue_size_single;
+ tx_queue->type = QUEUE_TYPE_SINGLE;
+ }
+
+ spin_unlock_bh(&cl_hw->tx_lock_single);
+}
+
+static void cl_txq_init_bcmc(struct cl_hw *cl_hw)
+{
+ struct cl_tx_queue *tx_queue;
+
+ spin_lock_bh(&cl_hw->tx_lock_bcmc);
+
+ tx_queue = &cl_hw->tx_queues->bcmc;
+ memset(tx_queue, 0, sizeof(struct cl_tx_queue));
+ INIT_LIST_HEAD(&tx_queue->hdrs);
+ tx_queue->hw_index = CL_HWQ_BCN;
+ tx_queue->fw_max_size = IPC_TXDESC_CNT_BCMC;
+ tx_queue->fw_free_space = IPC_TXDESC_CNT_BCMC;
+ tx_queue->index = 0;
+ tx_queue->max_packets = 0;
+ tx_queue->type = QUEUE_TYPE_BCMC;
+
+ spin_unlock_bh(&cl_hw->tx_lock_bcmc);
+}
+
+static void cl_txq_init_agg(struct cl_hw *cl_hw)
+{
+ spin_lock_bh(&cl_hw->tx_lock_agg);
+ INIT_LIST_HEAD(&cl_hw->list_sched_q_agg);
+ spin_unlock_bh(&cl_hw->tx_lock_agg);
+}
+
+static void cl_txq_agg_request_reset(struct cl_hw *cl_hw)
+{
+ cl_hw->req_agg_queues = 0;
+ memset(cl_hw->req_agg_db, 0, sizeof(cl_hw->req_agg_db));
+}
+
+void cl_txq_init(struct cl_hw *cl_hw)
+{
+ tasklet_init(&cl_hw->tx_task, cl_txq_task, (unsigned long)cl_hw);
+
+ cl_txq_agg_request_reset(cl_hw);
+ cl_txq_init_single(cl_hw);
+ cl_txq_init_bcmc(cl_hw);
+ cl_txq_init_agg(cl_hw);
+}
+
+void cl_txq_stop(struct cl_hw *cl_hw)
+{
+ tasklet_kill(&cl_hw->tx_task);
+}
+
+struct cl_tx_queue *cl_txq_get(struct cl_hw *cl_hw, struct cl_sw_txhdr *sw_txhdr)
+{
+ struct cl_sta *cl_sta = sw_txhdr->cl_sta;
+ struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(sw_txhdr->skb);
+ u8 hw_queue = sw_txhdr->hw_queue;
+
+ if (!cl_sta &&
+ hw_queue == CL_HWQ_VO &&
+ is_multicast_ether_addr(sw_txhdr->hdr80211->addr1)) {
+ /*
+ * If HW queue is VO and packet is multicast, it was not buffered
+ * by mac80211, and it should be pushed to the high-priority queue
+ * and not to the bcmc queue.
+ */
+ return &cl_hw->tx_queues->single[HIGH_PRIORITY_QUEUE];
+ } else if (!cl_sta &&
+ (hw_queue != CL_HWQ_BCN) &&
+ !(tx_info->flags & IEEE80211_TX_CTL_NO_PS_BUFFER)) {
+ /*
+ * If the station is NULL but the HW queue is not BCN,
+ * the frame must go to the high-priority queue.
+ */
+ tx_info->flags |= IEEE80211_TX_CTL_NO_PS_BUFFER;
+ sw_txhdr->hw_queue = CL_HWQ_VO;
+ return &cl_hw->tx_queues->single[HIGH_PRIORITY_QUEUE];
+ } else if (cl_sta && (tx_info->flags & IEEE80211_TX_CTL_AMPDU)) {
+ /* Agg packet */
+ return cl_sta->agg_tx_queues[sw_txhdr->tid];
+ } else if (hw_queue == CL_HWQ_BCN) {
+ return &cl_hw->tx_queues->bcmc;
+ } else if (tx_info->flags & IEEE80211_TX_CTL_NO_PS_BUFFER) {
+ /*
+ * Only frames that are power-save responses or non-bufferable
+ * MMPDUs have this flag set; the driver pushes such frames to the
+ * highest-priority queue.
+ */
+ return &cl_hw->tx_queues->single[HIGH_PRIORITY_QUEUE];
+ }
+
+ return &cl_hw->tx_queues->single[QUEUE_IDX(sw_txhdr->sta_idx, hw_queue)];
+}
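+
+/*
+ * Reviewer illustration of the selection above: a unicast QoS data frame
+ * whose tx_info carries IEEE80211_TX_CTL_AMPDU is queued on
+ * cl_sta->agg_tx_queues[tid], while the same frame without that flag
+ * (and without IEEE80211_TX_CTL_NO_PS_BUFFER) ends up on
+ * tx_queues->single[QUEUE_IDX(sta_idx, hw_queue)].
+ */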
+
+void cl_txq_push(struct cl_hw *cl_hw, struct cl_sw_txhdr *sw_txhdr)
+{
+ struct cl_tx_queue *tx_queue = sw_txhdr->tx_queue;
+
+ if (tx_queue->num_packets < tx_queue->max_packets) {
+ tx_queue->num_packets++;
+
+ /*
+ * Prioritizing action frames helps the Samsung Galaxy Note 8 open a
+ * BA session more easily when the phy dev is PHY_DEV_OLYMPUS, and
+ * helps open BA on all system emulators.
+ */
+ if (ieee80211_is_action(sw_txhdr->fc) && !IS_REAL_PHY(cl_hw->chip))
+ list_add(&sw_txhdr->tx_queue_list, &tx_queue->hdrs);
+ else
+ list_add_tail(&sw_txhdr->tx_queue_list, &tx_queue->hdrs);
+
+ /* If it is the first packet in the queue, add the queue to the sched list */
+ cl_txq_sched_list_add(tx_queue, cl_hw);
+ } else {
+ struct cl_sta *cl_sta = sw_txhdr->cl_sta;
+ u8 tid = sw_txhdr->tid;
+
+ /* If the SW queue is full, release the packet */
+ tx_queue->dump_queue_full++;
+
+ if (cl_sta && cl_sta->amsdu_anchor[tid].sw_txhdr) {
+ if (cl_sta->amsdu_anchor[tid].sw_txhdr == sw_txhdr) {
+ cl_sta->amsdu_anchor[tid].sw_txhdr = NULL;
+ cl_sta->amsdu_anchor[tid].packet_cnt = 0;
+ }
+ }
+
+ dev_kfree_skb_any(sw_txhdr->skb);
+ cl_sw_txhdr_free(cl_hw, sw_txhdr);
+
+ /* Schedule tasklet to try and empty the queue */
+ tasklet_schedule(&cl_hw->tx_task);
+ }
+}
+
+void cl_txq_sched(struct cl_hw *cl_hw, struct cl_tx_queue *tx_queue)
+{
+ struct cl_sw_txhdr *sw_txhdr, *sw_txhdr_tmp;
+ u32 orig_drv_cnt = tx_queue->num_packets;
+ u32 orig_fw_cnt = cl_txq_desc_in_fw(tx_queue);
+ struct cl_sta *cl_sta = tx_queue->cl_sta;
+
+ if (!test_bit(CL_DEV_STARTED, &cl_hw->drv_flags) ||
+ cl_hw->tx_disable_flags ||
+ cl_txq_is_fw_full(tx_queue) ||
+ (cl_sta && cl_sta->pause_tx))
+ return;
+
+ /* Go over all descriptors in queue */
+ list_for_each_entry_safe(sw_txhdr, sw_txhdr_tmp, &tx_queue->hdrs, tx_queue_list) {
+ list_del(&sw_txhdr->tx_queue_list);
+ tx_queue->num_packets--;
+
+ if (cl_hw->tx_db.force_amsdu &&
+ sw_txhdr->txdesc.host_info.packet_cnt < cl_hw->txamsdu_en)
+ break;
+
+ cl_tx_push(cl_hw, sw_txhdr, tx_queue);
+
+ if (cl_txq_is_fw_full(tx_queue))
+ break;
+ }
+
+ cl_txq_push_cntrs_update(cl_hw, tx_queue, orig_drv_cnt, orig_fw_cnt);
+
+ /* If queue is empty remove it from schedule list */
+ cl_txq_sched_list_remove_if_empty(tx_queue);
+}
+
+void cl_txq_agg_alloc(struct cl_hw *cl_hw, struct cl_sta *cl_sta,
+ struct mm_ba_add_cfm *ba_add_cfm, u16 buf_size)
+{
+ u8 tid = ba_add_cfm->tid;
+ u8 fw_agg_idx = ba_add_cfm->agg_idx;
+ u8 sta_idx = cl_sta->sta_idx;
+ u8 ac = cl_tid2hwq[tid];
+ u16 single_queue_idx = QUEUE_IDX(sta_idx, ac);
+ struct cl_tx_queue *tx_queue = &cl_hw->tx_queues->agg[fw_agg_idx];
+
+ spin_lock_bh(&cl_hw->tx_lock_agg);
+
+ /* Init aggregated queue struct */
+ memset(tx_queue, 0, sizeof(struct cl_tx_queue));
+ INIT_LIST_HEAD(&tx_queue->hdrs);
+
+ /*
+ * The firmware agg queue size is static and set to 512, so that for
+ * the worst case of HE stations that support an AMPDU of 256 it has
+ * room for two full aggregations.
+ * To keep this logic of room for two aggregations for non-HE stations,
+ * or for HE stations that do not support an AMPDU of 256, fw_max_size
+ * is initialized to twice the buffer size supported by the station.
+ */
+ tx_queue->fw_max_size = min_t(u16, TXDESC_AGG_Q_SIZE_MAX, buf_size * 2);
+ tx_queue->fw_free_space = tx_queue->fw_max_size;
+
+ tx_queue->max_packets = cl_hw->conf->ci_tx_queue_size_agg;
+ tx_queue->hw_index = ac;
+ tx_queue->cl_sta = cl_sta;
+ tx_queue->type = QUEUE_TYPE_AGG;
+ tx_queue->tid = tid;
+ tx_queue->index = fw_agg_idx;
+
+ /* Reset the synchronization counters between the fw and the IPC layer */
+ cl_hw->ipc_env->ring_indices_elem->indices->txdesc_write_idx.agg[fw_agg_idx] = 0;
+
+ /* Attach the cl_hw chosen queue to the station and agg queues DB */
+ cl_sta->agg_tx_queues[tid] = tx_queue;
+ cl_agg_cfm_set_tx_queue(cl_hw, tx_queue, fw_agg_idx);
+
+ /* Notify upper mac80211 layer of queues resources status */
+ cl_txq_agg_inc_usage_cntr(cl_hw);
+ cl_txq_agg_request_del(cl_hw, sta_idx, tid);
+
+ /*
+ * Move the QoS descriptors to the newly allocated aggregation queue,
+ * otherwise we might reorder packets.
+ */
+ cl_txq_transfer_single_to_agg(cl_hw, &cl_hw->tx_queues->single[single_queue_idx],
+ tx_queue, tid);
+ /* Move the BA window pending packets to agg path */
+ cl_baw_pending_to_agg(cl_hw, cl_sta, tid);
+
+ spin_unlock_bh(&cl_hw->tx_lock_agg);
+
+ cl_dbg_trace(cl_hw, "Allocate queue [%u] to station [%u] tid [%u]\n",
+ fw_agg_idx, sta_idx, tid);
+}
+
+void cl_txq_agg_free(struct cl_hw *cl_hw, struct cl_tx_queue *tx_queue,
+ struct cl_sta *cl_sta, u8 tid)
+{
+ spin_lock_bh(&cl_hw->tx_lock_agg);
+
+ cl_dbg_trace(cl_hw, "Free queue [%u] of station [%u] tid [%u]\n",
+ tx_queue->index, cl_sta->sta_idx, tid);
+
+ memset(tx_queue, 0, sizeof(struct cl_tx_queue));
+
+ cl_txq_agg_dec_usage_cntr(cl_hw);
+
+ spin_unlock_bh(&cl_hw->tx_lock_agg);
+}
+
+void cl_txq_agg_stop(struct cl_sta *cl_sta, u8 tid)
+{
+ cl_sta->agg_tx_queues[tid] = NULL;
+}
+
+void cl_txq_sta_add(struct cl_hw *cl_hw, struct cl_sta *cl_sta)
+{
+ /* Set cl_sta field for all single queues of this station */
+ u8 ac;
+ u16 queue_idx;
+
+ for (ac = 0; ac < AC_MAX; ac++) {
+ queue_idx = QUEUE_IDX(cl_sta->sta_idx, ac);
+ cl_hw->tx_queues->single[queue_idx].cl_sta = cl_sta;
+ }
+
+ /* Reset pointers to TX agg queues */
+ memset(cl_sta->agg_tx_queues, 0, sizeof(cl_sta->agg_tx_queues));
+}
+
+void cl_txq_sta_remove(struct cl_hw *cl_hw, u8 sta_idx)
+{
+ /* Clear cl_sta field for all single queues of this station */
+ u8 ac;
+ u16 queue_idx;
+
+ for (ac = 0; ac < AC_MAX; ac++) {
+ queue_idx = QUEUE_IDX(sta_idx, ac);
+ cl_hw->tx_queues->single[queue_idx].cl_sta = NULL;
+ }
+}
+
+void cl_txq_transfer_agg_to_single(struct cl_hw *cl_hw, struct cl_tx_queue *agg_queue)
+{
+ /*
+ * 1) Remove from aggregation queue
+ * 2) Free sw_txhdr
+ * 3) Push to single queue
+ */
+ struct cl_sw_txhdr *sw_txhdr, *sw_txhdr_tmp;
+ struct sk_buff *skb;
+ struct ieee80211_tx_info *tx_info;
+ struct cl_tx_queue *single_queue;
+ struct cl_sta *cl_sta = agg_queue->cl_sta;
+ u16 single_queue_idx = 0;
+
+ if (agg_queue->num_packets == 0)
+ return;
+
+ single_queue_idx = QUEUE_IDX(cl_sta->sta_idx, agg_queue->hw_index);
+ single_queue = &cl_hw->tx_queues->single[single_queue_idx];
+
+ list_for_each_entry_safe(sw_txhdr, sw_txhdr_tmp, &agg_queue->hdrs, tx_queue_list) {
+ list_del(&sw_txhdr->tx_queue_list);
+ agg_queue->num_packets--;
+
+ skb = sw_txhdr->skb;
+ tx_info = IEEE80211_SKB_CB(skb);
+
+ if (cl_tx_ctrl_is_amsdu(tx_info)) {
+ cl_tx_amsdu_transfer_single(cl_hw, sw_txhdr);
+ } else {
+ tx_info->flags &= ~IEEE80211_TX_CTL_AMPDU;
+
+ if (cl_tx_8023_to_wlan(cl_hw, skb, cl_sta, sw_txhdr->tid) == 0) {
+ cl_hw->tx_packet_cntr.transfer.agg_to_single++;
+ cl_tx_single(cl_hw, cl_sta, skb, false, false);
+ }
+ }
+
+ cl_sw_txhdr_free(cl_hw, sw_txhdr);
+ }
+
+ /* If queue is empty remove it from schedule list */
+ cl_txq_sched_list_remove_if_empty(agg_queue);
+}
+
+void cl_txq_flush_agg(struct cl_hw *cl_hw, struct cl_tx_queue *tx_queue, bool lock)
+{
+ if (lock) {
+ spin_lock_bh(&cl_hw->tx_lock_agg);
+ cl_txq_flush(cl_hw, tx_queue);
+ spin_unlock_bh(&cl_hw->tx_lock_agg);
+ } else {
+ cl_txq_flush(cl_hw, tx_queue);
+ }
+}
+
+void cl_txq_flush_all_agg(struct cl_hw *cl_hw)
+{
+ int i = 0;
+
+ for (i = 0; i < IPC_MAX_BA_SESSIONS; i++)
+ cl_txq_flush(cl_hw, &cl_hw->tx_queues->agg[i]);
+}
+
+void cl_txq_flush_all_single(struct cl_hw *cl_hw)
+{
+ int i = 0;
+
+ for (i = 0; i < MAX_SINGLE_QUEUES; i++)
+ cl_txq_flush(cl_hw, &cl_hw->tx_queues->single[i]);
+}
+
+void cl_txq_flush_sta(struct cl_hw *cl_hw, struct cl_sta *cl_sta)
+{
+ int i = 0;
+ u8 sta_idx = cl_sta->sta_idx;
+ u32 queue_idx = 0;
+ struct cl_tx_queue *tx_queue = NULL;
+
+ spin_lock_bh(&cl_hw->tx_lock_agg);
+
+ /* Flush all aggregation queues for this station */
+ for (i = 0; i < IEEE80211_NUM_TIDS; i++)
+ if (cl_sta->agg_tx_queues[i])
+ cl_txq_flush(cl_hw, cl_sta->agg_tx_queues[i]);
+
+ spin_unlock_bh(&cl_hw->tx_lock_agg);
+
+ spin_lock_bh(&cl_hw->tx_lock_single);
+
+ /* Flush all single queues for this station */
+ for (i = 0; i < AC_MAX; i++) {
+ queue_idx = QUEUE_IDX(sta_idx, i);
+ tx_queue = &cl_hw->tx_queues->single[queue_idx];
+ cl_txq_flush(cl_hw, tx_queue);
+ cl_txq_reset_counters(tx_queue);
+ }
+
+ /* Go over the high-priority queue and delete packets belonging to this station */
+ cl_txq_delete_packets(cl_hw, &cl_hw->tx_queues->single[HIGH_PRIORITY_QUEUE], sta_idx);
+
+ spin_unlock_bh(&cl_hw->tx_lock_single);
+}
+
+void cl_txq_agg_request_add(struct cl_hw *cl_hw, u8 sta_idx, u8 tid)
+{
+ int i = cl_txq_request_find(cl_hw, sta_idx, tid);
+ struct cl_req_agg_db *req_agg_db = NULL;
+
+ if (i != -1) {
+ cl_dbg_trace(cl_hw, "ALREADY_ADDED - entry = %d, sta_idx = %u, tid = %u\n",
+ i, sta_idx, tid);
+ return;
+ }
+
+ for (i = 0; i < IPC_MAX_BA_SESSIONS; i++) {
+ req_agg_db = &cl_hw->req_agg_db[i];
+
+ if (!req_agg_db->is_used) {
+ cl_dbg_trace(cl_hw, "ADD - entry = %d, sta_idx = %u, tid = %u\n",
+ i, sta_idx, tid);
+ req_agg_db->is_used = true;
+ req_agg_db->sta_idx = sta_idx;
+ req_agg_db->tid = tid;
+ cl_hw->req_agg_queues++;
+ return;
+ }
+ }
+}
+
+void cl_txq_agg_request_del(struct cl_hw *cl_hw, u8 sta_idx, u8 tid)
+{
+ int i = cl_txq_request_find(cl_hw, sta_idx, tid);
+
+ if (i != -1) {
+ cl_dbg_trace(cl_hw, "DEL - entry = %d, sta_idx = %u, tid = %u\n",
+ i, sta_idx, tid);
+ cl_hw->req_agg_db[i].is_used = false;
+ cl_hw->req_agg_queues--;
+ }
+}
+
+bool cl_txq_is_agg_available(struct cl_hw *cl_hw)
+{
+ u8 total_agg_queues = cl_hw->used_agg_queues + cl_hw->req_agg_queues;
+
+ return (total_agg_queues < IPC_MAX_BA_SESSIONS);
+}
+
+bool cl_txq_is_fw_empty(struct cl_tx_queue *tx_queue)
+{
+ return (tx_queue->fw_free_space == tx_queue->fw_max_size);
+}
+
+bool cl_txq_is_fw_full(struct cl_tx_queue *tx_queue)
+{
+ return (tx_queue->fw_free_space == 0);
+}
+
+u16 cl_txq_desc_in_fw(struct cl_tx_queue *tx_queue)
+{
+ return (tx_queue->fw_max_size - tx_queue->fw_free_space);
+}
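+
+/*
+ * Example (reviewer note): with fw_max_size of 512 and fw_free_space of
+ * 500, cl_txq_desc_in_fw() returns 12, and both cl_txq_is_fw_full() and
+ * cl_txq_is_fw_empty() return false.
+ */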
+
+bool cl_txq_frames_pending(struct cl_hw *cl_hw)
+{
+ int i = 0;
+
+ /* Check if we have multicast/broadcast frames in FW queues */
+ if (!cl_txq_is_fw_empty(&cl_hw->tx_queues->bcmc))
+ return true;
+
+ /* Check if we have single frames in FW queues */
+ for (i = 0; i < MAX_SINGLE_QUEUES; i++)
+ if (!cl_txq_is_fw_empty(&cl_hw->tx_queues->single[i]))
+ return true;
+
+ /* Check if we have aggregation frames in FW queues */
+ for (i = 0; i < IPC_MAX_BA_SESSIONS; i++)
+ if (!cl_txq_is_fw_empty(&cl_hw->tx_queues->agg[i]))
+ return true;
+
+ return false;
+}
+
--
2.36.1


2022-05-24 17:03:36

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 27/96] cl8k: add ela.c

From: Viktor Barna <[email protected]>

(Part of the split. Please take a look at the cover letter for more
details.)

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/ela.c | 230 +++++++++++++++++++++++++
1 file changed, 230 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/ela.c

diff --git a/drivers/net/wireless/celeno/cl8k/ela.c b/drivers/net/wireless/celeno/cl8k/ela.c
new file mode 100644
index 000000000000..c2419b11b5c0
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/ela.c
@@ -0,0 +1,230 @@
+// SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
+/* Copyright(c) 2021, Celeno Communications Ltd. */
+
+#include "reg/reg_access.h"
+#include "reg/reg_defs.h"
+#include "utils.h"
+#include "ela.h"
+
+#define CL_ELA_MODE_DFLT_ALIAS "default"
+#define CL_ELA_MODE_DFLT_SYMB_LINK "lcu_default.conf"
+#define CL_ELA_MODE_DFLT_OFF "OFF"
+#define CL_ELA_LCU_CONF_TOKENS_CNT 3 /* cmd addr1 addr2 */
+#define CL_ELA_LCU_MEM_WRITE_CMD_STR "mem_write"
+#define CL_ELA_LCU_MEM_WRITE_CMD_SZ sizeof(CL_ELA_LCU_MEM_WRITE_CMD_STR)
+#define CL_ELA_LCU_UNKNOWN_CMD_TYPE 0
+#define CL_ELA_LCU_MEM_WRITE_CMD_TYPE 1
+#define CL_ELA_LCU_UNKNOWN_CMD_STR "unknown"
+
+static int __must_check get_lcu_cmd_type(char *cmd)
+{
+ if (!strncmp(CL_ELA_LCU_MEM_WRITE_CMD_STR, cmd, CL_ELA_LCU_MEM_WRITE_CMD_SZ))
+ return CL_ELA_LCU_MEM_WRITE_CMD_TYPE;
+
+ return CL_ELA_LCU_UNKNOWN_CMD_TYPE;
+}
+
+static int add_lcu_cmd(struct cl_ela_db *ed, u32 type, u32 offset, u32 value)
+{
+ struct cl_lcu_cmd *cmd = NULL;
+
+ cmd = kzalloc(sizeof(*cmd), GFP_ATOMIC);
+ if (!cmd)
+ return -ENOMEM;
+
+ cmd->type = type;
+ cmd->offset = offset;
+ cmd->value = value;
+
+ list_add_tail(&cmd->cmd_list, &ed->cmd_head);
+
+ return 0;
+}
+
+static void remove_lcu_cmd(struct cl_lcu_cmd *cmd)
+{
+ list_del(&cmd->cmd_list);
+ kfree(cmd);
+}
+
+static void reset_stats(struct cl_ela_db *db)
+{
+ memset(&db->stats, 0, sizeof(db->stats));
+}
+
+static int load_cmds_from_buf(struct cl_chip *chip, char *buf, size_t size)
+{
+ struct cl_ela_db *ed = &chip->ela_db;
+ char *line = buf;
+ char cmd[STR_LEN_256B];
+ u32 type = CL_ELA_LCU_UNKNOWN_CMD_TYPE;
+ u32 offset = 0;
+ u32 value = 0;
+ int tokens_cnt = 0;
+ int ret = 0;
+
+ while (line && strlen(line) && (line != (buf + size))) {
+ if ((*line == '#') || (*line == '\n')) {
+ /* Skip comment or blank line */
+ line = strstr(line, "\n") + 1;
+ } else if (*line) {
+ tokens_cnt = sscanf(line, "%s %x %x\n", cmd, &offset, &value);
+ cl_dbg_chip_trace(chip,
+ "tokens(%d):cmd(%s), offset(0x%X), value(0x%X)\n",
+ tokens_cnt, cmd, offset, value);
+
+ type = get_lcu_cmd_type(cmd);
+ if (type == CL_ELA_LCU_UNKNOWN_CMD_TYPE) {
+ cl_dbg_chip_trace(chip, "Detected extra token, skipping\n");
+ goto newline;
+ }
+ if (tokens_cnt != CL_ELA_LCU_CONF_TOKENS_CNT) {
+ cl_dbg_chip_err(chip,
+ "Tokens count is wrong! (%d != %d)\n",
+ CL_ELA_LCU_CONF_TOKENS_CNT,
+ tokens_cnt);
+ ret = -EBADMSG;
+ goto exit;
+ }
+
+ ret = add_lcu_cmd(ed, type, offset, value);
+ if (ret)
+ goto exit;
+
+newline:
+ line = strstr(line, "\n") + 1;
+ }
+ }
+
+exit:
+ ed->stats.adaptations_cnt++;
+ return ret;
+}
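+
+/*
+ * Reviewer illustration of the accepted file format (offsets and values
+ * below are made up): comment and blank lines are skipped, and every
+ * other line must carry exactly CL_ELA_LCU_CONF_TOKENS_CNT tokens:
+ *
+ *   # LCU setup
+ *   mem_write 0x1234 0xABCD
+ *   mem_write 0x1238 0x0001
+ */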
+
+void cl_ela_lcu_reset(struct cl_chip *chip)
+{
+ lcu_common_sw_rst_set(chip, 0x1);
+
+ if (chip->cl_hw_tcv0)
+ lcu_phy_lcu_sw_rst_set(chip->cl_hw_tcv0, 0x1);
+
+ if (chip->cl_hw_tcv1)
+ lcu_phy_lcu_sw_rst_set(chip->cl_hw_tcv1, 0x1);
+}
+
+void cl_ela_lcu_apply_config(struct cl_chip *chip)
+{
+ struct cl_ela_db *ed = &chip->ela_db;
+ struct cl_lcu_cmd *cmd = NULL;
+ unsigned long flags;
+
+ if (!cl_ela_lcu_is_valid_config(chip)) {
+ cl_dbg_chip_err(chip, "Active ELA LCU config is not valid\n");
+ return;
+ }
+
+ /* Extra safety to avoid local CPU interference during LCU reconfiguration */
+ local_irq_save(flags);
+ list_for_each_entry(cmd, &ed->cmd_head, cmd_list) {
+ cl_dbg_chip_info(chip, "Writing cmd (0x%X, 0x%X)\n",
+ cmd->offset, cmd->value);
+ if (!chip->cl_hw_tcv1 && cl_reg_is_phy_tcv1(cmd->offset)) {
+ ed->stats.tcv1_skips_cnt++;
+ continue;
+ } else if (!chip->cl_hw_tcv0 && cl_reg_is_phy_tcv0(cmd->offset)) {
+ ed->stats.tcv0_skips_cnt++;
+ continue;
+ }
+ cl_reg_write_chip(chip, cmd->offset, cmd->value);
+ }
+ local_irq_restore(flags);
+ ed->stats.applications_cnt++;
+}
+
+bool cl_ela_is_on(struct cl_chip *chip)
+{
+ return !!strncmp(CL_ELA_MODE_DFLT_OFF, chip->conf->ce_ela_mode,
+ sizeof(chip->conf->ce_ela_mode));
+}
+
+bool cl_ela_is_default(struct cl_chip *chip)
+{
+ return !strncmp(CL_ELA_MODE_DFLT_ALIAS, chip->conf->ce_ela_mode,
+ sizeof(chip->conf->ce_ela_mode));
+}
+
+bool cl_ela_lcu_is_valid_config(struct cl_chip *chip)
+{
+ struct cl_ela_db *ed = &chip->ela_db;
+
+ return ed->error_state == 0;
+}
+
+char *cl_ela_lcu_config_name(struct cl_chip *chip)
+{
+ if (!cl_ela_is_on(chip))
+ return CL_ELA_MODE_DFLT_OFF;
+
+ if (cl_ela_is_default(chip))
+ return CL_ELA_MODE_DFLT_SYMB_LINK;
+
+ return chip->conf->ce_ela_mode;
+}
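+
+/*
+ * Example (reviewer note): ce_ela_mode "OFF" disables ELA and resolves to
+ * "OFF", "default" resolves to the lcu_default.conf symbolic link, and any
+ * other string is treated as the configuration file name itself.
+ */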
+
+int cl_ela_lcu_config_read(struct cl_chip *chip)
+{
+ struct cl_ela_db *ed = &chip->ela_db;
+ char filename[CL_FILENAME_MAX] = {0};
+ size_t size = 0;
+ int ret = 0;
+
+ if (!cl_ela_is_on(chip)) {
+ ret = -EOPNOTSUPP;
+ goto exit;
+ }
+
+ reset_stats(ed);
+
+ snprintf(filename, sizeof(filename), "%s", cl_ela_lcu_config_name(chip));
+ size = cl_file_open_and_read(chip, filename, &ed->raw_lcu_config);
+ if (!ed->raw_lcu_config) {
+ ret = -ENODATA;
+ goto exit;
+ }
+
+ ret = load_cmds_from_buf(chip, ed->raw_lcu_config, size);
+exit:
+ ed->error_state = ret;
+ return ret;
+}
+
+int cl_ela_init(struct cl_chip *chip)
+{
+ struct cl_ela_db *ed = &chip->ela_db;
+ int ret = 0;
+
+ INIT_LIST_HEAD(&ed->cmd_head);
+
+ if (!cl_ela_is_on(chip))
+ return 0;
+
+ ret = cl_ela_lcu_config_read(chip);
+ if (ret == 0) {
+ cl_ela_lcu_reset(chip);
+ cl_ela_lcu_apply_config(chip);
+ }
+
+ return ret;
+}
+
+void cl_ela_deinit(struct cl_chip *chip)
+{
+ struct cl_ela_db *ed = &chip->ela_db;
+ struct cl_lcu_cmd *cmd = NULL, *cmd_tmp = NULL;
+
+ kfree(ed->raw_lcu_config);
+ ed->raw_lcu_config = NULL;
+
+ list_for_each_entry_safe(cmd, cmd_tmp, &ed->cmd_head, cmd_list)
+ remove_lcu_cmd(cmd);
+}
--
2.36.1


2022-05-24 17:34:55

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 70/96] cl8k: add scan.c

From: Viktor Barna <[email protected]>

(Part of the split. Please take a look at the cover letter for more
details.)

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/scan.c | 392 ++++++++++++++++++++++++
1 file changed, 392 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/scan.c

diff --git a/drivers/net/wireless/celeno/cl8k/scan.c b/drivers/net/wireless/celeno/cl8k/scan.c
new file mode 100644
index 000000000000..10076d93620e
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/scan.c
@@ -0,0 +1,392 @@
+// SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#include <linux/err.h>
+#include <linux/mutex.h>
+#include <linux/delay.h>
+#include <linux/slab.h>
+
+#include "channel.h"
+#include "chip.h"
+#include "calib.h"
+#include "debug.h"
+#include "rates.h"
+#include "vif.h"
+#include "hw.h"
+#include "scan.h"
+
+#define CL_MIN_SCAN_TIME_MS 50
+#define CL_MIN_WAIT_TIME_MS 20
+
+static const char SCANNER_KTHREAD_NAME[] = "cl_scanner_kthread";
+
+int cl_scan_channel_switch(struct cl_hw *cl_hw, u8 channel, u8 bw,
+ bool allow_recalib)
+{
+ struct cl_vif *cl_vif = cl_vif_get_first(cl_hw);
+ struct cfg80211_chan_def *chandef = NULL;
+ struct cfg80211_chan_def local_chandef;
+ struct ieee80211_channel *chan = NULL;
+ u16 freq = ieee80211_channel_to_frequency(channel, cl_hw->nl_band);
+ int ret = 0;
+
+ if (!cl_vif) {
+ ret = -EINVAL;
+ goto exit;
+ }
+
+ chandef = &cl_vif->vif->bss_conf.chandef;
+ local_chandef = *chandef;
+
+ chan = ieee80211_get_channel(cl_hw->hw->wiphy, freq);
+ if (!chan) {
+ cl_dbg_err(cl_hw, "Channel %u wasn't found!\n", channel);
+ ret = -EINVAL;
+ goto exit;
+ }
+
+ local_chandef.chan = chan;
+ if (cl_chandef_calc(cl_hw, channel, bw,
+ &local_chandef.width,
+ &local_chandef.chan->center_freq,
+ &local_chandef.center_freq1)) {
+ cl_dbg_err(cl_hw, "Failed to extract chandef data for ch:%d\n",
+ channel);
+ ret = -EINVAL;
+ goto exit;
+ }
+
+ *chandef = local_chandef;
+ cl_hw->hw->conf.chandef = local_chandef;
+
+ if (cl_hw->chip->conf->ce_calib_runtime_en && allow_recalib)
+ ret = cl_calib_runtime_and_switch_channel(cl_hw, channel, bw,
+ freq,
+ chandef->center_freq1);
+ else
+ ret = cl_msg_tx_set_channel(cl_hw, channel, bw, freq,
+ chandef->center_freq1,
+ CL_CALIB_PARAMS_DEFAULT_STRUCT);
+exit:
+ return ret;
+}
+
+static int cl_scan_channel(struct cl_chan_scanner *scanner, u8 ch_idx)
+{
+ u8 main_channel;
+ enum cl_channel_bw main_bw;
+ s32 res = 0;
+ bool is_off_channel;
+ u64 scan_time_jiffies;
+
+ /*
+ * 1. Save current channel
+ * 2. Disable tx
+ * 3. Jump to new channel
+ * 4. Enable promiscuous mode
+ * 5. Enable BSS collection
+ * 6. Reset stats counters
+ * 7. Sleep for scan_time
+ * 8. Calculate stats
+ * 9. Disable promiscuous mode
+ * 10. Disable BSS collection
+ * 11. Switch to current channel
+ * 12. Enable tx
+ */
+
+ cl_dbg_trace(scanner->cl_hw, "Starting scan on channel %u, scan time %u(ms)\n",
+ scanner->channels[ch_idx].channel, scanner->scan_time);
+
+ /* Save current channel */
+ res = mutex_lock_interruptible(&scanner->cl_hw->set_channel_mutex);
+ if (res != 0)
+ return res;
+ main_channel = scanner->cl_hw->channel;
+ main_bw = scanner->cl_hw->bw;
+ mutex_unlock(&scanner->cl_hw->set_channel_mutex);
+
+ cl_dbg_trace(scanner->cl_hw, "Main channel is %u with bw %u\n",
+ main_channel, main_bw);
+
+ is_off_channel = (scanner->channels[ch_idx].channel != main_channel) ||
+ (scanner->channels[ch_idx].scan_bw != main_bw);
+
+ /* Jump to new channel */
+ if (is_off_channel) {
+ /* Disable tx */
+ cl_tx_en(scanner->cl_hw, CL_TX_EN_SCAN, false);
+
+ res = cl_scan_channel_switch(scanner->cl_hw,
+ scanner->channels[ch_idx].channel,
+ scanner->channels[ch_idx].scan_bw,
+ false);
+ if (res) {
+ cl_dbg_err(scanner->cl_hw,
+ "Channel switch failed: ch - %u, bw - %u, err - %d\n",
+ scanner->channels[ch_idx].channel,
+ scanner->channels[ch_idx].scan_bw, res);
+ goto enable_tx;
+ }
+ } else {
+ cl_dbg_trace(scanner->cl_hw, "Scan on main channel %u\n", main_channel);
+ }
+
+ /* Enable promiscuous mode */
+ cl_rx_filter_set_promiscuous(scanner->cl_hw);
+
+ /* Reset channel stats */
+ cl_get_initial_channel_stats(scanner->cl_hw, &scanner->channels[ch_idx]);
+
+ /* Sleep for scan time */
+ scan_time_jiffies = msecs_to_jiffies(scanner->scan_time);
+ res = wait_for_completion_interruptible_timeout(&scanner->abort_completion,
+ scan_time_jiffies);
+ if (res > 0) {
+ cl_dbg_err(scanner->cl_hw, "Scan on channel %u, bw %u, idx %u was aborted\n",
+ scanner->channels[ch_idx].channel,
+ scanner->channels[ch_idx].scan_bw, ch_idx);
+ res = 0;
+ }
+
+ /* Calculate stats */
+ cl_get_final_channel_stats(scanner->cl_hw, &scanner->channels[ch_idx]);
+
+ /* Disable promiscuous mode */
+ cl_rx_filter_restore_flags(scanner->cl_hw);
+
+ if (is_off_channel) {
+ res = cl_scan_channel_switch(scanner->cl_hw, main_channel, main_bw, false);
+ if (res)
+ cl_dbg_err(scanner->cl_hw,
+ "Switching to main ch %u, bw %u failed, err - %d\n",
+ main_channel, main_bw, res);
+enable_tx:
+ /* Enable tx */
+ cl_tx_en(scanner->cl_hw, CL_TX_EN_SCAN, true);
+ }
+
+ cl_dbg_trace(scanner->cl_hw, "Scan on channel %u finished, actual scan_time is %u ms\n",
+ scanner->channels[ch_idx].channel, scanner->channels[ch_idx].scan_time_ms);
+
+ return res;
+}
+
+static s32 cl_run_off_channel_scan(struct cl_chan_scanner *scanner)
+{
+ u8 i = 0, scanned_channels = 0;
+ s32 ret = 0;
+
+ for (i = 0; i < scanner->channels_num && !scanner->scan_aborted; ++i) {
+ if (!scanner->channels[i].scan_enabled)
+ continue;
+
+ scanner->curr_ch_idx = i;
+ ret = cl_scan_channel(scanner, i);
+ if (ret)
+ cl_dbg_err(scanner->cl_hw, "scan failed, err - %d, channel - %u\n",
+ ret, scanner->channels[i].channel);
+
+ if (scanner->scan_aborted)
+ break;
+
+ cl_dbg_trace(scanner->cl_hw, "Scan on chan %u finished, waiting for time %u\n",
+ scanner->channels[i].channel, scanner->wait_time);
+
+ ++scanned_channels;
+ if (scanned_channels != scanner->scan_channels_num) {
+ u64 wait_time_jiffies;
+
+ wait_time_jiffies = msecs_to_jiffies(scanner->wait_time);
+ ret = wait_for_completion_interruptible_timeout(&scanner->abort_completion,
+ wait_time_jiffies);
+ if (ret > 0) {
+ cl_dbg_err(scanner->cl_hw, "Off-channel scan was aborted\n");
+ ret = 0;
+ }
+ }
+ }
+
+ if (scanner->completion_cb)
+ scanner->completion_cb(scanner->cl_hw, scanner->completion_arg);
+
+ cl_dbg_info(scanner->cl_hw, "Off-channel scan on %u channels finished\n",
+ scanner->scan_channels_num);
+
+ return ret;
+}
+
+static s32 cl_off_channel_scan_thread_fn(void *args)
+{
+ struct cl_chan_scanner *scanner = args;
+
+ while (!kthread_should_stop()) {
+ if (atomic_read(&scanner->scan_thread_busy)) {
+ cl_run_off_channel_scan(scanner);
+ atomic_set(&scanner->scan_thread_busy, 0);
+ wake_up_interruptible(&scanner->wq);
+ }
+
+ set_current_state(TASK_INTERRUPTIBLE);
+ schedule();
+ }
+
+ return 0;
+}
+
+static bool cl_is_scan_available(struct cl_chan_scanner *scanner)
+{
+ if (atomic_cmpxchg(&scanner->scan_thread_busy, 0, 1) == 1) {
+ cl_dbg_warn(scanner->cl_hw, "Off-channel scan is already in progress\n");
+ return false;
+ }
+
+ return true;
+}
+
+static enum cl_channel_bw cl_scanner_fix_input_bw(struct cl_chan_scanner *scanner, u8 bw)
+{
+ return (bw >= CHNL_BW_MAX) ? scanner->cl_hw->bw : bw;
+}
+
+static void cl_scanner_disable_everywhere(struct cl_chan_scanner *scanner)
+{
+ u8 j;
+
+ for (j = 0; j < scanner->channels_num; ++j)
+ scanner->channels[j].scan_enabled = false;
+}
+
+s32 cl_trigger_off_channel_scan(struct cl_chan_scanner *scanner, u32 scan_time, u32 wait_time,
+ const u8 *channels, enum cl_channel_bw scan_bw, u8 channels_num,
+ void (*completion_cb)(struct cl_hw *cl_hw, void *arg),
+ void *completion_arg)
+{
+ u8 i, j;
+
+ if (!channels || scan_bw > CHNL_BW_MAX)
+ return -EINVAL;
+
+ if (!scanner->scans_enabled)
+ return 0;
+
+ if (channels_num > scanner->channels_num) {
+ cl_dbg_err(scanner->cl_hw, "channels num %u is invalid, max is %u\n",
+ channels_num, scanner->channels_num);
+ return -ERANGE;
+ }
+
+ if (!cl_is_scan_available(scanner))
+ return -EBUSY;
+
+ scanner->completion_arg = completion_arg;
+ scanner->completion_cb = completion_cb;
+ scanner->scan_time = max_t(u32, scan_time, CL_MIN_SCAN_TIME_MS);
+ scanner->wait_time = max_t(u32, wait_time, CL_MIN_WAIT_TIME_MS);
+ scanner->scan_bw = cl_scanner_fix_input_bw(scanner, scan_bw);
+ scanner->scan_aborted = false;
+
+ cl_scanner_disable_everywhere(scanner);
+
+ scanner->scan_channels_num = 0;
+ for (j = 0; j < scanner->channels_num; ++j) {
+ for (i = 0; i < channels_num; ++i) {
+ if (channels[i] != scanner->channels[j].channel)
+ continue;
+
+ if (!cl_chan_info_get(scanner->cl_hw, scanner->channels[j].channel,
+ scanner->scan_bw)) {
+ cl_dbg_warn(scanner->cl_hw, "channel %u with bw %u is disabled\n",
+ scanner->channels[j].channel, scanner->scan_bw);
+ continue;
+ }
+
+ scanner->channels[j].scan_enabled = true;
+ ++scanner->scan_channels_num;
+ }
+ }
+
+ reinit_completion(&scanner->abort_completion);
+
+ wake_up_process(scanner->scan_thread);
+
+ return 0;
+}
+
+void cl_abort_scan(struct cl_chan_scanner *scanner)
+{
+ scanner->scan_aborted = true;
+ complete(&scanner->abort_completion);
+ cl_dbg_info(scanner->cl_hw, "Off-channel scan was aborted\n");
+}
+
+bool cl_is_scan_in_progress(const struct cl_chan_scanner *scanner)
+{
+ return atomic_read(&scanner->scan_thread_busy);
+}
+
+int cl_scanner_init(struct cl_hw *cl_hw)
+{
+ u8 i, j;
+ s32 ret = 0;
+ u32 channels_num;
+ struct cl_chan_scanner *scanner;
+
+ cl_hw->scanner = vzalloc(sizeof(*cl_hw->scanner));
+ if (!cl_hw->scanner)
+ return -ENOMEM;
+
+ scanner = cl_hw->scanner;
+ init_completion(&scanner->abort_completion);
+
+ scanner->cl_hw = cl_hw;
+ scanner->scans_enabled = true;
+
+ channels_num = cl_channel_num(scanner->cl_hw);
+ for (i = 0, j = 0; i < channels_num; ++i) {
+ u32 freq;
+
+ freq = cl_channel_idx_to_freq(cl_hw, i);
+ if (!ieee80211_get_channel(cl_hw->hw->wiphy, freq))
+ continue;
+
+ ret = cl_init_channel_stats(scanner->cl_hw, &scanner->channels[j], freq);
+ if (ret)
+ return ret;
+
+ cl_dbg_trace(scanner->cl_hw, "Stats for channel %u at index %u initialized\n",
+ scanner->channels[j].channel, j);
+ ++j;
+ }
+
+ scanner->channels_num = j;
+
+ atomic_set(&scanner->scan_thread_busy, 0);
+ init_waitqueue_head(&scanner->wq);
+
+ scanner->scan_thread = kthread_run(cl_off_channel_scan_thread_fn,
+ scanner, SCANNER_KTHREAD_NAME);
+ if (IS_ERR(scanner->scan_thread)) {
+ ret = PTR_ERR(scanner->scan_thread);
+ cl_dbg_err(scanner->cl_hw, "unable to create kthread %s, err - %d\n",
+ SCANNER_KTHREAD_NAME, ret);
+ scanner->scan_thread = NULL;
+ return ret;
+ }
+ cl_dbg_trace(scanner->cl_hw, "%s kthread was created, pid - %u\n",
+ SCANNER_KTHREAD_NAME, scanner->scan_thread->pid);
+
+ return ret;
+}
+
+void cl_scanner_deinit(struct cl_hw *cl_hw)
+{
+ struct cl_chan_scanner *scanner = cl_hw->scanner;
+
+ if (!scanner->scans_enabled)
+ goto out;
+
+ if (scanner->scan_thread)
+ kthread_stop(scanner->scan_thread);
+
+ out:
+ vfree(cl_hw->scanner);
+ cl_hw->scanner = NULL;
+}
--
2.36.1


2022-05-24 18:02:34

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 73/96] cl8k: add sounding.h

From: Viktor Barna <[email protected]>

(Part of the split. Please, take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/sounding.h | 151 ++++++++++++++++++++
1 file changed, 151 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/sounding.h

diff --git a/drivers/net/wireless/celeno/cl8k/sounding.h b/drivers/net/wireless/celeno/cl8k/sounding.h
new file mode 100644
index 000000000000..abdf834150f9
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/sounding.h
@@ -0,0 +1,151 @@
+/* SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause */
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#ifndef CL_SOUNDING_H
+#define CL_SOUNDING_H
+
+#include <linux/types.h>
+
+#include "fw.h"
+
+#define SOUNDING_ENABLE true
+#define SOUNDING_DISABLE false
+#define INVALID_SID 0xff
+#define XMEM_SIZE (0x180 << 10) /* 384KB */
+#define CL_SOUNDING_STABILITY_TIME 5
+#define CL_SOUNDING_FACTOR 10
+
+#define SOUNDING_FEEDBACK_TYPE_SHIFT 2
+#define SOUNDING_FEEDBACK_TYPE_MASK (BIT(SOUNDING_FEEDBACK_TYPE_SHIFT))
+#define SOUNDING_NG_SHIFT 1
+#define SOUNDING_NG_MASK (BIT(SOUNDING_NG_SHIFT))
+#define SOUNDING_MU_CODEBOOK_SIZE_SHIFT 0
+#define SOUNDING_MU_CODEBOOK_SIZE_MASK (BIT(SOUNDING_MU_CODEBOOK_SIZE_SHIFT))
+#define SOUNDING_FEEDBACK_TYPE_VAL(fb_type_ng_cb_size) (((fb_type_ng_cb_size) & \
+ SOUNDING_FEEDBACK_TYPE_MASK) >> \
+ SOUNDING_FEEDBACK_TYPE_SHIFT)
+#define SOUNDING_NG_VAL(fb_type_ng_cb_size) (((fb_type_ng_cb_size) & \
+ SOUNDING_NG_MASK) >> SOUNDING_NG_SHIFT)
+#define SOUNDING_CODEBOOK_SIZE_VAL(fb_type_ng_cb_size) (((fb_type_ng_cb_size) & \
+ SOUNDING_MU_CODEBOOK_SIZE_MASK) >> \
+ SOUNDING_MU_CODEBOOK_SIZE_SHIFT)
+
+#define SOUNDING_TYPE_IS_VHT(type) ((type) == SOUNDING_TYPE_VHT_SU || \
+ (type) == SOUNDING_TYPE_VHT_MU)
+#define SOUNDING_TYPE_IS_CQI(type) ((type) == SOUNDING_TYPE_HE_CQI || \
+ (type) == SOUNDING_TYPE_HE_CQI_TB)
+
+enum fb_type_ng_cb_size {
+ FEEDBACK_TYPE_SU_NG_4_CODEBOOK_SIZE_4_2 = 0x0,
+ FEEDBACK_TYPE_SU_NG_4_CODEBOOK_SIZE_6_4,
+ FEEDBACK_TYPE_SU_NG_16_CODEBOOK_SIZE_4_2,
+ FEEDBACK_TYPE_SU_NG_16_CODEBOOK_SIZE_6_4,
+ FEEDBACK_TYPE_MU_NG_4_CODEBOOK_SIZE_7_5,
+ FEEDBACK_TYPE_MU_NG_4_CODEBOOK_SIZE_9_7,
+ FEEDBACK_TYPE_CQI_TB,
+ FEEDBACK_TYPE_MU_NG_16_CODEBOOK_SIZE_9_7,
+};
+
+enum cl_sounding_response {
+ CL_SOUNDING_RSP_OK = 0,
+
+ CL_SOUNDING_RSP_ERR_RLIMIT,
+ CL_SOUNDING_RSP_ERR_BW,
+ CL_SOUNDING_RSP_ERR_NSS,
+ CL_SOUNDING_RSP_ERR_INTERVAL,
+ CL_SOUNDING_RSP_ERR_ALREADY,
+ CL_SOUNDING_RSP_ERR_STA,
+ CL_SOUNDING_RSP_ERR_TYPE,
+};
+
+enum sounding_type {
+ SOUNDING_TYPE_HE_SU = 0,
+ SOUNDING_TYPE_HE_SU_TB,
+ SOUNDING_TYPE_VHT_SU,
+ SOUNDING_TYPE_HE_CQI,
+ SOUNDING_TYPE_HE_CQI_TB,
+ SOUNDING_TYPE_HE_MU,
+ SOUNDING_TYPE_VHT_MU,
+
+ SOUNDING_TYPE_MAX
+};
+
+enum sounding_interval_coef {
+ SOUNDING_INTERVAL_COEF_MIN_INTERVAL = 0,
+ SOUNDING_INTERVAL_COEF_STA_STEP,
+ SOUNDING_INTERVAL_COEF_INTERVAL_STEP,
+ SOUNDING_INTERVAL_COEF_MAX_INTERVAL,
+ SOUNDING_INTERVAL_COEF_MAX
+};
+
+struct v_matrix_header {
+ u32 format : 2,
+ rsv1 : 30;
+ u32 bw : 2,
+ nr_index : 3,
+ nc_index : 3,
+ rsv2 : 24;
+ u32 grouping : 4,
+ rsv3 : 28;
+ u32 feedback_type : 1,
+ codebook_info : 3,
+ rsv4 : 28;
+ u32 ru_start_idx : 7,
+ rsv5 : 25;
+ u32 ru_end_idx : 7,
+ rsv6 : 25;
+ u32 padding : 6,
+ rsv7 : 26;
+ u32 rsv8;
+};
+
+struct cl_sounding_info {
+ enum sounding_type type;
+ u8 sounding_id;
+ struct v_matrix_header *v_matrices_data;
+ u32 v_matrices_data_len;
+ u32 v_matrices_dma_addr;
+ u8 gid;
+ u8 bw;
+ u8 sta_num;
+ u8 q_matrix_bitmap;
+ struct cl_sta *su_cl_sta_arr[CL_MU_MAX_STA_PER_GROUP];
+ u32 xmem_space;
+ bool sounding_restart_required;
+ struct list_head list;
+};
+
+struct cl_sounding_db {
+ struct workqueue_struct *sounding_wq;
+ u8 num_soundings;
+ u8 cqi_profiles; /* Num of STAs with CQI active sounding */
+ u8 active_profiles; /* Num of STAs with non-CQI active sounding */
+ u8 active_profiles_prev[CL_SOUNDING_STABILITY_TIME];
+ u8 active_profiles_idx;
+ u8 dbg_level;
+ u8 current_interval;
+ u8 last_conf_active_profiles;
+ rwlock_t list_lock;
+ struct list_head head;
+};
+
+struct cl_xmem {
+ u32 total_used;
+ u32 size;
+};
+
+void cl_sounding_init(struct cl_hw *cl_hw);
+void cl_sounding_close(struct cl_hw *cl_hw);
+void cl_sounding_send_request(struct cl_hw *cl_hw, struct cl_sta **cl_sta_arr,
+ u8 sta_num, bool enable, u8 sounding_type, u8 bw,
+ void *mu_grp,
+ u8 q_matrix_bitmap, struct cl_sounding_info *recovery_elem);
+void cl_sounding_switch_profile(struct cl_hw *cl_hw, u8 sta_idx_en, u8 sta_idx_dis);
+void cl_sounding_stop_by_sid(struct cl_hw *cl_hw, u8 sid, bool sounding_restart_check);
+void cl_sounding_maintenance(struct cl_hw *cl_hw);
+u16 cl_sounding_get_interval(struct cl_hw *cl_hw);
+void cl_sounding_recovery(struct cl_hw *cl_hw);
+struct cl_sounding_info *cl_sounding_get_elem(struct cl_hw *cl_hw, u8 sounding_id);
+void cl_sounding_indication(struct cl_hw *cl_hw, struct mm_sounding_ind *ind);
+
+#endif /* CL_SOUNDING_H */
--
2.36.1


2022-05-24 18:07:35

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 06/96] cl8k: add ampdu.h

From: Viktor Barna <[email protected]>

(Part of the split. Please, take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/ampdu.h | 39 ++++++++++++++++++++++++
1 file changed, 39 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/ampdu.h

diff --git a/drivers/net/wireless/celeno/cl8k/ampdu.h b/drivers/net/wireless/celeno/cl8k/ampdu.h
new file mode 100644
index 000000000000..62c3f60c8c86
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/ampdu.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause */
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#ifndef CL_AMPDU_H
+#define CL_AMPDU_H
+
+#include "sta.h"
+
+int cl_ampdu_rx_start(struct cl_hw *cl_hw,
+ struct cl_sta *cl_sta,
+ u16 tid,
+ u16 ssn,
+ u16 buf_size);
+void cl_ampdu_rx_stop(struct cl_hw *cl_hw,
+ struct cl_sta *cl_sta,
+ u16 tid);
+int cl_ampdu_tx_start(struct cl_hw *cl_hw,
+ struct ieee80211_vif *vif,
+ struct cl_sta *cl_sta,
+ u16 tid,
+ u16 ssn);
+int cl_ampdu_tx_operational(struct cl_hw *hw,
+ struct cl_sta *cl_sta,
+ u16 tid,
+ u16 buf_size,
+ bool amsdu_supported);
+void _cl_ampdu_tx_stop(struct cl_hw *cl_hw,
+ struct cl_tx_queue *tx_queue,
+ struct cl_sta *cl_sta,
+ u8 tid);
+int cl_ampdu_tx_stop(struct cl_hw *cl_hw,
+ struct ieee80211_vif *vif,
+ enum ieee80211_ampdu_mlme_action action,
+ struct cl_sta *cl_sta,
+ u16 tid);
+void cl_ampdu_size_exp(struct cl_hw *cl_hw, struct ieee80211_sta *sta,
+ u8 *ampdu_exp_he, u8 *ampdu_exp_vht, u8 *ampdu_exp_ht);
+
+#endif /* CL_AMPDU_H */
--
2.36.1


2022-05-24 18:34:58

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 96/96] wireless: add Celeno vendor

From: Viktor Barna <[email protected]>

(Part of the split. Please, take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/Kconfig | 1 +
drivers/net/wireless/Makefile | 1 +
2 files changed, 2 insertions(+)

diff --git a/drivers/net/wireless/Kconfig b/drivers/net/wireless/Kconfig
index 7add2002ff4c..444c81e3da06 100644
--- a/drivers/net/wireless/Kconfig
+++ b/drivers/net/wireless/Kconfig
@@ -35,6 +35,7 @@ source "drivers/net/wireless/st/Kconfig"
source "drivers/net/wireless/ti/Kconfig"
source "drivers/net/wireless/zydas/Kconfig"
source "drivers/net/wireless/quantenna/Kconfig"
+source "drivers/net/wireless/celeno/Kconfig"

config PCMCIA_RAYCS
tristate "Aviator/Raytheon 2.4GHz wireless support"
diff --git a/drivers/net/wireless/Makefile b/drivers/net/wireless/Makefile
index 80b324499786..3eb57351d0e5 100644
--- a/drivers/net/wireless/Makefile
+++ b/drivers/net/wireless/Makefile
@@ -20,6 +20,7 @@ obj-$(CONFIG_WLAN_VENDOR_ST) += st/
obj-$(CONFIG_WLAN_VENDOR_TI) += ti/
obj-$(CONFIG_WLAN_VENDOR_ZYDAS) += zydas/
obj-$(CONFIG_WLAN_VENDOR_QUANTENNA) += quantenna/
+obj-$(CONFIG_WLAN_VENDOR_CELENO) += celeno/

# 16-bit wireless PCMCIA client drivers
obj-$(CONFIG_PCMCIA_RAYCS) += ray_cs.o
--
2.36.1


2022-05-24 18:38:38

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 21/96] cl8k: add dfs.h

From: Viktor Barna <[email protected]>

(Part of the split. Please, take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/dfs.h | 146 +++++++++++++++++++++++++
1 file changed, 146 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/dfs.h

diff --git a/drivers/net/wireless/celeno/cl8k/dfs.h b/drivers/net/wireless/celeno/cl8k/dfs.h
new file mode 100644
index 000000000000..a252844f854b
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/dfs.h
@@ -0,0 +1,146 @@
+/* SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause */
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#ifndef CL_DFS_H
+#define CL_DFS_H
+
+#include <linux/types.h>
+#include <linux/spinlock.h>
+
+#include "debug.h"
+
+#define CL_DFS_MAX_TBL_LINE 11 /* Radar Table Max Line */
+#define CL_DFS_PULSE_BUF_SIZE 64 /* Radar Pulse buffer size */
+#define CL_DFS_PULSE_BUF_MASK 0x03F /* Radar Pulse buffer cyclic mask */
+#define CL_DFS_PULSE_WINDOW 100 /* Radar Pulse search window [ms] */
+#define CL_DFS_MIN_PULSE_TRIG 1 /* Minimum Pulse trigger num */
+#define CL_DFS_LONG_MIN_WIDTH 20 /* Min Long Pulse Width */
+#define CL_DFS_LONG_FALSE_WIDTH 10 /* Low-width signals indicate false detections */
+#define CL_DFS_LONG_FALSE_IND 6 /* False indication while searching for long sequence */
+#define CL_DFS_STAGGERED_CHEC_LEN 4 /* Staggered check length */
+#define CL_DFS_CONCEAL_CNT 10 /* Maximum concealed pulses search */
+#define CL_DFS_MIN_FREQ 5250 /* Min DFS frequency */
+#define CL_DFS_MAX_FREQ 5725 /* Max DFS frequency */
+
+enum cl_radar_waveform {
+ RADAR_WAVEFORM_SHORT,
+ RADAR_WAVEFORM_LONG,
+ RADAR_WAVEFORM_STAGGERED,
+ RADAR_WAVEFORM_SEVERE
+};
+
+struct cl_radar_type {
+ u8 id;
+ s32 min_width;
+ s32 max_width;
+ s32 tol_width;
+ s32 min_pri;
+ s32 max_pri;
+ s32 tol_pri;
+ s32 tol_freq;
+ u8 min_burst;
+ u8 ppb;
+ u8 trig_count;
+ enum cl_radar_waveform waveform;
+};
+
+/* Number of pulses in a radar event structure */
+#define RADAR_PULSE_MAX 4
+
+/*
+ * Structure used to store information regarding
+ * E2A radar events in the driver
+ */
+struct cl_radar_elem {
+ struct cl_radar_pulse_array *radarbuf_ptr;
+ dma_addr_t dma_addr;
+};
+
+/* Bit mapping inside a radar pulse element */
+struct cl_radar_pulse {
+ u64 freq : 8;
+ u64 fom : 8;
+ u64 len : 8; /* Pulse length timer */
+ u64 measure_cnt : 2; /* Measure count */
+ u64 rsv1 : 6; /* Reserve1 */
+ u64 rep : 16; /* PRI */
+ u64 rsv2 : 16; /* Reserve2 */
+};
+
+/* Definition of an array of radar pulses */
+struct cl_radar_pulse_array {
+ /* Buffer containing the radar pulses */
+ u64 pulse[RADAR_PULSE_MAX];
+ /* Number of valid pulses in the buffer */
+ u32 cnt;
+};
+
+struct cl_radar_queue_elem {
+ struct list_head list;
+ struct cl_hw *cl_hw;
+ struct cl_radar_elem radar_elem;
+ unsigned long time;
+};
+
+struct cl_radar_queue {
+ struct list_head head;
+ spinlock_t lock;
+};
+
+struct cl_dfs_pulse {
+ s32 freq : 8; /* Radar Frequency offset [units of 4MHz] */
+ u32 fom : 8; /* Figure of Merit */
+ u32 width : 8; /* Pulse Width [units of 2 micro sec] */
+ u32 occ : 1; /* OCC indication for Primary/Secondary channel */
+ u32 res1 : 7; /* Reserve */
+ u32 pri : 16; /* Pulse Repetition Frequency */
+ u32 res2 : 16;
+ unsigned long time; /* Pulse Receive Time */
+};
+
+struct cl_dfs_db {
+ bool en;
+ enum cl_dbg_level dbg_lvl;
+ enum nl80211_dfs_regions dfs_standard;
+ struct {
+ bool started;
+ bool requested;
+ } cac;
+ u8 long_pulse_count;
+ u32 last_long_pulse_ts;
+ u8 short_pulse_count;
+ u8 long_pri_match_count;
+ u8 min_pulse_eeq;
+ u8 buf_idx;
+ u8 radar_type_cnt;
+ u16 search_window;
+ u16 max_interrupt_diff;
+ u16 remain_cac_time;
+ u32 pulse_cnt;
+ struct cl_radar_type *radar_type;
+ struct cl_dfs_pulse dfs_pulse[CL_DFS_PULSE_BUF_SIZE];
+ struct cl_dfs_pulse pulse_buffer[CL_DFS_PULSE_BUF_SIZE];
+};
+
+void cl_dfs_init(struct cl_hw *cl_hw);
+void cl_dfs_reinit(struct cl_hw *cl_hw);
+void cl_dfs_start(struct cl_hw *cl_hw);
+void cl_dfs_recovery(struct cl_hw *cl_hw);
+bool cl_dfs_pulse_process(struct cl_hw *cl_hw, struct cl_radar_pulse *pulse, u8 pulse_cnt,
+ unsigned long time);
+bool __must_check cl_dfs_is_in_cac(struct cl_hw *cl_hw);
+bool __must_check cl_dfs_requested_cac(struct cl_hw *cl_hw);
+bool __must_check cl_dfs_radar_listening(struct cl_hw *cl_hw);
+void cl_dfs_request_cac(struct cl_hw *cl_hw, bool should_do);
+void cl_dfs_force_cac_start(struct cl_hw *cl_hw);
+void cl_dfs_force_cac_end(struct cl_hw *cl_hw);
+void cl_dfs_radar_listen_start(struct cl_hw *cl_hw);
+void cl_dfs_radar_listen_end(struct cl_hw *cl_hw);
+
+void cl_radar_init(struct cl_hw *cl_hw);
+void cl_radar_push(struct cl_hw *cl_hw, struct cl_radar_elem *radar_elem);
+void cl_radar_tasklet_schedule(struct cl_hw *cl_hw);
+void cl_radar_flush(struct cl_hw *cl_hw);
+void cl_radar_close(struct cl_hw *cl_hw);
+
+#endif /* CL_DFS_H */
--
2.36.1


2022-05-24 18:50:42

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 30/96] cl8k: add enhanced_tim.h

From: Viktor Barna <[email protected]>

(Part of the split. Please, take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
.../net/wireless/celeno/cl8k/enhanced_tim.h | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/enhanced_tim.h

diff --git a/drivers/net/wireless/celeno/cl8k/enhanced_tim.h b/drivers/net/wireless/celeno/cl8k/enhanced_tim.h
new file mode 100644
index 000000000000..ec40ac03df59
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/enhanced_tim.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause */
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#ifndef CL_ENHANCED_TIM_H
+#define CL_ENHANCED_TIM_H
+
+#define BCN_IE_TIM_BIT_OFFSET 2
+
+void cl_enhanced_tim_reset(struct cl_hw *cl_hw);
+void cl_enhanced_tim_clear_tx_agg(struct cl_hw *cl_hw, u32 ipc_queue_idx,
+ u8 ac, struct cl_sta *cl_sta, u8 tid);
+void cl_enhanced_tim_clear_tx_single(struct cl_hw *cl_hw, u32 ipc_queue_idx, u8 ac,
+ bool no_ps_buffer, struct cl_sta *cl_sta, u8 tid);
+void cl_enhanced_tim_set_tx_agg(struct cl_hw *cl_hw, u32 ipc_queue_idx, u8 ac,
+ bool no_ps_buffer, struct cl_sta *cl_sta, u8 tid);
+void cl_enhanced_tim_set_tx_single(struct cl_hw *cl_hw, u32 ipc_queue_idx, u8 ac,
+ bool no_ps_buffer, struct cl_sta *cl_sta, u8 tid);
+
+#endif /* CL_ENHANCED_TIM_H */
--
2.36.1


2022-05-24 19:02:06

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 37/96] cl8k: add key.h

From: Viktor Barna <[email protected]>

(Part of the split. Please, take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/key.h | 37 ++++++++++++++++++++++++++
1 file changed, 37 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/key.h

diff --git a/drivers/net/wireless/celeno/cl8k/key.h b/drivers/net/wireless/celeno/cl8k/key.h
new file mode 100644
index 000000000000..3731347f8243
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/key.h
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause */
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#ifndef CL_KEY_H
+#define CL_KEY_H
+
+#include "hw.h"
+#include "vif.h"
+
+enum cl_key_pn_valid_state {
+ CL_PN_VALID_STATE_SUCCESS,
+ CL_PN_VALID_STATE_FAILED,
+ CL_PN_VALID_STATE_NOT_NEEDED,
+
+ CL_PN_VALID_STATE_MAX
+};
+
+struct cl_key_conf {
+ struct list_head list;
+ struct ieee80211_key_conf *key_conf;
+};
+
+void cl_vif_key_init(struct cl_vif *cl_vif);
+void cl_vif_key_deinit(struct cl_vif *cl_vif);
+int cl_key_set(struct cl_hw *cl_hw,
+ enum set_key_cmd cmd,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ struct ieee80211_key_conf *key);
+struct ieee80211_key_conf *cl_key_get(struct cl_sta *cl_sta);
+bool cl_key_is_cipher_ccmp_gcmp(struct ieee80211_key_conf *keyconf);
+void cl_key_ccmp_gcmp_pn_to_hdr(u8 *hdr, u64 pn, int key_id);
+u8 cl_key_get_cipher_len(struct sk_buff *skb);
+int cl_key_handle_pn_validation(struct cl_hw *cl_hw, struct sk_buff *skb,
+ struct cl_sta *cl_sta);
+
+#endif /* CL_KEY_H */
--
2.36.1


2022-05-24 19:02:55

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 47/96] cl8k: add motion_sense.h

From: Viktor Barna <[email protected]>

(Part of the split. Please, take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
.../net/wireless/celeno/cl8k/motion_sense.h | 46 +++++++++++++++++++
1 file changed, 46 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/motion_sense.h

diff --git a/drivers/net/wireless/celeno/cl8k/motion_sense.h b/drivers/net/wireless/celeno/cl8k/motion_sense.h
new file mode 100644
index 000000000000..9ea63f561a92
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/motion_sense.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause */
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#ifndef CL_MOTION_SENSE_H
+#define CL_MOTION_SENSE_H
+
+#include <linux/types.h>
+
+#include "rx.h"
+
+#define MOTION_SENSE_SIZE 30
+
+enum cl_motion_state {
+ STATE_NULL,
+ STATE_MOVING,
+ STATE_STATIC
+};
+
+struct cl_motion_rssi {
+ s32 sum[MAX_ANTENNAS];
+ s32 cnt;
+ s8 history[MOTION_SENSE_SIZE];
+ u8 idx;
+ s8 max;
+ s8 min;
+ enum cl_motion_state state;
+};
+
+struct cl_motion_sense {
+ struct cl_motion_rssi rssi_mgmt_ctl;
+ struct cl_motion_rssi rssi_data;
+ struct cl_motion_rssi rssi_ba;
+ enum cl_motion_state combined_state;
+ enum cl_motion_state forced_state;
+};
+
+void cl_motion_sense_sta_add(struct cl_hw *cl_hw, struct cl_sta *cl_sta);
+void cl_motion_sense_rssi_mgmt_ctl(struct cl_hw *cl_hw, struct cl_sta *cl_sta,
+ struct hw_rxhdr *rxhdr);
+void cl_motion_sense_rssi_data(struct cl_hw *cl_hw, struct cl_sta *cl_sta,
+ struct hw_rxhdr *rxhdr);
+void cl_motion_sense_rssi_ba(struct cl_hw *cl_hw, struct cl_sta *cl_sta, s8 rssi[MAX_ANTENNAS]);
+void cl_motion_sense_maintenance(struct cl_hw *cl_hw);
+bool cl_motion_sense_is_static(struct cl_hw *cl_hw, struct cl_sta *cl_sta);
+
+#endif /* CL_MOTION_SENSE_H */
--
2.36.1


2022-05-24 19:07:00

by Jeff Johnson

[permalink] [raw]
Subject: Re: [RFC v2 07/96] cl8k: add bf.c

On 5/24/2022 4:33 AM, [email protected] wrote:
[snip]

> +void cl_bf_sounding_start(struct cl_hw *cl_hw, enum sounding_type type, struct cl_sta **cl_sta_arr,
> + u8 sta_num, struct cl_sounding_info *recovery_elem)
> +{
> +#define STA_INDICES_STR_SIZE 64
> +
> + /* Send request to start sounding */
> + u8 i, bw = CHNL_BW_MAX;
> + char sta_indices_str[STA_INDICES_STR_SIZE] = {0};
> + u8 str_len = 0;
> +
> + for (i = 0; i < sta_num; i++) {
> + struct cl_sta *cl_sta = cl_sta_arr[i];
> + struct cl_bf_sta_db *bf_db = &cl_sta->bf_db;
> +
> + bw = cl_sta->wrs_sta.assoc_bw;
> + bf_db->synced = false;
> + bf_db->sounding_start = true;
> + bf_db->sounding_indications = 0;
> +
> + str_len += snprintf(sta_indices_str, STA_INDICES_STR_SIZE - str_len, "%u%s",
> + cl_sta->sta_idx, (i == sta_num - 1 ? ", " : ""));

Note that this may not actually be safe from overflow due to the
semantics of the snprintf() return value: on truncation it returns the
length that would have been written, not the number of bytes actually
stored, so str_len can grow past the buffer size.

Using scnprintf(), which returns the number of characters actually
written, is preferred for this usage pattern.
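
For illustration only (a minimal sketch with a hypothetical helper name,
not taken from the patch), the usual scnprintf() accumulation pattern
looks like this:

/*
 * Illustrative only: build a comma-separated list of STA indices.
 * The running offset is advanced by the scnprintf() return value,
 * which is the number of characters actually written, so it can
 * never walk past the end of the buffer.
 */
static void cl_sta_indices_to_str(char *buf, size_t buf_size,
				  const u8 *sta_indices, u8 sta_num)
{
	size_t len = 0;
	u8 i;

	for (i = 0; i < sta_num; i++)
		len += scnprintf(buf + len, buf_size - len, "%u%s",
				 sta_indices[i],
				 (i < sta_num - 1) ? ", " : "");
}

The same accumulation pattern is already used elsewhere in the series,
e.g. cl_version_read() in version.c.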

2022-05-24 19:16:17

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 88/96] cl8k: add version.c

From: Viktor Barna <[email protected]>

(Part of the split. Please, take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/version.c | 147 +++++++++++++++++++++
1 file changed, 147 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/version.c

diff --git a/drivers/net/wireless/celeno/cl8k/version.c b/drivers/net/wireless/celeno/cl8k/version.c
new file mode 100644
index 000000000000..1965190a833a
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/version.c
@@ -0,0 +1,147 @@
+// SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#include "debug.h"
+#include "chip.h"
+#include "rfic.h"
+#include "debug.h"
+#include "version.h"
+
+static int cl_version_request(struct cl_hw *cl_hw)
+{
+ struct mm_version_cfm *cfm = NULL;
+ struct cl_version_db *vd = &cl_hw->version_db;
+ int ret = 0;
+
+ ret = cl_msg_tx_version(cl_hw);
+ if (ret)
+ return ret;
+
+ cfm = (struct mm_version_cfm *)cl_hw->msg_cfm_params[MM_VERSION_CFM];
+ if (!cfm)
+ return -ENOMSG;
+
+ vd->last_update = jiffies;
+ vd->dsp = le32_to_cpu(cfm->versions.dsp);
+ vd->rfic_sw = le32_to_cpu(cfm->versions.rfic_sw);
+ vd->rfic_hw = le32_to_cpu(cfm->versions.rfic_hw);
+ vd->agcram = le32_to_cpu(cfm->versions.agcram);
+
+ cl_hw->rf_crystal_mhz = cfm->rf_crystal_mhz;
+
+ strncpy(vd->fw, cfm->versions.fw, sizeof(vd->fw));
+ vd->fw[sizeof(vd->fw) - 1] = '\0';
+
+ strncpy(vd->drv, CONFIG_CL8K_VERSION, sizeof(vd->drv));
+ vd->drv[sizeof(vd->drv) - 1] = '\0';
+
+ cl_msg_tx_free_cfm_params(cl_hw, MM_VERSION_CFM);
+
+ return ret;
+}
+
+int cl_version_read(struct cl_hw *cl_hw, char *buf, ssize_t buf_size, ssize_t *total_len)
+{
+ struct cl_chip *chip = cl_hw->chip;
+ struct cl_version_db *vd = &cl_hw->version_db;
+ struct cl_agc_profile *agc_profile1 = &cl_hw->phy_data_info.data->agc_params.profile1;
+ struct cl_agc_profile *agc_profile2 = &cl_hw->phy_data_info.data->agc_params.profile2;
+ ssize_t len = 0;
+ int ret = 0;
+ u32 version_agcram = 0;
+ u32 major = 0;
+ u32 minor = 0;
+ u32 internal = 0;
+
+ /* Request data if existing is not actual */
+ if (!vd->last_update) {
+ ret = cl_version_request(cl_hw);
+ if (ret)
+ return ret;
+ }
+
+ /* PHY components specifics */
+ len += scnprintf(buf + len, buf_size - len, "DRV VERSION: %s\n", vd->drv);
+ len += scnprintf(buf + len, buf_size - len, "FW VERSION: %s\n", vd->fw);
+ len += scnprintf(buf + len, buf_size - len, "DSP VERSION: 0x%-.8X\n", vd->dsp);
+ len += scnprintf(buf + len, buf_size - len, "RFIC SW VERSION: %u\n", vd->rfic_sw);
+ len += scnprintf(buf + len, buf_size - len, "RFIC HW VERSION: 0x%X\n", vd->rfic_hw);
+
+ version_agcram = vd->agcram;
+ major = (version_agcram >> 16) & 0xffff;
+ minor = (version_agcram >> 8) & 0xff;
+ internal = version_agcram & 0xff;
+
+ len += scnprintf(buf + len, buf_size - len,
+ "AGC RAM VERSION: B.%x.%x.%x\n", major, minor, internal);
+
+ if (agc_profile1)
+ cl_agc_params_dump_profile_id(buf, buf_size, &len, agc_profile1->id,
+ "AGC PARAMS PROFILE:");
+ if (agc_profile2)
+ cl_agc_params_dump_profile_id(buf, buf_size, &len, agc_profile2->id,
+ "AGC PARAMS PROFILE (Elastic):");
+
+ len += scnprintf(buf + len, buf_size - len,
+ "TX POWER VERSION: %u\n", cl_hw->tx_power_version);
+
+ switch (chip->conf->ci_phy_dev) {
+ case PHY_DEV_OLYMPUS:
+ len += scnprintf(buf + len, buf_size - len, "RFIC TYPE: OLYMPUS\n");
+ break;
+ case PHY_DEV_ATHOS:
+ len += scnprintf(buf + len, buf_size - len, "RFIC TYPE: %s\n",
+ (cl_hw->chip->rfic_version == ATHOS_A_VER) ? "ATHOS" : "ATHOS B");
+ break;
+ case PHY_DEV_DUMMY:
+ len += scnprintf(buf + len, buf_size - len, "RFIC TYPE: DUMMY\n");
+ break;
+ case PHY_DEV_FRU:
+ len += scnprintf(buf + len, buf_size - len, "RFIC TYPE: FRU\n");
+ break;
+ case PHY_DEV_LOOPBACK:
+ len += scnprintf(buf + len, buf_size - len, "RFIC TYPE: LOOPBACK\n");
+ break;
+ }
+
+ len += scnprintf(buf + len, buf_size - len,
+ "RF CRYSTAL: %uMHz\n", cl_hw->rf_crystal_mhz);
+ len += scnprintf(buf + len, buf_size - len,
+ "CHIP ID: 0X%x\n", cl_chip_get_device_id(cl_hw->chip));
+ *total_len = len;
+
+ return 0;
+}
+
+int cl_version_update(struct cl_hw *cl_hw)
+{
+ char *buf = NULL;
+ ssize_t buf_size = PAGE_SIZE;
+ ssize_t len = 0;
+ int ret = 0;
+
+ buf = kzalloc(PAGE_SIZE, GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+
+ /* Force logic to update versions */
+ cl_hw->version_db.last_update = 0;
+
+ ret = cl_version_read(cl_hw, buf, buf_size, &len);
+
+ if (ret == 0) {
+ pr_debug("%s\n", buf);
+ /* Share version info */
+ cl_version_sync_wiphy(cl_hw, cl_hw->hw->wiphy);
+ }
+
+ kfree(buf);
+
+ return ret;
+}
+
+void cl_version_sync_wiphy(struct cl_hw *cl_hw, struct wiphy *wiphy)
+{
+ strncpy(wiphy->fw_version, cl_hw->version_db.fw, sizeof(wiphy->fw_version));
+}
+
--
2.36.1


2022-05-24 19:35:28

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 60/96] cl8k: add recovery.c

From: Viktor Barna <[email protected]>

(Part of the split. Please, take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/recovery.c | 280 ++++++++++++++++++++
1 file changed, 280 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/recovery.c

diff --git a/drivers/net/wireless/celeno/cl8k/recovery.c b/drivers/net/wireless/celeno/cl8k/recovery.c
new file mode 100644
index 000000000000..dc0c33be9200
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/recovery.c
@@ -0,0 +1,280 @@
+// SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#include "hw.h"
+#include "main.h"
+#include "phy.h"
+#include "vif.h"
+#include "dfs.h"
+#include "maintenance.h"
+#include "vns.h"
+#include "config.h"
+#include "ela.h"
+#include "radio.h"
+#include "recovery.h"
+
+struct cl_recovery_work {
+ struct work_struct ws;
+ struct cl_hw *cl_hw;
+ int reason;
+};
+
+#define RECOVERY_POLL_TIMEOUT 6
+
+static void cl_recovery_poll_completion(struct cl_hw *cl_hw)
+{
+ u8 cntr = 0;
+
+ while (test_bit(CL_DEV_SW_RESTART, &cl_hw->drv_flags)) {
+ msleep(1000);
+
+ if (++cntr == RECOVERY_POLL_TIMEOUT) {
+ cl_dbg_verbose(cl_hw, "\n");
+ cl_dbg_err(cl_hw, "Driver handgup was detected!...");
+ break;
+ }
+ }
+}
+
+static void cl_recovery_start_hw(struct cl_hw *cl_hw)
+{
+ clear_bit(CL_DEV_STOP_HW, &cl_hw->drv_flags);
+
+ /* Restart MAC firmware... */
+ if (cl_main_on(cl_hw)) {
+ cl_dbg_err(cl_hw, "Couldn't turn platform on .. aborting\n");
+ return;
+ }
+
+ if (cl_msg_tx_reset(cl_hw)) {
+ cl_dbg_err(cl_hw, "Failed to send firmware reset .. aborting\n");
+ return;
+ }
+
+ set_bit(CL_DEV_SW_RESTART, &cl_hw->drv_flags);
+ clear_bit(CL_DEV_HW_RESTART, &cl_hw->drv_flags);
+
+ /* Hand over to mac80211 from here */
+ ieee80211_restart_hw(cl_hw->hw);
+ /* Start firmware */
+ if (cl_msg_tx_start(cl_hw)) {
+ cl_dbg_err(cl_hw, "Failed to send firmware start .. aborting\n");
+ return;
+ }
+
+ cl_recovery_poll_completion(cl_hw);
+}
+
+static void cl_recovery_stop_hw(struct cl_hw *cl_hw)
+{
+ /* Start recovery process */
+ ieee80211_stop_queues(cl_hw->hw);
+ cl_hw->recovery_db.in_recovery = true;
+
+ clear_bit(CL_DEV_STARTED, &cl_hw->drv_flags);
+ set_bit(CL_DEV_HW_RESTART, &cl_hw->drv_flags);
+ set_bit(CL_DEV_STOP_HW, &cl_hw->drv_flags);
+ /* Disable interrupts */
+ cl_irq_disable(cl_hw, cl_hw->ipc_e2a_irq.all);
+ cl_maintenance_stop(cl_hw);
+
+ mutex_lock(&cl_hw->dbginfo.mutex);
+
+ cl_main_off(cl_hw);
+
+ cl_hw->fw_active = false;
+ cl_hw->fw_send_start = false;
+
+ mutex_unlock(&cl_hw->dbginfo.mutex);
+
+ /* Reset it so MM_SET_FILTER_REQ will be called during the recovery */
+ cl_hw->rx_filter = 0;
+
+ /*
+ * Reset channel/frequency parameters so that cl_msg_tx_set_channel()
+ * will not be skipped in cl_ops_config()
+ */
+ cl_hw->channel = 0;
+ cl_hw->primary_freq = 0;
+ cl_hw->center_freq = 0;
+}
+
+static void cl_recovery_process(struct cl_hw *cl_hw)
+{
+ int ret;
+ struct cl_chip *chip = cl_hw->chip;
+
+ mutex_lock(&chip->recovery_mutex);
+
+ cl_dbg_verbose(cl_hw, "Start\n");
+
+ cl_recovery_stop_hw(cl_hw);
+
+ if (chip->conf->ci_phy_dev != PHY_DEV_DUMMY) {
+ cl_phy_reset(cl_hw);
+
+ ret = cl_phy_load_recovery(cl_hw);
+ if (ret) {
+ cl_dbg_err(cl_hw, "cl_phy_load_recovery failed %d\n", ret);
+ goto out;
+ }
+ }
+
+ cl_recovery_start_hw(cl_hw);
+
+out:
+ mutex_unlock(&chip->recovery_mutex);
+}
+
+static void cl_recovery_handler(struct cl_hw *cl_hw, int reason)
+{
+ unsigned long recovery_diff = jiffies_to_msecs(jiffies - cl_hw->recovery_db.last_restart);
+
+ cl_hw->recovery_db.restart_cnt++;
+
+ if (recovery_diff > cl_hw->conf->ce_fw_watchdog_limit_time) {
+ cl_hw->recovery_db.restart_cnt = 1;
+ } else if (cl_hw->recovery_db.restart_cnt > cl_hw->conf->ce_fw_watchdog_limit_count) {
+ cl_dbg_verbose(cl_hw, "Too many failures... aborting\n");
+ cl_hw->conf->ce_fw_watchdog_mode = FW_WD_DISABLE;
+ return;
+ }
+
+ cl_hw->recovery_db.last_restart = jiffies;
+
+ /* Count recovery attempts for statistics */
+ cl_hw->fw_recovery_cntr++;
+ cl_dbg_trace(cl_hw, "Recovering from firmware failure, attempt #%i\n",
+ cl_hw->fw_recovery_cntr);
+
+ cl_recovery_process(cl_hw);
+}
+
+static void cl_recovery_work_do(struct work_struct *ws)
+{
+ /* Worker for restarting hw. */
+ struct cl_recovery_work *recovery_work = container_of(ws, struct cl_recovery_work, ws);
+
+ recovery_work->cl_hw->assert_info.restart_sched = false;
+ cl_recovery_handler(recovery_work->cl_hw, recovery_work->reason);
+ kfree(recovery_work);
+}
+
+static void cl_recovery_work_sched(struct cl_hw *cl_hw, int reason)
+{
+ /*
+ * Schedule work to restart device and firmware
+ * This is scheduled when driver detects hw assert storm.
+ */
+ struct cl_recovery_work *recovery_work;
+
+ if (!cl_hw->ipc_env || cl_hw->is_stop_context) {
+ cl_dbg_warn(cl_hw, "Skip recovery - Running down!\n");
+ return;
+ }
+
+ /* If restart is already scheduled - exit */
+ if (cl_hw->assert_info.restart_sched)
+ return;
+
+ cl_hw->assert_info.restart_sched = true;
+
+ /* Recovery_work will be freed by cl_recovery_work_do */
+ recovery_work = kzalloc(sizeof(*recovery_work), GFP_ATOMIC);
+
+ if (!recovery_work)
+ return;
+
+ INIT_WORK(&recovery_work->ws, cl_recovery_work_do);
+ recovery_work->cl_hw = cl_hw;
+ recovery_work->reason = reason;
+
+ queue_work(cl_hw->drv_workqueue, &recovery_work->ws);
+}
+
+bool cl_recovery_in_progress(struct cl_hw *cl_hw)
+{
+ return cl_hw->recovery_db.in_recovery;
+}
+
+void cl_recovery_reconfig_complete(struct cl_hw *cl_hw)
+{
+ clear_bit(CL_DEV_SW_RESTART, &cl_hw->drv_flags);
+
+ if (cl_ela_is_on(cl_hw->chip)) {
+ cl_ela_lcu_reset(cl_hw->chip);
+ cl_ela_lcu_apply_config(cl_hw->chip);
+ }
+
+#ifdef CONFIG_CL8K_DYN_MCAST_RATE
+ cl_dyn_mcast_rate_recovery(cl_hw);
+
+#endif /* CONFIG_CL8K_DYN_MCAST_RATE */
+#ifdef CONFIG_CL8K_DYN_BCAST_RATE
+ cl_dyn_bcast_rate_recovery(cl_hw);
+
+#endif /* CONFIG_CL8K_DYN_BCAST_RATE */
+ /* DFS recovery */
+ cl_dfs_recovery(cl_hw);
+
+ /* VNS recovery */
+ cl_vns_recovery(cl_hw);
+
+ /* Restore EDCA configuration */
+ cl_edca_recovery(cl_hw);
+
+ /* Temperature recovery */
+ cl_temperature_recovery(cl_hw);
+
+ /* Sounding recovery */
+ cl_sounding_recovery(cl_hw);
+
+ /*
+ * Update Tx params for all connected stations to sync firmware after the
+ * recovery process. Should be called after cl_mu_ofdma_grp_recovery to let
+ * MU-OFDMA rates in FW be updated successfully
+ */
+ cl_wrs_api_recovery(cl_hw);
+
+ /* Enable maintenance timers back */
+ cl_maintenance_start(cl_hw);
+ if (cl_radio_is_on(cl_hw)) {
+ /*
+ * Rearm last_tbtt_ind so that error message will
+ * not be printed in cl_irq_status_tbtt()
+ */
+ cl_hw->last_tbtt_irq = jiffies;
+
+ cl_msg_tx_set_idle(cl_hw, MAC_ACTIVE, true);
+ }
+
+ cl_hw->recovery_db.in_recovery = false;
+
+ pr_debug("cl_recovery: complete\n");
+
+ cl_rx_post_recovery(cl_hw);
+}
+
+void cl_recovery_start(struct cl_hw *cl_hw, int reason)
+{
+ /* Prevent new messages to be sent until firmware has recovered */
+ set_bit(CL_DEV_FW_ERROR, &cl_hw->drv_flags);
+
+ switch (cl_hw->conf->ce_fw_watchdog_mode) {
+ case FW_WD_DISABLE:
+ cl_dbg_info(cl_hw, "Skip recovery - Watchdog is off!\n");
+ break;
+
+ case FW_WD_INTERNAL_RECOVERY:
+ cl_recovery_work_sched(cl_hw, reason);
+ break;
+
+ case FW_WD_DRV_RELOAD:
+ /* TODO: Implement netlink hint to the userspace */
+ cl_dbg_info(cl_hw, "RELOAD handler is absent, doing nothing");
+ break;
+
+ default:
+ break;
+ }
+}
--
2.36.1


2022-05-24 19:36:13

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 90/96] cl8k: add vif.c

From: Viktor Barna <[email protected]>

(Part of the split. Please, take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/vif.c | 162 +++++++++++++++++++++++++
1 file changed, 162 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/vif.c

diff --git a/drivers/net/wireless/celeno/cl8k/vif.c b/drivers/net/wireless/celeno/cl8k/vif.c
new file mode 100644
index 000000000000..7592f0d32e7a
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/vif.c
@@ -0,0 +1,162 @@
+// SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#include <linux/list.h>
+
+#include "hw.h"
+#include "mac_addr.h"
+#include "vif.h"
+
+void cl_vif_init(struct cl_hw *cl_hw)
+{
+ rwlock_init(&cl_hw->vif_db.lock);
+ INIT_LIST_HEAD(&cl_hw->vif_db.head);
+}
+
+void cl_vif_add(struct cl_hw *cl_hw, struct cl_vif *cl_vif)
+{
+ struct cl_vif_db *vif_db = &cl_hw->vif_db;
+
+ write_lock_bh(&vif_db->lock);
+ list_add_tail(&cl_vif->list, &vif_db->head);
+
+ if (cl_vif->vif->type != NL80211_IFTYPE_STATION)
+ vif_db->num_iface_bcn++;
+
+ /* Multicast vif set */
+ cl_hw->mc_vif = cl_vif;
+
+ write_unlock_bh(&vif_db->lock);
+}
+
+void cl_vif_remove(struct cl_hw *cl_hw, struct cl_vif *cl_vif)
+{
+ struct cl_vif_db *vif_db = &cl_hw->vif_db;
+
+ write_lock_bh(&vif_db->lock);
+ /* Multicast vif unset */
+ if (cl_hw->mc_vif == cl_vif)
+ cl_hw->mc_vif = cl_vif_get_next(cl_hw, cl_hw->mc_vif);
+
+ list_del(&cl_vif->list);
+
+ if (cl_vif->vif->type != NL80211_IFTYPE_STATION)
+ vif_db->num_iface_bcn--;
+ write_unlock_bh(&vif_db->lock);
+
+ cl_bcmc_cfm_poll_empty_per_vif(cl_hw, cl_vif);
+}
+
+struct cl_vif *cl_vif_get_next(struct cl_hw *cl_hw, struct cl_vif *cl_vif)
+{
+ if (list_is_last(&cl_vif->list, &cl_hw->vif_db.head))
+ return list_first_entry_or_null(&cl_hw->vif_db.head,
+ struct cl_vif, list);
+ else
+ return list_next_entry(cl_vif, list);
+}
+
+struct cl_vif *cl_vif_get_by_dev(struct cl_hw *cl_hw, struct net_device *dev)
+{
+ struct cl_vif *cl_vif = NULL, *cl_vif_ret = NULL;
+
+ read_lock_bh(&cl_hw->vif_db.lock);
+ list_for_each_entry(cl_vif, &cl_hw->vif_db.head, list)
+ if (cl_vif->dev == dev) {
+ cl_vif_ret = cl_vif;
+ goto unlock;
+ }
+
+unlock:
+ read_unlock_bh(&cl_hw->vif_db.lock);
+ return cl_vif_ret;
+}
+
+struct cl_vif *cl_vif_get_by_mac(struct cl_hw *cl_hw, u8 *mac_addr)
+{
+ struct cl_vif *cl_vif, *cl_vif_ret = NULL;
+
+ read_lock(&cl_hw->vif_db.lock);
+ list_for_each_entry(cl_vif, &cl_hw->vif_db.head, list)
+ if (cl_mac_addr_compare(cl_vif->vif->addr, mac_addr)) {
+ cl_vif_ret = cl_vif;
+ goto unlock;
+ }
+
+unlock:
+ read_unlock(&cl_hw->vif_db.lock);
+ return cl_vif_ret;
+}
+
+struct cl_vif *cl_vif_get_first(struct cl_hw *cl_hw)
+{
+ return list_first_entry_or_null(&cl_hw->vif_db.head, struct cl_vif, list);
+}
+
+struct cl_vif *cl_vif_get_first_ap(struct cl_hw *cl_hw)
+{
+ struct cl_vif *cl_vif, *cl_vif_ret = NULL;
+
+ read_lock_bh(&cl_hw->vif_db.lock);
+ list_for_each_entry(cl_vif, &cl_hw->vif_db.head, list)
+ if (cl_vif->vif->type == NL80211_IFTYPE_AP ||
+ cl_vif->vif->type == NL80211_IFTYPE_MESH_POINT ||
+ cl_hw->conf->ce_listener_en) {
+ cl_vif_ret = cl_vif;
+ goto unlock;
+ }
+
+unlock:
+ read_unlock_bh(&cl_hw->vif_db.lock);
+ return cl_vif_ret;
+}
+
+struct net_device *cl_vif_get_first_net_device(struct cl_hw *cl_hw)
+{
+ struct cl_vif *cl_vif = NULL;
+ struct net_device *dev = NULL;
+
+ read_lock_bh(&cl_hw->vif_db.lock);
+ cl_vif = list_first_entry_or_null(&cl_hw->vif_db.head, struct cl_vif, list);
+ if (cl_vif)
+ dev = cl_vif->dev;
+ read_unlock_bh(&cl_hw->vif_db.lock);
+
+ return dev;
+}
+
+struct net_device *cl_vif_get_dev_by_index(struct cl_hw *cl_hw, u8 index)
+{
+ struct cl_vif *cl_vif = NULL;
+ struct net_device *dev = NULL;
+
+ read_lock_bh(&cl_hw->vif_db.lock);
+ list_for_each_entry(cl_vif, &cl_hw->vif_db.head, list)
+ if (cl_vif->vif_index == index) {
+ dev = cl_vif->dev;
+ goto unlock;
+ }
+
+unlock:
+ read_unlock_bh(&cl_hw->vif_db.lock);
+ return dev;
+}
+
+void cl_vif_ap_tx_enable(struct cl_hw *cl_hw, bool enable)
+{
+ struct cl_vif *cl_vif;
+ struct ieee80211_vif *vif;
+
+ read_lock_bh(&cl_hw->vif_db.lock);
+ list_for_each_entry(cl_vif, &cl_hw->vif_db.head, list) {
+ vif = cl_vif->vif;
+
+ if (vif->type != NL80211_IFTYPE_AP)
+ continue;
+
+ cl_vif->tx_en = enable;
+ cl_dbg_verbose(cl_hw, "Set tx_en=%u for vif_index=%u\n",
+ enable, cl_vif->vif_index);
+ }
+ read_unlock_bh(&cl_hw->vif_db.lock);
+}
--
2.36.1


2022-05-24 19:53:17

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 39/96] cl8k: add mac80211.h

From: Viktor Barna <[email protected]>

(Part of the split. Please, take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/mac80211.h | 197 ++++++++++++++++++++
1 file changed, 197 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/mac80211.h

diff --git a/drivers/net/wireless/celeno/cl8k/mac80211.h b/drivers/net/wireless/celeno/cl8k/mac80211.h
new file mode 100644
index 000000000000..f76c1a0ad820
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/mac80211.h
@@ -0,0 +1,197 @@
+/* SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause */
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#ifndef CL_MAC80211_H
+#define CL_MAC80211_H
+
+#include <linux/types.h>
+#include <linux/if_ether.h>
+#include <net/mac80211.h>
+
+#define PPE_0US 0
+#define PPE_8US 1
+#define PPE_16US 2
+
+/*
+ * Extended Channel Switching capability to be set in the 1st byte of
+ * the @WLAN_EID_EXT_CAPABILITY information element
+ */
+#define WLAN_EXT_CAPA1_2040_BSS_COEX_MGMT_ENABLED BIT(0)
+
+/* WLAN_EID_BSS_COEX_2040 = 72 */
+/* 802.11n 7.3.2.61 */
+struct ieee80211_bss_coex_20_40_ie {
+ u8 element_id;
+ u8 len;
+ u8 info_req : 1;
+ /* Inter-BSS: set to 1 to prohibit a receiving BSS from operating as a 20/40 MHz BSS */
+ u8 intolerant40 : 1;
+ /* Intra-BSS: set to 1 to prohibit a receiving AP from operating its BSS as a 20/40 MHz BSS */
+ u8 bss20_width_req : 1;
+ u8 obss_scan_exemp_req : 1;
+ u8 obss_scan_exemp_grant : 1;
+ u8 rsv : 3;
+} __packed;
+
+/* WLAN_EID_BSS_INTOLERANT_CHL_REPORT = 73 */
+/*802.11n 7.3.2.59 */
+struct ieee80211_bss_intolerant_chl_report_ie {
+ u8 element_id;
+ u8 len;
+ u8 regulatory_class;
+ u8 ch_list[0];
+} __packed;
+
+/* Union options that are not included in 'struct ieee80211_mgmt' */
+struct cl_ieee80211_mgmt {
+ __le16 frame_control;
+ __le16 duration;
+ u8 da[ETH_ALEN];
+ u8 sa[ETH_ALEN];
+ u8 bssid[ETH_ALEN];
+ __le16 seq_ctrl;
+ union {
+ struct {
+ __le16 auth_alg;
+ __le16 auth_transaction;
+ __le16 status_code;
+ /* Possibly followed by Challenge text */
+ u8 variable[0];
+ } __packed auth;
+ struct {
+ __le16 reason_code;
+ } __packed deauth;
+ struct {
+ __le16 capab_info;
+ __le16 listen_interval;
+ /* Followed by SSID and Supported rates */
+ u8 variable[0];
+ } __packed assoc_req;
+ struct {
+ __le16 capab_info;
+ __le16 status_code;
+ __le16 aid;
+ /* Followed by Supported rates */
+ u8 variable[0];
+ } __packed assoc_resp, reassoc_resp;
+ struct {
+ __le16 capab_info;
+ __le16 listen_interval;
+ u8 current_ap[ETH_ALEN];
+ /* Followed by SSID and Supported rates */
+ u8 variable[0];
+ } __packed reassoc_req;
+ struct {
+ __le16 reason_code;
+ } __packed disassoc;
+ struct {
+ __le64 timestamp;
+ __le16 beacon_int;
+ __le16 capab_info;
+ /*
+ * Followed by some of SSID, Supported rates,
+ * FH Params, DS Params, CF Params, IBSS Params, TIM
+ */
+ u8 variable[0];
+ } __packed beacon;
+ struct {
+ /* Only variable items: SSID, Supported rates */
+ u8 variable[0];
+ } __packed probe_req;
+ struct {
+ __le64 timestamp;
+ __le16 beacon_int;
+ __le16 capab_info;
+ /*
+ * Followed by some of SSID, Supported rates,
+ * FH Params, DS Params, CF Params, IBSS Params
+ */
+ u8 variable[0];
+ } __packed probe_resp;
+ struct {
+ u8 category;
+ union {
+ struct {
+ u8 action_code;
+ struct ieee80211_bss_coex_20_40_ie bss_coex_20_40_ie;
+ /*
+ * This IE May appear zero or more times,
+ * that situation wasn't handled here.
+ */
+ struct ieee80211_bss_intolerant_chl_report_ie
+ bss_intolerant_chl_report_ie;
+ } __packed coex_2040_mgmt;
+ } u;
+ } __packed action;
+ } u;
+} __packed __aligned(2);
+
+void cl_cap_dyn_params(struct cl_hw *cl_hw);
+void cl_cap_ppe_duration(struct cl_hw *cl_hw, struct ieee80211_sta *sta,
+ u8 pe_dur[CHNL_BW_MAX][WRS_MCS_MAX_HE]);
+u16 cl_cap_set_mesh_basic_rates(struct cl_hw *cl_hw);
+void cl_ops_tx(struct ieee80211_hw *hw, struct ieee80211_tx_control *control, struct sk_buff *skb);
+void cl_ops_rx_finish(struct ieee80211_hw *hw, struct sk_buff *skb, struct ieee80211_sta *sta);
+int cl_ops_start(struct ieee80211_hw *hw);
+void cl_ops_stop(struct ieee80211_hw *hw);
+int cl_ops_add_interface(struct ieee80211_hw *hw, struct ieee80211_vif *vif);
+void cl_ops_remove_interface(struct ieee80211_hw *hw, struct ieee80211_vif *vif);
+int cl_ops_config(struct ieee80211_hw *hw, u32 changed);
+void cl_ops_bss_info_changed(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *info,
+ u32 changed);
+int cl_ops_start_ap(struct ieee80211_hw *hw, struct ieee80211_vif *vif);
+void cl_ops_stop_ap(struct ieee80211_hw *hw, struct ieee80211_vif *vif);
+u64 cl_ops_prepare_multicast(struct ieee80211_hw *hw, struct netdev_hw_addr_list *mc_list);
+void cl_ops_configure_filter(struct ieee80211_hw *hw, u32 changed_flags,
+ u32 *total_flags, u64 multicast);
+int cl_ops_set_key(struct ieee80211_hw *hw,
+ enum set_key_cmd cmd,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ struct ieee80211_key_conf *key);
+void cl_ops_sw_scan_start(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ const u8 *mac_addr);
+void cl_ops_sw_scan_complete(struct ieee80211_hw *hw, struct ieee80211_vif *vif);
+int cl_ops_sta_state(struct ieee80211_hw *hw, struct ieee80211_vif *vif, struct ieee80211_sta *sta,
+ enum ieee80211_sta_state old_state, enum ieee80211_sta_state new_state);
+void cl_ops_sta_notify(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ enum sta_notify_cmd cmd, struct ieee80211_sta *sta);
+int cl_ops_conf_tx(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ u16 ac_queue,
+ const struct ieee80211_tx_queue_params *params);
+void cl_ops_sta_rc_update(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ u32 changed);
+int cl_ops_ampdu_action(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_ampdu_params *params);
+int cl_ops_post_channel_switch(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif);
+void cl_ops_flush(struct ieee80211_hw *hw, struct ieee80211_vif *vif, u32 queues, bool drop);
+bool cl_ops_tx_frames_pending(struct ieee80211_hw *hw);
+void cl_ops_reconfig_complete(struct ieee80211_hw *hw,
+ enum ieee80211_reconfig_type reconfig_type);
+int cl_ops_get_txpower(struct ieee80211_hw *hw, struct ieee80211_vif *vif, int *dbm);
+int cl_ops_set_rts_threshold(struct ieee80211_hw *hw, u32 value);
+void cl_ops_event_callback(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ const struct ieee80211_event *event);
+int cl_ops_set_tim(struct ieee80211_hw *hw, struct ieee80211_sta *sta, bool set);
+int cl_ops_get_antenna(struct ieee80211_hw *hw, u32 *tx_ant, u32 *rx_ant);
+u32 cl_ops_get_expected_throughput(struct ieee80211_hw *hw, struct ieee80211_sta *sta);
+void cl_ops_sta_statistics(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ struct station_info *sinfo);
+int cl_ops_set_bitrate_mask(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ const struct cfg80211_bitrate_mask *mask);
+int cl_ops_get_survey(struct ieee80211_hw *hw, int idx, struct survey_info *survey);
+int cl_ops_hw_scan(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ struct ieee80211_scan_request *hw_req);
+void cl_ops_cancel_hw_scan(struct ieee80211_hw *hw, struct ieee80211_vif *vif);
+
+#endif /* CL_MAC80211_H */
--
2.36.1


2022-05-24 20:10:47

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 54/96] cl8k: add power.c

From: Viktor Barna <[email protected]>

(Part of the split. Please, take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/power.c | 1123 ++++++++++++++++++++++
1 file changed, 1123 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/power.c

diff --git a/drivers/net/wireless/celeno/cl8k/power.c b/drivers/net/wireless/celeno/cl8k/power.c
new file mode 100644
index 000000000000..ef62c4b7a332
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/power.c
@@ -0,0 +1,1123 @@
+// SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#include <linux/string.h>
+
+#include "reg/reg_access.h"
+#include "channel.h"
+#include "debug.h"
+#include "utils.h"
+#include "e2p.h"
+#include "power.h"
+
+static u8 cl_power_table_read(struct cl_hw *cl_hw)
+{
+ u8 pwr_table_id = 0;
+
+ if (cl_e2p_read(cl_hw->chip, &pwr_table_id, 1, ADDR_GEN_PWR_TABLE_ID + cl_hw->tcv_idx))
+ return U8_MAX;
+
+ return pwr_table_id;
+}
+
+static int cl_power_table_fill(struct cl_hw *cl_hw)
+{
+ u8 pwr_table_id = cl_power_table_read(cl_hw);
+ u8 platform_idx = cl_hw->chip->platform.idx;
+ struct cl_platform_table *table = NULL;
+
+ table = cl_platform_get_active_table(cl_hw->chip, platform_idx);
+ if (!table)
+ return cl_hw->chip->conf->ce_production_mode ? 0 : -1;
+
+ switch (pwr_table_id) {
+ case 0:
+ if (cl_band_is_5g(cl_hw)) {
+ memcpy(cl_hw->power_table_info.data->conv_table,
+ table->power_conv_table_5,
+ NUM_POWER_WORDS);
+ cl_hw->tx_power_version = 5;
+ } else if (IS_REAL_PHY(cl_hw->chip)) {
+ CL_DBG_ERROR(cl_hw, "Power table ID (%u) is valid for 5g only\n",
+ pwr_table_id);
+
+ if (!cl_hw_is_prod_or_listener(cl_hw))
+ return -EINVAL;
+ }
+ break;
+ case 1:
+ if (cl_band_is_24g(cl_hw)) {
+ memcpy(cl_hw->power_table_info.data->conv_table,
+ table->power_conv_table_2,
+ NUM_POWER_WORDS);
+ cl_hw->tx_power_version = 25;
+ } else if (IS_REAL_PHY(cl_hw->chip)) {
+ CL_DBG_ERROR(cl_hw, "Power table ID (%u) is valid for 2.4g only\n",
+ pwr_table_id);
+
+ if (!cl_hw_is_prod_or_listener(cl_hw))
+ return -1;
+ }
+ break;
+ case 2:
+ if (cl_band_is_6g(cl_hw)) {
+ memcpy(cl_hw->power_table_info.data->conv_table,
+ table->power_conv_table_6,
+ NUM_POWER_WORDS);
+ cl_hw->tx_power_version = 1;
+ } else if (IS_REAL_PHY(cl_hw->chip)) {
+ CL_DBG_ERROR(cl_hw, "Power table ID (%u) is valid for 6g only\n",
+ pwr_table_id);
+
+ if (!cl_hw_is_prod_or_listener(cl_hw))
+ return -1;
+ }
+ break;
+ default:
+ if (IS_REAL_PHY(cl_hw->chip)) {
+ CL_DBG_ERROR(cl_hw, "Power table ID is not configured in EEPROM\n");
+
+ if (!cl_hw_is_prod_or_listener(cl_hw))
+ return -1;
+ }
+ }
+
+ cl_dbg_verbose(cl_hw, "Power table ID %u (V%u)\n", pwr_table_id, cl_hw->tx_power_version);
+
+ return 0;
+}
+
+int cl_power_table_alloc(struct cl_hw *cl_hw)
+{
+ struct cl_power_table_data *buf = NULL;
+ u32 len = sizeof(struct cl_power_table_data);
+ dma_addr_t phys_dma_addr;
+
+ buf = dma_alloc_coherent(cl_hw->chip->dev, len, &phys_dma_addr, GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+
+ cl_hw->power_table_info.data = buf;
+ cl_hw->power_table_info.dma_addr = phys_dma_addr;
+
+ return cl_power_table_fill(cl_hw);
+}
+
+void cl_power_table_free(struct cl_hw *cl_hw)
+{
+ struct cl_power_table_info *power_table_info = &cl_hw->power_table_info;
+ u32 len = sizeof(struct cl_power_table_data);
+ dma_addr_t phys_dma_addr = power_table_info->dma_addr;
+
+ if (!power_table_info->data)
+ return;
+
+ dma_free_coherent(cl_hw->chip->dev, len, (void *)power_table_info->data, phys_dma_addr);
+ power_table_info->data = NULL;
+}
+
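+/*
+ * Convert a decimal string to a Q8 fixed-point value, e.g. "3.25"
+ * becomes (3.25 * 256) = 832 and "6" becomes (6 << 8) = 1536.
+ */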
+static s32 convert_str_int_q8(s8 *str)
+{
+ s32 x, y;
+
+ if (!str)
+ return 0;
+ if (sscanf(str, "%d.%2d", &x, &y) == 0)
+ return 0;
+ if (!strstr(str, "."))
+ return x << 8;
+ if (y < 10 && (*(strstr(str, ".") + 1) != '0'))
+ y *= 10;
+ return ((x * 100 + y) << 8) / 100;
+}
+
+u8 cl_power_tx_ant(struct cl_hw *cl_hw, enum cl_wrs_mode mode)
+{
+ if (mode == WRS_MODE_CCK)
+ return hweight8(cl_hw->conf->ce_cck_tx_ant_mask);
+
+ if (mode <= WRS_MODE_VHT)
+ return min_t(u8, cl_hw->num_antennas, MAX_ANTENNAS_OFDM_HT_VHT);
+
+ return cl_hw->num_antennas;
+}
+
+s32 cl_power_antenna_gain_q8(struct cl_hw *cl_hw)
+{
+ u8 channel = cl_hw->channel;
+
+ if (channel >= 36 && channel <= 64)
+ return convert_str_int_q8(cl_hw->conf->ce_ant_gain_36_64);
+ else if (channel >= 100 && channel <= 140)
+ return convert_str_int_q8(cl_hw->conf->ce_ant_gain_100_140);
+ else if (channel >= 149 && channel < 165)
+ return convert_str_int_q8(cl_hw->conf->ce_ant_gain_149_165);
+ else
+ return convert_str_int_q8(cl_hw->conf->ce_ant_gain); /* 2.4g and 6g */
+}
+
+s32 cl_power_antenna_gain_q1(struct cl_hw *cl_hw)
+{
+ return cl_power_antenna_gain_q8(cl_hw) >> 7;
+}
+
+s32 cl_power_array_gain_q8(struct cl_hw *cl_hw, u8 tx_ant)
+{
+ /*
+ * Format in NVRAM of ce_arr_gain=A,B,C,D,E,F
+ * A is the array gain with 1 tx_ant, B is with 2 tx_ant and so on...
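+ * e.g. (hypothetical values) ce_arr_gain=0,1.5,3 gives 0 for 1 tx_ant,
+ * 1.5 for 2 tx_ant and 3 for 3 tx_ant.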
+ */
+ int arr_gain_val = 0;
+ int arr_gain_len = 0;
+ int idx = 0;
+ char *arr_gain_cpy = NULL;
+ char *arr_gain_cpy_p = NULL;
+ char *arr_gain_str = NULL;
+
+ arr_gain_len = strlen(cl_hw->conf->ce_arr_gain) + 1;
+ arr_gain_cpy_p = kzalloc(arr_gain_len, GFP_ATOMIC);
+ arr_gain_cpy = arr_gain_cpy_p;
+
+ if (!arr_gain_cpy)
+ return 0;
+
+ /* Copy cl_hw->conf->ce_arr_gain so its value won't be changed by strsep() */
+ memcpy(arr_gain_cpy, cl_hw->conf->ce_arr_gain, arr_gain_len);
+
+ /* Arr_gain_str points to the array gain of 1 tx_ant */
+ arr_gain_str = strsep(&arr_gain_cpy, ",");
+
+ /* Only a single value in ce_arr_gain - same value will be applied for all tx_ant */
+ if (!arr_gain_cpy) {
+ arr_gain_val = convert_str_int_q8(arr_gain_str);
+ } else {
+ /* Keep iterating until getting to the correct ant idx */
+ for (idx = 1; arr_gain_str && (idx < tx_ant); idx++)
+ arr_gain_str = strsep(&arr_gain_cpy, ",");
+
+ arr_gain_val = arr_gain_str ? convert_str_int_q8(arr_gain_str) : 0;
+ }
+
+ kfree(arr_gain_cpy_p);
+
+ return arr_gain_val;
+}
+
+s8 cl_power_array_gain_q2(struct cl_hw *cl_hw, u8 tx_ant)
+{
+ return (s8)(cl_power_array_gain_q8(cl_hw, tx_ant) >> 6);
+}
+
+s32 cl_power_array_gain_q1(struct cl_hw *cl_hw, u8 tx_ant)
+{
+ return cl_power_array_gain_q8(cl_hw, tx_ant) >> 7;
+}
+
+static s32 cl_power_bf_gain_q8(struct cl_hw *cl_hw, u8 tx_ant, u8 nss)
+{
+ /*
+ * Format in NVRAM of ce_bf_gain=A,B,C,D
+ * A is the bf gain with 1 NSS, B is with 2 NSS and so on...
+ */
+ int bf_gain_val = 0;
+ int bf_gain_len = 0;
+ int idx = 0;
+ char *bf_gain_cpy = NULL;
+ char *bf_gain_cpy_p = NULL;
+ char *bf_gain_str = NULL;
+ s8 *bf_gain_ptr = NULL;
+
+ if (tx_ant == 6) {
+ bf_gain_ptr = cl_hw->conf->ce_bf_gain_6_ant;
+ } else if (tx_ant == 5) {
+ bf_gain_ptr = cl_hw->conf->ce_bf_gain_5_ant;
+ } else if (tx_ant == 4) {
+ bf_gain_ptr = cl_hw->conf->ce_bf_gain_4_ant;
+ } else if (tx_ant == 3) {
+ bf_gain_ptr = cl_hw->conf->ce_bf_gain_3_ant;
+ } else if (tx_ant == 2) {
+ bf_gain_ptr = cl_hw->conf->ce_bf_gain_2_ant;
+ } else if (tx_ant == 1) {
+ goto out;
+ } else {
+ pr_err("[%s]: invalid tx_ant %u\n", __func__, tx_ant);
+ goto out;
+ }
+
+ bf_gain_len = strlen(bf_gain_ptr) + 1;
+ bf_gain_cpy_p = kzalloc(bf_gain_len, GFP_ATOMIC);
+ bf_gain_cpy = bf_gain_cpy_p;
+
+ if (!bf_gain_cpy)
+ return 0;
+
+ /* Copy cl_hw->conf->ce_bf_gain_*_ant so its value won't be changed by strsep() */
+ memcpy(bf_gain_cpy, bf_gain_ptr, bf_gain_len);
+
+ /* Bf_gain_str points to the bf gain of 1 SS */
+ bf_gain_str = strsep(&bf_gain_cpy, ",");
+
+ /* Keep iterating until getting to the correct ss index */
+ for (idx = 0; bf_gain_str && (idx < nss); idx++)
+ bf_gain_str = strsep(&bf_gain_cpy, ",");
+
+ bf_gain_val = bf_gain_str ? convert_str_int_q8(bf_gain_str) : 0;
+
+ kfree(bf_gain_cpy_p);
+ out:
+ return bf_gain_val;
+}
+
+s32 cl_power_bf_gain_q1(struct cl_hw *cl_hw, u8 tx_ant, u8 nss)
+{
+ return cl_power_bf_gain_q8(cl_hw, tx_ant, nss) >> 7;
+}
+
+static s32 cl_power_min_ant_q8(struct cl_hw *cl_hw)
+{
+ return convert_str_int_q8(cl_hw->conf->ci_min_ant_pwr);
+}
+
+s32 cl_power_min_ant_q1(struct cl_hw *cl_hw)
+{
+ return cl_power_min_ant_q8(cl_hw) >> 7;
+}
+
+s8 cl_power_bw_factor_q2(struct cl_hw *cl_hw, u8 bw)
+{
+ /*
+ * Format in NVRAM of ci_bw_factor=A,B,C,D
+ * A is the bw factor for bw 20MHz, B is for 40MHz and so on..
+ */
+ int bw_factor_val = 0;
+ int bw_factor_len = 0;
+ int idx = 0;
+ char *bw_factor_cpy = NULL;
+ char *bw_factor_cpy_p = NULL;
+ char *bw_factor_str = NULL;
+
+ bw_factor_len = strlen(cl_hw->conf->ci_bw_factor) + 1;
+ bw_factor_cpy_p = kzalloc(bw_factor_len, GFP_ATOMIC);
+ bw_factor_cpy = bw_factor_cpy_p;
+
+ if (!bw_factor_cpy)
+ return 0;
+
+ /* Copy cl_hw->conf->ci_bw_factor so its value won't be changed by strsep() */
+ memcpy(bw_factor_cpy, cl_hw->conf->ci_bw_factor, bw_factor_len);
+
+ /* Bw_factor_str points to the bw factor of 20MHz */
+ bw_factor_str = strsep(&bw_factor_cpy, ",");
+
+ /* Only a single value in ci_bw_factor - same value will be applied for all bandwidths */
+ if (!bw_factor_cpy) {
+ bw_factor_val = convert_str_int_q8(bw_factor_str);
+ } else {
+ /* Keep iterating until getting to the correct bw index */
+ for (idx = 0; bw_factor_str && (idx < bw); idx++)
+ bw_factor_str = strsep(&bw_factor_cpy, ",");
+
+ bw_factor_val = bw_factor_str ? convert_str_int_q8(bw_factor_str) : 0;
+ }
+
+ kfree(bw_factor_cpy_p);
+
+ return (s8)(bw_factor_val >> 6);
+}
+
+static s32 cl_power_average_calib_q8(struct cl_hw *cl_hw, u8 ant_num)
+{
+ u8 ant = 0, ant_cnt = 0;
+ u8 chan_idx = cl_channel_to_index(cl_hw, cl_hw->channel);
+ s32 total_calib_pow = 0;
+
+ if (chan_idx == INVALID_CHAN_IDX)
+ return 0;
+
+ for (ant = 0; ant < MAX_ANTENNAS && ant_cnt < ant_num; ant++) {
+ if (!(cl_hw->mask_num_antennas & BIT(ant)))
+ continue;
+
+ total_calib_pow += cl_hw->tx_pow_info[chan_idx][ant].power;
+ ant_cnt++;
+ }
+
+ return ((total_calib_pow << 8) / ant_num);
+}
+
+s32 cl_power_average_calib_q1(struct cl_hw *cl_hw, u8 ant_num)
+{
+ return cl_power_average_calib_q8(cl_hw, ant_num) >> 7;
+}
+
+static s32 cl_power_total_q8(struct cl_hw *cl_hw, s8 pwr_offset_q1, u8 tx_ant, u8 nss,
+ enum cl_wrs_mode mode, bool is_auto_resp)
+{
+ s32 bf_gain_q8 = 0;
+ s32 antenna_gain_q8 = cl_power_antenna_gain_q8(cl_hw);
+ s32 array_gain_q8 = cl_power_array_gain_q8(cl_hw, tx_ant);
+ s32 pwr_offset_q8 = (s32)pwr_offset_q1 << 7;
+ s32 calib_power_q8 = cl_power_average_calib_q8(cl_hw, tx_ant);
+ s32 total_power_q8 = 0;
+
+ if (!is_auto_resp)
+ bf_gain_q8 = (mode > WRS_MODE_OFDM) ? cl_power_bf_gain_q8(cl_hw, tx_ant, nss) : 0;
+
+ total_power_q8 = calib_power_q8 + bf_gain_q8 + array_gain_q8 +
+ antenna_gain_q8 + pwr_offset_q8;
+
+ /* FCC calculation */
+ if (cl_hw->channel_info.standard == NL80211_DFS_FCC)
+ total_power_q8 -= min(bf_gain_q8 + antenna_gain_q8, 6 << 8);
+
+ return total_power_q8;
+}
+
+static s32 cl_power_eirp_delta_q1(struct cl_hw *cl_hw, u8 bw, s8 pwr_offset_q1, u8 tx_ant,
+ u8 nss, enum cl_wrs_mode mode, bool is_auto_resp)
+{
+ /* Calculate total TX power */
+ s32 total_power_q8 = cl_power_total_q8(cl_hw, pwr_offset_q1, tx_ant, nss,
+ mode, is_auto_resp);
+
+ /* EIRP power limit */
+ s32 eirp_power_limit_q8 = cl_chan_info_get_eirp_limit_q8(cl_hw, bw);
+
+ /* Delta between total TX power and EIRP limit */
+ return (total_power_q8 - eirp_power_limit_q8) >> 7;
+}
+
+static s8 cl_power_calc_q1(struct cl_hw *cl_hw, s8 mcs_offset_q1, u8 bw, u8 nss,
+ enum cl_wrs_mode mode, bool is_auto_resp, u8 *trunc_pwr_q1)
+{
+ /* Result is in 0.5dBm resolution */
+ u8 tx_ant = cl_power_tx_ant(cl_hw, mode);
+ s32 calib_power_q1 = cl_power_average_calib_q1(cl_hw, tx_ant);
+ s32 res_q1 = calib_power_q1 + mcs_offset_q1;
+ s32 min_pwr_q1 = POWER_MIN_DB_Q1;
+ u32 trunc_pwr_val_q1 = 0;
+ bool eirp_regulatory_en = cl_hw->chip->conf->ce_production_mode ?
+ cl_hw->conf->ce_eirp_regulatory_prod_en : cl_hw->conf->ce_eirp_regulatory_op_en;
+
+ if (cl_hw->channel_info.use_channel_info && eirp_regulatory_en) {
+ s32 delta_power_q1 = cl_power_eirp_delta_q1(cl_hw, bw, mcs_offset_q1,
+ tx_ant, nss, mode, is_auto_resp);
+
+ if (delta_power_q1 > 0) {
+ /*
+ * If tx power is greater than the limitation
+ * subtract delta power from the result
+ */
+ res_q1 -= delta_power_q1;
+ trunc_pwr_val_q1 = delta_power_q1;
+ }
+ }
+
+ if (is_auto_resp)
+ min_pwr_q1 += cl_power_min_ant_q1(cl_hw);
+
+ if (res_q1 < min_pwr_q1) {
+ trunc_pwr_val_q1 = max((s32)trunc_pwr_val_q1 - (min_pwr_q1 - res_q1), 0);
+ res_q1 = min_pwr_q1;
+ }
+
+ if (is_auto_resp)
+ res_q1 += cl_power_array_gain_q1(cl_hw, tx_ant);
+
+ if (trunc_pwr_q1)
+ *trunc_pwr_q1 = (u8)trunc_pwr_val_q1;
+
+ return (s8)res_q1;
+}
+
+static s8 cl_power_offset_he(struct cl_hw *cl_hw, u8 bw, u8 mcs)
+{
+ u8 channel = cl_hw->channel;
+ s8 *ppmcs = NULL;
+
+ switch (cl_hw->conf->ci_band_num) {
+ case BAND_5G:
+ if (channel >= 36 && channel <= 64)
+ ppmcs = cl_hw->conf->ce_ppmcs_offset_he_36_64;
+ else if (channel >= 100 && channel <= 140)
+ ppmcs = cl_hw->conf->ce_ppmcs_offset_he_100_140;
+ else
+ ppmcs = cl_hw->conf->ce_ppmcs_offset_he_149_165;
+ break;
+ case BAND_24G:
+ ppmcs = cl_hw->conf->ce_ppmcs_offset_he;
+ break;
+ case BAND_6G:
+ ppmcs = cl_hw->conf->ce_ppmcs_offset_he_6g;
+ break;
+ default:
+ return 0;
+ }
+
+ return ppmcs[mcs] + cl_hw->conf->ce_ppbw_offset[bw];
+}
+
+static s8 cl_power_offset_ht_vht(struct cl_hw *cl_hw, u8 bw, u8 mcs)
+{
+ u8 channel = cl_hw->channel;
+ s8 *ppmcs = NULL;
+
+ switch (cl_hw->conf->ci_band_num) {
+ case BAND_5G:
+ if (channel >= 36 && channel <= 64)
+ ppmcs = cl_hw->conf->ce_ppmcs_offset_ht_vht_36_64;
+ else if (channel >= 100 && channel <= 140)
+ ppmcs = cl_hw->conf->ce_ppmcs_offset_ht_vht_100_140;
+ else
+ ppmcs = cl_hw->conf->ce_ppmcs_offset_ht_vht_149_165;
+ break;
+ case BAND_24G:
+ ppmcs = cl_hw->conf->ce_ppmcs_offset_ht;
+ break;
+ case BAND_6G:
+ default:
+ return 0;
+ }
+
+ return ppmcs[mcs] + cl_hw->conf->ce_ppbw_offset[bw];
+}
+
+static s8 cl_power_offset_ofdm(struct cl_hw *cl_hw, u8 mcs)
+{
+ u8 channel = cl_hw->channel;
+ s8 *ppmcs = NULL;
+
+ switch (cl_hw->conf->ci_band_num) {
+ case BAND_5G:
+ if (channel >= 36 && channel <= 64)
+ ppmcs = cl_hw->conf->ce_ppmcs_offset_ofdm_36_64;
+ else if (channel >= 100 && channel <= 140)
+ ppmcs = cl_hw->conf->ce_ppmcs_offset_ofdm_100_140;
+ else
+ ppmcs = cl_hw->conf->ce_ppmcs_offset_ofdm_149_165;
+ break;
+ case BAND_24G:
+ ppmcs = cl_hw->conf->ce_ppmcs_offset_ofdm;
+ break;
+ case BAND_6G:
+ default:
+ return 0;
+ }
+
+ return ppmcs[mcs] + cl_hw->conf->ce_ppbw_offset[CHNL_BW_20];
+}
+
+static s8 cl_power_offset_cck(struct cl_hw *cl_hw, u8 mcs)
+{
+ s8 *ppmcs = cl_hw->conf->ce_ppmcs_offset_cck;
+
+ if (cl_band_is_24g(cl_hw))
+ return ppmcs[mcs] + cl_hw->conf->ce_ppbw_offset[CHNL_BW_20];
+
+ return 0;
+}
+
+s8 cl_power_offset_q1(struct cl_hw *cl_hw, u8 mode, u8 bw, u8 mcs)
+{
+ if (mode == WRS_MODE_HE)
+ return cl_power_offset_he(cl_hw, bw, mcs);
+ else if (mode == WRS_MODE_HT || mode == WRS_MODE_VHT)
+ return cl_power_offset_ht_vht(cl_hw, bw, mcs);
+ else if (mode == WRS_MODE_OFDM)
+ return cl_power_offset_ofdm(cl_hw, mcs);
+ else if (mode == WRS_MODE_CCK)
+ return cl_power_offset_cck(cl_hw, mcs);
+
+ return 0;
+}
+
+#define UPPER_POWER_MARGIN_Q2 (38 << 2)
+#define LOWER_POWER_MARGIN_Q2 (50 << 2)
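+/* Both margins are in 0.25 dB (Q2) steps, i.e. 38 dB and 50 dB respectively */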
+
+s8 cl_power_offset_check_margin(struct cl_hw *cl_hw, u8 bw, u8 ant_idx, s8 offset_q2)
+{
+ s8 new_offset_q2 = 0;
+ s8 bw_factor_q2 = cl_hw->power_db.bw_factor_q2[bw];
+ s8 ant_factor_q2 = cl_hw->power_db.ant_factor_q2[ant_idx];
+ s8 total_offset_upper_q2 = bw_factor_q2 + offset_q2;
+ s8 total_offset_lower_q2 = bw_factor_q2 + ant_factor_q2 + offset_q2;
+ bool upper_limit_valid = (total_offset_upper_q2 <= UPPER_POWER_MARGIN_Q2);
+ bool lower_limit_valid = (total_offset_lower_q2 <= LOWER_POWER_MARGIN_Q2);
+
+ if (upper_limit_valid && lower_limit_valid) {
+ return offset_q2;
+ } else if (!upper_limit_valid && lower_limit_valid) {
+ new_offset_q2 = UPPER_POWER_MARGIN_Q2 - bw_factor_q2;
+
+ return new_offset_q2;
+ } else if (upper_limit_valid && !lower_limit_valid) {
+ new_offset_q2 = LOWER_POWER_MARGIN_Q2 - bw_factor_q2 - ant_factor_q2;
+
+ return new_offset_q2;
+ }
+
+ new_offset_q2 = min(UPPER_POWER_MARGIN_Q2 - bw_factor_q2,
+ LOWER_POWER_MARGIN_Q2 - bw_factor_q2 - ant_factor_q2);
+
+ return new_offset_q2;
+}
+
+static s32 cl_power_calc_total_from_eirp_q1(struct cl_hw *cl_hw, s32 tx_power, u8 nss,
+ enum cl_wrs_mode mode, u8 *trunc_pwr_q1)
+{
+ s32 pwr_q1, total_pwr_q1, delta_pwr_q1 = 0;
+ u8 tx_ant;
+ s32 antenna_gain_q1;
+ s32 array_gain_q1;
+ s32 bf_gain_q1;
+ bool eirp_regulatory_en = cl_hw->chip->conf->ce_production_mode ?
+ cl_hw->conf->ce_eirp_regulatory_prod_en : cl_hw->conf->ce_eirp_regulatory_op_en;
+
+ pwr_q1 = tx_power << 1;
+
+ tx_ant = cl_power_tx_ant(cl_hw, mode);
+ array_gain_q1 = cl_power_array_gain_q1(cl_hw, tx_ant);
+ antenna_gain_q1 = cl_power_antenna_gain_q1(cl_hw);
+ /* bf gain is not used for CCK or OFDM */
+ bf_gain_q1 = (mode > WRS_MODE_OFDM) ? cl_power_bf_gain_q1(cl_hw, tx_ant, nss) : 0;
+
+ /* FCC calculation */
+ if (cl_hw->channel_info.standard == NL80211_DFS_FCC)
+ pwr_q1 -= min(bf_gain_q1 + antenna_gain_q1, 6 << 1);
+
+ if (cl_hw->channel_info.use_channel_info && eirp_regulatory_en) {
+ s32 eirp_pwr_limit_q1;
+
+ eirp_pwr_limit_q1 = cl_chan_info_get_eirp_limit_q8(cl_hw, 0) >> 7;
+ if (pwr_q1 > eirp_pwr_limit_q1) {
+ delta_pwr_q1 = pwr_q1 - eirp_pwr_limit_q1;
+ pwr_q1 = eirp_pwr_limit_q1;
+ }
+ }
+
+ total_pwr_q1 = pwr_q1 - antenna_gain_q1 - array_gain_q1 - bf_gain_q1;
+ if (total_pwr_q1 < POWER_MIN_DB_Q1) {
+ delta_pwr_q1 = max(delta_pwr_q1 - (POWER_MIN_DB_Q1 - total_pwr_q1), 0);
+ total_pwr_q1 = POWER_MIN_DB_Q1;
+ }
+
+ if (trunc_pwr_q1)
+ *trunc_pwr_q1 = (u8)delta_pwr_q1;
+
+ return total_pwr_q1;
+}
+
+static s32 cl_power_calc_auto_resp_from_eirp_q1(struct cl_hw *cl_hw, s32 tx_power, u8 nss,
+ enum cl_wrs_mode mode)
+{
+ s32 auto_resp_total_pwr_q1, auto_resp_min_pwr_q1;
+ u8 tx_ant;
+ s32 array_gain_q1;
+ s32 total_pwr_q1;
+
+ auto_resp_min_pwr_q1 = POWER_MIN_DB_Q1 + cl_power_min_ant_q1(cl_hw);
+ tx_ant = cl_power_tx_ant(cl_hw, mode);
+ array_gain_q1 = cl_power_array_gain_q1(cl_hw, tx_ant);
+ total_pwr_q1 = cl_power_calc_total_from_eirp_q1(cl_hw, tx_power, nss, mode, NULL);
+
+ auto_resp_total_pwr_q1 = array_gain_q1 + total_pwr_q1;
+ if (auto_resp_total_pwr_q1 < auto_resp_min_pwr_q1)
+ auto_resp_total_pwr_q1 = auto_resp_min_pwr_q1;
+
+ return auto_resp_total_pwr_q1;
+}
+
+static s8 cl_calc_ant_pwr_q1(struct cl_hw *cl_hw, u8 bw, u8 nss, u8 mcs,
+ enum cl_wrs_mode mode, u8 *trunc_val)
+{
+ s32 eirp_pwr = 0;
+ s8 ant_pwr_q1;
+
+ eirp_pwr = cl_hw->new_tx_power;
+ if (eirp_pwr) {
+ ant_pwr_q1 = cl_power_calc_total_from_eirp_q1(cl_hw, eirp_pwr, nss,
+ mode, trunc_val);
+ } else {
+ s8 pwr_offset_q1;
+
+ pwr_offset_q1 = cl_power_offset_q1(cl_hw, mode, bw, mcs);
+ ant_pwr_q1 = cl_power_calc_q1(cl_hw, pwr_offset_q1, bw, nss,
+ mode, false, trunc_val);
+ }
+ return ant_pwr_q1;
+}
+
+static s8 cl_calc_auto_resp_pwr_q1(struct cl_hw *cl_hw, u8 bw, u8 nss, u8 mcs,
+ enum cl_wrs_mode mode)
+{
+ s32 eirp_pwr = 0;
+ s8 auto_resp_pwr_q1;
+
+ eirp_pwr = cl_hw->new_tx_power;
+ if (eirp_pwr) {
+ auto_resp_pwr_q1 = cl_power_calc_auto_resp_from_eirp_q1(cl_hw, eirp_pwr,
+ nss, mode);
+ } else {
+ s8 pwr_offset_q1;
+
+ pwr_offset_q1 = cl_power_offset_q1(cl_hw, mode, bw, mcs);
+ auto_resp_pwr_q1 = cl_power_calc_q1(cl_hw, pwr_offset_q1, bw, nss,
+ mode, true, NULL);
+ }
+ return auto_resp_pwr_q1;
+}
+
+static void cl_power_tables_update_cck(struct cl_hw *cl_hw,
+ struct cl_pwr_tables *pwr_tables)
+{
+ u8 mcs;
+ u8 trunc_value = 0;
+
+ /* CCK - Enforce EIRP limitations */
+ for (mcs = 0; mcs < WRS_MCS_MAX_CCK; mcs++) {
+ pwr_tables->ant_pwr_cck[mcs] = cl_calc_ant_pwr_q1(cl_hw, 0, 0, mcs, WRS_MODE_CCK,
+ &trunc_value);
+
+ cl_hw->pwr_trunc.cck[mcs] = trunc_value;
+
+ /* Auto response */
+ pwr_tables->pwr_auto_resp_cck[mcs] = cl_calc_auto_resp_pwr_q1(cl_hw, 0, 0, mcs,
+ WRS_MODE_CCK);
+ }
+}
+
+static void cl_power_tables_update_ofdm(struct cl_hw *cl_hw,
+ struct cl_pwr_tables *pwr_tables)
+{
+ u8 mcs;
+ u8 trunc_value = 0;
+
+ /* OFDM - Enforce EIRP limitations */
+ for (mcs = 0; mcs < WRS_MCS_MAX_OFDM; mcs++) {
+ pwr_tables->ant_pwr_ofdm[mcs] = cl_calc_ant_pwr_q1(cl_hw, 0, 0, mcs, WRS_MODE_OFDM,
+ &trunc_value);
+ cl_hw->pwr_trunc.ofdm[mcs] = trunc_value;
+
+ /* Auto response */
+ pwr_tables->pwr_auto_resp_ofdm[mcs] = cl_calc_auto_resp_pwr_q1(cl_hw, 0, 0, mcs,
+ WRS_MODE_OFDM);
+ }
+}
+
+static u8 cl_power_tables_update_ht_vht(struct cl_hw *cl_hw,
+ struct cl_pwr_tables *pwr_tables)
+{
+ bool is_24g = cl_band_is_24g(cl_hw);
+ bool is_5g = cl_band_is_5g(cl_hw);
+ u8 bw;
+ u8 nss;
+ u8 mcs;
+ u8 trunc_value = 0;
+ u8 min_bw_idx_limit_vht = 0;
+ u8 max_mcs_ht_vht = (is_5g || (is_24g && cl_hw->conf->ci_vht_cap_24g)) ?
+ WRS_MCS_MAX_VHT : WRS_MCS_MAX_HT;
+ s32 min_bw_limit = 0;
+ s32 eirp_power_limit_q8;
+
+ for (bw = 0, min_bw_limit = 0xFFFF; bw < cl_max_bw_idx(WRS_MODE_VHT, is_24g); bw++) {
+ if (!cl_hw_is_prod_or_listener(cl_hw) &&
+ !cl_chan_info_get(cl_hw, cl_hw->channel, bw))
+ continue;
+
+ /* Find lowest EIRP power limitation among all bw for auto resp calculations */
+ eirp_power_limit_q8 = cl_chan_info_get_eirp_limit_q8(cl_hw, bw);
+ if (eirp_power_limit_q8 < min_bw_limit) {
+ min_bw_limit = eirp_power_limit_q8;
+ min_bw_idx_limit_vht = bw;
+ }
+
+ /* HT/VHT - Enforce EIRP limitations */
+ for (mcs = 0; mcs < max_mcs_ht_vht; mcs++) {
+ for (nss = 0; nss < PWR_TBL_VHT_BF_SIZE; nss++) {
+ pwr_tables->ant_pwr_ht_vht[bw][mcs][nss] =
+ cl_calc_ant_pwr_q1(cl_hw, bw, nss, mcs, WRS_MODE_VHT,
+ &trunc_value);
+ cl_hw->pwr_trunc.ht_vht[bw][mcs][nss] = trunc_value;
+ }
+ }
+ }
+
+ /* Auto resp HT/VHT - Enforce EIRP limitations */
+ for (mcs = 0; mcs < max_mcs_ht_vht; mcs++)
+ pwr_tables->pwr_auto_resp_ht_vht[mcs] =
+ cl_calc_auto_resp_pwr_q1(cl_hw, min_bw_idx_limit_vht, 0, mcs,
+ WRS_MODE_VHT);
+
+ return min_bw_idx_limit_vht;
+}
+
+static u8 cl_power_tables_update_he(struct cl_hw *cl_hw,
+ struct cl_pwr_tables *pwr_tables)
+{
+ bool is_24g = cl_band_is_24g(cl_hw);
+ u8 bw;
+ u8 nss;
+ u8 mcs;
+ u8 trunc_value = 0;
+ u8 min_bw_idx_limit_he = 0;
+ s32 min_bw_limit = 0;
+ s32 eirp_power_limit_q8;
+
+ for (bw = 0, min_bw_limit = 0xFFFF; bw < cl_max_bw_idx(WRS_MODE_HE, is_24g); bw++) {
+ if (!cl_hw_is_prod_or_listener(cl_hw) &&
+ !cl_chan_info_get(cl_hw, cl_hw->channel, bw))
+ continue;
+
+ /* Find lowest EIRP power limitation among all bw for auto resp calculations */
+ eirp_power_limit_q8 = cl_chan_info_get_eirp_limit_q8(cl_hw, bw);
+ if (eirp_power_limit_q8 < min_bw_limit) {
+ min_bw_limit = eirp_power_limit_q8;
+ min_bw_idx_limit_he = bw;
+ }
+
+ /* HE - Enforce EIRP limitations */
+ for (mcs = 0; mcs < WRS_MCS_MAX_HE; mcs++) {
+ for (nss = 0; nss < PWR_TBL_HE_BF_SIZE; nss++) {
+ pwr_tables->ant_pwr_he[bw][mcs][nss] =
+ cl_calc_ant_pwr_q1(cl_hw, bw, nss, mcs, WRS_MODE_HE,
+ &trunc_value);
+ cl_hw->pwr_trunc.he[bw][mcs][nss] = trunc_value;
+ }
+ }
+ }
+
+ /* Auto resp HE - Enforce EIRP limitations */
+ for (mcs = 0; mcs < WRS_MCS_MAX_HE; mcs++)
+ pwr_tables->pwr_auto_resp_he[mcs] =
+ cl_calc_auto_resp_pwr_q1(cl_hw, min_bw_idx_limit_he, 0, mcs, WRS_MODE_HE);
+
+ return min_bw_idx_limit_he;
+}
+
+static u8 cl_power_calc_max(struct cl_hw *cl_hw, u8 bw, enum cl_wrs_mode mode)
+{
+ u8 tx_ant = cl_power_tx_ant(cl_hw, mode);
+ /* Total TX power - pass is_auto_resp = true in order to ignore bf gain */
+ s32 total_power_q8 = cl_power_total_q8(cl_hw, 0, tx_ant, 0, mode, true);
+ /* EIRP power limit */
+ s32 eirp_power_limit_q8 = cl_chan_info_get_eirp_limit_q8(cl_hw, bw);
+
+ return (min(total_power_q8, eirp_power_limit_q8) >> 8);
+}
+
+static s8 cl_power_vns_calc_q1(struct cl_hw *cl_hw, u8 bw,
+ enum cl_wrs_mode mode, bool is_auto_resp)
+{
+ u8 max_tx_pwr = cl_power_calc_max(cl_hw, bw, mode);
+ u8 tx_ant = cl_power_tx_ant(cl_hw, mode);
+ s32 vns_pwr_limit_q8 = min_t(u8, cl_hw->conf->ci_vns_pwr_limit, max_tx_pwr) << 8;
+ s32 antenna_gain_q8 = cl_power_antenna_gain_q8(cl_hw);
+ s32 array_gain_q8 = (is_auto_resp ? 0 : cl_power_array_gain_q8(cl_hw, tx_ant));
+ s32 min_ant_pwr_q8 = cl_power_min_ant_q8(cl_hw);
+ s32 min_pwr_q8 = is_auto_resp ? (POWER_MIN_DB_Q8 + min_ant_pwr_q8) : POWER_MIN_DB_Q8;
+ s32 res_q8 = vns_pwr_limit_q8 - antenna_gain_q8 - array_gain_q8;
+
+ if (res_q8 < min_pwr_q8)
+ res_q8 = min_pwr_q8;
+
+ /* Result should be in 0.5dBm resolution */
+ return (s8)(res_q8 >> 7);
+}
+
+static void cl_power_tables_update_vns(struct cl_hw *cl_hw,
+ struct cl_pwr_tables *pwr_tables,
+ u8 min_bw_idx_limit_vht,
+ u8 min_bw_idx_limit_he)
+{
+ /* VNS */
+ pwr_tables->ant_pwr_vns_he =
+ cl_power_vns_calc_q1(cl_hw, min_bw_idx_limit_he, WRS_MODE_HE, false);
+ pwr_tables->ant_pwr_vns_ht_vht =
+ cl_power_vns_calc_q1(cl_hw, min_bw_idx_limit_vht, WRS_MODE_VHT, false);
+ pwr_tables->ant_pwr_vns_ofdm =
+ cl_power_vns_calc_q1(cl_hw, 0, WRS_MODE_OFDM, false);
+ pwr_tables->ant_pwr_vns_cck =
+ cl_power_vns_calc_q1(cl_hw, 0, WRS_MODE_CCK, false);
+
+ /* Auto response VNS */
+ pwr_tables->pwr_auto_resp_vns_he =
+ cl_power_vns_calc_q1(cl_hw, min_bw_idx_limit_he, WRS_MODE_HE, true);
+ pwr_tables->pwr_auto_resp_vns_ht_vht =
+ cl_power_vns_calc_q1(cl_hw, min_bw_idx_limit_vht, WRS_MODE_VHT, true);
+ pwr_tables->pwr_auto_resp_vns_ofdm =
+ cl_power_vns_calc_q1(cl_hw, 0, WRS_MODE_OFDM, true);
+ pwr_tables->pwr_auto_resp_vns_cck =
+ cl_power_vns_calc_q1(cl_hw, 0, WRS_MODE_CCK, true);
+}
+
+static void cl_power_tables_update_by_offset(struct cl_hw *cl_hw,
+ struct cl_pwr_tables *pwr_tables,
+ s8 offset)
+{
+ u8 mcs = 0;
+ u8 bw = 0;
+ u8 nss = 0;
+
+ /* CCK - Enforce EIRP limitations */
+ for (mcs = 0; mcs < WRS_MCS_MAX_CCK; mcs++) {
+ pwr_tables->ant_pwr_cck[mcs] += offset;
+
+ /* Auto response */
+ pwr_tables->pwr_auto_resp_cck[mcs] += offset;
+ }
+
+ /* OFDM - Enforce EIRP limitations */
+ for (mcs = 0; mcs < WRS_MCS_MAX_OFDM; mcs++) {
+ pwr_tables->ant_pwr_ofdm[mcs] += offset;
+
+ /* Auto response */
+ pwr_tables->pwr_auto_resp_ofdm[mcs] += offset;
+ }
+
+ for (bw = 0; bw < CHNL_BW_MAX; bw++) {
+ /* HT/VHT - Enforce EIRP limitations */
+ for (mcs = 0; mcs < WRS_MCS_MAX_VHT; mcs++) {
+ for (nss = 0; nss < PWR_TBL_VHT_BF_SIZE; nss++)
+ pwr_tables->ant_pwr_ht_vht[bw][mcs][nss] += offset;
+
+ /*
+ * Auto response:
+ * always with disabled BF so the offset of the last nss is used
+ */
+ pwr_tables->pwr_auto_resp_ht_vht[mcs] += offset;
+ }
+
+ /* HE - Enforce EIRP limitations */
+ for (mcs = 0; mcs < WRS_MCS_MAX_HE; mcs++) {
+ for (nss = 0; nss < PWR_TBL_HE_BF_SIZE; nss++)
+ pwr_tables->ant_pwr_he[bw][mcs][nss] += offset;
+
+ /*
+ * Auto response:
+ * always with disabled BF so the offset of the last nss is used
+ */
+ pwr_tables->pwr_auto_resp_he[mcs] += offset;
+ }
+ }
+}
+
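+/*
+ * The thresholds roughly follow 10 * log10(percentage / 100), returned in
+ * 0.5 dBm steps (e.g. 50% -> -6 -> -3 dBm, 10% -> -20 -> -10 dBm).
+ */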
+static s8 cl_power_get_offset(u16 percentage)
+{
+ if (percentage >= 94)
+ return 0;
+ else if (percentage >= 84)
+ return -1; /* -0.5dBm */
+ else if (percentage >= 75)
+ return -2; /* -1dBm */
+ else if (percentage >= 67)
+ return -3; /* -1.5dBm */
+ else if (percentage >= 59)
+ return -4; /* -2dBm */
+ else if (percentage >= 54)
+ return -5; /* -2.5dBm */
+ else if (percentage >= 48)
+ return -6; /* -3dBm */
+ else if (percentage >= 43)
+ return -7; /* -3.5dBm */
+ else if (percentage >= 38)
+ return -8; /* -4dBm */
+ else if (percentage >= 34)
+ return -9; /* -4.5dBm */
+ else if (percentage >= 30)
+ return -10; /* -5dBm */
+ else if (percentage >= 27)
+ return -11; /* -5.5dBm */
+ else if (percentage >= 24)
+ return -12; /* -6dBm */
+ else if (percentage >= 22)
+ return -13; /* -6.5dBm */
+ else if (percentage >= 19)
+ return -14; /* -7dBm */
+ else if (percentage >= 17)
+ return -15; /* -7.5dBm */
+ else if (percentage >= 15)
+ return -16; /* -8dBm */
+ else if (percentage >= 14)
+ return -17; /* -8.5dBm */
+ else if (percentage >= 12)
+ return -18; /* -9dBm */
+ else if (percentage >= 11)
+ return -19; /* -9.5dBm */
+ else if (percentage >= 10)
+ return -20; /* -10dBm */
+ else if (percentage >= 9)
+ return -21; /* -10.5dBm */
+ else if (percentage >= 8)
+ return -22; /* -11dBm */
+ else if (percentage >= 7)
+ return -23; /* -11.5dBm */
+ else if (percentage >= 6)
+ return -24; /* -12dBm */
+ else if (percentage >= 5)
+ return -26; /* -13dBm */
+ else if (percentage >= 4)
+ return -28; /* -14dBm */
+ else if (percentage >= 3)
+ return -30; /* -15dBm */
+ else if (percentage >= 2)
+ return -34; /* -17dBm */
+ else if (percentage >= 1)
+ return -40; /* -20dBm */
+
+ /* Should not get here */
+ return 0;
+}
+
+static void cl_power_control_apply_percentage(struct cl_hw *cl_hw)
+{
+ struct cl_power_db *power_db = &cl_hw->power_db;
+ u8 percentage = cl_hw->conf->ce_tx_power_control;
+
+ power_db->curr_percentage = percentage;
+
+ if (percentage != 100) {
+ power_db->curr_offset = cl_power_get_offset(percentage);
+ cl_power_tables_update_by_offset(cl_hw,
+ &cl_hw->phy_data_info.data->pwr_tables,
+ power_db->curr_offset);
+ }
+}
+
+void cl_power_tables_update(struct cl_hw *cl_hw, struct cl_pwr_tables *pwr_tables)
+{
+ bool is_24g = cl_band_is_24g(cl_hw);
+ bool is_6g = cl_band_is_6g(cl_hw);
+ u8 min_bw_idx_limit_he = 0;
+ u8 min_bw_idx_limit_vht = 0;
+
+ memset(pwr_tables, 0, sizeof(struct cl_pwr_tables));
+
+ if (is_24g)
+ cl_power_tables_update_cck(cl_hw, pwr_tables);
+
+ cl_power_tables_update_ofdm(cl_hw, pwr_tables);
+
+ if (!is_6g)
+ min_bw_idx_limit_vht = cl_power_tables_update_ht_vht(cl_hw, pwr_tables);
+
+ min_bw_idx_limit_he = cl_power_tables_update_he(cl_hw, pwr_tables);
+
+ cl_hw->new_tx_power = 0;
+
+ cl_power_tables_update_vns(cl_hw, pwr_tables, min_bw_idx_limit_vht, min_bw_idx_limit_he);
+
+ cl_power_control_apply_percentage(cl_hw);
+}
+
+static s32 cl_power_get_max_cck(struct cl_hw *cl_hw)
+{
+ struct cl_pwr_tables *pwr_tables = &cl_hw->phy_data_info.data->pwr_tables;
+ u8 mcs = 0;
+ u8 tx_ant = cl_power_tx_ant(cl_hw, WRS_MODE_CCK);
+ s32 ant_gain_q1 = cl_power_antenna_gain_q1(cl_hw);
+ s32 arr_gain_q1 = cl_power_array_gain_q1(cl_hw, tx_ant);
+ s32 total_pwr_q1 = 0;
+ s32 max_pwr_q1 = 0;
+
+ for (mcs = 0; mcs < WRS_MCS_MAX_CCK; mcs++) {
+ total_pwr_q1 = pwr_tables->ant_pwr_cck[mcs] + ant_gain_q1 + arr_gain_q1;
+
+ if (total_pwr_q1 > max_pwr_q1)
+ max_pwr_q1 = total_pwr_q1;
+ }
+
+ return max_pwr_q1;
+}
+
+static s32 cl_power_get_max_ofdm(struct cl_hw *cl_hw)
+{
+ struct cl_pwr_tables *pwr_tables = &cl_hw->phy_data_info.data->pwr_tables;
+ u8 mcs = 0;
+ u8 tx_ant = cl_power_tx_ant(cl_hw, WRS_MODE_OFDM);
+ s32 ant_gain_q1 = cl_power_antenna_gain_q1(cl_hw);
+ s32 arr_gain_q1 = cl_power_array_gain_q1(cl_hw, tx_ant);
+ s32 total_pwr_q1 = 0;
+ s32 max_pwr_q1 = 0;
+
+ for (mcs = 0; mcs < WRS_MCS_MAX_OFDM; mcs++) {
+ total_pwr_q1 = pwr_tables->ant_pwr_ofdm[mcs] + ant_gain_q1 + arr_gain_q1;
+
+ if (total_pwr_q1 > max_pwr_q1)
+ max_pwr_q1 = total_pwr_q1;
+ }
+
+ return max_pwr_q1;
+}
+
+static s32 cl_power_get_max_ht_vht(struct cl_hw *cl_hw)
+{
+ struct cl_pwr_tables *pwr_tables = &cl_hw->phy_data_info.data->pwr_tables;
+ u8 tx_ant = cl_power_tx_ant(cl_hw, WRS_MODE_VHT);
+ u8 mcs = 0;
+ u8 bw = 0;
+ u8 bf = 0;
+ s32 ant_gain_q1 = cl_power_antenna_gain_q1(cl_hw);
+ s32 arr_gain_q1 = cl_power_array_gain_q1(cl_hw, tx_ant);
+ s32 total_pwr_q1 = 0;
+ s32 max_pwr_q1 = 0;
+
+ for (bw = 0; bw < CHNL_BW_MAX; bw++) {
+ for (mcs = 0; mcs < WRS_MCS_MAX_VHT; mcs++) {
+ for (bf = 0; bf < PWR_TBL_VHT_BF_SIZE; bf++) {
+ total_pwr_q1 = pwr_tables->ant_pwr_ht_vht[bw][mcs][bf] +
+ ant_gain_q1 + arr_gain_q1;
+
+ if (total_pwr_q1 > max_pwr_q1)
+ max_pwr_q1 = total_pwr_q1;
+ }
+ }
+ }
+
+ return max_pwr_q1;
+}
+
+static s32 cl_power_get_max_he(struct cl_hw *cl_hw)
+{
+ struct cl_pwr_tables *pwr_tables = &cl_hw->phy_data_info.data->pwr_tables;
+ u8 tx_ant = cl_power_tx_ant(cl_hw, WRS_MODE_HE);
+ u8 mcs = 0;
+ u8 bw = 0;
+ u8 bf = 0;
+ s32 ant_gain_q1 = cl_power_antenna_gain_q1(cl_hw);
+ s32 arr_gain_q1 = cl_power_array_gain_q1(cl_hw, tx_ant);
+ s32 total_pwr_q1 = 0;
+ s32 max_pwr_q1 = 0;
+
+ for (bw = 0; bw < CHNL_BW_MAX; bw++) {
+ for (mcs = 0; mcs < WRS_MCS_MAX_HE; mcs++) {
+ for (bf = 0; bf < PWR_TBL_HE_BF_SIZE; bf++) {
+ total_pwr_q1 = pwr_tables->ant_pwr_he[bw][mcs][bf] +
+ ant_gain_q1 + arr_gain_q1;
+
+ if (total_pwr_q1 > max_pwr_q1)
+ max_pwr_q1 = total_pwr_q1;
+ }
+ }
+ }
+
+ return max_pwr_q1;
+}
+
+s32 cl_power_get_max(struct cl_hw *cl_hw)
+{
+ bool is_24g = cl_band_is_24g(cl_hw);
+ bool is_6g = cl_band_is_6g(cl_hw);
+ s32 max_pwr_cck_q1 = is_24g ? cl_power_get_max_cck(cl_hw) : S32_MIN;
+ s32 max_pwr_ofdm_q1 = cl_power_get_max_ofdm(cl_hw);
+ s32 max_pwr_ht_vht_q1 = !is_6g ? cl_power_get_max_ht_vht(cl_hw) : S32_MIN;
+ s32 max_pwr_he_q1 = cl_power_get_max_he(cl_hw);
+ s32 max_pwr_q1 = 0;
+
+ max_pwr_q1 = max(max_pwr_q1, max_pwr_cck_q1);
+ max_pwr_q1 = max(max_pwr_q1, max_pwr_ofdm_q1);
+ max_pwr_q1 = max(max_pwr_q1, max_pwr_ht_vht_q1);
+ max_pwr_q1 = max(max_pwr_q1, max_pwr_he_q1);
+
+ return (max_pwr_q1 >> 1);
+}
+
--
2.36.1
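
A note on the fixed-point convention used throughout the power code above:
the _q8 values appear to hold dB in 1/256 steps, _q2 in 0.25 dB steps and
_q1 in 0.5 dB steps, so the conversions are plain right shifts. A minimal
user-space sketch, assuming convert_str_int_q8("5.5") would return
5.5 * 256 = 1408 (hypothetical input value):

#include <stdio.h>

int main(void)
{
        int gain_q8 = 1408;         /* 5.5 dB in Q8 (1/256 dB units)       */
        int gain_q2 = gain_q8 >> 6; /* 22 -> 5.5 dB in Q2 (0.25 dB units)  */
        int gain_q1 = gain_q8 >> 7; /* 11 -> 5.5 dB in Q1 (0.5 dB units)   */

        printf("q8=%d q2=%d q1=%d\n", gain_q8, gain_q2, gain_q1);
        return 0;
}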


2022-05-24 22:17:30

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 83/96] cl8k: add traffic.h

From: Viktor Barna <[email protected]>

(Part of the split. Please, take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/traffic.h | 77 ++++++++++++++++++++++
1 file changed, 77 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/traffic.h

diff --git a/drivers/net/wireless/celeno/cl8k/traffic.h b/drivers/net/wireless/celeno/cl8k/traffic.h
new file mode 100644
index 000000000000..1b820602c94a
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/traffic.h
@@ -0,0 +1,77 @@
+/* SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause */
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#ifndef CL_TRAFFIC_H
+#define CL_TRAFFIC_H
+
+#include <linux/dcache.h>
+#include <linux/skbuff.h>
+
+enum cl_traffic_mon_protocol {
+ CL_TRFC_MON_PROT_TCP,
+ CL_TRFC_MON_PROT_UDP,
+ CL_TRFC_MON_PROT_MAX,
+};
+
+enum cl_traffic_mon_direction {
+ CL_TRFC_MON_DIR_DL,
+ CL_TRFC_MON_DIR_UL,
+ CL_TRFC_MON_DIR_MAX,
+};
+
+struct cl_traffic_mon {
+ u32 bytes_per_sec;
+ u32 bytes;
+};
+
+enum cl_traffic_direction {
+ TRAFFIC_DIRECTION_TX,
+ TRAFFIC_DIRECTION_RX,
+
+ TRAFFIC_DIRECTION_MAX
+};
+
+enum cl_traffic_level {
+ TRAFFIC_LEVEL_DRV,
+ TRAFFIC_LEVEL_BF,
+ TRAFFIC_LEVEL_MU,
+ TRAFFIC_LEVEL_EDCA,
+ TRAFFIC_LEVEL_DFS,
+
+ TRAFFIC_LEVEL_MAX
+};
+
+struct cl_traffic_activity {
+ u8 cntr_active;
+ u8 cntr_idle;
+ bool is_active;
+};
+
+struct cl_traffic_sta {
+ struct cl_traffic_activity activity_db[TRAFFIC_LEVEL_MAX];
+ u32 num_bytes;
+ u32 num_bytes_prev;
+};
+
+struct cl_traffic_main {
+ u32 num_active_sta[TRAFFIC_LEVEL_MAX];
+ u32 num_active_sta_dir[TRAFFIC_DIRECTION_MAX][TRAFFIC_LEVEL_MAX];
+ u32 active_bytes_thr[TRAFFIC_LEVEL_MAX];
+ bool dynamic_cts;
+};
+
+struct cl_sta;
+struct cl_hw;
+
+void cl_traffic_mon_tx(struct cl_sta *cl_sta, struct sk_buff *skb);
+void cl_traffic_mon_rx(struct cl_sta *cl_sta, struct sk_buff *skb);
+void cl_traffic_mon_sta_maintenance(struct cl_hw *cl_hw, struct cl_sta *cl_sta);
+void cl_traffic_init(struct cl_hw *cl_hw);
+void cl_traffic_tx_handler(struct cl_hw *cl_hw, struct cl_sta *cl_sta, u32 num_bytes);
+void cl_traffic_rx_handler(struct cl_hw *cl_hw, struct cl_sta *cl_sta, u32 num_bytes);
+void cl_traffic_maintenance(struct cl_hw *cl_hw);
+void cl_traffic_sta_remove(struct cl_hw *cl_hw, struct cl_sta *cl_sta);
+bool cl_traffic_is_sta_active(struct cl_hw *cl_hw, struct cl_sta *cl_sta);
+bool cl_traffic_is_sta_tx_exist(struct cl_hw *cl_hw, struct cl_sta *cl_sta);
+
+#endif /* CL_TRAFFIC_H */
--
2.36.1
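
The bytes / bytes_per_sec pair in struct cl_traffic_mon above suggests that
the periodic maintenance pass folds the accumulated byte count into a rate.
A purely hypothetical user-space sketch of such a fold (the real logic lives
in traffic.c, which is not part of this patch; the struct below only mirrors
the two counters):

#include <stdio.h>

struct traffic_mon {                    /* local mirror of cl_traffic_mon */
        unsigned int bytes_per_sec;
        unsigned int bytes;
};

static void traffic_mon_fold(struct traffic_mon *mon)
{
        /* Bytes accumulated since the previous tick become the new rate. */
        mon->bytes_per_sec = mon->bytes;
        mon->bytes = 0;
}

int main(void)
{
        struct traffic_mon mon = { 0, 150000 };

        traffic_mon_fold(&mon);
        printf("%u bytes/sec\n", mon.bytes_per_sec);
        return 0;
}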


2022-05-25 02:55:03

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 64/96] cl8k: add reg/reg_access.h

From: Viktor Barna <[email protected]>

(Part of the split. Please, take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
.../net/wireless/celeno/cl8k/reg/reg_access.h | 199 ++++++++++++++++++
1 file changed, 199 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/reg/reg_access.h

diff --git a/drivers/net/wireless/celeno/cl8k/reg/reg_access.h b/drivers/net/wireless/celeno/cl8k/reg/reg_access.h
new file mode 100644
index 000000000000..c9d00f7553ea
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/reg/reg_access.h
@@ -0,0 +1,199 @@
+/* SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause */
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#ifndef CL_REG_ACCESS_H
+#define CL_REG_ACCESS_H
+
+#include "hw.h"
+#include "chip.h"
+
+#define hwreg_pr(...) \
+ do { \
+ if (cl_hw->reg_dbg) \
+ cl_dbg_verbose(__VA_ARGS__); \
+ } while (0)
+
+#define chipreg_pr(...) \
+ do { \
+ if (chip->reg_dbg) \
+ cl_dbg_chip_verbose(__VA_ARGS__); \
+ } while (0)
+
+#define XTENSA_PIF_BASE_ADDR 0x60000000
+
+/*
+ * SHARED_RAM Address.
+ * Actually, the PCI BAR4 window will be configured such that SHARED RAM
+ * is accessed at offset 0 (within the AHB Bridge main window).
+ */
+#define SHARED_RAM_START_ADDR 0x00000000
+#define REG_MAC_HW_SMAC_OFFSET 0x80000
+#define REG_PHY_LMAC_OFFSET 0x000000
+#define REG_PHY_SMAC_OFFSET 0x100000
+#define REG_MACDSP_API_BASE_ADDR 0x00400000
+#define REG_MAC_HW_BASE_ADDR 0x00600000
+#define REG_RIU_BASE_ADDR 0x00486000
+#define REG_RICU_BASE_ADDR 0x004B4000
+#define APB_REGS_BASE_ADDR 0x007C0000
+#define I2C_REG_BASE_ADDR (APB_REGS_BASE_ADDR + 0x3000)
+
+/* MACSYS_GCU_XT_CONTROL fields */
+#define SMAC_DEBUG_ENABLE BIT(21)
+#define SMAC_BREAKPOINT BIT(20)
+#define SMAC_OCD_HALT_ON_RESET BIT(19)
+#define SMAC_RUN_STALL BIT(18)
+#define SMAC_DRESET BIT(17)
+#define SMAC_BRESET BIT(16)
+#define UMAC_DEBUG_ENABLE BIT(13)
+#define UMAC_BREAKPOINT BIT(12)
+#define UMAC_OCD_HALT_ON_RESET BIT(11)
+#define UMAC_RUN_STALL BIT(10)
+#define UMAC_DRESET BIT(9)
+#define UMAC_BRESET BIT(8)
+#define LMAC_DEBUG_ENABLE BIT(5)
+#define LMAC_BREAKPOINT BIT(4)
+#define LMAC_OCD_HALT_ON_RESET BIT(3)
+#define LMAC_RUN_STALL BIT(2)
+#define LMAC_DRESET BIT(1)
+#define LMAC_BRESET BIT(0)
+
+#define XMAC_BRESET \
+ (LMAC_BRESET | SMAC_BRESET | UMAC_BRESET)
+#define XMAC_DRESET \
+ (LMAC_DRESET | SMAC_DRESET | UMAC_DRESET)
+#define XMAC_RUN_STALL \
+ (LMAC_RUN_STALL | SMAC_RUN_STALL | UMAC_RUN_STALL)
+#define XMAC_OCD_HALT_ON_RESET \
+ (LMAC_OCD_HALT_ON_RESET | SMAC_OCD_HALT_ON_RESET | UMAC_OCD_HALT_ON_RESET)
+#define XMAC_DEBUG_ENABLE \
+ (LMAC_DEBUG_ENABLE | SMAC_DEBUG_ENABLE | UMAC_DEBUG_ENABLE)
+
+static inline u32 get_actual_reg(struct cl_hw *cl_hw, u32 reg)
+{
+ if (!cl_hw)
+ return -1;
+
+ if ((reg & 0x00ff0000) == REG_MAC_HW_BASE_ADDR)
+ return cl_hw->mac_hw_regs_offset + reg;
+
+ if ((reg & 0x00f00000) == REG_MACDSP_API_BASE_ADDR) {
+ if (cl_hw->chip->conf->ci_phy_dev == PHY_DEV_DUMMY)
+ return -1;
+ return cl_hw->phy_regs_offset + reg;
+ }
+
+ return reg;
+}
+
+static inline bool cl_reg_is_phy_tcvX(u32 phy_reg, u32 reg_offset)
+{
+ return (phy_reg & 0xf00000) == (REG_MACDSP_API_BASE_ADDR + reg_offset);
+}
+
+static inline bool cl_reg_is_phy_tcv0(u32 phy_reg)
+{
+ return cl_reg_is_phy_tcvX(phy_reg, REG_PHY_LMAC_OFFSET);
+}
+
+static inline bool cl_reg_is_phy_tcv1(u32 phy_reg)
+{
+ return cl_reg_is_phy_tcvX(phy_reg, REG_PHY_SMAC_OFFSET);
+}
+
+static inline u32 cl_reg_read(struct cl_hw *cl_hw, u32 reg)
+{
+ u32 actual_reg = get_actual_reg(cl_hw, reg);
+ u32 val = 0;
+
+ if (actual_reg == (u32)(-1))
+ return 0xff;
+
+ val = ioread32(cl_hw->chip->pci_bar0_virt_addr + actual_reg);
+ hwreg_pr(cl_hw, "reg=0x%x, val=0x%x\n", actual_reg, val);
+ return val;
+}
+
+static inline void cl_reg_write_direct(struct cl_hw *cl_hw, u32 reg, u32 val)
+{
+ u32 actual_reg = get_actual_reg(cl_hw, reg);
+
+ if (actual_reg == (u32)(-1))
+ return;
+
+ hwreg_pr(cl_hw, "reg=0x%x, val=0x%x\n", actual_reg, val);
+ iowrite32(val, cl_hw->chip->pci_bar0_virt_addr + actual_reg);
+}
+
+#define BASE_ADDR(reg) ((ptrdiff_t)(reg) & 0x00fff000)
+
+static inline bool should_send_msg(struct cl_hw *cl_hw, u32 reg)
+{
+ /*
+ * Check in what cases we should send a message to the firmware,
+ * and in what cases we should write directly.
+ */
+ if (!cl_hw->fw_active)
+ return false;
+
+ return ((BASE_ADDR(reg) == REG_RIU_BASE_ADDR) ||
+ (BASE_ADDR(reg) == REG_MAC_HW_BASE_ADDR));
+}
+
+static inline int cl_reg_write(struct cl_hw *cl_hw, u32 reg, u32 val)
+{
+ u32 actual_reg = get_actual_reg(cl_hw, reg);
+ int ret = 0;
+
+ if (actual_reg == (u32)(-1))
+ return -1;
+
+ if (should_send_msg(cl_hw, reg)) {
+ hwreg_pr(cl_hw, "calling cl_msg_tx_reg_write: reg=0x%x, val=0x%x\n",
+ actual_reg, val);
+ cl_msg_tx_reg_write(cl_hw, (XTENSA_PIF_BASE_ADDR + actual_reg), val, U32_MAX);
+ } else {
+ hwreg_pr(cl_hw, "reg=0x%x, val=0x%x\n", actual_reg, val);
+ iowrite32(val, cl_hw->chip->pci_bar0_virt_addr + actual_reg);
+ }
+
+ return ret;
+}
+
+static inline int cl_reg_write_mask(struct cl_hw *cl_hw, u32 reg, u32 val, u32 mask)
+{
+ u32 actual_reg = get_actual_reg(cl_hw, reg);
+ int ret = 0;
+
+ if (actual_reg == (u32)(-1))
+ return -1;
+
+ if (should_send_msg(cl_hw, reg)) {
+ hwreg_pr(cl_hw, "calling cl_msg_tx_reg_write: reg=0x%x, val=0x%x, mask=0x%x\n",
+ actual_reg, val, mask);
+ cl_msg_tx_reg_write(cl_hw, (XTENSA_PIF_BASE_ADDR + actual_reg), val, mask);
+ } else {
+ u32 reg_rd = ioread32(cl_hw->chip->pci_bar0_virt_addr + actual_reg);
+ u32 val_write = ((reg_rd & ~mask) | (val & mask));
+
+ hwreg_pr(cl_hw, "reg=0x%x, mask=0x%x, val=0x%x\n", actual_reg, mask, val_write);
+ iowrite32(val_write, cl_hw->chip->pci_bar0_virt_addr + actual_reg);
+ }
+
+ return ret;
+}
+
+static inline void cl_reg_write_chip(struct cl_chip *chip, u32 reg, u32 val)
+{
+ chipreg_pr(chip, "reg=0x%x, val=0x%x\n", reg, val);
+ iowrite32(val, chip->pci_bar0_virt_addr + reg);
+}
+
+static inline u32 cl_reg_read_chip(struct cl_chip *chip, u32 reg)
+{
+ u32 val = ioread32(chip->pci_bar0_virt_addr + reg);
+
+ chipreg_pr(chip, "reg=0x%x, val=0x%x\n", reg, val);
+ return val;
+}
+
+#endif /* CL_REG_ACCESS_H */
--
2.36.1
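
For reference, the read-modify-write done in the direct-write path of
cl_reg_write_mask() above is the usual masking idiom; a stand-alone sketch
with made-up register values:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        uint32_t reg_rd = 0xAABBCCDD;   /* current register content  */
        uint32_t mask   = 0x0000FF00;   /* bits owned by the caller  */
        uint32_t val    = 0x00001200;   /* new value for those bits  */
        uint32_t val_write = (reg_rd & ~mask) | (val & mask);

        /* Only the masked byte changes: prints 0xAABB12DD */
        printf("0x%08X\n", (unsigned int)val_write);
        return 0;
}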


2022-05-25 07:12:08

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 20/96] cl8k: add dfs.c

From: Viktor Barna <[email protected]>

(Part of the split. Please, take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/dfs.c | 768 +++++++++++++++++++++++++
1 file changed, 768 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/dfs.c

diff --git a/drivers/net/wireless/celeno/cl8k/dfs.c b/drivers/net/wireless/celeno/cl8k/dfs.c
new file mode 100644
index 000000000000..f320e5885a58
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/dfs.c
@@ -0,0 +1,768 @@
+// SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#include "chip.h"
+#include "utils.h"
+#include "debug.h"
+#include "temperature.h"
+#include "traffic.h"
+#include "reg/reg_defs.h"
+#include "config.h"
+#include "debug.h"
+#include "dfs.h"
+
+#define dfs_pr(cl_hw, level, ...) \
+ do { \
+ if ((level) <= (cl_hw)->dfs_db.dbg_lvl) \
+ pr_debug(__VA_ARGS__); \
+ } while (0)
+
+#define dfs_pr_verbose(cl_hw, ...) dfs_pr((cl_hw), DBG_LVL_VERBOSE, ##__VA_ARGS__)
+#define dfs_pr_err(cl_hw, ...) dfs_pr((cl_hw), DBG_LVL_ERROR, ##__VA_ARGS__)
+#define dfs_pr_warn(cl_hw, ...) dfs_pr((cl_hw), DBG_LVL_WARNING, ##__VA_ARGS__)
+#define dfs_pr_trace(cl_hw, ...) dfs_pr((cl_hw), DBG_LVL_TRACE, ##__VA_ARGS__)
+#define dfs_pr_info(cl_hw, ...) dfs_pr((cl_hw), DBG_LVL_INFO, ##__VA_ARGS__)
+
+/*
+ * Table columns: ID, Min Width, Max Width, Tol Width, Min PRI, Max PRI,
+ * Tol PRI, Tol FREQ, Min Burst, PPB, Trig Count, Type
+ */
+
+/* ETSI Radar Types v1.8.2 */
+static struct cl_radar_type radar_type_etsi[] = {
+ {0, 1, 10, 2, 1428, 1428, 2, 1, 1, 18, 10, RADAR_WAVEFORM_SHORT},
+ {1, 1, 10, 2, 1000, 5000, 2, 1, 1, 10, 5, RADAR_WAVEFORM_SHORT},
+ {2, 1, 15, 2, 625, 5000, 2, 1, 1, 15, 8, RADAR_WAVEFORM_SHORT},
+ {3, 1, 15, 2, 250, 435, 2, 1, 1, 25, 9, RADAR_WAVEFORM_SHORT},
+ {4, 10, 30, 2, 250, 500, 2, 1, 1, 20, 9, RADAR_WAVEFORM_SHORT},
+ {5, 1, 10, 2, 2500, 3334, 2, 1, 2, 10, 5, RADAR_WAVEFORM_STAGGERED},
+ {6, 1, 10, 2, 833, 2500, 2, 1, 2, 15, 8, RADAR_WAVEFORM_STAGGERED},
+};
+
+/* FCC Radar Types 8/14 */
+static struct cl_radar_type radar_type_fcc[] = {
+ {0, 1, 10, 0, 1428, 1428, 1, 1, 1, 18, 10, RADAR_WAVEFORM_SHORT},
+ {1, 1, 10, 3, 518, 3066, 3, 1, 1, 18, 10, RADAR_WAVEFORM_SHORT},
+ {2, 1, 10, 3, 150, 230, 3, 1, 1, 23, 10, RADAR_WAVEFORM_SHORT},
+ {3, 3, 10, 3, 200, 500, 3, 1, 1, 16, 6, RADAR_WAVEFORM_SHORT},
+ {4, 6, 20, 3, 200, 500, 3, 1, 1, 12, 6, RADAR_WAVEFORM_SHORT},
+ {5, 50, 100, 50, 1000, 2000, 1, 1, 2, 10, 5, RADAR_WAVEFORM_LONG},
+ {6, 1, 10, 0, 333, 333, 1, 1, 2, 30, 10, RADAR_WAVEFORM_LONG},
+};
+
+static void cl_dfs_en(struct cl_hw *cl_hw, u8 dfs_en)
+{
+ struct cl_dfs_db *dfs_db = &cl_hw->dfs_db;
+ struct cl_tcv_conf *conf = cl_hw->conf;
+
+ cl_msg_tx_set_dfs(cl_hw, dfs_en, dfs_db->dfs_standard,
+ conf->ci_dfs_initial_gain, conf->ci_dfs_agc_cd_th);
+ dfs_pr_verbose(cl_hw, "DFS: %s\n", dfs_en ? "Enable" : "Disable");
+}
+
+static bool cl_dfs_create_detection_buffer(struct cl_hw *cl_hw, struct cl_dfs_db *dfs_db,
+ struct cl_dfs_pulse *pulse_buffer, u8 *samples_cnt,
+ unsigned long time)
+{
+ u8 i;
+ u8 pulse_idx;
+ /* Init First index to last */
+ u8 first_pulse_idx = (dfs_db->buf_idx - 1 + CL_DFS_PULSE_BUF_SIZE) & CL_DFS_PULSE_BUF_MASK;
+
+ /* Find Start Pulse indexes */
+ for (i = 0; i < CL_DFS_PULSE_BUF_SIZE; i++) {
+ pulse_idx = (i + dfs_db->buf_idx) & CL_DFS_PULSE_BUF_MASK;
+
+ if ((time - dfs_db->dfs_pulse[pulse_idx].time) < dfs_db->search_window) {
+ first_pulse_idx = pulse_idx;
+ break;
+ }
+ }
+
+ dfs_pr_info(cl_hw, "DFS: First pulse idx = %u, Last pulse idx = %u\n",
+ first_pulse_idx, (dfs_db->buf_idx - 1 + CL_DFS_PULSE_BUF_SIZE)
+ & CL_DFS_PULSE_BUF_MASK);
+
+ if (dfs_db->buf_idx >= first_pulse_idx + 1) {
+ if ((dfs_db->buf_idx - first_pulse_idx) < dfs_db->min_pulse_eeq)
+ goto not_enough_pulses;
+ } else {
+ if ((dfs_db->buf_idx + CL_DFS_PULSE_BUF_SIZE - first_pulse_idx) <
+ dfs_db->min_pulse_eeq)
+ goto not_enough_pulses;
+ }
+
+ /* Copy the processed samples to a local buffer to avoid index wrapping */
+ for (i = 0; pulse_idx != ((dfs_db->buf_idx - 1 + CL_DFS_PULSE_BUF_SIZE)
+ & CL_DFS_PULSE_BUF_MASK); i++) {
+ pulse_idx = (i + first_pulse_idx) & CL_DFS_PULSE_BUF_MASK;
+ memcpy(&pulse_buffer[i], &dfs_db->dfs_pulse[pulse_idx], sizeof(pulse_buffer[i]));
+ }
+ *samples_cnt = i + 1;
+
+ return true;
+not_enough_pulses:
+ /* Return if the buffer doesn't hold enough valid samples */
+ dfs_pr_warn(cl_hw, "DFS: Not enough pulses in buffer\n");
+
+ return false;
+}
+
+static bool cl_dfs_is_valid_dfs_freq(struct cl_hw *cl_hw, u32 freq_off)
+{
+ u16 freq = cl_hw->center_freq + freq_off;
+ u16 freq_min = max((u16)(cl_hw->center_freq - cl_center_freq_offset(cl_hw->bw) - 10),
+ (u16)CL_DFS_MIN_FREQ);
+ u16 freq_max = min((u16)(cl_hw->center_freq + cl_center_freq_offset(cl_hw->bw) + 10),
+ (u16)CL_DFS_MAX_FREQ);
+
+ if (freq > freq_min && freq < freq_max)
+ return true;
+
+ return false;
+}
+
+static void cl_dfs_add_pulses_to_global_buffer(struct cl_hw *cl_hw, struct cl_dfs_db *dfs_db,
+ struct cl_radar_pulse *pulse, u8 pulse_cnt,
+ unsigned long time)
+{
+ int i;
+
+ for (i = 0; i < pulse_cnt; i++)
+ dfs_pr_info(cl_hw, "Pulse=%d, Width=%u, PRI=%u, FREQ=%d, Time=%lu, FOM=%x\n",
+ i, pulse[i].len, pulse[i].rep, pulse[i].freq, time, pulse[i].fom);
+
+ /* Maintain cyclic pulse buffer */
+ for (i = 0; i < pulse_cnt; i++) {
+ if (!cl_dfs_is_valid_dfs_freq(cl_hw, (u32)pulse[i].freq))
+ continue;
+
+ dfs_db->dfs_pulse[dfs_db->buf_idx].freq = pulse[i].freq;
+ dfs_db->dfs_pulse[dfs_db->buf_idx].width = pulse[i].len;
+ dfs_db->dfs_pulse[dfs_db->buf_idx].pri = pulse[i].rep;
+ dfs_db->dfs_pulse[dfs_db->buf_idx].fom = pulse[i].fom;
+ dfs_db->dfs_pulse[dfs_db->buf_idx].occ = 0; /* occ temp disabled. */
+ dfs_db->dfs_pulse[dfs_db->buf_idx].time = time;
+
+ dfs_db->buf_idx++;
+ dfs_db->buf_idx &= CL_DFS_PULSE_BUF_MASK;
+ }
+}
+
+static bool cl_dfs_buf_maintain(struct cl_hw *cl_hw, struct cl_radar_pulse *pulse,
+ struct cl_dfs_pulse *pulse_buffer, u8 pulse_cnt,
+ unsigned long time, u8 *samples_cnt, struct cl_dfs_db *dfs_db)
+{
+ int i;
+
+ cl_dfs_add_pulses_to_global_buffer(cl_hw, dfs_db, pulse, pulse_cnt, time);
+ if (!cl_dfs_create_detection_buffer(cl_hw, dfs_db, pulse_buffer, samples_cnt, time))
+ return false;
+
+ for (i = 0; i < *samples_cnt; i++)
+ dfs_pr_info(cl_hw, "DFS: pulse[%d]: width=%u, pri=%u, freq=%d\n",
+ i, pulse_buffer[i].width, pulse_buffer[i].pri, pulse_buffer[i].freq);
+
+ return true;
+}
+
+static inline bool cl_dfs_pulse_match(s32 pulse_val, s32 spec_min_val,
+ s32 spec_max_val, s32 spec_tol)
+{
+ return ((pulse_val >= (spec_min_val - spec_tol)) &&
+ (pulse_val <= (spec_max_val + spec_tol)));
+}
+
+static u8 cl_dfs_is_stag_pulse(struct cl_hw *cl_hw, struct cl_dfs_db *dfs_db,
+ struct cl_dfs_pulse *pulse)
+{
+ int i;
+ struct cl_radar_type *radar_type;
+
+ for (i = 0; i < dfs_db->radar_type_cnt; i++) {
+ radar_type = &dfs_db->radar_type[i];
+
+ if (radar_type->waveform != RADAR_WAVEFORM_STAGGERED)
+ continue;
+
+ if (cl_dfs_pulse_match((s32)pulse->width, radar_type->min_width,
+ radar_type->max_width, radar_type->tol_width) &&
+ cl_dfs_pulse_match((s32)pulse->pri, radar_type->min_pri,
+ radar_type->max_pri, radar_type->tol_pri)) {
+ /* Search for the second burst */
+ if (abs(pulse[0].pri - pulse[2].pri) <= dfs_db->radar_type[i].tol_pri &&
+ abs(pulse[1].pri - pulse[3].pri) <= radar_type->tol_pri &&
+ abs(pulse[0].pri - pulse[1].pri) > radar_type->tol_pri &&
+ abs(pulse[2].pri - pulse[3].pri) > radar_type->tol_pri) {
+ dfs_pr_info(cl_hw, "DFS: Found match type %d\n", i);
+ return (i + 1);
+ } else if (abs(pulse[0].pri - pulse[3].pri) <= radar_type->tol_pri &&
+ abs(pulse[1].pri - pulse[4].pri) <= radar_type->tol_pri &&
+ abs(pulse[0].pri - pulse[1].pri) > radar_type->tol_pri &&
+ abs(pulse[3].pri - pulse[4].pri) > radar_type->tol_pri) {
+ dfs_pr_info(cl_hw, "DFS: Found match radar %d\n", i);
+ return (i + 1);
+ }
+ }
+ }
+
+ return 0;
+}
+
+static u8 cl_dfs_is_non_stag_pulse(struct cl_hw *cl_hw, struct cl_dfs_db *dfs_db,
+ struct cl_dfs_pulse *pulse)
+{
+ int i;
+ struct cl_radar_type *radar_type;
+
+ for (i = 0; i < dfs_db->radar_type_cnt; i++) {
+ radar_type = &dfs_db->radar_type[i];
+
+ if (radar_type->waveform == RADAR_WAVEFORM_STAGGERED)
+ continue;
+
+ if (cl_dfs_pulse_match((s32)pulse->width, radar_type->min_width,
+ radar_type->max_width, radar_type->tol_width) &&
+ cl_dfs_pulse_match((s32)pulse->pri, radar_type->min_pri,
+ radar_type->max_pri, radar_type->tol_pri)) {
+ dfs_pr_info(cl_hw, "DFS: Found match type %d\n", i);
+ return (i + 1);
+ }
+ }
+
+ dfs_pr_warn(cl_hw, "DFS: Match not found\n");
+
+ return 0;
+}
+
+static u8 cl_dfs_get_pulse_type(struct cl_hw *cl_hw, struct cl_dfs_pulse *pulse,
+ bool stag_candidate)
+{
+ struct cl_dfs_db *dfs_db = &cl_hw->dfs_db;
+
+ if (stag_candidate) {
+ u8 pulse_type = cl_dfs_is_stag_pulse(cl_hw, dfs_db, pulse);
+
+ if (pulse_type)
+ return pulse_type;
+ }
+
+ return cl_dfs_is_non_stag_pulse(cl_hw, dfs_db, pulse);
+}
+
+static bool cl_dfs_compare_cand(struct cl_hw *cl_hw, struct cl_dfs_db *dfs_db, u8 pulse_type,
+ struct cl_dfs_pulse radar_cand, u8 *match, int idx,
+ u8 *occ_ch_cand)
+{
+ int i;
+
+ if (!(abs(dfs_db->pulse_buffer[idx].width - radar_cand.width) <=
+ dfs_db->radar_type[pulse_type].tol_width))
+ goto end;
+
+ if (!(abs(dfs_db->pulse_buffer[idx].freq - radar_cand.freq) <=
+ dfs_db->radar_type[pulse_type].tol_freq))
+ goto end;
+
+ for (i = 1; i < CL_DFS_CONCEAL_CNT; i++)
+ if (abs(dfs_db->pulse_buffer[idx].pri - i * radar_cand.pri) <=
+ dfs_db->radar_type[pulse_type].tol_pri)
+ break;
+
+ if (i == CL_DFS_CONCEAL_CNT)
+ goto end;
+
+ (*match)++;
+ (*occ_ch_cand) += dfs_db->pulse_buffer[i].occ;
+
+end:
+ dfs_pr_info(cl_hw, "DFS: compared pulse - width=%u, pri=%u, freq=%u match: %u "
+ "trig cnt: %u\n",
+ dfs_db->pulse_buffer[idx].width, dfs_db->pulse_buffer[idx].pri,
+ dfs_db->pulse_buffer[idx].freq, *match,
+ dfs_db->radar_type[pulse_type].trig_count);
+
+ if (*match < dfs_db->radar_type[pulse_type].trig_count)
+ return false;
+
+ return true;
+}
+
+static bool cl_dfs_check_cand(struct cl_hw *cl_hw, struct cl_dfs_db *dfs_db, u8 pulse_type,
+ struct cl_dfs_pulse radar_cand, u8 samples_cnt)
+{
+ u8 occ_ch_cand = 0;
+ u8 match = 0;
+ int i;
+
+ dfs_pr_info(cl_hw, "DFS: candidate pulse - width=%u, pri=%u, freq=%u\n",
+ radar_cand.width, radar_cand.pri, radar_cand.freq);
+
+ for (i = 0; i < samples_cnt; i++) {
+ if (!cl_dfs_compare_cand(cl_hw, dfs_db, pulse_type, radar_cand, &match, i,
+ &occ_ch_cand))
+ continue;
+
+ dfs_pr_verbose(cl_hw, "DFS: Radar detected - type %u\n", pulse_type);
+
+ return true;
+ }
+
+ return false;
+}
+
+static bool cl_dfs_short_pulse_search(struct cl_hw *cl_hw, struct cl_radar_pulse *pulse,
+ u8 pulse_cnt, unsigned long time, struct cl_dfs_db *dfs_db)
+{
+ int i;
+ bool stag_candidate;
+ u8 samples_cnt = 0;
+ u8 pulse_type;
+
+ /* Return if not enough pulses in the buffer */
+ if (!cl_dfs_buf_maintain(cl_hw, pulse, dfs_db->pulse_buffer, pulse_cnt, time,
+ &samples_cnt, dfs_db))
+ return false;
+
+ for (i = 0; i < samples_cnt; i++) {
+ struct cl_dfs_pulse radar_cand;
+
+ stag_candidate = false;
+
+ /* Make sure there are enough samples for the staggered check */
+ if (dfs_db->dfs_standard == NL80211_DFS_ETSI &&
+ (samples_cnt - i) > CL_DFS_STAGGERED_CHEC_LEN)
+ stag_candidate = true;
+
+ pulse_type = cl_dfs_get_pulse_type(cl_hw, &dfs_db->pulse_buffer[i], stag_candidate);
+
+ if (!pulse_type)
+ continue;
+
+ radar_cand.width = dfs_db->pulse_buffer[i].width;
+ radar_cand.pri = dfs_db->pulse_buffer[i].pri;
+ radar_cand.freq = dfs_db->pulse_buffer[i].freq;
+
+ if (cl_dfs_check_cand(cl_hw, dfs_db, pulse_type - 1, radar_cand, samples_cnt))
+ return true;
+ }
+
+ return false;
+}
+
+static bool cl_dfs_long_pulse_search(struct cl_hw *cl_hw, struct cl_radar_pulse *pulse,
+ u8 pulse_cnt, unsigned long time)
+{
+ u32 prev_pulse_time_diff;
+ struct cl_dfs_db *dfs_db = &cl_hw->dfs_db;
+ struct cl_tcv_conf *conf = cl_hw->conf;
+ int i;
+
+ for (i = 0; i < pulse_cnt; i++) {
+ if (pulse[i].len > CL_DFS_LONG_MIN_WIDTH) {
+ prev_pulse_time_diff = time - dfs_db->last_long_pulse_ts;
+
+ if (pulse[i].rep >= dfs_db->radar_type[5].min_pri &&
+ pulse[i].rep <= dfs_db->radar_type[5].max_pri)
+ dfs_db->long_pri_match_count += 1;
+
+ dfs_pr_info(cl_hw, "DFS: Long pulse search: width = %u, delta_time = %u\n",
+ pulse[i].len, prev_pulse_time_diff);
+
+ if (dfs_db->long_pulse_count == 0 ||
+ (prev_pulse_time_diff >= conf->ci_dfs_long_pulse_min &&
+ prev_pulse_time_diff <= conf->ci_dfs_long_pulse_max)) {
+ dfs_db->long_pulse_count += 1;
+ } else if (prev_pulse_time_diff > min(dfs_db->max_interrupt_diff,
+ conf->ci_dfs_long_pulse_min)) {
+ dfs_db->long_pulse_count = 0;
+ dfs_db->short_pulse_count = 0;
+ dfs_db->long_pri_match_count = 0;
+ }
+ dfs_db->last_long_pulse_ts = time;
+ } else if (pulse[i].len < CL_DFS_LONG_FALSE_WIDTH) {
+ dfs_db->short_pulse_count++;
+
+ if (dfs_db->short_pulse_count > CL_DFS_LONG_FALSE_IND) {
+ dfs_db->long_pulse_count = 0;
+ dfs_db->short_pulse_count = 0;
+ dfs_db->long_pri_match_count = 0;
+
+ dfs_pr_warn(cl_hw, "DFS: Restart long sequence search\n");
+ }
+ }
+ }
+
+ if (dfs_db->long_pulse_count >= dfs_db->radar_type[5].trig_count &&
+ dfs_db->long_pri_match_count >= (dfs_db->radar_type[5].trig_count - 1)) {
+ dfs_db->short_pulse_count = 0;
+ dfs_db->long_pulse_count = 0;
+ dfs_db->long_pri_match_count = 0;
+ return true;
+ } else {
+ return false;
+ }
+}
+
+static bool cl_dfs_post_detection(struct cl_hw *cl_hw)
+{
+ struct cl_dfs_db *dfs_db = &cl_hw->dfs_db;
+
+ /* Make sure firmware sets the DFS registers */
+ cl_radar_flush(cl_hw);
+ cl_msg_tx_set_dfs(cl_hw, false, dfs_db->dfs_standard,
+ cl_hw->conf->ci_dfs_initial_gain, cl_hw->conf->ci_dfs_agc_cd_th);
+
+ ieee80211_radar_detected(cl_hw->hw);
+
+ return true;
+}
+
+bool cl_dfs_pulse_process(struct cl_hw *cl_hw, struct cl_radar_pulse *pulse, u8 pulse_cnt,
+ unsigned long time)
+{
+ struct cl_dfs_db *dfs_db = &cl_hw->dfs_db;
+
+ dfs_db->pulse_cnt += pulse_cnt;
+
+ if (dfs_db->dfs_standard == NL80211_DFS_FCC &&
+ cl_dfs_long_pulse_search(cl_hw, pulse, pulse_cnt, time)) {
+ dfs_pr_verbose(cl_hw, "DFS: Radar detected - long\n");
+ return cl_dfs_post_detection(cl_hw);
+ } else if (cl_dfs_short_pulse_search(cl_hw, pulse, pulse_cnt, time, dfs_db)) {
+ return cl_dfs_post_detection(cl_hw);
+ }
+
+ return false;
+}
+
+static void cl_dfs_set_min_pulse(struct cl_hw *cl_hw)
+{
+ int i;
+ struct cl_dfs_db *dfs_db = &cl_hw->dfs_db;
+
+ dfs_db->min_pulse_eeq = U8_MAX;
+
+ for (i = 0; i < dfs_db->radar_type_cnt; i++) {
+ if (dfs_db->radar_type[i].trig_count < dfs_db->min_pulse_eeq)
+ dfs_db->min_pulse_eeq = dfs_db->radar_type[i].trig_count;
+ }
+ dfs_db->min_pulse_eeq = max(dfs_db->min_pulse_eeq, (u8)CL_DFS_MIN_PULSE_TRIG);
+}
+
+static void cl_dfs_set_region(struct cl_hw *cl_hw, enum nl80211_dfs_regions std)
+{
+ struct cl_dfs_db *dfs_db = &cl_hw->dfs_db;
+
+ dfs_db->dfs_standard = std;
+
+ if (dfs_db->dfs_standard == NL80211_DFS_FCC) {
+ dfs_db->radar_type = radar_type_fcc;
+ dfs_db->radar_type_cnt = ARRAY_SIZE(radar_type_fcc);
+ } else {
+ dfs_db->radar_type = radar_type_etsi;
+ dfs_db->radar_type_cnt = ARRAY_SIZE(radar_type_etsi);
+ }
+}
+
+static void cl_dfs_start_cac(struct cl_dfs_db *db)
+{
+ db->cac.started = true;
+}
+
+static void cl_dfs_end_cac(struct cl_dfs_db *db)
+{
+ db->cac.started = false;
+}
+
+void cl_dfs_radar_listen_start(struct cl_hw *cl_hw)
+{
+ set_bit(CL_DEV_RADAR_LISTEN, &cl_hw->drv_flags);
+
+ cl_dfs_en(cl_hw, true);
+
+ dfs_pr_verbose(cl_hw, "DFS: Started radar listening\n");
+}
+
+void cl_dfs_radar_listen_end(struct cl_hw *cl_hw)
+{
+ clear_bit(CL_DEV_RADAR_LISTEN, &cl_hw->drv_flags);
+
+ cl_dfs_en(cl_hw, false);
+
+ dfs_pr_verbose(cl_hw, "DFS: Ended radar listening\n");
+}
+
+void cl_dfs_force_cac_start(struct cl_hw *cl_hw)
+{
+ bool is_listening = test_bit(CL_DEV_RADAR_LISTEN, &cl_hw->drv_flags);
+
+ cl_dfs_start_cac(&cl_hw->dfs_db);
+
+ /* Reset request state upon completion */
+ cl_dfs_request_cac(cl_hw, false);
+
+ /* Disable all the TX flow - be silent */
+ cl_tx_en(cl_hw, CL_TX_EN_DFS, false);
+
+ /* If for some reason we are still not listening for radar, start listening */
+ if (unlikely(!is_listening && cl_hw->hw->conf.radar_enabled))
+ cl_dfs_radar_listen_start(cl_hw);
+
+ dfs_pr_verbose(cl_hw, "DFS: CAC started\n");
+}
+
+void cl_dfs_force_cac_end(struct cl_hw *cl_hw)
+{
+ bool is_listening = test_bit(CL_DEV_RADAR_LISTEN, &cl_hw->drv_flags);
+
+ /* Enable all the TX flow */
+ cl_tx_en(cl_hw, CL_TX_EN_DFS, true);
+
+ /*
+ * If for some reason we are still listening and mac80211 no longer
+ * requires radar detection - disable it
+ */
+ if (unlikely(is_listening && !cl_hw->hw->conf.radar_enabled))
+ cl_dfs_radar_listen_end(cl_hw);
+
+ cl_dfs_end_cac(&cl_hw->dfs_db);
+
+ dfs_pr_verbose(cl_hw, "DFS: CAC ended\n");
+}
+
+bool __must_check cl_dfs_is_in_cac(struct cl_hw *cl_hw)
+{
+ return cl_hw->dfs_db.cac.started;
+}
+
+bool __must_check cl_dfs_radar_listening(struct cl_hw *cl_hw)
+{
+ return test_bit(CL_DEV_RADAR_LISTEN, &cl_hw->drv_flags);
+}
+
+bool __must_check cl_dfs_requested_cac(struct cl_hw *cl_hw)
+{
+ return cl_hw->dfs_db.cac.requested;
+}
+
+void cl_dfs_request_cac(struct cl_hw *cl_hw, bool should_do)
+{
+ cl_hw->dfs_db.cac.requested = should_do;
+}
+
+static void cl_dfs_edit_tbl(struct cl_hw *cl_hw, u8 row, u8 line, s16 val)
+{
+ struct cl_dfs_db *dfs_db = &cl_hw->dfs_db;
+
+ if (row >= dfs_db->radar_type_cnt) {
+ dfs_pr_err(cl_hw, "Invalid row number (%u) [0 - %u]\n", row,
+ dfs_db->radar_type_cnt - 1);
+ return;
+ }
+
+ if (line == 0 || line > CL_DFS_MAX_TBL_LINE) {
+ dfs_pr_err(cl_hw, "Invalid line number (%u) [1 - %u]\n", line,
+ CL_DFS_MAX_TBL_LINE);
+ return;
+ }
+
+ if (line == 1)
+ dfs_db->radar_type[row].min_width = (s32)val;
+ else if (line == 2)
+ dfs_db->radar_type[row].max_width = (s32)val;
+ else if (line == 3)
+ dfs_db->radar_type[row].tol_width = (s32)val;
+ else if (line == 4)
+ dfs_db->radar_type[row].min_pri = (s32)val;
+ else if (line == 5)
+ dfs_db->radar_type[row].max_pri = (s32)val;
+ else if (line == 6)
+ dfs_db->radar_type[row].tol_pri = (s32)val;
+ else if (line == 7)
+ dfs_db->radar_type[row].tol_freq = (s32)val;
+ else if (line == 8)
+ dfs_db->radar_type[row].min_burst = (u8)val;
+ else if (line == 9)
+ dfs_db->radar_type[row].ppb = (u8)val;
+ else if (line == 10)
+ dfs_db->radar_type[row].trig_count = (u8)val;
+ else if (line == 11)
+ dfs_db->radar_type[row].waveform = (enum cl_radar_waveform)val;
+
+ /* Recompute min_pulse_eeq in case trig_count was changed */
+ cl_dfs_set_min_pulse(cl_hw);
+}
+
+static void cl_dfs_tbl_overwrite_set(struct cl_hw *cl_hw)
+{
+ char *tok = NULL;
+ u8 param1 = 0;
+ u8 param2 = 0;
+ s16 param3 = 0;
+ char str[64];
+ char *strp = str;
+
+ if (strlen(cl_hw->conf->ce_dfs_tbl_overwrite) == 0)
+ return;
+
+ snprintf(str, sizeof(str), "%s", cl_hw->conf->ce_dfs_tbl_overwrite);
+
+ tok = strsep(&strp, ";");
+ while (tok) {
+ if (sscanf(tok, "%hhd,%hhd,%hd", &param1, &param2, &param3) == 3)
+ cl_dfs_edit_tbl(cl_hw, param1, param2, param3);
+ tok = strsep(&strp, ";");
+ }
+}
+
+void cl_dfs_init(struct cl_hw *cl_hw)
+{
+ struct cl_dfs_db *dfs_db = &cl_hw->dfs_db;
+ struct cl_tcv_conf *conf = cl_hw->conf;
+
+ if (!cl_band_is_5g(cl_hw))
+ return;
+
+ dfs_db->en = conf->ci_ieee80211h;
+
+ cl_hw->dfs_db.dbg_lvl = DBG_LVL_ERROR;
+
+ /*
+ * Set a minimum window size to avoid the case where the second interrupt
+ * within the burst resets the counter to 0. The value is the maximum of
+ * the jiffies unit and the max PRI in ms.
+ */
+ dfs_db->max_interrupt_diff = max(1000 / HZ, 2);
+
+ cl_dfs_set_region(cl_hw, cl_hw->channel_info.standard);
+ dfs_db->search_window = CL_DFS_PULSE_WINDOW;
+
+ cl_dfs_set_min_pulse(cl_hw);
+ cl_dfs_tbl_overwrite_set(cl_hw);
+}
+
+void cl_dfs_reinit(struct cl_hw *cl_hw)
+{
+ cl_dfs_init(cl_hw);
+}
+
+void cl_dfs_recovery(struct cl_hw *cl_hw)
+{
+ /* Re-enable DFS after recovery */
+ if (cl_dfs_is_in_cac(cl_hw)) {
+ cl_dfs_en(cl_hw, true);
+
+ /* If recovery happened during CAC make sure to disable beacon backup */
+ cl_tx_en(cl_hw, CL_TX_EN_DFS, false);
+ }
+}
+
+static bool cl_radar_handler(struct cl_hw *cl_hw, struct cl_radar_elem *radar_elem,
+ unsigned long time)
+{
+ /* Retrieve the radar pulses structure */
+ struct cl_radar_pulse_array *pulses = radar_elem->radarbuf_ptr;
+
+ cl_dfs_pulse_process(cl_hw, (struct cl_radar_pulse *)pulses->pulse, pulses->cnt, time);
+
+ return false;
+}
+
+static void cl_radar_tasklet(unsigned long data)
+{
+ struct cl_hw *cl_hw = (struct cl_hw *)data;
+ struct cl_radar_queue_elem *radar_elem = NULL;
+ unsigned long flags = 0;
+ bool radar_stat = false;
+
+ while (!list_empty(&cl_hw->radar_queue.head)) {
+ spin_lock_irqsave(&cl_hw->radar_queue.lock, flags);
+ radar_elem = list_first_entry(&cl_hw->radar_queue.head,
+ struct cl_radar_queue_elem, list);
+ list_del(&radar_elem->list);
+ spin_unlock_irqrestore(&cl_hw->radar_queue.lock, flags);
+
+ radar_stat = cl_radar_handler(radar_elem->cl_hw, &radar_elem->radar_elem,
+ radar_elem->time);
+
+ kfree(radar_elem->radar_elem.radarbuf_ptr);
+ kfree(radar_elem);
+ }
+
+ if (!test_bit(CL_DEV_STOP_HW, &cl_hw->drv_flags))
+ if (!radar_stat)
+ cl_irq_enable(cl_hw, cl_hw->ipc_e2a_irq.radar);
+}
+
+void cl_radar_init(struct cl_hw *cl_hw)
+{
+ INIT_LIST_HEAD(&cl_hw->radar_queue.head);
+
+ tasklet_init(&cl_hw->radar_tasklet, cl_radar_tasklet, (unsigned long)cl_hw);
+
+ spin_lock_init(&cl_hw->radar_queue.lock);
+}
+
+void cl_radar_push(struct cl_hw *cl_hw, struct cl_radar_elem *radar_elem)
+{
+ struct cl_radar_queue_elem *new_queue_elem = NULL;
+ u32 i;
+
+ new_queue_elem = kmalloc(sizeof(*new_queue_elem), GFP_ATOMIC);
+
+ if (new_queue_elem) {
+ new_queue_elem->radar_elem.radarbuf_ptr =
+ kmalloc(sizeof(*new_queue_elem->radar_elem.radarbuf_ptr), GFP_ATOMIC);
+
+ if (new_queue_elem->radar_elem.radarbuf_ptr) {
+ new_queue_elem->radar_elem.dma_addr = radar_elem->dma_addr;
+ new_queue_elem->radar_elem.radarbuf_ptr->cnt =
+ radar_elem->radarbuf_ptr->cnt;
+
+ /* Copy into local list */
+ for (i = 0; i < RADAR_PULSE_MAX; i++)
+ new_queue_elem->radar_elem.radarbuf_ptr->pulse[i] =
+ radar_elem->radarbuf_ptr->pulse[i];
+
+ new_queue_elem->time = jiffies_to_msecs(jiffies);
+ new_queue_elem->cl_hw = cl_hw;
+
+ spin_lock(&cl_hw->radar_queue.lock);
+ list_add_tail(&new_queue_elem->list, &cl_hw->radar_queue.head);
+ spin_unlock(&cl_hw->radar_queue.lock);
+ } else {
+ kfree(new_queue_elem);
+ }
+ }
+}
+
+void cl_radar_tasklet_schedule(struct cl_hw *cl_hw)
+{
+ tasklet_schedule(&cl_hw->radar_tasklet);
+}
+
+void cl_radar_flush(struct cl_hw *cl_hw)
+{
+ struct cl_radar_queue_elem *radar_elem = NULL;
+ unsigned long flags = 0;
+
+ spin_lock_irqsave(&cl_hw->radar_queue.lock, flags);
+
+ while (!list_empty(&cl_hw->radar_queue.head)) {
+ radar_elem = list_first_entry(&cl_hw->radar_queue.head,
+ struct cl_radar_queue_elem, list);
+ list_del(&radar_elem->list);
+ kfree(radar_elem->radar_elem.radarbuf_ptr);
+ kfree(radar_elem);
+ }
+
+ spin_unlock_irqrestore(&cl_hw->radar_queue.lock, flags);
+}
+
+void cl_radar_close(struct cl_hw *cl_hw)
+{
+ cl_radar_flush(cl_hw);
+ tasklet_kill(&cl_hw->radar_tasklet);
+}
+
--
2.36.1


2022-05-25 07:59:54

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 35/96] cl8k: add ipc_shared.h

From: Viktor Barna <[email protected]>

(Part of the split. Please, take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/ipc_shared.h | 1386 +++++++++++++++++
1 file changed, 1386 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/ipc_shared.h

diff --git a/drivers/net/wireless/celeno/cl8k/ipc_shared.h b/drivers/net/wireless/celeno/cl8k/ipc_shared.h
new file mode 100644
index 000000000000..b8560bc632c7
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/ipc_shared.h
@@ -0,0 +1,1386 @@
+/* SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause */
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#ifndef CL_IPC_SHARED_H
+#define CL_IPC_SHARED_H
+
+#include <net/mac80211.h>
+
+#include "def.h"
+
+/** DOC: IPC - introduction
+ *
+ * IPC layer between the FW (XMAC -> LMAC, SMAC, UMAC) and the driver. The
+ * driver talks to the lower layer via custom IPC messages and DMA. The
+ * drv <-> fw message flow consists of %cl_fw_msg, which contains the
+ * direction, the message id (an enum field of %mm_msg_tag or %dbg_msg_tag)
+ * and the payload itself.
+ *
+ * Messages may be synchronous (with a confirmation feedback) or
+ * asynchronous. The latter is typically used as an indication that some
+ * event has occurred.
+ *
+ * Driver LMAC/SMAC
+ * + +
+ * | AA_REQ |
+ * |-------------------->|...+ (Request)
+ * | AA_CFM | | Mandatory control messages
+ * |<--------------------|...+ (Confirmation)
+ * | |
+ * . BB_IND .
+ * |<--------------------|... Asynchronous indication
+ *
+ * Messages use preallocated buffers with a size limit of %IPC_RXBUF_SIZE.
+ * Each message may carry a verification pattern that allows validating the
+ * payload. The most important TX/RX flow operations are tracked and
+ * reflected in statistics (such as the &cl_rx_path_info structure).
+ */
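+
+/*
+ * Illustration only (hypothetical names, not part of this header): a
+ * synchronous exchange as drawn above boils down to the driver sending a
+ * request message and then blocking until the matching confirmation
+ * arrives, e.g.:
+ *
+ *	cl_send_request(cl_hw, MM_AA_REQ, &req);
+ *	wait_event_timeout(cl_hw->wait_queue, aa_cfm_received, HZ);
+ */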
+
+/*
+ * Number of Host buffers available for Data Rx handling (through DMA)
+ * Must correspond to the FW code definition, and must be a power of 2.
+ */
+#define IPC_RXBUF_CNT_RXM 2048
+#define IPC_RXBUF_CNT_FW 128
+
+/* Bucket debug */
+#define IPC_RXBUF_BUCKET_POW_SIZE 5
+#define IPC_RXBUF_BUCKET_SIZE BIT(IPC_RXBUF_BUCKET_POW_SIZE) /* 2 ^ 5 = 32 */
+#define IPC_RXBUF_NUM_BUCKETS_RXM (IPC_RXBUF_CNT_RXM / IPC_RXBUF_BUCKET_SIZE)
+#define IPC_RXBUF_NUM_BUCKETS_FW (IPC_RXBUF_CNT_FW / IPC_RXBUF_BUCKET_SIZE)
+
+#define MU_MAX_STREAMS 8
+#define MU_MAX_SECONDARIES (MU_MAX_STREAMS - 1)
+
+#define CL_MU0_IDX 0
+#define CL_MU1_IDX 1
+#define CL_MU2_IDX 2
+#define CL_MU3_IDX 3
+#define CL_MU4_IDX 4
+#define CL_MU5_IDX 5
+#define CL_MU6_IDX 6
+#define CL_MU7_IDX 7
+#define CL_MU_IDX_MAX CL_MU7_IDX
+
+#define IPC_TX_QUEUE_CNT 5
+
+#define IPC_MAX_BA_SESSIONS 128
+
+#if IPC_MAX_BA_SESSIONS > CL_MAX_NUM_STA
+#define IPC_MAX_TIM_TX_OR_RX_AGG_SIZE IPC_MAX_BA_SESSIONS
+#else
+#define IPC_MAX_TIM_TX_OR_RX_AGG_SIZE CL_MAX_NUM_STA
+#endif
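+/* I.e. IPC_MAX_TIM_TX_OR_RX_AGG_SIZE == max(IPC_MAX_BA_SESSIONS, CL_MAX_NUM_STA) */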
+
+#define IPC_QUEUE_IDX_DIFF_ARRAY_SIZE 6
+
+#define IPC_TIM_AGG_SIZE (IPC_MAX_TIM_TX_OR_RX_AGG_SIZE * 2)
+
+#define IPC_TX_QUEUE_IDX_TO_COMMON_QUEUE_IDX(idx) ((idx) * 2)
+
+#define IPC_RX_QUEUE_IDX_TO_COMMON_QUEUE_IDX(idx) (((idx) * 2) + 1)
+
+#define CL_MAX_BA_PHYSICAL_QUEUE_CNT (AC_MAX + MU_MAX_SECONDARIES)
+#define CE_AC_MAX (IPC_TX_QUEUE_CNT + MU_MAX_SECONDARIES)
+
+enum {
+ AGG_AC0_IDX = AC_BK,
+ AGG_AC1_IDX = AC_BE,
+ AGG_AC2_IDX = AC_VI,
+ AGG_AC3_IDX = AC_VO,
+ AGG_MU1_IDX,
+ AGG_MU2_IDX,
+ AGG_MU3_IDX,
+ AGG_MU4_IDX,
+ AGG_MU5_IDX,
+ AGG_MU6_IDX,
+ AGG_MU7_IDX,
+ AGG_IDX_MAX,
+};
+
+#define DBG_DUMP_BUFFER_SIZE (1024 * 40)
+
+#define IPC_TXDESC_CNT_SINGLE 16
+#define IPC_TXDESC_CNT_BCMC 16
+
+/* Max count of Tx MSDU in A-MSDU */
+#define CL_AMSDU_TX_PAYLOAD_MAX 4
+
+#define TXDESC_AGG_Q_SIZE_MAX 512
+
+#define CL_MAX_AGG_IN_TXOP 20
+
+/* Keep LMAC & SMAC debug agg stats arrays size aligned */
+#define DBG_STATS_MAX_AGG_SIZE (256 + 1)
+
+/* Must be power of 2 */
+#define IPC_CFM_CNT 4096
+
+#define IPC_CFM_SIZE (IPC_CFM_CNT * sizeof(struct cl_ipc_cfm_msg))
+
+/* Number of rates in Policy table */
+#define CL_RATE_CONTROL_STEPS 4
+
+/*
+ * Stringified DRV/FW versions must be small enough to fit the related
+ * ethtool descriptor size (32)
+ */
+#define CL_VERSION_STR_SIZE 32
+
+#if (IPC_CFM_CNT & (IPC_CFM_CNT - 1))
+#error "IPC_CFM_CNT Not a power of 2"
+#endif
+
+/*
+ * The calculation is conducted as follows:
+ * 1500 - max Ethernet frame
+ * Conversion of ETH to MSDU:
+ * 1500[eth max] - 12[hdr frame] + 14[msdu frame] + 8[llc snap] + 4[MSDU padding] = 1514
+ * MSDU + WLAN HDR = 1514[MSDU max] + 36[max WLAN HDR] = 1550
+ * 2 bytes are padded by the SKB allocation for alignment.
+ * 18 bytes of encryption overhead.
+ * Plus sizeof(struct hw_rxhdr), giving 1550 + 2 + 18 = 1570 + sizeof(struct hw_rxhdr).
+ */
+#define IPC_RXBUF_SIZE (1570 + sizeof(struct hw_rxhdr))
+
+/* Number of available host buffers */
+#define IPC_RADAR_BUF_CNT 32
+#define IPC_E2A_MSG_BUF_CNT 128
+#define IPC_DBG_BUF_CNT 64
+
+/* Length used in MSGs structures (size in 4-byte words) */
+#define IPC_A2E_MSG_BUF_SIZE 255
+#define IPC_E2A_MSG_PARAM_SIZE 63
+
+/* Debug messages buffers size (in bytes) */
+#define IPC_DBG_PARAM_SIZE 256
+
+/* Pattern indication for validity */
+#define IPC_RX_DMA_OVER_PATTERN 0xAAAAAA00
+#define IPC_E2A_MSG_VALID_PATTERN 0xADDEDE2A
+#define IPC_DBG_VALID_PATTERN 0x000CACA0
+#define IPC_EXCEPTION_PATTERN 0xDEADDEAD
+
+#define HB_POOL_DMA_DESCS_NUM 2
+
+/* Tensilica backtrace depth */
+#define IPC_BACKTRACT_DEPTH 5
+
+/* Maximum length of the SW diag trace */
+#define DBG_SW_DIAG_MAX_LEN 1024
+
+/* Maximum length of the error trace */
+#define DBG_ERROR_TRACE_SIZE 256
+
+/* Number of MAC diagnostic port banks */
+#define DBG_DIAGS_MAC_MAX 48
+
+/* Driver mem size used for THDs PTs & PBDs */
+#define DBG_THD_CHAINS_INFO_THD_CNT 5
+#define DBG_THD_CHAINS_INFO_PBD_CNT 9
+#define DBG_THD_CHAINS_INFO_PT_CNT 1
+#define DBG_THD_CHAINS_INFO_ARRAY_SIZE \
+ ((DBG_THD_CHAINS_INFO_THD_CNT * sizeof(struct tx_hd)) + \
+ (DBG_THD_CHAINS_INFO_PBD_CNT * sizeof(struct tx_pbd)) + \
+ (DBG_THD_CHAINS_INFO_PT_CNT * sizeof(struct tx_policy_tbl)))
+
+#define DBG_CHAINS_INFO_ELEM_CNT 10
+
+/* Txl chain info - per ac */
+#define DBG_TXL_FRAME_EXCH_TRACE_DEPTH 5
+
+/* FW debug trace size */
+#define DBG_FW_TRACE_SIZE 30
+#define DBG_FW_TRACE_STR_MAX 20
+
+/* Number of embedded logic analyzers */
+#define LA_CNT 1
+
+/* Length of the configuration data of a logic analyzer */
+#define LA_CONF_LEN 102
+
+/* Structure containing the configuration data of a logic analyzer */
+struct la_conf_tag {
+ u32 conf[LA_CONF_LEN];
+};
+
+/* Size of a logic analyzer memory */
+#define LA_MEM_LEN (256 * 1024)
+
+/* Message structure for MSGs from Emb to App */
+struct cl_ipc_e2a_msg {
+ __le16 id;
+ __le16 dummy_dest_id;
+ __le16 dummy_src_id;
+ __le16 param_len;
+ u32 param[IPC_E2A_MSG_PARAM_SIZE];
+ __le32 pattern;
+};
+
+enum rx_buf_type {
+ CL_RX_BUF_RXM,
+ CL_RX_BUF_FW,
+ CL_RX_BUF_MAX
+};
+
+/*
+ * Structs & functions associated with HW & SW debug data.
+ * The debug information is forwarded to the host when an error occurs, and printed to stdout.
+ * This data must be consistent with the firmware; any new debug data should also exist on
+ * the firmware side.
+ */
+
+struct tx_hd {
+ u32 upatterntx;
+ u32 nextfrmexseq_ptr;
+ u32 nextmpdudesc_ptr;
+ u32 first_pbd_ptr;
+ u32 datastartptr;
+ u32 dataendptr;
+ u32 frmlen;
+ u32 spacinginfo;
+ u32 phyctrlinfo1;
+ u32 policyentryaddr;
+ u32 bar_thd_desc_ptr;
+ u32 reserved1;
+ u32 macctrlinfo1;
+ u32 macctrlinfo2;
+ u32 statinfo;
+ u32 phyctrlinfo2;
+};
+
+struct tx_policy_tbl {
+ u32 upatterntx;
+ u32 phycntrlinfo1;
+ u32 phycntrlinfo2;
+ u32 maccntrlinfo1;
+ u32 maccntrlinfo2;
+ u32 ratecntrlinfo[CL_RATE_CONTROL_STEPS];
+ u32 phycntrlinfo3;
+ u32 phycntrlinfo4;
+ u32 phycntrlinfo5;
+ u32 stationinfo;
+ u32 ratecntrlinfohe[CL_RATE_CONTROL_STEPS];
+ u32 maccntrlinfo3;
+ u32 triggercommoninfo;
+ u32 triggerinforuallocationu0u3;
+ u32 triggerinforuallocationu4u7;
+ u32 triggerperuserinfo[MU_MAX_STREAMS];
+};
+
+struct tx_pbd {
+ u32 upatterntx;
+ u32 next;
+ u32 datastartptr;
+ u32 dataendptr;
+ u32 bufctrlinfo;
+};
+
+enum cl_macfw_dbg_severity {
+ CL_MACFW_DBG_SEV_NONE,
+ CL_MACFW_DBG_SEV_ERROR,
+ CL_MACFW_DBG_SEV_WARNING,
+ CL_MACFW_DBG_SEV_INFO,
+ CL_MACFW_DBG_SEV_VERBOSE,
+
+ CL_MACFW_DBG_SEV_MAX
+};
+
+struct phy_channel_info {
+ __le32 info1;
+ __le32 info2;
+};
+
+struct dbg_debug_info_tag {
+ u32 error_type; /* (0: recoverable, 1: fatal) */
+ u32 hw_diag;
+ char error[DBG_ERROR_TRACE_SIZE];
+ u32 sw_diag_len;
+ char sw_diag[DBG_SW_DIAG_MAX_LEN];
+ struct phy_channel_info chan_info;
+ struct la_conf_tag la_conf[LA_CNT];
+ u16 diags_mac[DBG_DIAGS_MAC_MAX];
+};
+
+/*
+ * Defines, enums and structs below are used in the TX statistics
+ * structure.
+ * Because the TX statistics structure must be identical on the
+ * host and in the firmware, changing these parameters requires
+ * matching firmware changes.
+ */
+
+struct cl_txl_ba_statistics {
+ u32 total_cnt;
+ u32 total_rtx_cnt;
+ u16 total_ba_received;
+ u16 total_ba_not_received_cnt;
+ u16 total_lifetime_expired_cnt;
+ u16 total_rtx_limit_reached;
+ u16 total_packets_below_baw;
+ u16 total_packets_above_baw;
+ u16 total_ba_not_received_cnt_ps;
+ u16 total_cleard_ba;
+ u16 total_unexpected_ba;
+ u16 total_invalid_ba;
+ u16 total_ack_instead_ba;
+};
+
+struct cl_txl_single_statistics {
+ u32 total_cnt;
+ u32 total_rtx_cnt;
+ u16 total_lifetime_expired_cnt;
+ u16 total_rtx_limit_reached;
+ u16 total_rtx_limit_reached_ps;
+};
+
+enum {
+ CE_BACKOFF_25,
+ CE_BACKOFF_50,
+ CE_BACKOFF_100,
+ CE_BACKOFF_500,
+ CE_BACKOFF_1000,
+ CE_BACKOFF_5000,
+ CE_BACKOFF_10000,
+ CE_BACKOFF_20000,
+ CE_BACKOFF_20000_ABOVE,
+ CE_BACKOFF_MAX
+};
+
+struct cl_txl_backoff_statistics {
+ u32 chain_time;
+ u32 backoff_hist[CE_BACKOFF_MAX];
+};
+
+struct cl_txl_tid_statistics {
+ u32 total_tid_desc_cnt;
+};
+
+/* NATT closed an aggregation due to one of the reasons below. */
+enum {
+ NATT_REASON_MAX_LEN = 0x1,
+ NATT_REASON_TXOP_LIMIT = 0x2,
+ NATT_REASON_MPDU_NUM = 0x4,
+ NATT_REASON_LAST_BIT = 0x8,
+ NATT_REASON_MAX
+};
+
+/* Tx BW */
+enum {
+ NATT_BW_20,
+ NATT_BW_40,
+ NATT_BW_80,
+ NATT_BW_160,
+ NATT_BW_MAX
+};
+
+struct cl_txl_natt_statistics {
+ u32 agg_close_reason[NATT_REASON_MAX];
+ u32 chosen_frame_bw[NATT_BW_MAX];
+ u32 operation_mode[8];
+};
+
+enum {
+ AGG_IN_TXOP_CLOSE_REASON_NO_TXDESC,
+ AGG_IN_TXOP_CLOSE_REASON_TXOP_EXPIRED,
+ AGG_IN_TXOP_CLOSE_REASON_ACTIVE_DELBA,
+ AGG_IN_TXOP_CLOSE_REASON_MAX
+};
+
+struct amsdu_stat {
+ u16 packet_cnt_2;
+ u16 packet_cnt_3;
+ u16 packet_cnt_4;
+ u16 packet_cnt_5_or_more;
+};
+
+struct cl_txl_mu_statistics {
+ u16 chain_cnt;
+ u16 status_cnt;
+ u16 ba_received;
+ u16 ba_no_received;
+ u16 clear_ba;
+ u16 correct_ba;
+ u16 unexpected_ba;
+ u16 invalid_ba;
+};
+
+struct cl_txl_agg_statistics {
+ u16 agg_size_statistics[DBG_STATS_MAX_AGG_SIZE];
+ u32 packet_failed_statistics[DBG_STATS_MAX_AGG_SIZE];
+ u32 packet_passed_statistics[DBG_STATS_MAX_AGG_SIZE];
+ u16 htp_agg_size_statistics[DBG_STATS_MAX_AGG_SIZE];
+ u32 htp_packet_failed_statistics[DBG_STATS_MAX_AGG_SIZE];
+ u32 htp_packet_passed_statistics[DBG_STATS_MAX_AGG_SIZE];
+ u16 agg_in_txop_statistics[CL_MAX_AGG_IN_TXOP];
+ u16 agg_in_txop_close_reason[AGG_IN_TXOP_CLOSE_REASON_MAX];
+ u16 agg_in_txop_queue_switch;
+ u16 agg_in_txop_queue_switch_abort_bw;
+ struct amsdu_stat amsdu_stat[IPC_MAX_BA_SESSIONS];
+ u16 mu_agg_size_statistics[MU_MAX_STREAMS][DBG_STATS_MAX_AGG_SIZE];
+ struct cl_txl_mu_statistics mu_stats[MU_MAX_STREAMS];
+};
+
+struct cl_txl_ac_statistics {
+ u32 total_q_switch_cnt;
+ u32 total_ac_desc_cnt;
+};
+
+struct cl_txl_underrun_statistics {
+ u16 length_cnt;
+ u16 pattern_cnt;
+ u16 flushed_frames_cnt;
+};
+
+struct cl_txl_rts_cts_statistics {
+ u32 fw_rts_cnt;
+ u32 fw_rts_retry_cnt;
+ u32 fw_rts_retry_limit_cnt;
+ u32 hw_rts_cnt;
+ u32 fw_cts_cnt;
+ u32 hw_cts_cnt;
+};
+
+struct cl_txl_backoff_params {
+ u32 singelton_total[AC_MAX];
+ u32 singelton_cnt[AC_MAX];
+ u32 agg_total[AC_MAX];
+ u32 agg_cnt[AC_MAX];
+};
+
+struct cl_txl_htp_statistics {
+ u32 total_cnt[TID_MAX];
+ u32 need_response;
+ u32 tb_response_required;
+ u32 ac_not_found;
+ u32 end_of_packet_int;
+ u32 tb_bw_decision;
+ u32 tb_ba_thd_removed;
+ u32 tb_ac_unchain;
+ u32 tb_htp_unchain;
+ u32 tb_dummy_htp_tx;
+ u32 tb_dummy_no_tx;
+ u32 msta_ba_received;
+ u32 msta_ba_aid_not_found;
+ u32 uora_cnt_trigger_frame_tx;
+ u32 uora_cnt_trigger_frame_rx;
+ u32 uora_cnt_probe_req_tx;
+ u32 uora_cnt_probe_req_rx;
+};
+
+struct cl_txl_vns_statistics {
+ u32 off_he;
+ u32 off_ht_vht;
+ u32 off_ofdm;
+ u32 off_cck;
+ u32 on_he;
+ u32 on_ht_vht;
+ u32 on_ofdm;
+ u32 on_cck;
+};
+
+struct cl_txl_fec_statistics {
+ u32 ldpc;
+ u32 bcc;
+};
+
+struct cl_txl_mu_desision_statistics {
+ u32 num_sta_in_mu_group[MU_MAX_STREAMS];
+ u32 mu_tx_active;
+ u32 prim_not_in_mu_group;
+ u32 prim_rate_invalid;
+ u32 other_reason;
+ u32 total_num_su;
+ u32 is_2nd_rate_invalid;
+ u32 is_2nd_awake;
+ u32 is_2nd_enouhg_data;
+};
+
+struct cl_txl_statistics {
+ u32 type; /* This field should be first in the struct */
+ u32 recovery_count;
+ u32 tx_obtain_bw_fail_cnt;
+ struct cl_txl_single_statistics single[IPC_TX_QUEUE_CNT];
+ struct cl_txl_ba_statistics ba[IPC_MAX_BA_SESSIONS];
+ struct cl_txl_backoff_statistics backoff_stats[AC_MAX];
+ struct cl_txl_natt_statistics natt;
+ struct cl_txl_tid_statistics tid[TID_MAX];
+ struct cl_txl_agg_statistics agg;
+ struct cl_txl_ac_statistics ac[CE_AC_MAX];
+ struct cl_txl_underrun_statistics underrun;
+ struct cl_txl_rts_cts_statistics rts_cts;
+ struct cl_txl_backoff_params backoff_params;
+ struct cl_txl_htp_statistics htp;
+ struct cl_txl_vns_statistics vns;
+ struct cl_txl_fec_statistics fec;
+ struct cl_txl_mu_desision_statistics mu_desision;
+};
+
+/* Flushed beacon list options */
+enum {
+ BCN_FLUSH_PENDING,
+ BCN_FLUSH_DOWNLOADING,
+ BCN_FLUSH_TRANSMITTING,
+ BCN_FLUSH_MAX,
+};
+
+struct bcn_backup_stats {
+ u32 bcn_backup_used_cnt;
+ u32 bcn_backup_tx_cnt;
+ u32 bcn_backup_flushed_cnt;
+ u32 bcn_backup_used_in_arow_cnt;
+ u32 bcn_backup_max_used_in_arow_cnt;
+};
+
+struct beacon_timing {
+ /* Time measurements between beacons */
+ u32 last_bcn_start_time;
+ u32 max_time_from_last_bcn;
+ u32 min_time_from_last_bcn;
+ u32 total_bcn_time;
+ /* Time measurements until beacon chained */
+ u32 imp_tbtt_start_time;
+ u32 bcn_chain_total_time;
+ u32 bcn_chain_max_time;
+ u32 bcn_chain_min_time;
+ /* Time measurements until received beacon from host */
+ u32 bcn_last_request_time;
+ u32 max_bcn_time_until_get_beacon_from_driver_in_tbtt;
+ u32 bcn_last_req_rec_time;
+ /* Time measurements of bcn from pending to chain */
+ u32 bcn_push_pending_start_time;
+ u32 bcn_pending_2_chain_max_time;
+};
+
+struct beacon_counters {
+ u32 bcn_time_from_driver_not_in_threshold_cnt;
+ u32 nof_time_intervals_between_beacons;
+ u32 total_cnt;
+ u32 bcn_chain_total_cnt;
+ u32 ce_txl_flushed_beacons[BCN_FLUSH_MAX];
+ u32 pending2chain_not_in_threshold_cnt;
+ u32 total_beacons_received_from_driver;
+};
+
+struct cl_bcn_statistics {
+ u32 type; /* This field should be first in the struct */
+ struct beacon_counters beacon_counters;
+ struct beacon_timing beacon_timing;
+ struct bcn_backup_stats bcn_backup_stats;
+};
+
+struct cl_queue_idx_dif_stats {
+ u32 type; /* This field should be first in the struct */
+ u32 diff_array_count[IPC_QUEUE_IDX_DIFF_ARRAY_SIZE];
+ u32 last_diff;
+ u32 wr_idx;
+ u32 rd_idx;
+};
+
+enum agg_tx_rate_drop_reason {
+ AGG_TX_RATE_DROP_MAX_BA_NOT_RECEIVED_REACHED,
+ AGG_TX_RATE_DROP_MAX_BA_PER_REACHED,
+ AGG_TX_RATE_DROP_MAX_RETRY_REACHED,
+ AGG_TX_RATE_DROP_MAX,
+};
+
+struct cl_rate_drop_statistics {
+ u32 type;
+ u32 drop_reason[AGG_TX_RATE_DROP_MAX];
+};
+
+#define BF_DB_MAX 16
+
+enum bfr_rx_err {
+ BFR_RX_ERR_BW_MISMATCH,
+ BFR_RX_ERR_NSS_MISMATCH,
+ BFR_RX_ERR_SOUNDING_CHBW,
+ BFR_RX_ERR_TOKEN_MISMATCH,
+ BFR_RX_ERR_NDP_DROP,
+ BFR_SEGMENTED_DROP,
+ BFR_RX_ERR_MISS_ACK,
+ BFR_RX_ERR_RESOURCE_NA,
+ BFR_RX_ERR_MAX
+};
+
+enum TX_BF_DATA_STAT {
+ TX_BF_DATA_OK = 0,
+ TX_BF_DATA_BUFFERED_RESOURCE_ERR,
+ TX_BF_DATA_RELEASED_RESOURCE_ERR,
+ TX_BF_DATA_BUFFERED_PS_STA,
+ TX_BF_DATA_RELEASED_PS_STA,
+ TX_BF_DATA_ERR_BFR_MISS,
+ TX_BF_DATA_ERR_BFR_OUTDATED,
+ TX_BF_DATA_ERR_MISMATCH_BW,
+ TX_BF_DATA_ERR_MISMATCH_NSS,
+ TX_BF_DATA_ERR_MAX
+};
+
+struct bf_ctrl_dbg {
+ u16 ndp_cnt;
+ u16 bfp_cnt;
+ u16 su_bfr_cnt;
+ u16 mu_bfr_cnt;
+ u16 bf_invalid_cnt[BFR_RX_ERR_MAX];
+ u16 tx_bf_data_err[TX_BF_DATA_ERR_MAX];
+};
+
+struct bf_stats_database {
+ bool is_active_list;
+ struct bf_ctrl_dbg dbg;
+ u8 sta_idx;
+ u16 active_dsp_idx;
+ u16 passive_dsp_idx;
+};
+
+struct cl_bf_statistics {
+ u32 type;
+ bool print_active_free_list;
+ struct bf_stats_database stats_data[BF_DB_MAX];
+};
+
+enum amsdu_deaggregation_err {
+ AMSDU_DEAGGREGATION_ERR_MAX_MSDU_REACH,
+ AMSDU_DEAGGREGATION_ERR_MSDU_GT_AMSDU_LEN,
+ AMSDU_DEAGGREGATION_ERR_MSDU_LEN,
+ AMSDU_ENCRYPTION_KEY_GET_ERR,
+
+ AMSDU_DEAGGREGATION_ERR_MAX
+};
+
+enum emb_ll1_handled_frm_type {
+ BA_FRM_TYPE,
+ NDPA_FRM_TYPE,
+ NDP_FRM_TYPE,
+ ACTION_NO_ACK_FRM_TYPE,
+
+ MAX_HANDLED_FRM_TYPE
+};
+
+enum rhd_decr_idx {
+ RHD_DECR_UNENC_IDX,
+ RHD_DECR_ICVFAIL_IDX,
+ RHD_DECR_CCMPFAIL_IDX,
+ RHD_DECR_AMSDUDISCARD_IDX,
+ RHD_DECR_NULLKEY_IDX,
+ RHD_DECR_IDX_MAX
+};
+
+#define RX_CLASSIFICATION_MAX 6
+#define FREQ_OFFSET_TABLE_IDX_MAX 8 /* Must be a power of 2 */
+#define RX_MAX_MSDU_IN_SINGLE_AMSDU 16
+
+struct cl_rxl_statistics {
+ u32 type; /* This field should be first in the struct */
+ u32 rx_imp_bf_counter[MU_UL_MAX];
+ u32 rx_imp_bf_int_counter[MU_UL_MAX];
+ u32 rx_class_counter[MU_UL_MAX][RX_CLASSIFICATION_MAX];
+ u32 rx_class_int_counter[MU_UL_MAX];
+ u32 counter_timer_trigger_ll1[MU_UL_MAX];
+ u32 counter_timer_trigger_ll2[MU_UL_MAX];
+ u32 total_rx_packets[MU_UL_MAX];
+ u32 total_agg_packets[MU_UL_MAX];
+ u32 rx_fifo_overflow_err_cnt[MU_UL_MAX];
+ u32 rx_dma_discard_cnt;
+ u32 host_rxelem_not_ready_cnt;
+ u32 msdu_host_rxelem_not_ready_cnt;
+ u32 dma_rx_pool_not_ready_cnt;
+ u32 rx_pckt_exceed_max_len_cnt[MU_UL_MAX];
+ u32 rx_pckt_bad_ba_statinfo_cnt[MU_UL_MAX];
+ u32 nav_value[MU_UL_MAX];
+ u16 max_mpdu_data_len[MU_UL_MAX];
+ u8 rhd_ll2_max_cnt[MU_UL_MAX]; /* Rhd first list */
+ u8 rhd_ll1_max_cnt[MU_UL_MAX]; /* Rhd second list */
+ u8 cca_busy_percent;
+ u8 rx_mine_busy_percent;
+ u8 tx_mine_busy_percent;
+ u8 sample_cnt;
+ /* Emb handled frames */
+ u32 emb_ll1_handled_frame_counter[MU_UL_MAX][MAX_HANDLED_FRM_TYPE];
+ u32 rxm_stats_overflow[MU_UL_MAX];
+ /* RX AMSDU statistics counters */
+ u32 stats_tot_rx_amsdu_cnt[MU_UL_MAX];
+ u32 stats_rx_amsdu_cnt[MU_UL_MAX][RX_MAX_MSDU_IN_SINGLE_AMSDU];
+ u32 stats_rx_amsdu_err[MU_UL_MAX][AMSDU_DEAGGREGATION_ERR_MAX];
+ u32 stats_rx_format[FORMATMOD_MAX];
+ /* RX decryption error */
+ u32 decrypt_err[RHD_DECR_IDX_MAX];
+ u32 rx_incorrect_format_mode[MU_UL_MAX];
+ u32 fcs_error_counter[MU_UL_MAX];
+ u32 phy_error_counter[MU_UL_MAX];
+ u32 ampdu_received_counter[MU_UL_MAX];
+ u32 delimiter_error_counter[MU_UL_MAX];
+ u32 ampdu_incorrect_received_counter[MU_UL_MAX];
+ u32 correct_received_mpdu[MU_UL_MAX];
+ u32 incorrect_received_mpdu[MU_UL_MAX];
+ u32 discarded_mpdu[MU_UL_MAX];
+ u32 incorrect_delimiter[MU_UL_MAX];
+ u32 rxm_mpdu_cnt[MU_UL_MAX];
+ u32 rxm_rule0_match[MU_UL_MAX];
+ u32 rxm_rule1_match[MU_UL_MAX];
+ u32 rxm_rule2_match[MU_UL_MAX];
+ u32 rxm_rule3_match[MU_UL_MAX];
+ u32 rxm_rule4_match[MU_UL_MAX];
+ u32 rxm_rule5_match[MU_UL_MAX];
+ u32 rxm_rule6_match[MU_UL_MAX];
+ u32 rxm_default_rule_match[MU_UL_MAX];
+ u32 rxm_amsdu_1[MU_UL_MAX];
+ u32 rxm_amsdu_2[MU_UL_MAX];
+ u32 rxm_amsdu_3[MU_UL_MAX];
+ u32 rxm_amsdu_4[MU_UL_MAX];
+ u32 rxm_amsdu_5_15[MU_UL_MAX];
+ u32 rxm_amsdu_16_or_more[MU_UL_MAX];
+ u32 frequency_offset[FREQ_OFFSET_TABLE_IDX_MAX];
+ u32 frequency_offset_idx;
+ u32 rts_bar_cnt[MU_UL_MAX];
+};
+
+enum trigger_flow_single_trigger_type {
+ TRIGGER_FLOW_BASIC_TRIGGER_TYPE,
+ TRIGGER_FLOW_BSRP_TYPE,
+ TRIGGER_FLOW_BFRP_TYPE,
+ TRIGGER_FLOW_MAX
+};
+
+struct cl_trigger_flow_statistics {
+ u32 type; /* This field should be first in the struct */
+ u32 single_trigger_sent[TRIGGER_FLOW_MAX][AC_MAX];
+ u32 htp_rx_failure[AC_MAX];
+ u32 trigger_based_mpdu[MU_UL_MAX];
+};
+
+#define DYN_CAL_DEBUG_NUM_ITER 3
+
+struct dyn_cal_debug_info_t {
+ u16 calib_num;
+ u8 curr_config;
+ union {
+ struct {
+ u8 iter_num;
+ u32 measured_val;
+ };
+ struct {
+ u8 min_config;
+ u32 dyn_cal_min_val;
+ u32 dyn_cal_max_val;
+ u8 max_config;
+ };
+ };
+
+ u8 new_config;
+};
+
+struct cl_dyn_calib_statistics {
+ u32 type; /* This field should be first in the struct */
+ u8 is_multi_client_mode;
+ u8 default_dyn_cal_val;
+ u8 dyn_cal_debug_info_ix;
+ u16 dyn_cal_process_cnt;
+ u16 mac_phy_sync_err_cnt;
+ struct dyn_cal_debug_info_t dyn_cal_debug_info[DYN_CAL_DEBUG_NUM_ITER];
+};
+
+/* End of parameters that require host changes */
+
+/* Structure containing the parameters for assert prints DBG_PRINT_IND message. */
+struct dbg_print_ind {
+ __le16 file_id;
+ __le16 line;
+ __le16 has_param;
+ u8 err_no_dump;
+ u8 reserved;
+ __le32 param;
+};
+
+/* 4 ACs + BCN + HTP + current THD */
+#define MACHW_THD_REGS_CNT (IPC_TX_QUEUE_CNT + 2)
+
+/* Enumeration of MAC-HW registers (debug dump at recovery event) */
+enum {
+ HAL_MACHW_AGGR_STATUS,
+ HAL_MACHW_DEBUG_HWSM_1,
+ HAL_MACHW_DEBUG_HWSM_2,
+ HAL_MACHW_DEBUG_HWSM_3,
+ HAL_MACHW_DMA_STATUS_1,
+ HAL_MACHW_DMA_STATUS_2,
+ HAL_MACHW_DMA_STATUS_3,
+ HAL_MACHW_DMA_STATUS_4,
+ HAL_MACHW_RX_HEADER_H_PTR,
+ HAL_MACHW_RX_PAYLOAD_H_PTR,
+ HAL_MACHW_DEBUG_BCN_S_PTR,
+ HAL_MACHW_DEBUG_AC0_S_PTR,
+ HAL_MACHW_DEBUG_AC1_S_PTR,
+ HAL_MACHW_DEBUG_AC2_S_PTR,
+ HAL_MACHW_DEBUG_AC3_S_PTR,
+ HAL_MACHW_DEBUG_HTP_S_PTR,
+ HAL_MACHW_DEBUG_TX_C_PTR,
+ HAL_MACHW_DEBUG_RX_HDR_C_PTR,
+ HAL_MACHW_DEBUG_RX_PAY_C_PTR,
+ HAL_MACHW_MU0_TX_POWER_LEVEL_DELTA_1,
+ HAL_MACHW_MU0_TX_POWER_LEVEL_DELTA_2,
+ HAL_MACHW_POWER_BW_CALIB_FACTOR,
+ HAL_MACHW_TX_POWER_ANTENNA_FACTOR_1_ADDR,
+ HAL_MACHW_TX_POWER_ANTENNA_FACTOR_2_ADDR,
+ /* Keep this entry last */
+ HAL_MACHW_REG_NUM
+};
+
+#define HAL_MACHW_FSM_REG_NUM ((HAL_MACHW_DEBUG_HWSM_3 - HAL_MACHW_AGGR_STATUS) + 1)
+
+enum {
+ MPU_COMMON_FORMAT,
+ MPU_COMMON_FIELD_CTRL,
+ MPU_COMMON_LEGACY_INFO,
+ MPU_COMMON_COMMON_CFG_1,
+ MPU_COMMON_COMMON_CFG_2,
+ MPU_COMMON_COMMON_CFG_3,
+ MPU_COMMON_HE_CFG_1,
+ MPU_COMMON_HE_CFG_2,
+ MPU_COMMON_INT_STAT_RAW,
+ RIU_CCAGENSTAT,
+ PHY_HW_DBG_REGS_CNT
+};
+
+/* Error trace CE_AC info */
+struct dbg_ac_info {
+ u8 chk_state;
+ u8 tx_path_state;
+ u8 physical_queue_idx;
+ u16 active_session;
+ u32 last_frame_exch_ptr;
+};
+
+/* Error trace txdesc lists info */
+struct dbg_txlist_info {
+ u8 curr_session_idx;
+ u8 next_session_idx;
+ u16 pending_cnt;
+ u16 download_cnt;
+ u16 transmit_cnt;
+ u16 wait_for_ba_cnt;
+ u16 next_pending_cnt;
+};
+
+/* Txl chain info */
+struct dbg_txl_chain_info {
+ u32 count;
+ u32 frm_type;
+ u32 first_thd_ptr;
+ u32 last_thd_ptr;
+ u32 prev_thd_ptr;
+ u32 req_phy_flags;
+ u8 reqbw;
+ u8 ce_txq_idx;
+ u16 mpdu_count;
+ u8 chbw;
+ u32 rate_ctrl_info;
+ u32 rate_ctrl_info_he;
+ u32 txstatus;
+ u32 length;
+ u32 tx_time;
+};
+
+struct dbg_txl_ac_chain_trace {
+ struct dbg_txl_chain_info data[DBG_TXL_FRAME_EXCH_TRACE_DEPTH];
+ u32 count;
+ u8 next_chain_index;
+ u8 next_done_index;
+ u8 delta;
+};
+
+struct dbg_fw_trace {
+ u32 string_ptr;
+ u32 var_1;
+ u32 var_2;
+ u32 var_3;
+ u32 var_4;
+ u32 var_5;
+ u32 var_6;
+ /*
+	 * This field is used only when the driver prints the dump file.
+	 * Collecting the string characters is done in the error dump data collection function.
+ */
+ char string_char[DBG_FW_TRACE_STR_MAX];
+};
+
+/* Error trace MAC-FW info */
+struct dbg_fw_info {
+ struct dbg_ac_info ac_info[CE_AC_MAX];
+ struct dbg_txlist_info txlist_info_singles[IPC_TX_QUEUE_CNT];
+ struct dbg_txlist_info txlist_info_agg[AGG_IDX_MAX];
+ struct dbg_txl_ac_chain_trace txl_ac_chain_trace[CE_AC_MAX];
+ struct dbg_fw_trace fw_trace[DBG_FW_TRACE_SIZE];
+ u32 fw_trace_idx;
+};
+
+/* TXM regs */
+struct dbg_txm_regs {
+ u8 hw_state;
+ u8 fw_state;
+ u8 spx_state;
+ u8 free_buf_state;
+ u8 mpdu_cnt;
+ u8 lli_cnt;
+ u8 lli_done_reason;
+ u8 lli_done_mpdu_num;
+ u16 active_bytes;
+ u16 prefetch_bytes;
+ u32 last_thd_done_addr;
+ u16 last_thd_done_mpdu_num;
+ u16 underrun_cnt;
+};
+
+/* Error trace HW registers */
+struct dbg_hw_reg_info {
+ u32 mac_hw_reg[HAL_MACHW_REG_NUM];
+ u32 mac_hw_sec_fsm[CL_MU_IDX_MAX][HAL_MACHW_FSM_REG_NUM];
+ u32 phy_mpu_hw_reg[PHY_HW_DBG_REGS_CNT];
+ struct dbg_txm_regs txm_regs[AGG_IDX_MAX];
+};
+
+struct dbg_meta_data {
+ __le32 lmac_req_buf_size;
+ u8 physical_queue_cnt;
+ u8 agg_index_max;
+ u8 ce_ac_max;
+ u8 mu_user_max;
+ u8 txl_exch_trace_depth;
+ __le16 mac_hw_regs_max;
+ __le16 phy_hw_regs_max;
+ __le16 thd_chains_data_size;
+ u8 chains_info_elem_cnt;
+};
+
+struct dbg_agg_thds_addr {
+ u32 rts_cts_thd_addr;
+ u32 athd_addr;
+ u32 tf_thd_addr;
+ u32 bar_thd_addr;
+ u32 policy_table_addr;
+};
+
+struct dbg_agg_thd_info {
+ struct tx_hd rts_cts_thd;
+ struct tx_hd athd;
+ struct tx_hd tf_thd;
+ struct tx_hd bar_thd;
+ struct tx_policy_tbl policy_table;
+};
+
+struct dbg_machw_regs_thd_info {
+ struct tx_hd thd[MACHW_THD_REGS_CNT];
+};
+
+struct dbg_thd_chains_info {
+ u8 type_array[DBG_CHAINS_INFO_ELEM_CNT];
+ u32 elem_address[DBG_CHAINS_INFO_ELEM_CNT];
+};
+
+struct dbg_thd_chains_data {
+ u8 data[DBG_THD_CHAINS_INFO_ARRAY_SIZE];
+};
+
+/* Error trace debug structure, common to FW & DRV */
+struct dbg_error_trace_info_common {
+ struct dbg_print_ind error_info;
+ struct dbg_meta_data dbg_metadata;
+ struct dbg_hw_reg_info hw_info;
+ struct dbg_fw_info fw_info;
+ struct dbg_agg_thds_addr agg_thds_addr[AGG_IDX_MAX];
+};
+
+/* Dbg error info driver side */
+struct dbg_error_trace_info_drv {
+ struct dbg_error_trace_info_common common_info;
+ struct dbg_agg_thd_info agg_thd_info[AGG_IDX_MAX];
+ struct dbg_machw_regs_thd_info machw_thd_info;
+ struct dbg_thd_chains_info thd_chains_info[CE_AC_MAX];
+ struct dbg_thd_chains_data thd_chains_data[CE_AC_MAX];
+};
+
+/*
+ * This is the main debug struct; the kernel allocates the needed space using kmalloc().
+ * The firmware holds a pointer to this struct.
+ */
+struct dbg_dump_info {
+ u32 dbg_info; /* Should be first member in the struct */
+ /* Dump data transferred to host */
+ struct dbg_debug_info_tag general_data;
+ struct dbg_error_trace_info_drv fw_dump;
+ u8 la_mem[LA_CNT][LA_MEM_LEN];
+};
+
+struct dbg_info {
+ union {
+ u32 type;
+ struct cl_txl_statistics tx_stats;
+ struct cl_bcn_statistics bcn_stats;
+ struct cl_rxl_statistics rx_stats;
+ struct cl_dyn_calib_statistics dyn_calib_stats;
+ struct cl_rate_drop_statistics rate_drop_stats;
+ struct cl_bf_statistics bf_stats;
+ struct cl_trigger_flow_statistics trigger_flow_stats;
+ struct cl_queue_idx_dif_stats txdesc_idx_diff_stats;
+ struct dbg_dump_info dump;
+ } u;
+};
+
+/* Structure of a list element header */
+struct co_list_hdr {
+ __le32 next;
+};
+
+/* ETH2WLAN and NATT common parameters field (e2w_natt_param) struct definition: */
+struct cl_e2w_natt_param {
+#ifdef __LITTLE_ENDIAN_BITFIELD
+ u32 valid : 1, /* [0] */
+ ampdu : 1, /* [1] */
+ last_mpdu : 1, /* [2] */
+	    llc_snap : 1,            /* [3] */
+ vlan : 1, /* [4] */
+ amsdu : 1, /* [5] */
+ e2w_band_id : 1, /* [6] */
+ use_local_addr : 1, /* [7] */
+ hdr_conv_enable : 1, /* [8] */
+ sta_index : 7, /* [9:15] */
+ packet_length : 15, /* [30:16] */
+ e2w_int_enable : 1; /* [31] */
+#else /* __BIG_ENDIAN_BITFIELD */
+ u32 e2w_int_enable : 1, /* [0] */
+ packet_length : 15, /* [15:1] */
+ sta_index : 7, /* [22:16] */
+ hdr_conv_enable : 1, /* [23] */
+ use_local_addr : 1, /* [24] */
+ e2w_band_id : 1, /* [25] */
+ amsdu : 1, /* [26] */
+ vlan : 1, /* [27] */
+ llc_snap : 1, /* [28] */
+ last_mpdu : 1, /* [29] */
+ ampdu : 1, /* [30] */
+ valid : 1; /* [31] */
+#endif
+};
+
+#define CL_CCMP_GCMP_PN_SIZE 6
+
+struct cl_e2w_txhdr_param {
+ __le16 frame_ctrl;
+ __le16 seq_ctrl;
+ __le32 ht_ctrl;
+ u8 encrypt_pn[CL_CCMP_GCMP_PN_SIZE];
+ __le16 qos_ctrl;
+};
+
+/* Bit 16 has different usage for single (valid sta - set by host) or agg (tx done - set by HW) */
+struct cl_e2w_result {
+#ifdef __LITTLE_ENDIAN_BITFIELD
+ u32 backup_bcn : 1, /* [0] */
+ dont_chain : 1, /* [1] */
+ is_flush_needed : 1, /* [2] */
+ is_internal : 1, /* [3] */
+ which_descriptor : 2, /* [5:4] */
+ is_first_in_AMPDU : 1, /* [6] */
+ is_ext_buff : 1, /* [7] */
+ is_txinject : 1, /* [8] */
+ is_vns : 1, /* [9] */
+ single_type : 2, /* [11:10] */
+ tid : 4, /* [15:12] */
+ single_valid_sta__agg_e2w_tx_done : 1, /* [16] */
+ msdu_length : 13, /* [29:17] */
+ bcmc : 1, /* [30] */
+ sw_padding : 1; /* [31] */
+#else /* __BIG_ENDIAN_BITFIELD */
+ u32 sw_padding : 1, /* [0] */
+ bcmc : 1, /* [1] */
+ msdu_length : 13, /* [14:2] */
+ single_valid_sta__agg_e2w_tx_done : 1, /* [15] */
+ tid : 4, /* [19:16] */
+ single_type : 2, /* [21:20] */
+ is_vns : 1, /* [22] */
+ is_txinject : 1, /* [23] */
+ is_ext_buff : 1, /* [24] */
+ is_first_in_AMPDU : 1, /* [25] */
+ which_descriptor : 2, /* [27:26] */
+ is_internal : 1, /* [28] */
+ is_flush_needed : 1, /* [29] */
+ dont_chain : 1, /* [30] */
+ backup_bcn : 1; /* [31] */
+#endif
+};
+
+struct tx_host_info {
+#ifdef __LITTLE_ENDIAN_BITFIELD
+ u32 packet_cnt : 4, /* [3:0] */
+ host_padding : 8, /* [11:4] */
+ last_MSDU_LLI_INT_enable : 1, /* [12] */
+ is_eth_802_3 : 1, /* [13] */
+ is_protected : 1, /* [14] */
+ vif_index : 4, /* [18:15] */
+ rate_ctrl_entry : 3, /* [21:19] */
+ expected_ack : 1, /* [22] */
+ is_bcn : 1, /* [23] */
+ hw_key_idx : 8; /* [31:24] */
+#else /* __BIG_ENDIAN_BITFIELD */
+ u32 hw_key_idx : 8, /* [7:0] */
+ is_bcn : 1, /* [8] */
+ expected_ack : 1, /* [9] */
+ rate_ctrl_entry : 3, /* [12:10] */
+ vif_index : 4, /* [16:13] */
+ is_protected : 1, /* [17] */
+ is_eth_802_3 : 1, /* [18] */
+ last_MSDU_LLI_INT_enable : 1, /* [19] */
+ host_padding : 8, /* [27:20] */
+ packet_cnt : 4; /* [31:28] */
+#endif
+};
+
+struct lmacapi {
+ __le32 packet_addr[CL_AMSDU_TX_PAYLOAD_MAX];
+ __le16 packet_len[CL_AMSDU_TX_PAYLOAD_MAX];
+	__le16 push_timestamp; /* Milliseconds */
+};
+
+struct txdesc {
+ /* Pointer to the next element in the queue */
+ struct co_list_hdr list_hdr;
+ /* E2w txhdr parameters */
+ struct cl_e2w_txhdr_param e2w_txhdr_param __aligned(4);
+ /* Celeno flags field */
+ struct tx_host_info host_info __aligned(4);
+ /* Common parameters for ETH2WLAN and NATT hardware modules */
+ struct cl_e2w_natt_param e2w_natt_param;
+ /* ETH2WLAN status and NATT calculation results */
+ struct cl_e2w_result e2w_result;
+ /* Information provided by UMAC to LMAC */
+ struct lmacapi umacdesc;
+};
+
+/*
+ * Comes from ipc_dma.h.
+ * Element in the pool of TX DMA bridge descriptors.
+ * PAY ATTENTION - avoid changing/adding pointers in this struct,
+ * or in any shared-memory-related structs!
+ * On 64-bit platforms (where pointers are 64 bits) such pointers
+ * might change the alignment of shared-memory-related structs between FW and DRV.
+ */
+struct dma_desc {
+ /*
+ * Application subsystem address which is used as source address for DMA payload
+ * transfer
+ */
+ u32 src;
+ /*
+ * Points to the start of the embedded data buffer associated with this descriptor.
+ * This address acts as the destination address for the DMA payload transfer
+ */
+ u32 dest;
+ /* Complete length of the buffer in memory */
+ u16 length;
+ /* Control word for the DMA engine (e.g. for interrupt generation) */
+ u16 ctrl;
+ /* Pointer to the next element of the chained list */
+ u32 next;
+ /*
+ * When working with 64bit application the high 32bit address should be provided
+ * in the following variable (note: "PCIEW_CONF" register should be configured accordingly)
+ */
+ u32 app_hi_32bit;
+};
+
+/* Message structure for CFMs from Emb to App */
+struct cl_ipc_cfm_msg {
+ __le32 status;
+ __le32 dma_addr;
+ __le32 single_queue_idx;
+};
+
+/* Message structure for Debug messages from Emb to App */
+struct cl_ipc_dbg_msg {
+ char string[IPC_DBG_PARAM_SIZE];
+ __le32 pattern;
+};
+
+/*
+ * Message structure for MSGs from App to Emb.
+ * Actually a sub-structure will be used when filling the messages.
+ */
+struct cl_ipc_a2e_msg {
+ u32 dummy_word;
+ u32 msg[IPC_A2E_MSG_BUF_SIZE];
+};
+
+/* Struct for tensilica backtrace */
+struct cl_ipc_backtrace_struct {
+ u32 pc[IPC_BACKTRACT_DEPTH];
+};
+
+/* Struct for tensilica exception indication */
+struct cl_ipc_exception_struct {
+ u32 pattern;
+ u32 type;
+ u32 epc;
+ u32 excsave;
+ struct cl_ipc_backtrace_struct backtrace;
+};
+
+/*
+ * Can't use BITS_TO_LONGS since in firmware sizeof(long) == 4 and in the host
+ * this might be different from compiler to compiler. We need our own macro to
+ * match the firmware definition.
+ */
+#define CL_BITS_TO_U32S(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(u32))
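+/* For example, CL_BITS_TO_U32S(256) expands to DIV_ROUND_UP(256, 32) == 8 */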
+
+/*
+ * struct cl_ipc_enhanced_tim - IPC enhanced TIM element
+ *
+ * This structure holds indications of buffered traffic and resembles the TIM element.
+ * The enhanced TIM holds more info on the buffered traffic: it indicates whether the
+ * traffic is associated with BA sessions or with singles, and on which ACs.
+ *
+ * @tx_rx_agg: indicates buffered tx/rx aggregated traffic per AC per BA session index
+ * @tx_single: indicates buffered tx-singles traffic per AC per station index
+ */
+struct cl_ipc_enhanced_tim {
+ /*
+	 * Traffic indication map.
+	 * The driver pushes packets to the IPC queues (DRIVER_LAYER -> IPC_LAYER);
+	 * on each push it also notifies the IPC_LAYER about the queue it pushed
+	 * packets to. This indication is done by filling the bitmap.
+	 *
+	 * This is an enhanced TIM element because it is divided by AC and packet
+	 * type (aggregatable/non-aggregatable).
+	 * The regular TIM element in the beacon is divided by AID only, which is
+	 * less informative.
+	 *
+	 * TODO: add TIM element maintenance in the FW; this can be achieved with
+	 * the enhanced TIM element abstraction.
+ */
+ u32 tx_rx_agg[AC_MAX][CL_BITS_TO_U32S(IPC_TIM_AGG_SIZE)];
+ u32 tx_single[AC_MAX][CL_BITS_TO_U32S(FW_MAX_NUM_STA)];
+};
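+
+/*
+ * Illustration only (hypothetical snippet; the real helpers live elsewhere in
+ * the driver): marking buffered single traffic for station sta_idx on access
+ * category ac amounts to setting one bit in the matching u32 word, e.g.:
+ *
+ *	tim->tx_single[ac][sta_idx / 32] |= BIT(sta_idx % 32);
+ */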
+
+struct cl_ipc_shared_env {
+ volatile struct cl_ipc_a2e_msg a2e_msg_buf;
+ /* Fields for MSGs sending from Emb to App */
+ volatile struct cl_ipc_e2a_msg e2a_msg_buf;
+ volatile struct dma_desc msg_dma_desc;
+ volatile __le32 e2a_msg_hostbuf_addr[IPC_E2A_MSG_BUF_CNT];
+ /* Fields for Debug MSGs sending from Emb to App */
+ volatile struct cl_ipc_dbg_msg dbg_buf;
+ volatile struct dma_desc dbg_dma_desc;
+ volatile __le32 dbg_hostbuf_addr[IPC_DBG_BUF_CNT];
+ volatile __le32 dbginfo_addr;
+ volatile __le32 dbginfo_size;
+ volatile __le32 pattern_addr;
+ volatile __le32 radarbuf_hostbuf[IPC_RADAR_BUF_CNT]; /* Buffers for radar events */
+ /* Used to update host general debug data */
+ volatile struct dma_desc dbg_info_dma_desc;
+ /*
+	 * The following members are associated with the process of fetching
+	 * "application txdesc" from the application and copying them to the
+	 * internal embedded queues.
+ *
+ * @host_address_dma: dedicated dma descriptor to fetch the addresses of
+ * "application txdesc" queues
+ * @write_dma_desc_pool: dedicated dma descriptor to fetch the "@txdesc_emb_wr_idx"
+ * index (dma for application txdesc metadata)
+ * @last_txdesc_dma_desc_pool: dedicated dma descriptor to fetch "application txdescs"
+ * (dma for application txdesc)
+	 * @txdesc_emb_wr_idx: indicates the last valid "application txdesc" fetched
+ */
+ volatile struct dma_desc host_address_dma;
+ volatile struct dma_desc tx_power_tables_dma_desc;
+ volatile __le32 txdesc_emb_wr_idx[IPC_TX_QUEUE_CNT + CL_MAX_BA_PHYSICAL_QUEUE_CNT];
+ volatile __le32 host_rxbuf_rd_idx[CL_RX_BUF_MAX]; /* For FW only */
+ volatile struct dma_desc rx_fw_hb_pool_dma_desc[HB_POOL_DMA_DESCS_NUM]; /* For FW only */
+ volatile struct dma_desc rxm_hb_pool_dma_desc[HB_POOL_DMA_DESCS_NUM]; /* For FW only */
+	volatile __le32 cfm_read_pointer; /* CFM read pointer. Updated by host, read by FW */
+ volatile __le16 phy_dev;
+ volatile u8 la_enable;
+ volatile u8 flags;
+ volatile u8 first_tcv;
+ volatile u8 ant_num;
+ volatile __le16 max_retry;
+ volatile __le16 lft_limit_ms;
+ volatile __le16 bar_max_retry; /* Not used by driver */
+ volatile __le32 assert_pattern;
+ volatile __le16 assert_file_id;
+ volatile __le16 assert_line_num;
+ volatile __le32 assert_param;
+ volatile struct cl_ipc_enhanced_tim enhanced_tim;
+ volatile u8 la_mirror_enable;
+};
+
+/* IRQs from app to emb */
+#define IPC_IRQ_A2E_TXDESC 0xFF00
+#define IPC_IRQ_A2E_RXBUF_BACK BIT(2)
+#define IPC_IRQ_A2E_MSG BIT(1)
+#define IPC_IRQ_A2E_RXREQ 0x78
+#define IPC_IRQ_A2E_ALL (IPC_IRQ_A2E_TXDESC | IPC_IRQ_A2E_MSG)
+
+#define IPC_IRQ_A2E_TXDESC_FIRSTBIT 8
+#define IPC_IRQ_A2E_RXREQ_FIRSTBIT 3
+
+#define IPC_IRQ_A2E_TXDESC_AGG_MAP(ac) (IPC_IRQ_A2E_TXDESC_FIRSTBIT + IPC_TXQ_CNT + (ac))
+#define IPC_IRQ_A2E_TXDESC_SINGLE_MAP(ac) (IPC_IRQ_A2E_TXDESC_FIRSTBIT + (ac))
+#define IPC_IRQ_A2E_RX_STA_MAP(ac) (IPC_IRQ_A2E_RXREQ_FIRSTBIT + (ac))
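+
+/*
+ * For example, for AC 0: IPC_IRQ_A2E_TXDESC_SINGLE_MAP(0) yields bit 8 and
+ * IPC_IRQ_A2E_RX_STA_MAP(0) yields bit 3, which fall inside the
+ * IPC_IRQ_A2E_TXDESC (0xFF00) and IPC_IRQ_A2E_RXREQ (0x78) masks respectively.
+ */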
+
+struct cl_ipc_e2a_irq {
+ u32 dbg;
+ u32 msg;
+ u32 rxdesc;
+ u32 txcfm;
+ u32 radar;
+ u32 txdesc_ind;
+ u32 tbtt;
+ u32 sync;
+ u32 all;
+};
+
+/*
+ * IRQs from emb to app
+ * This interrupt is used by the firmware to indicate to the driver that it may proceed.
+ * The corresponding interrupt handler sets the CL_DEV_FW_SYNC flag in cl_hw->drv_flags.
+ * There is also the cl_hw->fw_sync_wq wait queue, on which we may sleep while waiting for
+ * the interrupt, if we are allowed to do so (e.g., when we are in a system call).
+ */
+#define IPC_IRQ_L2H_DBG BIT(0)
+#define IPC_IRQ_L2H_MSG BIT(1)
+#define IPC_IRQ_L2H_RXDESC BIT(2)
+#define IPC_IRQ_L2H_TXCFM 0x000000F8
+#define IPC_IRQ_L2H_RADAR BIT(8)
+#define IPC_IRQ_L2H_TXDESC_IND BIT(9)
+#define IPC_IRQ_L2H_TBTT BIT(10)
+#define IPC_IRQ_L2H_SYNC BIT(11)
+
+#define IPC_IRQ_L2H_ALL \
+ (IPC_IRQ_L2H_TXCFM | \
+ IPC_IRQ_L2H_RXDESC | \
+ IPC_IRQ_L2H_MSG | \
+ IPC_IRQ_L2H_DBG | \
+ IPC_IRQ_L2H_RADAR | \
+ IPC_IRQ_L2H_TXDESC_IND | \
+ IPC_IRQ_L2H_TBTT | \
+ IPC_IRQ_L2H_SYNC)
+
+#define IPC_IRQ_S2H_DBG BIT(12)
+#define IPC_IRQ_S2H_MSG BIT(13)
+#define IPC_IRQ_S2H_RXDESC BIT(14)
+#define IPC_IRQ_S2H_TXCFM 0x000F8000
+#define IPC_IRQ_S2H_RADAR BIT(20)
+#define IPC_IRQ_S2H_TXDESC_IND BIT(21)
+#define IPC_IRQ_S2H_TBTT BIT(22)
+#define IPC_IRQ_S2H_SYNC BIT(23)
+
+#define IPC_IRQ_S2H_ALL \
+ (IPC_IRQ_S2H_TXCFM | \
+ IPC_IRQ_S2H_RXDESC | \
+ IPC_IRQ_S2H_MSG | \
+ IPC_IRQ_S2H_DBG | \
+ IPC_IRQ_S2H_RADAR | \
+ IPC_IRQ_S2H_TXDESC_IND | \
+ IPC_IRQ_S2H_TBTT | \
+ IPC_IRQ_S2H_SYNC)
+
+#endif /* CL_IPC_SHARED_H */
--
2.36.1


2022-05-25 08:07:50

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 09/96] cl8k: add calib.c

From: Viktor Barna <[email protected]>

(Part of the split. Please, take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/calib.c | 2266 ++++++++++++++++++++++
1 file changed, 2266 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/calib.c

diff --git a/drivers/net/wireless/celeno/cl8k/calib.c b/drivers/net/wireless/celeno/cl8k/calib.c
new file mode 100644
index 000000000000..93fe6a2e00ee
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/calib.c
@@ -0,0 +1,2266 @@
+// SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#include <linux/dcache.h>
+
+#include "reg/reg_defs.h"
+#include "chip.h"
+#include "config.h"
+#include "radio.h"
+#include "e2p.h"
+#include "rfic.h"
+#include "debug.h"
+#include "utils.h"
+#include "calib.h"
+
+static void cl_calib_common_init_cfm(struct cl_iq_dcoc_data *iq_dcoc_data)
+{
+ int i;
+
+ for (i = 0; i < CALIB_CFM_MAX; i++)
+ iq_dcoc_data->dcoc_iq_cfm[i].status = CALIB_FAIL;
+}
+
+void cl_calib_common_fill_phy_data(struct cl_hw *cl_hw, struct cl_iq_dcoc_info *iq_dcoc_db,
+ u8 flags)
+{
+ struct cl_chip *chip = cl_hw->chip;
+ u8 bw = cl_hw->bw;
+ u8 channel_idx = cl_calib_dcoc_channel_bw_to_idx(cl_hw, cl_hw->channel, bw);
+ u8 tcv_idx = cl_hw->tcv_idx;
+ u8 sx = cl_hw->sx_idx;
+
+ if (flags & SET_PHY_DATA_FLAGS_DCOC)
+ cl_calib_dcoc_fill_data(cl_hw, iq_dcoc_db);
+
+ if (flags & SET_PHY_DATA_FLAGS_IQ_TX_LOLC)
+ cl_calib_iq_lolc_fill_data(cl_hw, iq_dcoc_db->iq_tx_lolc);
+
+ if (flags & SET_PHY_DATA_FLAGS_IQ_TX)
+ cl_calib_iq_fill_data(cl_hw, iq_dcoc_db->iq_tx,
+ chip->calib_db.iq_tx[tcv_idx][channel_idx][bw][sx]);
+
+ if (flags & SET_PHY_DATA_FLAGS_IQ_RX)
+ cl_calib_iq_fill_data(cl_hw, iq_dcoc_db->iq_rx,
+ chip->calib_db.iq_rx[tcv_idx][channel_idx][bw][sx]);
+}
+
+int cl_calib_common_tables_alloc(struct cl_hw *cl_hw)
+{
+ struct cl_iq_dcoc_data *buf = NULL;
+ u32 len = sizeof(struct cl_iq_dcoc_data);
+ dma_addr_t phys_dma_addr;
+
+ buf = dma_alloc_coherent(cl_hw->chip->dev, len, &phys_dma_addr, GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+
+ cl_hw->iq_dcoc_data_info.iq_dcoc_data = buf;
+ cl_hw->iq_dcoc_data_info.dma_addr = phys_dma_addr;
+
+ cl_calib_common_init_cfm(cl_hw->iq_dcoc_data_info.iq_dcoc_data);
+ return 0;
+}
+
+void cl_calib_common_tables_free(struct cl_hw *cl_hw)
+{
+ struct cl_iq_dcoc_data_info *iq_dcoc_data_info = &cl_hw->iq_dcoc_data_info;
+ u32 len = sizeof(struct cl_iq_dcoc_data);
+ dma_addr_t phys_dma_addr = iq_dcoc_data_info->dma_addr;
+
+ if (!iq_dcoc_data_info->iq_dcoc_data)
+ return;
+
+ dma_free_coherent(cl_hw->chip->dev, len, (void *)iq_dcoc_data_info->iq_dcoc_data,
+ phys_dma_addr);
+ iq_dcoc_data_info->iq_dcoc_data = NULL;
+}
+
+static void _cl_calib_common_start_work(struct work_struct *ws)
+{
+ struct cl_calib_work *calib_work = container_of(ws, struct cl_calib_work, ws);
+ struct cl_hw *cl_hw = calib_work->cl_hw;
+ struct cl_hw *cl_hw_other = cl_hw_other_tcv(cl_hw);
+ struct cl_chip *chip = cl_hw->chip;
+
+ cl_calib_iq_init_calibration(cl_hw);
+
+ if (cl_chip_is_both_enabled(chip))
+ cl_calib_iq_init_calibration(cl_hw_other);
+
+ /* Start cl_radio_on after calibration ends */
+ cl_radio_on_start(cl_hw);
+
+ if (cl_chip_is_both_enabled(chip))
+ cl_radio_on_start(cl_hw_other);
+
+ kfree(calib_work);
+}
+
+void cl_calib_common_start_work(struct cl_hw *cl_hw)
+{
+ struct cl_calib_work *calib_work = kzalloc(sizeof(*calib_work), GFP_ATOMIC);
+
+ if (!calib_work)
+ return;
+
+ calib_work->cl_hw = cl_hw;
+ INIT_WORK(&calib_work->ws, _cl_calib_common_start_work);
+ queue_work(cl_hw->drv_workqueue, &calib_work->ws);
+}
+
+s16 cl_calib_common_get_temperature(struct cl_hw *cl_hw, u8 cfm_type)
+{
+ struct calib_cfm *dcoc_iq_cfm =
+ &cl_hw->iq_dcoc_data_info.iq_dcoc_data->dcoc_iq_cfm[cfm_type];
+ u16 raw_bits = (le16_to_cpu(dcoc_iq_cfm->raw_bits_data_0) +
+ le16_to_cpu(dcoc_iq_cfm->raw_bits_data_1)) / 2;
+
+ return cl_temperature_calib_calc(cl_hw, raw_bits);
+}
+
+#ifdef CONFIG_CL8K_EEPROM_STM24256
+static u16 cl_calib_common_eeprom_get_idx(struct cl_hw *cl_hw, int bw_idx, u16 channel,
+ u16 channels_plan[], u8 num_of_channels)
+{
+ int i;
+
+ for (i = 0; i < num_of_channels; i++)
+ if (channels_plan[i] == channel)
+ return i;
+
+ return U16_MAX;
+}
+
+static u16 cl_calib_common_eeprom_get_addr(struct cl_hw *cl_hw, int bw_idx, u16 channel)
+{
+ int idx = 0;
+ u16 addr = 0;
+ u16 *channels;
+ u8 num_of_channels;
+
+ switch (bw_idx) {
+ case CHNL_BW_20:
+ channels = cl_hw->conf->ci_calib_eeprom_channels_20mhz;
+
+ if (cl_hw_is_tcv0(cl_hw)) {
+ num_of_channels = EEPROM_CALIB_DATA_ELEM_NUM_20MHZ_TCV0;
+ idx = cl_calib_common_eeprom_get_idx(cl_hw, bw_idx, channel, channels,
+ num_of_channels);
+ addr = ADDR_CALIB_IQ_DCOC_DATA_20MHZ_TCV0;
+ } else {
+ num_of_channels = EEPROM_CALIB_DATA_ELEM_NUM_20MHZ_TCV1;
+ idx = cl_calib_common_eeprom_get_idx(cl_hw, bw_idx, channel, channels,
+ num_of_channels);
+ addr = ADDR_CALIB_IQ_DCOC_DATA_20MHZ_TCV1;
+ }
+ break;
+ case CHNL_BW_40:
+ channels = cl_hw->conf->ci_calib_eeprom_channels_40mhz;
+
+ if (cl_hw_is_tcv0(cl_hw)) {
+ num_of_channels = EEPROM_CALIB_DATA_ELEM_NUM_40MHZ_TCV0;
+ idx = cl_calib_common_eeprom_get_idx(cl_hw, bw_idx, channel, channels,
+ num_of_channels);
+ addr = ADDR_CALIB_IQ_DCOC_DATA_40MHZ_TCV0;
+ } else {
+ num_of_channels = EEPROM_CALIB_DATA_ELEM_NUM_40MHZ_TCV1;
+ idx = cl_calib_common_eeprom_get_idx(cl_hw, bw_idx, channel, channels,
+ num_of_channels);
+ addr = ADDR_CALIB_IQ_DCOC_DATA_40MHZ_TCV1;
+ }
+ break;
+ case CHNL_BW_80:
+ channels = cl_hw->conf->ci_calib_eeprom_channels_80mhz;
+
+ if (cl_hw_is_tcv0(cl_hw)) {
+ num_of_channels = EEPROM_CALIB_DATA_ELEM_NUM_80MHZ_TCV0;
+ idx = cl_calib_common_eeprom_get_idx(cl_hw, bw_idx, channel, channels,
+ num_of_channels);
+ addr = ADDR_CALIB_IQ_DCOC_DATA_80MHZ_TCV0;
+ } else {
+ num_of_channels = EEPROM_CALIB_DATA_ELEM_NUM_80MHZ_TCV1;
+ idx = cl_calib_common_eeprom_get_idx(cl_hw, bw_idx, channel, channels,
+ num_of_channels);
+ addr = ADDR_CALIB_IQ_DCOC_DATA_80MHZ_TCV1;
+ }
+ break;
+ case CHNL_BW_160:
+ channels = cl_hw->conf->ci_calib_eeprom_channels_80mhz;
+
+ if (cl_hw_is_tcv0(cl_hw)) {
+ num_of_channels = EEPROM_CALIB_DATA_ELEM_NUM_80MHZ_TCV0;
+ idx = cl_calib_common_eeprom_get_idx(cl_hw, bw_idx, channel, channels,
+ num_of_channels);
+ addr = ADDR_CALIB_IQ_DCOC_DATA_80MHZ_TCV0;
+ } else {
+ num_of_channels = EEPROM_CALIB_DATA_ELEM_NUM_80MHZ_TCV1;
+ idx = cl_calib_common_eeprom_get_idx(cl_hw, bw_idx, channel, channels,
+ num_of_channels);
+ addr = ADDR_CALIB_IQ_DCOC_DATA_80MHZ_TCV1;
+ }
+ break;
+ default:
+ return U16_MAX;
+ }
+
+ if (idx == U16_MAX)
+ return U16_MAX;
+
+ return ((u16)addr + (u16)(idx * sizeof(struct eeprom_calib_data)));
+}
+
+static void cl_calib_common_write_lolc_to_eeprom(struct cl_calib_db *calib_db,
+ struct eeprom_calib_data *calib_data,
+ u8 ch_idx, u8 bw, u8 sx_idx, u8 tcv_idx)
+{
+ memcpy(calib_data->lolc,
+ calib_db->iq_tx_lolc[tcv_idx][ch_idx][bw][sx_idx],
+ sizeof(u32) * MAX_ANTENNAS);
+}
+
+static void cl_calib_common_write_dcoc_to_eeprom(struct cl_calib_db *calib_db,
+ struct eeprom_calib_data *calib_data,
+ u8 ch_idx, u8 bw, u8 sx_idx, u8 tcv_idx)
+{
+ memcpy(calib_data->dcoc,
+ calib_db->dcoc[tcv_idx][ch_idx][bw][sx_idx],
+ sizeof(struct cl_dcoc_calib) * MAX_ANTENNAS * DCOC_LNA_GAIN_NUM);
+}
+
+static void cl_calib_common_write_iq_to_eeprom(struct cl_calib_db *calib_db,
+ struct eeprom_calib_data *calib_data,
+ u8 ch_idx, u8 bw, u8 sx_idx, u8 tcv_idx)
+{
+ memcpy(calib_data->iq_tx,
+ calib_db->iq_tx[tcv_idx][ch_idx][bw][sx_idx],
+ sizeof(struct cl_iq_calib) * MAX_ANTENNAS);
+ memcpy(calib_data->iq_rx,
+ calib_db->iq_rx[tcv_idx][ch_idx][bw][sx_idx],
+ sizeof(struct cl_iq_calib) * MAX_ANTENNAS);
+}
+
+static s8 cl_calib_common_find_worst_iq_tone(struct cl_iq_report iq_report_dma)
+{
+ u8 tone = 0;
+ s8 worst_tone = S8_MIN;
+
+ for (tone = 0; tone < IQ_NUM_TONES_CFM; tone++)
+ if (worst_tone < iq_report_dma.ir_db[IQ_POST_IDX][tone])
+ worst_tone = iq_report_dma.ir_db[IQ_POST_IDX][tone];
+
+ return worst_tone;
+}
+
+static void cl_calib_common_write_score_dcoc(struct cl_hw *cl_hw,
+ struct eeprom_calib_data *calib_data)
+{
+ u8 lna, ant;
+
+ for (lna = 0; lna < DCOC_LNA_GAIN_NUM; lna++) {
+ for (ant = 0; ant < MAX_ANTENNAS; ant++) {
+ struct cl_dcoc_report *dcoc_calib_report =
+ &cl_hw->iq_dcoc_data_info.iq_dcoc_data->report.dcoc[lna][ant];
+
+ calib_data->score[ant].dcoc_i_mv[lna] =
+ (s16)le16_to_cpu(dcoc_calib_report->i_dc);
+ calib_data->score[ant].dcoc_q_mv[lna] =
+ (s16)le16_to_cpu(dcoc_calib_report->q_dc);
+ }
+ }
+}
+
+static void cl_calib_common_write_score_lolc(struct cl_hw *cl_hw,
+ struct eeprom_calib_data *calib_data)
+{
+ u8 ant;
+ struct cl_iq_dcoc_report *report = &cl_hw->iq_dcoc_data_info.iq_dcoc_data->report;
+
+ for (ant = 0; ant < MAX_ANTENNAS; ant++) {
+ calib_data->score[ant].lolc_score =
+ (s16)(le16_to_cpu(report->lolc_report[ant].lolc_qual)) >> 8;
+ }
+}
+
+static void cl_calib_common_write_score_iq(struct cl_hw *cl_hw,
+ struct eeprom_calib_data *calib_data)
+{
+ u8 ant;
+
+ for (ant = 0; ant < MAX_ANTENNAS; ant++) {
+ struct cl_iq_report iq_report_dma_tx =
+ cl_hw->iq_dcoc_data_info.iq_dcoc_data->report.iq_tx[ant];
+ struct cl_iq_report iq_report_dma_rx =
+ cl_hw->iq_dcoc_data_info.iq_dcoc_data->report.iq_rx[ant];
+
+ calib_data->score[ant].iq_tx_score = iq_report_dma_tx.ir_db_avg_post;
+ calib_data->score[ant].iq_rx_score = iq_report_dma_rx.ir_db_avg_post;
+ calib_data->score[ant].iq_tx_worst_score =
+ cl_calib_common_find_worst_iq_tone(iq_report_dma_tx);
+ calib_data->score[ant].iq_rx_worst_score =
+ cl_calib_common_find_worst_iq_tone(iq_report_dma_rx);
+ }
+}
+
+static void cl_calib_common_write_score_to_eeprom(struct cl_hw *cl_hw,
+ struct eeprom_calib_data *calib_data)
+{
+ cl_calib_common_write_score_dcoc(cl_hw, calib_data);
+ cl_calib_common_write_score_lolc(cl_hw, calib_data);
+ cl_calib_common_write_score_iq(cl_hw, calib_data);
+}
+
+static void cl_calib_common_write_eeprom(struct cl_hw *cl_hw, u32 channel, u8 bw, u8 sx_idx,
+ u8 tcv_idx)
+{
+ u8 ch_idx = cl_calib_dcoc_channel_bw_to_idx(cl_hw, channel, bw);
+ u16 eeprom_addr = cl_calib_common_eeprom_get_addr(cl_hw, bw, channel);
+ struct eeprom_calib_data calib_data;
+ struct cl_calib_db *calib_db = &cl_hw->chip->calib_db;
+
+ if (eeprom_addr == U16_MAX)
+ return;
+
+ calib_data.valid = true;
+ calib_data.temperature = cl_calib_common_get_temperature(cl_hw, CALIB_CFM_IQ);
+ cl_calib_common_write_lolc_to_eeprom(calib_db, &calib_data, ch_idx, bw, sx_idx, tcv_idx);
+ cl_calib_common_write_dcoc_to_eeprom(calib_db, &calib_data, ch_idx, bw, sx_idx, tcv_idx);
+ cl_calib_common_write_iq_to_eeprom(calib_db, &calib_data, ch_idx, bw, sx_idx, tcv_idx);
+ cl_calib_common_write_score_to_eeprom(cl_hw, &calib_data);
+
+ cl_e2p_write(cl_hw->chip, (u8 *)&calib_data, (u16)sizeof(struct eeprom_calib_data),
+ eeprom_addr);
+}
+
+static bool cl_calib_common_is_channel_included_in_eeprom_bitmap(struct cl_hw *cl_hw)
+{
+ u16 i;
+ u16 *eeprom_valid_ch = NULL;
+ u16 num_of_channels;
+
+ switch (cl_hw->bw) {
+ case CHNL_BW_20:
+ eeprom_valid_ch = cl_hw->conf->ci_calib_eeprom_channels_20mhz;
+
+ if (cl_hw_is_tcv0(cl_hw))
+ num_of_channels = EEPROM_CALIB_DATA_ELEM_NUM_20MHZ_TCV0;
+ else
+ num_of_channels = EEPROM_CALIB_DATA_ELEM_NUM_20MHZ_TCV1;
+ break;
+ case CHNL_BW_40:
+ eeprom_valid_ch = cl_hw->conf->ci_calib_eeprom_channels_40mhz;
+
+		if (cl_hw_is_tcv0(cl_hw))
+			num_of_channels = EEPROM_CALIB_DATA_ELEM_NUM_40MHZ_TCV0;
+		else
+			num_of_channels = EEPROM_CALIB_DATA_ELEM_NUM_40MHZ_TCV1;
+ break;
+ case CHNL_BW_80:
+ eeprom_valid_ch = cl_hw->conf->ci_calib_eeprom_channels_80mhz;
+
+		if (cl_hw_is_tcv0(cl_hw))
+			num_of_channels = EEPROM_CALIB_DATA_ELEM_NUM_80MHZ_TCV0;
+		else
+			num_of_channels = EEPROM_CALIB_DATA_ELEM_NUM_80MHZ_TCV1;
+ break;
+ case CHNL_BW_160:
+ eeprom_valid_ch = cl_hw->conf->ci_calib_eeprom_channels_160mhz;
+
+ if (cl_hw_is_tcv0(cl_hw))
+ num_of_channels = EEPROM_CALIB_DATA_ELEM_NUM_20MHZ_TCV0;
+ else
+ num_of_channels = EEPROM_CALIB_DATA_ELEM_NUM_20MHZ_TCV1;
+ break;
+ default:
+ return false;
+ }
+ for (i = 0; i < num_of_channels; i++)
+ if (cl_hw->channel == eeprom_valid_ch[i])
+ return true;
+
+ return false;
+}
+#endif /* CONFIG_CL8K_EEPROM_STM24256 */
+
+int cl_calib_common_handle_set_channel_cfm(struct cl_hw *cl_hw, struct cl_calib_params calib_params)
+{
+ struct cl_iq_dcoc_data *iq_dcoc_data = cl_hw->iq_dcoc_data_info.iq_dcoc_data;
+ u8 mode = calib_params.mode;
+
+ cl_dbg_trace(cl_hw, "\n ------ FINISH CALIB CHANNEL -----\n");
+
+ /*
+	 * If any of the requested calibrations failed, there is no need to copy
+	 * the other calibration data, and the whole calibration process fails.
+ */
+ if ((mode & SET_CHANNEL_MODE_CALIB_DCOC &&
+ iq_dcoc_data->dcoc_iq_cfm[CALIB_CFM_DCOC].status != CALIB_SUCCESS) ||
+ (mode & SET_CHANNEL_MODE_CALIB_IQ &&
+ iq_dcoc_data->dcoc_iq_cfm[CALIB_CFM_IQ].status != CALIB_SUCCESS)) {
+ cl_dbg_err(cl_hw, "Calibration failed! mode = %u, DCOC_CFM_STATUS = %u, "
+ "IQ_CFM_STATUS = %u\n", mode,
+ iq_dcoc_data->dcoc_iq_cfm[CALIB_CFM_DCOC].status,
+ iq_dcoc_data->dcoc_iq_cfm[CALIB_CFM_IQ].status);
+ /* Set status to CALIB_FAIL to ensure that FW is writing the values. */
+ iq_dcoc_data->dcoc_iq_cfm[CALIB_CFM_DCOC].status = CALIB_FAIL;
+ iq_dcoc_data->dcoc_iq_cfm[CALIB_CFM_IQ].status = CALIB_FAIL;
+ return -EINVAL;
+ }
+
+ cl_dbg_trace(cl_hw, "mode = %u\n", mode);
+
+ if (mode & SET_CHANNEL_MODE_CALIB_DCOC)
+ cl_calib_dcoc_handle_set_channel_cfm(cl_hw, calib_params.first_channel);
+
+ if (mode & SET_CHANNEL_MODE_CALIB_IQ)
+ cl_calib_iq_handle_set_channel_cfm(cl_hw, calib_params.plan_bitmap);
+
+ if (mode & SET_CHANNEL_MODE_CALIB_LOLC)
+ cl_calib_iq_lolc_handle_set_channel_cfm(cl_hw, calib_params.plan_bitmap);
+
+#ifdef CONFIG_CL8K_EEPROM_STM24256
+ if (cl_hw->chip->conf->ci_calib_eeprom_en && cl_hw->chip->conf->ce_production_mode &&
+ cl_hw->chip->is_calib_eeprom_loaded && cl_hw->chip->conf->ce_calib_runtime_en)
+ if (cl_calib_common_is_channel_included_in_eeprom_bitmap(cl_hw))
+ cl_calib_common_write_eeprom(cl_hw, cl_hw->channel, cl_hw->bw,
+ cl_hw->sx_idx, cl_hw->tcv_idx);
+#endif
+
+ return 0;
+}
+
+int cl_calib_common_check_errors(struct cl_hw *cl_hw)
+{
+ u8 tcv_idx = cl_hw->tcv_idx;
+	u16 dcoc_errors = cl_hw->chip->calib_db.errors[tcv_idx].dcoc;
+	u16 lolc_errors = cl_hw->chip->calib_db.errors[tcv_idx].lolc;
+	u16 iq_tx_errors = cl_hw->chip->calib_db.errors[tcv_idx].iq_tx;
+	u16 iq_rx_errors = cl_hw->chip->calib_db.errors[tcv_idx].iq_rx;
+
+	if (!cl_hw->chip->conf->ci_calib_check_errors)
+		return 0;
+
+	if (dcoc_errors > 0 || lolc_errors > 0 || iq_tx_errors > 0 || iq_rx_errors > 0) {
+		CL_DBG_ERROR(cl_hw, "Abort: dcoc_errors %u, lolc_errors %u,"
+			     " iq_tx_errors %u, iq_rx_errors %u\n",
+			     dcoc_errors, lolc_errors, iq_tx_errors, iq_rx_errors);
+		return -ECANCELED;
+	}
+
+ return 0;
+}
+
+static const u8 calib_channels_24g[CALIB_CHAN_24G_MAX] = {
+ 1, 6, 11
+};
+
+static const u8 calib_channels_5g_plan[CALIB_CHAN_5G_PLAN] = {
+ 36, 52, 100, 116, 132, 149
+};
+
+static const u8 calib_channels_6g_plan[CALIB_CHAN_6G_PLAN] = {
+ 1, 17, 33, 49, 65, 81, 97, 113, 129, 145, 161, 177, 193, 209, 225
+};
+
+static const u8 calib_channels_5g_bw_20[] = {
+ 36, 40, 44, 48, 52, 56, 60, 64, 100, 104, 108, 112, 116, 120, 124, 128, 132, 136, 140, 144,
+ 149, 153, 157, 161, 165
+};
+
+static const u8 calib_channels_5g_bw_40[] = {
+ 36, 44, 52, 60, 100, 108, 116, 124, 132, 140, 149, 157
+};
+
+static const u8 calib_channels_5g_bw_80[] = {
+ 36, 52, 100, 116, 132, 149
+};
+
+static const u8 calib_channels_5g_bw_160[] = {
+ 36, 100
+};
+
+static const u8 calib_channels_6g_bw_20[] = {
+ 1, 5, 9, 13, 17, 21, 25, 29, 33, 37, 41, 45, 49, 53, 57, 61, 65, 69, 73, 77, 81, 85, 89, 93,
+ 97, 101, 105, 109, 113, 117, 121, 125, 129, 133, 137, 141, 145, 149, 153, 157, 161, 165,
+ 169, 173, 177, 181, 185, 189, 193, 197, 201, 205, 209, 213, 217, 221, 225, 229, 233
+};
+
+static const u8 calib_channels_6g_bw_40[] = {
+ 1, 9, 17, 25, 33, 41, 49, 57, 65, 73, 81, 89, 97, 105, 113, 121, 129, 137, 145, 153, 161,
+ 169, 177, 185, 193, 201, 209, 217, 225
+};
+
+static const u8 calib_channels_6g_bw_80[] = {
+ 1, 17, 33, 49, 65, 81, 97, 113, 129, 145, 161, 177, 193, 209, 225
+};
+
+static const u8 calib_channels_6g_bw_160[] = {
+ 1, 33, 65, 97, 129, 161, 193, 225
+};
+
+static void cl_calib_dcoc_handle_data(struct cl_hw *cl_hw, s16 calib_temperature, u8 channel, u8 bw)
+{
+ struct cl_chip *chip = cl_hw->chip;
+ int lna, chain;
+ u8 tcv_idx = cl_hw->tcv_idx;
+ u8 sx = cl_hw->sx_idx;
+ u8 channel_idx = cl_calib_dcoc_channel_bw_to_idx(cl_hw, channel, bw);
+ struct cl_dcoc_calib *dcoc_calib;
+ struct cl_dcoc_calib *dcoc_calib_dma;
+
+ for (lna = 0; lna < DCOC_LNA_GAIN_NUM; lna++) {
+ riu_chain_for_each(chain) {
+ dcoc_calib =
+ &chip->calib_db.dcoc[tcv_idx][channel_idx][bw][sx][chain][lna];
+ dcoc_calib_dma =
+ &cl_hw->iq_dcoc_data_info.iq_dcoc_data->iq_dcoc_db.dcoc[lna][chain];
+ dcoc_calib->i = dcoc_calib_dma->i;
+ dcoc_calib->q = dcoc_calib_dma->q;
+ }
+ }
+}
+
+static void cl_calib_dcoc_handle_report(struct cl_hw *cl_hw, s16 calib_temperature,
+ int channel, u8 bw)
+{
+ struct cl_chip *chip = cl_hw->chip;
+ int lna, chain;
+ struct cl_dcoc_report *dcoc_calib_report_dma;
+ int bw_mhz = BW_TO_MHZ(bw);
+ u8 dcoc_threshold = chip->conf->ci_dcoc_mv_thr[bw];
+ s16 i, q;
+
+ for (lna = 0; lna < DCOC_LNA_GAIN_NUM; lna++) {
+ riu_chain_for_each(chain) {
+ dcoc_calib_report_dma =
+ &cl_hw->iq_dcoc_data_info.iq_dcoc_data->report.dcoc[lna][chain];
+
+ i = (s16)le16_to_cpu(dcoc_calib_report_dma->i_dc);
+ q = (s16)le16_to_cpu(dcoc_calib_report_dma->q_dc);
+
+ if (abs(i) > dcoc_threshold || abs(q) > dcoc_threshold) {
+ cl_dbg_warn(cl_hw,
+ "Warning: DCOC value exceeds threshold [%umV]: channel %u, bw = %u, lna = %u, chain = %u, I[mV] = %d, I[Iter] = %u, Q[mV] = %d, Q[Iter] = %u\n",
+ dcoc_threshold, channel, bw_mhz, lna, chain, i,
+ le16_to_cpu(dcoc_calib_report_dma->i_iterations), q,
+ le16_to_cpu(dcoc_calib_report_dma->q_iterations));
+ chip->calib_db.errors[cl_hw->tcv_idx].dcoc++;
+ }
+ }
+ }
+}
+
+static int cl_calib_dcoc_calibrate_channel(struct cl_hw *cl_hw, u32 channel, u32 bw,
+ bool first_channel)
+{
+ u32 primary = 0;
+ u32 center = 0;
+ enum nl80211_chan_width width = NL80211_CHAN_WIDTH_20;
+ struct cl_calib_params calib_params = {SET_CHANNEL_MODE_CALIB_DCOC, first_channel, 0, 0};
+
+ if (cl_chandef_calc(cl_hw, channel, bw, &width, &primary, &center)) {
+ cl_dbg_err(cl_hw, "cl_chandef_calc failed\n");
+ return -EINVAL;
+ }
+
+ cl_dbg_trace(cl_hw, "\n ------ START CALIB DCOC CHANNEL -----\n");
+ cl_dbg_trace(cl_hw, "channel = %u first_channel = %u\n", channel, first_channel);
+
+ /* Set Channel Mode to DCOC Calibration Mode */
+ return cl_msg_tx_set_channel(cl_hw, channel, bw, primary, center, calib_params);
+}
+
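+/*
+ * Average the DCOC I/Q values of the two calibrated neighbors in the 6g plan
+ * into the uncalibrated channel between them (illustrative example: plan index
+ * 1, i.e. channel 17, takes the average of channels 1 and 33).
+ */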
+static void cl_calib_dcoc_average(struct cl_chip *chip, u8 tcv_idx, u8 center,
+ u8 bw, u8 chain, u8 sx, u8 lna)
+{
+ struct cl_dcoc_calib *dcoc_db_left;
+ struct cl_dcoc_calib *dcoc_db_center;
+ struct cl_dcoc_calib *dcoc_db_right;
+ u8 left_idx = cl_calib_dcoc_tcv_channel_to_idx(chip, tcv_idx,
+ calib_channels_6g_plan[center - 1], bw);
+ u8 center_idx = cl_calib_dcoc_tcv_channel_to_idx(chip, tcv_idx,
+ calib_channels_6g_plan[center], bw);
+ u8 right_idx = cl_calib_dcoc_tcv_channel_to_idx(chip, tcv_idx,
+ calib_channels_6g_plan[center + 1], bw);
+
+ dcoc_db_left = &chip->calib_db.dcoc[tcv_idx][left_idx][bw][sx][chain][lna];
+ dcoc_db_center = &chip->calib_db.dcoc[tcv_idx][center_idx][bw][sx][chain][lna];
+ dcoc_db_right = &chip->calib_db.dcoc[tcv_idx][right_idx][bw][sx][chain][lna];
+
+ dcoc_db_center->i = (dcoc_db_left->i + dcoc_db_right->i) / 2;
+ dcoc_db_center->q = (dcoc_db_left->q + dcoc_db_right->q) / 2;
+}
+
+static int cl_calib_dcoc_calibrate_6g(struct cl_hw *cl_hw)
+{
+ int i;
+ u8 chain, lna;
+ u8 tcv_idx = cl_hw->tcv_idx;
+ u8 sx = tcv_idx;
+ bool first_channel = true;
+ struct cl_chip *chip = cl_hw->chip;
+
+ /* Calibrate channels: 1, 33, 65, 97, 129, 161, 193, 225 */
+ for (i = 0; i < CALIB_CHAN_6G_PLAN; i += 2) {
+ if (cl_hw->conf->ci_cap_bandwidth == CHNL_BW_160 &&
+ (cl_calib_dcoc_calibrate_channel(cl_hw, calib_channels_6g_plan[i], CHNL_BW_160,
+ first_channel) == 0))
+ first_channel = false;
+
+ if (cl_calib_dcoc_calibrate_channel(cl_hw, calib_channels_6g_plan[i], CHNL_BW_80,
+ first_channel) == 0)
+ first_channel = false;
+
+ if (cl_calib_dcoc_calibrate_channel(cl_hw, calib_channels_6g_plan[i], CHNL_BW_20,
+ first_channel) == 0)
+ first_channel = false;
+ }
+
+ /*
+ * For channels 17, 49, 81, 113, 145, 177 and 209,
+ * calculate the average of their closest calibrated neighbors
+ */
+ for (i = 1; i < CALIB_CHAN_6G_PLAN - 1; i += 2)
+ riu_chain_for_each(chain)
+ for (lna = 0; lna < DCOC_LNA_GAIN_NUM; lna++) {
+ cl_calib_dcoc_average(chip, tcv_idx, i, CHNL_BW_80,
+ chain, sx, lna);
+ cl_calib_dcoc_average(chip, tcv_idx, i, CHNL_BW_20,
+ chain, sx, lna);
+ }
+
+ return first_channel;
+}
+
+static int cl_calib_dcoc_calibrate_5g(struct cl_hw *cl_hw)
+{
+ int i;
+ bool first_channel = true;
+
+ if (cl_hw->conf->ci_cap_bandwidth == CHNL_BW_160) {
+ if (cl_calib_dcoc_calibrate_channel(cl_hw, 36, CHNL_BW_160, first_channel) == 0)
+ first_channel = false;
+
+ if (cl_calib_dcoc_calibrate_channel(cl_hw, 100, CHNL_BW_160, first_channel) == 0)
+ first_channel = false;
+ }
+
+ for (i = 0; i < CALIB_CHAN_5G_PLAN; i++) {
+ if (cl_calib_dcoc_calibrate_channel(cl_hw, calib_channels_5g_plan[i], CHNL_BW_80,
+ first_channel) == 0)
+ first_channel = false;
+
+ if (cl_calib_dcoc_calibrate_channel(cl_hw, calib_channels_5g_plan[i], CHNL_BW_20,
+ first_channel) == 0)
+ first_channel = false;
+ }
+
+ return first_channel;
+}
+
+static int cl_calib_dcoc_calibrate_24g(struct cl_hw *cl_hw)
+{
+ int i;
+ bool first_channel = true;
+
+ for (i = 0; i < CALIB_CHAN_24G_MAX; i++) {
+ if (cl_calib_dcoc_calibrate_channel(cl_hw, calib_channels_24g[i], CHNL_BW_40,
+ first_channel) == 0)
+ first_channel = false;
+
+ if (cl_calib_dcoc_calibrate_channel(cl_hw, calib_channels_24g[i], CHNL_BW_20,
+ first_channel) == 0)
+ first_channel = false;
+ }
+
+ return first_channel;
+}
+
+static void cl_calib_dcoc_calibrate(struct cl_hw *cl_hw)
+{
+ if (cl_band_is_6g(cl_hw))
+ cl_calib_dcoc_calibrate_6g(cl_hw);
+ else if (cl_band_is_5g(cl_hw))
+ cl_calib_dcoc_calibrate_5g(cl_hw);
+ else if (cl_band_is_24g(cl_hw))
+ cl_calib_dcoc_calibrate_24g(cl_hw);
+}
+
+void cl_calib_dcoc_init_calibration(struct cl_hw *cl_hw)
+{
+ u8 tcv_idx = cl_hw->tcv_idx;
+ struct cl_chip *chip = cl_hw->chip;
+ struct cl_iq_dcoc_conf *iq_dcoc_conf = &chip->iq_dcoc_conf;
+ u8 fem_mode = cl_hw->fem_mode;
+
+ /* No need to init calibration for a non-Olympus phy */
+ if (!IS_REAL_PHY(chip))
+ return;
+ if (cl_hw_is_tcv0(cl_hw) && chip->conf->ci_tcv1_chains_sx0)
+ return;
+
+ if (!iq_dcoc_conf->dcoc_calib_needed[tcv_idx]) {
+ u8 file_num_antennas = iq_dcoc_conf->dcoc_file_num_ant[tcv_idx];
+
+ if (file_num_antennas < cl_hw->num_antennas) {
+ cl_dbg_verbose(cl_hw,
+ "Num of antennas [%u] is larger than DCOC calibration file num of antennas [%u], recalibration is needed\n",
+ cl_hw->num_antennas, file_num_antennas);
+ } else {
+ return;
+ }
+ }
+
+ /* Set FEM mode to LNA Bypass Only mode for DCOC Calibration. */
+ cl_fem_set_dcoc_bypass(cl_hw);
+ cl_afe_cfg_calib(chip);
+
+ cl_calib_dcoc_calibrate(cl_hw);
+
+ /* Restore FEM mode to its original mode. */
+ cl_fem_dcoc_restore(cl_hw, fem_mode);
+ cl_afe_cfg_restore(chip);
+
+ iq_dcoc_conf->dcoc_calib_needed[tcv_idx] = false;
+ iq_dcoc_conf->dcoc_file_num_ant[tcv_idx] = cl_hw->num_antennas;
+}
+
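+/*
+ * Return the index of the last entry in calib_chan_list that does not exceed
+ * the given channel; channels below the first entry also map to index 0.
+ * Illustrative example: for the 5g BW 80 list {36, 52, 100, ...}, channel 60
+ * maps to index 1 (fallback entry 52).
+ */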
+static u8 cl_calib_dcoc_get_chan_idx(const u8 calib_chan_list[], u8 list_len, u8 channel)
+{
+ u8 i = 0;
+
+ for (i = 1; i < list_len; i++)
+ if (calib_chan_list[i] > channel)
+ return (i - 1);
+
+ return (list_len - 1);
+}
+
+static u8 cl_calib_dcoc_convert_to_channel_in_plan(u8 channel, u8 band)
+{
+ u8 idx;
+
+ if (band == BAND_6G) {
+ idx = cl_calib_dcoc_get_chan_idx(calib_channels_6g_plan,
+ ARRAY_SIZE(calib_channels_6g_plan), channel);
+ return calib_channels_6g_plan[idx];
+ }
+
+ idx = cl_calib_dcoc_get_chan_idx(calib_channels_5g_plan,
+ ARRAY_SIZE(calib_channels_5g_plan), channel);
+
+ return calib_channels_5g_plan[idx];
+}
+
+u8 cl_calib_dcoc_channel_bw_to_idx(struct cl_hw *cl_hw, u8 channel, u8 bw)
+{
+ if (cl_band_is_6g(cl_hw)) {
+ if (bw == CHNL_BW_160)
+ return cl_calib_dcoc_get_chan_idx(calib_channels_6g_bw_160,
+ ARRAY_SIZE(calib_channels_6g_bw_160),
+ channel);
+ /*
+ * When runtime calibration is disabled, channels that are not in the plan list
+ * are not calibrated. Their calib data must therefore be fetched from a nearby
+ * channel that was calibrated - the "fallback channel". In that case the
+ * channel is converted to its fallback channel, and the function returns the
+ * idx of the fallback channel instead of the idx of the original channel.
+ */
+ if (!cl_hw->chip->conf->ce_calib_runtime_en)
+ channel = cl_calib_dcoc_convert_to_channel_in_plan(channel, BAND_6G);
+
+ if (bw == CHNL_BW_20)
+ return cl_calib_dcoc_get_chan_idx(calib_channels_6g_bw_20,
+ ARRAY_SIZE(calib_channels_6g_bw_20),
+ channel);
+
+ if (bw == CHNL_BW_40)
+ return cl_calib_dcoc_get_chan_idx(calib_channels_6g_bw_40,
+ ARRAY_SIZE(calib_channels_6g_bw_40),
+ channel);
+
+ if (bw == CHNL_BW_80)
+ return cl_calib_dcoc_get_chan_idx(calib_channels_6g_bw_80,
+ ARRAY_SIZE(calib_channels_6g_bw_80),
+ channel);
+ }
+
+ if (cl_band_is_5g(cl_hw)) {
+ if (bw == CHNL_BW_160)
+ return cl_calib_dcoc_get_chan_idx(calib_channels_5g_bw_160,
+ ARRAY_SIZE(calib_channels_5g_bw_160),
+ channel);
+
+ if (!cl_hw->chip->conf->ce_calib_runtime_en)
+ channel = cl_calib_dcoc_convert_to_channel_in_plan(channel, BAND_5G);
+
+ if (bw == CHNL_BW_20)
+ return cl_calib_dcoc_get_chan_idx(calib_channels_5g_bw_20,
+ ARRAY_SIZE(calib_channels_5g_bw_20),
+ channel);
+
+ if (bw == CHNL_BW_40)
+ return cl_calib_dcoc_get_chan_idx(calib_channels_5g_bw_40,
+ ARRAY_SIZE(calib_channels_5g_bw_40),
+ channel);
+
+ if (bw == CHNL_BW_80)
+ return cl_calib_dcoc_get_chan_idx(calib_channels_5g_bw_80,
+ ARRAY_SIZE(calib_channels_5g_bw_80),
+ channel);
+ }
+
+ return cl_calib_dcoc_get_chan_idx(calib_channels_24g, ARRAY_SIZE(calib_channels_24g),
+ channel);
+}
+
+void cl_calib_dcoc_fill_data(struct cl_hw *cl_hw, struct cl_iq_dcoc_info *iq_dcoc_db)
+{
+ struct cl_chip *chip = cl_hw->chip;
+ u8 lna = 0, chain = 0;
+ u8 bw = cl_hw->bw;
+ u8 channel_idx = cl_calib_dcoc_channel_bw_to_idx(cl_hw, cl_hw->channel, bw);
+ u8 tcv_idx = cl_hw->tcv_idx;
+ u8 sx = cl_hw->sx_idx;
+
+ for (lna = 0; lna < DCOC_LNA_GAIN_NUM; lna++)
+ riu_chain_for_each(chain)
+ iq_dcoc_db->dcoc[lna][chain] =
+ chip->calib_db.dcoc[tcv_idx][channel_idx][bw][sx][chain][lna];
+}
+
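+/*
+ * Map a channel/bw pair to its calibration index for a given TCV: on a 6g chip
+ * TCV0 uses the 6g channel lists and TCV1 the 5g lists; on other chips channels
+ * up to NUM_CHANNELS_24G are matched against the 2.4g list and higher channels
+ * are looked up in the 5g lists. Unmatched channels fall back to index 0.
+ */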
+u8 cl_calib_dcoc_tcv_channel_to_idx(struct cl_chip *chip, u8 tcv_idx, u8 channel, u8 bw)
+{
+ u8 i = 0;
+
+ if (cl_chip_is_6g(chip)) {
+ if (tcv_idx == TCV0) {
+ if (bw == CHNL_BW_20)
+ return cl_calib_dcoc_get_chan_idx(calib_channels_6g_bw_20,
+ ARRAY_SIZE
+ (calib_channels_6g_bw_20),
+ channel);
+
+ if (bw == CHNL_BW_40)
+ return cl_calib_dcoc_get_chan_idx(calib_channels_6g_bw_40,
+ ARRAY_SIZE
+ (calib_channels_6g_bw_40),
+ channel);
+
+ if (bw == CHNL_BW_80)
+ return cl_calib_dcoc_get_chan_idx(calib_channels_6g_bw_80,
+ ARRAY_SIZE
+ (calib_channels_6g_bw_80),
+ channel);
+
+ if (bw == CHNL_BW_160)
+ return cl_calib_dcoc_get_chan_idx(calib_channels_6g_bw_160,
+ ARRAY_SIZE
+ (calib_channels_6g_bw_160),
+ channel);
+ } else if (tcv_idx == TCV1) {
+ if (bw == CHNL_BW_20)
+ return cl_calib_dcoc_get_chan_idx(calib_channels_5g_bw_20,
+ ARRAY_SIZE
+ (calib_channels_5g_bw_20),
+ channel);
+
+ if (bw == CHNL_BW_40)
+ return cl_calib_dcoc_get_chan_idx(calib_channels_5g_bw_40,
+ ARRAY_SIZE
+ (calib_channels_5g_bw_40),
+ channel);
+
+ if (bw == CHNL_BW_80)
+ return cl_calib_dcoc_get_chan_idx(calib_channels_5g_bw_80,
+ ARRAY_SIZE
+ (calib_channels_5g_bw_80),
+ channel);
+
+ if (bw == CHNL_BW_160)
+ return cl_calib_dcoc_get_chan_idx(calib_channels_5g_bw_160,
+ ARRAY_SIZE
+ (calib_channels_5g_bw_160),
+ channel);
+ }
+ } else {
+ if (channel <= NUM_CHANNELS_24G) {
+ for (i = 0; i < CALIB_CHAN_24G_MAX; i++)
+ if (calib_channels_24g[i] == channel)
+ return i;
+ } else {
+ if (bw == CHNL_BW_20)
+ return cl_calib_dcoc_get_chan_idx(calib_channels_5g_bw_20,
+ ARRAY_SIZE
+ (calib_channels_5g_bw_20),
+ channel);
+
+ if (bw == CHNL_BW_40)
+ return cl_calib_dcoc_get_chan_idx(calib_channels_5g_bw_40,
+ ARRAY_SIZE
+ (calib_channels_5g_bw_40),
+ channel);
+
+ if (bw == CHNL_BW_80)
+ return cl_calib_dcoc_get_chan_idx(calib_channels_5g_bw_80,
+ ARRAY_SIZE
+ (calib_channels_5g_bw_80),
+ channel);
+
+ if (bw == CHNL_BW_160)
+ return cl_calib_dcoc_get_chan_idx(calib_channels_5g_bw_160,
+ ARRAY_SIZE
+ (calib_channels_5g_bw_160),
+ channel);
+ }
+ }
+
+ return 0;
+}
+
+void cl_calib_dcoc_handle_set_channel_cfm(struct cl_hw *cl_hw, bool first_channel)
+{
+ struct calib_cfm *dcoc_iq_cfm =
+ &cl_hw->iq_dcoc_data_info.iq_dcoc_data->dcoc_iq_cfm[CALIB_CFM_DCOC];
+ s16 calib_temperature = cl_calib_common_get_temperature(cl_hw, CALIB_CFM_DCOC);
+ u8 channel = cl_hw->channel;
+ u8 bw = cl_hw->bw;
+
+ cl_dbg_trace(cl_hw, "calib_temperature = %d, channel = %u, bw = %u\n", calib_temperature,
+ channel, bw);
+
+ cl_calib_dcoc_handle_data(cl_hw, calib_temperature, channel, bw);
+ cl_calib_dcoc_handle_report(cl_hw, calib_temperature, channel, bw);
+
+ /*
+ * Set the default status to FAIL, so that we can verify the FW actually
+ * changed the value if the calibration succeeded.
+ */
+ dcoc_iq_cfm->status = CALIB_FAIL;
+}
+
+static void cl_calib_iq_handle_data(struct cl_hw *cl_hw, s16 calib_temperature, u8 channel,
+ u8 bw, u8 plan_bitmap)
+{
+ int chain;
+ u8 tcv_idx = cl_hw->tcv_idx;
+ u8 sx = cl_hw->sx_idx;
+ u8 channel_idx = cl_calib_dcoc_channel_bw_to_idx(cl_hw, channel, bw);
+ struct cl_iq_calib iq_calib_dma;
+
+ riu_chain_for_each(chain) {
+ if ((plan_bitmap & (1 << chain)) == 0)
+ continue;
+
+ iq_calib_dma = cl_hw->iq_dcoc_data_info.iq_dcoc_data->iq_dcoc_db.iq_tx[chain];
+ cl_hw->chip->calib_db.iq_tx[tcv_idx][channel_idx][bw][sx][chain] = iq_calib_dma;
+
+ iq_calib_dma = cl_hw->iq_dcoc_data_info.iq_dcoc_data->iq_dcoc_db.iq_rx[chain];
+ cl_hw->chip->calib_db.iq_rx[tcv_idx][channel_idx][bw][sx][chain] = iq_calib_dma;
+ }
+}
+
+static void cl_calib_iq_lolc_handle_data(struct cl_hw *cl_hw, s16 calib_temperature,
+ u8 channel, u8 bw, u8 plan_bitmap)
+{
+ int chain;
+ u8 tcv_idx = cl_hw->tcv_idx;
+ u8 sx = cl_hw->sx_idx;
+ u8 channel_idx = cl_calib_dcoc_channel_bw_to_idx(cl_hw, channel, bw);
+ __le32 lolc_calib_dma;
+
+ riu_chain_for_each(chain) {
+ if ((plan_bitmap & (1 << chain)) == 0)
+ continue;
+
+ lolc_calib_dma =
+ cl_hw->iq_dcoc_data_info.iq_dcoc_data->iq_dcoc_db.iq_tx_lolc[chain];
+ cl_hw->chip->calib_db.iq_tx_lolc[tcv_idx][channel_idx][bw][sx][chain] =
+ le32_to_cpu(lolc_calib_dma);
+ }
+}
+
+static void cl_calib_iq_lolc_handle_report(struct cl_hw *cl_hw, s16 calib_temperature,
+ int channel, u8 bw, u8 plan_bitmap)
+{
+ struct cl_iq_dcoc_report *report = &cl_hw->iq_dcoc_data_info.iq_dcoc_data->report;
+ int chain;
+ struct cl_lolc_report lolc_report_dma;
+ int bw_mhz = BW_TO_MHZ(bw);
+ s16 lolc_threshold = cl_hw->chip->conf->ci_lolc_db_thr;
+ s32 lolc_qual = 0;
+
+ riu_chain_for_each(chain) {
+ if ((plan_bitmap & (1 << chain)) == 0)
+ continue;
+
+ lolc_report_dma = report->lolc_report[chain];
+ lolc_qual = (s16)(le16_to_cpu(lolc_report_dma.lolc_qual)) >> 8;
+
+ cl_dbg_trace(cl_hw, "LOLC Quality [chain = %u] = %d, Iter = %u\n",
+ chain, lolc_qual, lolc_report_dma.n_iter);
+
+ if (lolc_qual > lolc_threshold) {
+ cl_dbg_warn(cl_hw,
+ "Warning: LOLC value exceeds threshold [%ddB]: channel %u, "
+ "bw = %u, chain = %u, LOLC[dB] = %d, I[Iter] = %u\n",
+ lolc_threshold, channel, bw_mhz, chain, lolc_qual,
+ lolc_report_dma.n_iter);
+ cl_hw->chip->calib_db.errors[cl_hw->tcv_idx].lolc++;
+ }
+ }
+}
+
+static int cl_calib_iq_calibrate_channel(struct cl_hw *cl_hw, u32 channel, u32 bw,
+ bool first_channel)
+{
+ u32 primary = 0;
+ u32 center = 0;
+ enum nl80211_chan_width width = NL80211_CHAN_WIDTH_20;
+ struct cl_calib_params calib_params = {
+ (SET_CHANNEL_MODE_CALIB_IQ | SET_CHANNEL_MODE_CALIB_LOLC),
+ first_channel, SX_FREQ_OFFSET_Q2, 0
+ };
+
+ /* Convert ant to riu chain in the calib plan_bitmap */
+ calib_params.plan_bitmap =
+ cl_hw_ant_mask_to_riu_chain_mask(cl_hw, cl_hw->mask_num_antennas);
+
+ if (cl_chandef_calc(cl_hw, channel, bw, &width, &primary, &center)) {
+ cl_dbg_err(cl_hw, "cl_chandef_calc failed\n");
+ return -EINVAL;
+ }
+
+ cl_dbg_trace(cl_hw, "\n ------ START CALIB IQ CHANNEL -----\n");
+ cl_dbg_trace(cl_hw, "channel = %u first_channel = %d\n", channel, first_channel);
+
+ /* Set channel mode to LO+IQ calibration mode */
+ return cl_msg_tx_set_channel(cl_hw, channel, bw, primary, center, calib_params);
+}
+
+static u8 cl_calib_iq_convert_plan_to_calib_db_idx(u8 chan_idx_src, u8 bw)
+{
+ u8 shift_idx = 0;
+ /*
+ * Calibration data is copied from calibrated channels in the plan list to the
+ * preceding uncalibrated channels.
+ *
+ * For example: channel 65 was calibrated according to the channel plan list.
+ * Its calibration data is stored in the calib_db struct at the channel idx
+ * that corresponds to the BW, as follows:
+ * chan_idx 16 for BW 20,
+ * chan_idx 8 for BW 40,
+ * chan_idx 4 for BW 80,
+ * chan_idx 2 for BW 160.
+ *
+ * We want to copy the IQ & LOLC calib data from channel 65 to channel 49, and
+ * do the same for the other uncalibrated channels: 33->17, 65->49, 97->81 etc.
+ *
+ * The chan idx of channel 49 in calib_db per BW is:
+ * chan_idx 12 for BW 20,
+ * chan_idx 6 for BW 40,
+ * chan_idx 3 for BW 80
+ * (no chan_idx exists for BW 160).
+ *
+ * We copy the data in calib_db from the idx of channel 65 to the idx of
+ * channel 49:
+ * chan_idx 16 to chan_idx 12 (in BW 20),
+ * chan_idx 8 to chan_idx 6 (in BW 40),
+ * chan_idx 4 to chan_idx 3 (in BW 80).
+ *
+ * In general, the dst chan idx is calculated as:
+ * dst_idx = src_idx - 4 (for BW 20)
+ * dst_idx = src_idx - 2 (for BW 40)
+ * dst_idx = src_idx - 1 (for BW 80)
+ *
+ * which is obtained by the shift:
+ * 4 >> bw
+ */
+ shift_idx = 4 >> bw;
+
+ return chan_idx_src - shift_idx;
+}
+
+static void cl_calib_iq_copy_data_to_uncalibrated_channels_6g(struct cl_hw *cl_hw)
+{
+ struct cl_calib_db *calib_db = &cl_hw->chip->calib_db;
+ int i;
+ u8 sx = cl_hw->sx_idx;
+ u8 tcv_idx = cl_hw->tcv_idx;
+ u8 chan_idx_src = 0;
+ u8 chan_idx_dst = 0;
+ u8 chain = 0;
+ u8 bw = 0;
+
+ /*
+ * Copy iq & lo calib data from the calibrated channels of the 6g plan list (1, 33, 65, 97,
+ * 129, 161, 193, 225) to the uncalibrated channels (17, 49, 81, 113, 145, 177, 209),
+ * using the correct channel idx for each bw
+ */
+ for (i = 1; i < CALIB_CHAN_6G_PLAN - 1; i += 2)
+ riu_chain_for_each(chain)
+ /* Iterate only CHNL_BW_80 and CHNL_BW_20 */
+ for (bw = CHNL_BW_20; bw <= CHNL_BW_80; bw += 2) {
+ chan_idx_src =
+ cl_calib_dcoc_channel_bw_to_idx(cl_hw,
+ calib_channels_6g_plan[i],
+ bw);
+
+ chan_idx_dst =
+ cl_calib_iq_convert_plan_to_calib_db_idx(chan_idx_src, bw);
+ memcpy(&calib_db->iq_tx[tcv_idx][chan_idx_dst][bw][sx][chain],
+ &calib_db->iq_tx[tcv_idx][chan_idx_src][bw][sx][chain],
+ sizeof(struct cl_iq_calib));
+ memcpy(&calib_db->iq_rx[tcv_idx][chan_idx_dst][bw][sx][chain],
+ &calib_db->iq_rx[tcv_idx][chan_idx_src][bw][sx][chain],
+ sizeof(struct cl_iq_calib));
+ calib_db->iq_tx_lolc[tcv_idx][chan_idx_dst][bw][sx][chain] =
+ calib_db->iq_tx_lolc[tcv_idx][chan_idx_src][bw][sx][chain];
+ }
+}
+
+static bool cl_calib_iq_calibrate_6g(struct cl_hw *cl_hw)
+{
+ int i;
+ bool first_channel = true;
+
+ /* Calibrate channels: 1, 33, 65, 97, 129, 161, 193, 225 */
+ for (i = 0; i < CALIB_CHAN_6G_PLAN; i += 2) {
+ if (cl_calib_iq_calibrate_channel(cl_hw, calib_channels_6g_plan[i], CHNL_BW_160,
+ first_channel) == 0)
+ first_channel = false;
+
+ if (cl_calib_iq_calibrate_channel(cl_hw, calib_channels_6g_plan[i], CHNL_BW_80,
+ first_channel) == 0)
+ first_channel = false;
+
+ if (cl_calib_iq_calibrate_channel(cl_hw, calib_channels_6g_plan[i], CHNL_BW_20,
+ first_channel) == 0)
+ first_channel = false;
+ }
+
+ /*
+ * For channels 17, 49, 81, 113, 145, 177 and 209,
+ * copy the data of the next calibrated neighbor
+ */
+ cl_calib_iq_copy_data_to_uncalibrated_channels_6g(cl_hw);
+
+ return first_channel;
+}
+
+static bool cl_calib_iq_calibrate_5g(struct cl_hw *cl_hw)
+{
+ int i;
+ bool first_channel = true;
+
+ if (cl_calib_iq_calibrate_channel(cl_hw, 36, CHNL_BW_160, first_channel) == 0)
+ first_channel = false;
+
+ if (cl_calib_iq_calibrate_channel(cl_hw, 100, CHNL_BW_160, first_channel) == 0)
+ first_channel = false;
+
+ for (i = 0; i < CALIB_CHAN_5G_PLAN; i++) {
+ if (cl_calib_iq_calibrate_channel(cl_hw, calib_channels_5g_plan[i], CHNL_BW_80,
+ first_channel) == 0)
+ first_channel = false;
+
+ if (cl_calib_iq_calibrate_channel(cl_hw, calib_channels_5g_plan[i], CHNL_BW_20,
+ first_channel) == 0)
+ first_channel = false;
+ }
+
+ return first_channel;
+}
+
+static bool cl_calib_iq_calibrate_24g(struct cl_hw *cl_hw)
+{
+ int i;
+ bool first_channel = true;
+
+ if (cl_hw->chip->conf->ce_production_mode) {
+ if (cl_calib_iq_calibrate_channel(cl_hw, 1, CHNL_BW_160,
+ first_channel) == 0)
+ first_channel = false;
+
+ if (cl_calib_iq_calibrate_channel(cl_hw, 1, CHNL_BW_80,
+ first_channel) == 0)
+ first_channel = false;
+ }
+
+ for (i = 0; i < CALIB_CHAN_24G_MAX; i++) {
+ if (cl_calib_iq_calibrate_channel(cl_hw, calib_channels_24g[i], CHNL_BW_40,
+ first_channel) == 0)
+ first_channel = false;
+
+ if (cl_calib_iq_calibrate_channel(cl_hw, calib_channels_24g[i], CHNL_BW_20,
+ first_channel) == 0)
+ first_channel = false;
+ }
+
+ return first_channel;
+}
+
+static void cl_calib_iq_calibrate(struct cl_hw *cl_hw)
+{
+ if (cl_band_is_6g(cl_hw))
+ cl_calib_iq_calibrate_6g(cl_hw);
+ else if (cl_band_is_5g(cl_hw))
+ cl_calib_iq_calibrate_5g(cl_hw);
+ else if (cl_band_is_24g(cl_hw))
+ cl_calib_iq_calibrate_24g(cl_hw);
+}
+
+static void cl_calib_iq_init_calibration_tcv(struct cl_hw *cl_hw)
+{
+ u8 tcv_idx = cl_hw->tcv_idx;
+
+ cl_calib_iq_calibrate(cl_hw);
+
+ cl_hw->chip->iq_dcoc_conf.iq_file_num_ant[tcv_idx] = cl_hw->num_antennas;
+}
+
+void cl_calib_restore_channel(struct cl_hw *cl_hw, struct cl_calib_iq_restore *iq_restore)
+{
+ u8 bw = iq_restore->bw;
+ u32 primary = iq_restore->primary;
+ u32 center = iq_restore->center;
+ u8 channel = iq_restore->channel;
+
+ cl_msg_tx_set_channel(cl_hw, channel, bw, primary, center, CL_CALIB_PARAMS_DEFAULT_STRUCT);
+}
+
+void cl_calib_save_channel(struct cl_hw *cl_hw, struct cl_calib_iq_restore *iq_restore)
+{
+ iq_restore->bw = cl_hw->bw;
+ iq_restore->primary = cl_hw->primary_freq;
+ iq_restore->center = cl_hw->center_freq;
+ iq_restore->channel = ieee80211_frequency_to_channel(cl_hw->primary_freq);
+
+ cl_dbg_chip_trace(cl_hw, "bw = %u, primary = %d, center = %d, channel = %u\n",
+ iq_restore->bw, iq_restore->primary,
+ iq_restore->center, iq_restore->channel);
+}
+
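+/*
+ * Move the enabled TCVs into or out of idle around calibration. When entering
+ * idle, TCV1 is idled asynchronously and TCV0 synchronously, and the function
+ * then waits for the MM_IDLE_ASYNC_IND confirmation before returning.
+ */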
+int cl_calib_iq_set_idle(struct cl_hw *cl_hw, bool idle)
+{
+ struct cl_chip *chip = cl_hw->chip;
+ struct cl_hw *cl_hw_tcv0 = chip->cl_hw_tcv0;
+ struct cl_hw *cl_hw_tcv1 = chip->cl_hw_tcv1;
+ bool tcv0_en = cl_chip_is_tcv0_enabled(chip) && cl_radio_is_on(cl_hw_tcv0);
+ bool tcv1_en = cl_chip_is_tcv1_enabled(chip) && cl_radio_is_on(cl_hw_tcv1);
+
+ if (!idle) {
+ if (tcv1_en)
+ cl_msg_tx_set_idle(cl_hw_tcv1, MAC_ACTIVE, false);
+
+ if (tcv0_en)
+ cl_msg_tx_set_idle(cl_hw_tcv0, MAC_ACTIVE, false);
+
+ return 0;
+ }
+
+ if (tcv1_en)
+ cl_msg_tx_idle_async(cl_hw_tcv1, false);
+
+ if (tcv0_en)
+ cl_msg_tx_set_idle(cl_hw_tcv0, MAC_IDLE_SYNC, false);
+
+ cl_dbg_info(cl_hw, "idle_async_set = %u\n", cl_hw->idle_async_set);
+
+ if (wait_event_timeout(cl_hw->wait_queue, !cl_hw->idle_async_set,
+ CL_MSG_CFM_TIMEOUT_JIFFIES))
+ return 0;
+
+ cl_dbg_err(cl_hw, "Timeout occurred - MM_IDLE_ASYNC_IND\n");
+ return -ETIMEDOUT;
+}
+
+bool cl_calib_iq_calibration_needed(struct cl_hw *cl_hw)
+{
+ struct cl_chip *chip = cl_hw->chip;
+ struct cl_iq_dcoc_conf *iq_dcoc_conf = &chip->iq_dcoc_conf;
+ bool calib_needed = false;
+
+ if (!IS_REAL_PHY(chip))
+ return false;
+
+ if (cl_hw->chip->conf->ce_calib_runtime_en)
+ return false;
+
+ if (cl_hw_is_tcv0(cl_hw) && chip->conf->ci_tcv1_chains_sx0)
+ return false;
+
+ if (cl_chip_is_tcv0_enabled(chip)) {
+ u8 num_antennas_tcv0 = chip->cl_hw_tcv0->num_antennas;
+
+ if (iq_dcoc_conf->iq_file_num_ant[TCV0] < num_antennas_tcv0 &&
+ !chip->conf->ci_tcv1_chains_sx0) {
+ cl_dbg_verbose(cl_hw,
+ "Num of antennas [%u] is larger than LOLC calibration file num of antennas [%u], recalibration is needed\n",
+ num_antennas_tcv0, iq_dcoc_conf->iq_file_num_ant[TCV0]);
+ calib_needed = true;
+ }
+ }
+
+ if (cl_chip_is_tcv1_enabled(chip)) {
+ u8 num_antennas_tcv1 = chip->cl_hw_tcv1->num_antennas;
+
+ if (iq_dcoc_conf->iq_file_num_ant[TCV1] < num_antennas_tcv1) {
+ cl_dbg_verbose(cl_hw,
+ "Num of antennas [%u] is larger than LOLC calibration file num of antennas [%u], recalibration is needed\n",
+ num_antennas_tcv1, iq_dcoc_conf->iq_file_num_ant[TCV1]);
+ calib_needed = true;
+ }
+ }
+
+ return calib_needed;
+}
+
+void cl_calib_iq_init_calibration(struct cl_hw *cl_hw)
+{
+ struct cl_chip *chip = cl_hw->chip;
+ u8 fem_mode = cl_hw->fem_mode;
+ struct cl_iq_dcoc_conf *iq_dcoc_conf = &chip->iq_dcoc_conf;
+ struct cl_hw *cl_hw_tcv0 = chip->cl_hw_tcv0;
+ struct cl_hw *cl_hw_tcv1 = chip->cl_hw_tcv1;
+ struct cl_calib_iq_restore iq_restore_tcv0;
+ struct cl_calib_iq_restore iq_restore_tcv1;
+ u8 save_tcv0_needed = cl_hw_tcv0 && cl_hw_tcv0->primary_freq &&
+ !chip->conf->ci_tcv1_chains_sx0;
+ u8 save_tcv1_needed = cl_hw_tcv1 && cl_hw_tcv1->primary_freq;
+
+ if (save_tcv0_needed)
+ cl_calib_save_channel(cl_hw_tcv0, &iq_restore_tcv0);
+
+ if (save_tcv1_needed)
+ cl_calib_save_channel(cl_hw_tcv1, &iq_restore_tcv1);
+
+ cl_fem_set_iq_bypass(cl_hw);
+ cl_afe_cfg_calib(chip);
+
+ if (cl_hw_tcv0 &&
+ (chip->iq_dcoc_conf.force_calib ||
+ (iq_dcoc_conf->iq_file_num_ant[TCV0] < cl_hw_tcv0->num_antennas &&
+ !chip->conf->ci_tcv1_chains_sx0))) {
+ cl_calib_iq_init_calibration_tcv(cl_hw_tcv0);
+ }
+
+ if (cl_hw_tcv1 &&
+ (chip->iq_dcoc_conf.force_calib ||
+ iq_dcoc_conf->iq_file_num_ant[TCV1] < cl_hw_tcv1->num_antennas)) {
+ cl_calib_iq_init_calibration_tcv(cl_hw_tcv1);
+ }
+
+ cl_fem_iq_restore(cl_hw, fem_mode);
+ cl_afe_cfg_restore(chip);
+
+ if (save_tcv0_needed)
+ cl_calib_restore_channel(cl_hw_tcv0, &iq_restore_tcv0);
+
+ if (save_tcv1_needed)
+ cl_calib_restore_channel(cl_hw_tcv1, &iq_restore_tcv1);
+}
+
+void cl_calib_iq_fill_data(struct cl_hw *cl_hw, struct cl_iq_calib *iq_data,
+ struct cl_iq_calib *iq_chip_data)
+{
+ u8 ant = 0;
+
+ for (ant = 0; ant < MAX_ANTENNAS; ant++) {
+ iq_data[ant].coef0 = iq_chip_data[ant].coef0;
+ iq_data[ant].coef1 = iq_chip_data[ant].coef1;
+ iq_data[ant].coef2 = iq_chip_data[ant].coef2;
+ iq_data[ant].gain = iq_chip_data[ant].gain;
+ }
+}
+
+void cl_calib_iq_lolc_fill_data(struct cl_hw *cl_hw, __le32 *iq_lolc)
+{
+ struct cl_calib_db *calib_db = &cl_hw->chip->calib_db;
+ u8 ant = 0;
+ u8 bw = cl_hw->bw;
+ u8 chan_idx = cl_calib_dcoc_channel_bw_to_idx(cl_hw, cl_hw->channel, bw);
+ u8 tcv_idx = cl_hw->tcv_idx;
+ u8 sx = cl_hw->sx_idx;
+
+ for (ant = 0; ant < MAX_ANTENNAS; ant++)
+ iq_lolc[ant] = cpu_to_le32(calib_db->iq_tx_lolc[tcv_idx][chan_idx][bw][sx][ant]);
+}
+
+void cl_calib_iq_handle_set_channel_cfm(struct cl_hw *cl_hw, u8 plan_bitmap)
+{
+ struct calib_cfm *dcoc_iq_cfm =
+ &cl_hw->iq_dcoc_data_info.iq_dcoc_data->dcoc_iq_cfm[CALIB_CFM_IQ];
+ s16 calib_temperature = cl_calib_common_get_temperature(cl_hw, CALIB_CFM_IQ);
+ u8 channel = cl_hw->channel;
+ u8 bw = cl_hw->bw;
+
+ cl_dbg_trace(cl_hw, "calib_temperature = %d, channel = %u, bw = %u\n", calib_temperature,
+ channel, bw);
+
+ cl_calib_iq_handle_data(cl_hw, calib_temperature, channel, bw, plan_bitmap);
+
+ /*
+ * Set the default status to FAIL, so that we can verify the FW actually
+ * changed the value if the calibration succeeded.
+ */
+ dcoc_iq_cfm->status = CALIB_FAIL;
+}
+
+void cl_calib_iq_lolc_handle_set_channel_cfm(struct cl_hw *cl_hw, u8 plan_bitmap)
+{
+ struct calib_cfm *dcoc_iq_cfm =
+ &cl_hw->iq_dcoc_data_info.iq_dcoc_data->dcoc_iq_cfm[CALIB_CFM_IQ];
+ s16 calib_temperature = cl_calib_common_get_temperature(cl_hw, CALIB_CFM_IQ);
+ u8 channel = cl_hw->channel;
+ u8 bw = cl_hw->bw;
+
+ cl_dbg_trace(cl_hw, "calib_temperature = %d, channel = %u, bw = %u\n", calib_temperature,
+ channel, bw);
+
+ cl_calib_iq_lolc_handle_data(cl_hw, calib_temperature, channel, bw, plan_bitmap);
+ cl_calib_iq_lolc_handle_report(cl_hw, calib_temperature, channel, bw, plan_bitmap);
+
+ /*
+ * Set the default status to FAIL, so that we can verify the FW actually
+ * changed the value if the calibration succeeded.
+ */
+ dcoc_iq_cfm->status = CALIB_FAIL;
+}
+
+void cl_calib_iq_get_tone_vector(struct cl_hw *cl_hw, __le16 *tone_vector)
+{
+ u8 tone = 0;
+ u8 *vector_ptr = NULL;
+
+ switch (cl_hw->bw) {
+ case CHNL_BW_20:
+ vector_ptr = cl_hw->conf->ci_calib_conf_tone_vector_20bw;
+ break;
+ case CHNL_BW_40:
+ vector_ptr = cl_hw->conf->ci_calib_conf_tone_vector_40bw;
+ break;
+ case CHNL_BW_80:
+ vector_ptr = cl_hw->conf->ci_calib_conf_tone_vector_80bw;
+ break;
+ case CHNL_BW_160:
+ vector_ptr = cl_hw->conf->ci_calib_conf_tone_vector_160bw;
+ break;
+ default:
+ vector_ptr = cl_hw->conf->ci_calib_conf_tone_vector_20bw;
+ break;
+ }
+
+ for (tone = 0; tone < IQ_NUM_TONES_REQ; tone++)
+ tone_vector[tone] = cpu_to_le16((u16)vector_ptr[tone]);
+}
+
+void cl_calib_iq_init_production(struct cl_hw *cl_hw)
+{
+ struct cl_hw *cl_hw_other = NULL;
+ struct cl_chip *chip = cl_hw->chip;
+
+ if (!cl_chip_is_both_enabled(chip) ||
+ (cl_hw_is_tcv1(cl_hw) && chip->conf->ci_tcv1_chains_sx0)) {
+ if (cl_calib_iq_calibration_needed(cl_hw))
+ cl_calib_iq_init_calibration(cl_hw);
+ return;
+ }
+
+ cl_hw_other = cl_hw_other_tcv(cl_hw);
+ if (!cl_hw_other)
+ return;
+
+ if (cl_hw_other->iq_cal_ready) {
+ cl_hw_other->iq_cal_ready = false;
+ cl_calib_iq_init_calibration(cl_hw);
+ } else if (cl_calib_iq_calibration_needed(cl_hw)) {
+ cl_hw->iq_cal_ready = true;
+ cl_dbg_verbose(cl_hw, "IQ Calibration needed. Wait for both TCVs before starting "
+ "calibration.\n");
+ }
+}
+
+/*
+ * CL80x0: TCV0 - 5g, TCV1 - 24g
+ * ==============================================
+ * 50 48 46 44 42 40 38 36 --> Start 5g
+ * 100 64 62 60 58 56 54 52
+ * 116 114 112 110 108 106 104 102
+ * 134 132 128 126 124 122 120 118
+ * 153 151 149 144 142 140 138 136
+ * 3 2 1 165 161 159 157 155 --> Start 24g
+ * 11 10 9 8 7 6 5 4
+ * 14 13 12
+ */
+
+/*
+ * CL80x6: TCV0 - 6g, TCV1 - 5g
+ * ==============================================
+ * 25 21 17 13 9 5 2 1 --> Start 6g
+ * 57 53 49 45 41 37 33 29
+ * 89 85 81 77 73 69 65 61
+ * 121 117 113 109 105 101 97 93
+ * 153 147 143 139 135 131 127 123
+ * 185 181 177 173 169 165 161 157
+ * 217 213 209 205 201 197 193 189
+ * 42 40 38 36 233 229 225 221 --> Start 5g
+ * 58 56 54 52 50 48 46 44
+ * 108 106 104 102 100 64 62 60
+ * 124 122 120 118 116 114 112 110
+ * 142 140 138 136 134 132 128 126
+ * 161 159 157 155 153 151 149 144
+ * 165
+ */
+
+#define BITMAP_80X0_START_TCV0 0
+#define BITMAP_80X0_MAX_TCV0 NUM_CHANNELS_5G
+
+#define BITMAP_80X0_START_TCV1 NUM_CHANNELS_5G
+#define BITMAP_80X0_MAX_TCV1 (NUM_CHANNELS_5G + NUM_CHANNELS_24G)
+
+#define BITMAP_80X6_START_TCV0 0
+#define BITMAP_80X6_MAX_TCV0 NUM_BITMAP_CHANNELS_6G
+
+#define BITMAP_80X6_START_TCV1 NUM_BITMAP_CHANNELS_6G
+#define BITMAP_80X6_MAX_TCV1 (NUM_BITMAP_CHANNELS_6G + NUM_CHANNELS_5G)
+
+#define INVALID_ADDR 0xffff
+
+static u8 cl_get_bitmap_start_tcv1(struct cl_chip *chip)
+{
+ if (cl_chip_is_6g(chip))
+ return BITMAP_80X6_START_TCV1;
+ else
+ return BITMAP_80X0_START_TCV1;
+}
+
+static u8 cl_idx_to_arr_offset(u8 idx)
+{
+ /* Divide by 8 for array index */
+ return idx >> 3;
+}
+
+static u8 cl_idx_to_bit_offset(u8 idx)
+{
+ /* Remainder gives the bit index (assumes an array of u8) */
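+ /* Illustrative example: idx 27 maps to array offset 3 (27 >> 3) and bit offset 3 (27 & 0x07) */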
+ return idx & 0x07;
+}
+
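+/*
+ * Popcount lookup table: bits_cnt_table256[n] holds the number of set bits in
+ * the byte value n (e.g. bits_cnt_table256[7] = 3, bits_cnt_table256[255] = 8).
+ */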
+static const u8 bits_cnt_table256[] = {
+ 0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4,
+ 1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
+ 1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
+ 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
+ 1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
+ 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
+ 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
+ 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
+ 1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
+ 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
+ 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
+ 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
+ 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
+ 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
+ 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
+ 4, 5, 5, 6, 5, 6, 6, 7, 5, 6, 6, 7, 6, 7, 7, 8
+};
+
+static bool cl_is_vector_unset(const u8 *bitmap)
+{
+ /* Check whether the bitmap is unset, i.e. all values are zero (CURR_BMP_UNSET) */
+ u8 empty_bitmap[BIT_MAP_SIZE] = {0};
+
+ return !memcmp(bitmap, empty_bitmap, BIT_MAP_SIZE);
+}
+
+static bool cl_bitmap_test_bit_idx(const u8 *bitmap, u8 idx)
+{
+ /* Check whether the bit at a given index is set */
+ u8 arr_idx = cl_idx_to_arr_offset(idx), bit_idx = cl_idx_to_bit_offset(idx);
+
+ if (arr_idx >= BIT_MAP_SIZE)
+ return false;
+
+ /* Convert non-zero to true and zero to false */
+ return !!(bitmap[arr_idx] & BIT(bit_idx));
+}
+
+static void cl_bitmap_shift(u8 *bitmap, u8 shft)
+{
+ /* Shift the bitmap (BIT_MAP_SIZE bytes) by shft bits toward index 0, discarding the lowest shft bits */
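+ /*
+ * Illustrative example: shifting a bitmap starting {0x01, 0x80, 0x00, ...} by 3 bits
+ * yields {0x00, 0x10, 0x00, ...}: bit 15 moves down to bit 12 and bits 0..2 are dropped.
+ */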
+ u8 bitmap_tmp[BIT_MAP_SIZE] = {0};
+ u8 msb_shifts = shft % 8;
+ u8 lsb_shifts = 8 - msb_shifts;
+ u8 byte_shift = shft / 8;
+ u8 last_byte = BIT_MAP_SIZE - byte_shift - 1;
+ u8 msb_idx;
+ u8 i;
+
+ memcpy(bitmap_tmp, bitmap, BIT_MAP_SIZE);
+ memset(bitmap, 0, BIT_MAP_SIZE);
+
+ for (i = 0; i < BIT_MAP_SIZE; i++) {
+ if (i <= last_byte) {
+ msb_idx = i + byte_shift;
+ bitmap[i] = bitmap_tmp[msb_idx] >> msb_shifts;
+ if (i != last_byte)
+ bitmap[i] |= bitmap_tmp[msb_idx + 1] << lsb_shifts;
+ }
+ }
+}
+
+static bool cl_bitmap_set_bit_idx(struct cl_hw *cl_hw, u8 *bitmap, u8 bitmap_size, u8 idx)
+{
+ /* Set bit at a given index */
+ u8 arr_idx = cl_idx_to_arr_offset(idx), bit_idx = cl_idx_to_bit_offset(idx);
+
+ if (arr_idx >= bitmap_size) {
+ cl_dbg_err(cl_hw, "invalid arr_idx (%u)\n", arr_idx);
+ return false;
+ }
+
+ bitmap[arr_idx] |= BIT(bit_idx);
+ return true;
+}
+
+static u16 cl_bitmap_look_lsb_up(struct cl_hw *cl_hw, u8 *bitmap, u16 idx, bool ext)
+{
+ /* Find closest set bit with index higher than idx inside bitmap */
+ u16 curr_idx = idx;
+ u8 curr = 0;
+ u32 chan_num = ext ? NUM_EXT_CHANNELS_6G : cl_channel_num(cl_hw);
+
+ while (++curr_idx < chan_num) {
+ curr = bitmap[cl_idx_to_arr_offset(curr_idx)];
+ if (curr & (1ULL << cl_idx_to_bit_offset(curr_idx)))
+ return curr_idx;
+ }
+
+ /* No matching bit found - return original index */
+ return idx;
+}
+
+static u16 bitmap_look_msb_down(struct cl_hw *cl_hw, u8 *bitmap, u16 idx, bool ext)
+{
+ /* Find closest set bit with index lower than idx inside bitmap */
+ u16 curr_idx = idx;
+ u8 curr = 0;
+ u32 chan_num = ext ? NUM_EXT_CHANNELS_6G : cl_channel_num(cl_hw);
+
+ if (idx >= chan_num) {
+ cl_dbg_err(cl_hw, "Invalid channel index [%u]\n", idx);
+ return idx;
+ }
+
+ while (curr_idx-- != 0) {
+ curr = bitmap[cl_idx_to_arr_offset(curr_idx)];
+ if (curr & (1ULL << cl_idx_to_bit_offset(curr_idx)))
+ return curr_idx;
+ }
+
+ /* No matching bit found - return original index */
+ return idx;
+}
+
+static u8 cl_address_offset_tcv1(struct cl_hw *cl_hw)
+{
+ /* Calculate eeprom calibration data offset for tcv1 */
+ struct cl_chip *chip = cl_hw->chip;
+ u8 i, cnt = 0;
+ u8 bitmap[BIT_MAP_SIZE] = {0};
+
+ if (cl_e2p_read(chip, bitmap, BIT_MAP_SIZE, ADDR_CALIB_POWER_CHAN_BMP))
+ return 0;
+
+ for (i = 0; i < cl_get_bitmap_start_tcv1(chip); i++)
+ cnt += cl_bitmap_test_bit_idx(bitmap, i);
+
+ return cnt;
+}
+
+static int cl_point_idx_to_address(struct cl_hw *cl_hw, u8 *bitmap, struct point *pt)
+{
+ /* Calculate eeprom address for a given idx and phy (initiated point) */
+ u8 i, cnt = 0;
+
+ pt->addr = INVALID_ADDR;
+
+ if (!cl_bitmap_test_bit_idx(bitmap, pt->idx))
+ return 0;
+
+ if (pt->phy >= MAX_ANTENNAS) {
+ cl_dbg_err(cl_hw, "Invalid phy number %u", pt->phy);
+ return -EINVAL;
+ }
+
+ for (i = 0; i < pt->idx; i++)
+ cnt += cl_bitmap_test_bit_idx(bitmap, i);
+
+ if (cl_hw_is_tcv1(cl_hw))
+ cnt += cl_address_offset_tcv1(cl_hw);
+
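+ /*
+ * Illustrative example: with cnt = 3 and pt->phy = 1, the record is at
+ * ADDR_CALIB_POWER_PHY + sizeof(struct eeprom_phy_calib) * (3 * MAX_ANTENNAS + 1).
+ */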
+ pt->addr = ADDR_CALIB_POWER_PHY +
+ sizeof(struct eeprom_phy_calib) * (cnt * MAX_ANTENNAS + pt->phy);
+
+ return 0;
+}
+
+static bool cl_linear_equation_signed(struct cl_hw *cl_hw, const u16 x, s8 *y,
+ const u16 x0, const s8 y0, const u16 x1, const s8 y1)
+{
+ /* Calculate y for a given x from the two points (x0,y0) and (x1,y1) */
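+ /*
+ * Illustrative example: with (x0, y0) = (5180, 10) and (x1, y1) = (5260, 14),
+ * x = 5220 gives y = 10 + DIV_ROUND_CLOSEST(40 * 4, 80) = 12.
+ */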
+ s32 numerator = (x - x0) * (y1 - y0);
+ s32 denominator = x1 - x0;
+
+ if (unlikely(!denominator)) {
+ cl_dbg_err(cl_hw, "zero denominator\n");
+ return false;
+ }
+
+ *y = (s8)(y0 + DIV_ROUND_CLOSEST(numerator, denominator));
+
+ return true;
+}
+
+static void cl_extend_bitmap_6g(struct cl_hw *cl_hw, u8 *bitmap, u8 *ext_bitmap)
+{
+ u8 i, ext_idx;
+
+ for (i = 0; i < cl_channel_num(cl_hw); ++i) {
+ if (cl_bitmap_test_bit_idx(bitmap, i)) {
+ ext_idx = CHAN_BITMAP_IDX_6G_2_EXT_IDX(i);
+ cl_bitmap_set_bit_idx(cl_hw, ext_bitmap, EXT_BIT_MAP_SIZE, ext_idx);
+ }
+ }
+}
+
+static bool cl_calculate_calib(struct cl_hw *cl_hw, u8 *bitmap,
+ struct point *p0, struct point *p1, struct point *p2)
+{
+ /* Main interpolation/extrapolation function */
+ bool calc_success = false, use_ext = false;
+ u16 freq0, freq1, freq2;
+ u8 e2p_ext_bitmap[EXT_BIT_MAP_SIZE] = {0};
+ u8 *bitmap_to_use = bitmap;
+
+ if (unlikely(cl_is_vector_unset(bitmap)))
+ return false;
+
+ /*
+ * If the band is 6g and the channel index wasn't found, the channel may be
+ * missing from the original bitmap (which includes only 20MHz channels).
+ * Center channels of 40/80/160MHz channels cannot be found in the original
+ * bitmap, so the bitmap is extended to include them in order to perform the
+ * interpolation on it.
+ */
+ if (cl_band_is_6g(cl_hw) && p0->idx == INVALID_CHAN_IDX) {
+ p0->idx = cl_channel_to_ext_index_6g(cl_hw, p0->chan);
+ cl_extend_bitmap_6g(cl_hw, bitmap, e2p_ext_bitmap);
+ bitmap_to_use = e2p_ext_bitmap;
+ use_ext = true;
+ }
+
+ p1->idx = cl_bitmap_look_lsb_up(cl_hw, bitmap_to_use, p0->idx, use_ext);
+ p2->idx = bitmap_look_msb_down(cl_hw, bitmap_to_use, p0->idx, use_ext);
+
+ /* Invalid case */
+ if (p1->idx == p0->idx && p2->idx == p0->idx) {
+ cl_dbg_err(cl_hw, "Invalid index %u or bad bit map\n", p0->idx);
+ return false;
+ }
+
+ /* Extrapolation case */
+ if (p1->idx == p0->idx)
+ p1->idx = bitmap_look_msb_down(cl_hw, bitmap_to_use, p2->idx, use_ext);
+ if (p2->idx == p0->idx)
+ p2->idx = cl_bitmap_look_lsb_up(cl_hw, bitmap_to_use, p1->idx, use_ext);
+
+ if (use_ext) {
+ /* Convert indices from extended bitmap to eeprom bitmap */
+ p1->idx = CHAN_EXT_IDX_6G_2_BITMAP_IDX(p1->idx);
+ p2->idx = CHAN_EXT_IDX_6G_2_BITMAP_IDX(p2->idx);
+ }
+
+ /* Address from index */
+ if (cl_point_idx_to_address(cl_hw, bitmap, p1) || p1->addr == INVALID_ADDR) {
+ cl_dbg_err(cl_hw, "Point calculation failed\n");
+ return false;
+ }
+
+ if (cl_point_idx_to_address(cl_hw, bitmap, p2) || p2->addr == INVALID_ADDR) {
+ cl_dbg_err(cl_hw, "Point calculation failed\n");
+ return false;
+ }
+
+ /* Read from eeprom */
+ if (cl_e2p_read(cl_hw->chip, (u8 *)&p1->calib, sizeof(struct eeprom_phy_calib), p1->addr))
+ return false;
+
+ /* No interpolation required */
+ if (p1->addr == p2->addr) {
+ p0->calib = p1->calib;
+ return true;
+ }
+
+ /* Interpolation or extrapolation is required - read from eeprom */
+ if (cl_e2p_read(cl_hw->chip, (u8 *)&p2->calib, sizeof(struct eeprom_phy_calib), p2->addr))
+ return false;
+
+ freq0 = (use_ext ? cl_channel_ext_idx_to_freq_6g(cl_hw, p0->idx) :
+ cl_channel_idx_to_freq(cl_hw, p0->idx));
+ freq1 = cl_channel_idx_to_freq(cl_hw, p1->idx);
+ freq2 = cl_channel_idx_to_freq(cl_hw, p2->idx);
+
+ /* Interpolate/extrapolate target power */
+ calc_success = cl_linear_equation_signed(cl_hw,
+ freq0, &p0->calib.pow,
+ freq1, p1->calib.pow,
+ freq2, p2->calib.pow);
+
+ /* Interpolate/extrapolate power offset */
+ calc_success = calc_success && cl_linear_equation_signed(cl_hw,
+ freq0, &p0->calib.offset,
+ freq1, p1->calib.offset,
+ freq2, p2->calib.offset);
+
+ /* Interpolate/extrapolate calibration temperature */
+ calc_success = calc_success && cl_linear_equation_signed(cl_hw,
+ freq0, &p0->calib.tmp,
+ freq1, p1->calib.tmp,
+ freq2, p2->calib.tmp);
+
+ if (unlikely(!calc_success)) {
+ cl_dbg_err(cl_hw,
+ "Calc failed: freq0 %u idx0 %u%s, freq1 %u idx1 %u, freq2 %u idx2 %u\n",
+ freq0, p0->idx, use_ext ? " (ext)" : "",
+ freq1, p1->idx, freq2, p2->idx);
+ return false;
+ }
+
+ return true;
+}
+
+static int cl_read_validate_vector_bitmap(struct cl_hw *cl_hw, u8 *bitmap)
+{
+ struct cl_chip *chip = cl_hw->chip;
+
+ if (cl_e2p_read(chip, bitmap, BIT_MAP_SIZE, ADDR_CALIB_POWER_CHAN_BMP))
+ return -1;
+
+ /* Check that the e2p read succeeded, i.e. the bitmap is not all empty */
+ if (cl_is_vector_unset(bitmap)) {
+ cl_dbg_err(cl_hw, "Vector not ready\n");
+ return -EPERM;
+ }
+
+ if (cl_hw_is_tcv1(cl_hw)) {
+ u8 bitmap_start = cl_get_bitmap_start_tcv1(chip);
+
+ cl_bitmap_shift(bitmap, bitmap_start);
+ }
+
+ return 0;
+}
+
+static int cl_read_or_interpolate_point(struct cl_hw *cl_hw, u8 *bitmap, struct point *p0)
+{
+ struct point p1 = {.phy = p0->phy};
+ struct point p2 = {.phy = p0->phy};
+ struct point tmp_pt = *p0;
+
+ /* Invalid address = no physical address was allocated to this channel */
+ if (tmp_pt.addr != INVALID_ADDR) {
+ if (cl_e2p_read(cl_hw->chip, (u8 *)&tmp_pt.calib,
+ sizeof(struct eeprom_phy_calib), tmp_pt.addr))
+ return -1;
+ } else {
+ /* Interpolate */
+ if (!cl_calculate_calib(cl_hw, bitmap, &tmp_pt, &p1, &p2)) {
+ cl_dbg_err(cl_hw, "Interpolation Error\n");
+ return -EFAULT;
+ }
+ }
+
+ if (tmp_pt.calib.pow == 0 && tmp_pt.calib.offset == 0 && tmp_pt.calib.tmp == 0) {
+ u16 freq = ieee80211_channel_to_frequency(tmp_pt.chan, cl_hw->nl_band);
+
+ cl_dbg_err(cl_hw, "Verify calibration point: addr %x, idx %u, freq %u, phy %u\n",
+ tmp_pt.addr, tmp_pt.idx, freq, tmp_pt.phy);
+ /* Uninitialized eeprom value */
+ return -EINVAL;
+ }
+
+ /* Now p0 contains valid calibration values */
+ p0->calib = tmp_pt.calib;
+ return 0;
+}
+
+static void cl_calib_power_reset(struct cl_hw *cl_hw)
+{
+ u8 ch_idx;
+ u16 phy;
+ u32 chan_num = cl_band_is_6g(cl_hw) ? NUM_EXT_CHANNELS_6G : cl_channel_num(cl_hw);
+ static const struct cl_tx_power_info default_info = {
+ .power = UNCALIBRATED_POWER,
+ .offset = UNCALIBRATED_POWER_OFFSET,
+ .temperature = UNCALIBRATED_TEMPERATURE
+ };
+
+ /* Initialize the tx_pow_info struct to default values */
+ for (ch_idx = 0; ch_idx < chan_num; ch_idx++)
+ for (phy = 0; phy < MAX_ANTENNAS; phy++)
+ cl_hw->tx_pow_info[ch_idx][phy] = default_info;
+}
+
+static void cl_calib_fill_power_info(struct cl_hw *cl_hw, u8 chan_idx, u8 ant,
+ struct point *point)
+{
+ cl_hw->tx_pow_info[chan_idx][ant].power = point->calib.pow;
+ cl_hw->tx_pow_info[chan_idx][ant].offset = point->calib.offset;
+ cl_hw->tx_pow_info[chan_idx][ant].temperature = point->calib.tmp;
+}
+
+#define PHY0_OFFSET_FIX_Q2 -8 /* -2db */
+#define PHY3_OFFSET_FIX_Q2 14 /* +3.5db */
+
+static void cl_calib_phy_offset_adjust(struct cl_hw *cl_hw, u8 eeprom_version,
+ u8 phy, struct point *point)
+{
+ /*
+ * Workaround:
+ * Add a 3.5dB offset to PHY3 (5g) if the EEPROM version is 0.
+ * Subtract a 2dB offset from all PHYs (24g) if the EEPROM version is 1.
+ */
+ if (!cl_chip_is_6g(cl_hw->chip)) {
+ if (cl_band_is_5g(cl_hw) && eeprom_version == 0 && phy == 3)
+ point->calib.offset += PHY3_OFFSET_FIX_Q2;
+ else if (cl_band_is_24g(cl_hw) && eeprom_version == 1)
+ point->calib.offset += PHY0_OFFSET_FIX_Q2;
+ }
+}
+
+void cl_calib_power_read(struct cl_hw *cl_hw)
+{
+ struct cl_chip *chip = cl_hw->chip;
+ int ret;
+ u8 bitmap[BIT_MAP_SIZE] = {0};
+ struct point curr_point = {0};
+ u8 *phy = &curr_point.phy;
+ u8 *ch_idx = &curr_point.idx;
+ u8 ext_chan_idx = 0;
+ u8 ant;
+ u8 eeprom_version = chip->eeprom_cache->general.version;
+ u32 tmp_freq = 0;
+
+ /* Initialize the tx_pow_info struct to default values */
+ cl_calib_power_reset(cl_hw);
+
+ /* Vector not initialized - leave the table at its default values */
+ if (unlikely(cl_read_validate_vector_bitmap(cl_hw, bitmap))) {
+ cl_dbg_trace(cl_hw, "initiate to default values\n");
+ return;
+ }
+
+ /* Perform only on calibrated boards - cl_read_validate_vector_bitmap succeeded (0) */
+ for (*ch_idx = 0; *ch_idx < cl_channel_num(cl_hw); (*ch_idx)++) {
+ for (ant = 0; ant < MAX_ANTENNAS; ant++) {
+ if (!(cl_hw->mask_num_antennas & BIT(ant)))
+ continue;
+
+ /*
+ * In old eeprom versions (< 3) power info was saved in eeprom
+ * per riu chain (unintentionally) so we need to fetch it accordingly
+ */
+ if (eeprom_version < 3)
+ *phy = cl_hw_ant_to_riu_chain(cl_hw, ant);
+ else
+ *phy = ant;
+
+ ret = cl_point_idx_to_address(cl_hw, bitmap, &curr_point);
+
+ if (ret) {
+ /* Don't overwrite default values */
+ cl_dbg_err(cl_hw, "point idx to address failed\n");
+ continue;
+ }
+
+ if (cl_band_is_6g(cl_hw)) {
+ tmp_freq = cl_channel_idx_to_freq(cl_hw, *ch_idx);
+ curr_point.chan = ieee80211_frequency_to_channel(tmp_freq);
+ }
+
+ ret = cl_read_or_interpolate_point(cl_hw, bitmap, &curr_point);
+ /* Unable to calculate new value ==> DON'T overwrite default values */
+ if (unlikely(ret))
+ continue;
+
+ cl_calib_phy_offset_adjust(cl_hw, eeprom_version, *phy, &curr_point);
+
+ if (cl_band_is_6g(cl_hw))
+ ext_chan_idx = CHAN_BITMAP_IDX_6G_2_EXT_IDX(*ch_idx);
+ else
+ ext_chan_idx = *ch_idx;
+
+ cl_calib_fill_power_info(cl_hw, ext_chan_idx, ant, &curr_point);
+ }
+ }
+
+ if (!cl_band_is_6g(cl_hw))
+ goto calib_read_out;
+
+ /*
+ * Fill info for channels that are missing from the original bitmap, which are the
+ * center channels of 40/80/160MHz channels (channels 3, 7, 11, etc.).
+ */
+ for (ext_chan_idx = 0; ext_chan_idx < NUM_EXT_CHANNELS_6G; ext_chan_idx++) {
+ /* Skip channels that were already filled above */
+ tmp_freq = cl_channel_ext_idx_to_freq_6g(cl_hw, ext_chan_idx);
+
+ /* The chan field needs to be updated before calling cl_read_or_interpolate_point */
+ curr_point.chan = ieee80211_frequency_to_channel(tmp_freq);
+
+ /* If the channel is found in the bitmap - we already handled it above */
+ if (cl_channel_to_bitmap_index(cl_hw, curr_point.chan) != INVALID_CHAN_IDX)
+ continue;
+
+ for (ant = 0; ant < MAX_ANTENNAS; ant++) {
+ if (!(cl_hw->mask_num_antennas & BIT(ant)))
+ continue;
+
+ /*
+ * In old eeprom versions (< 3) power info was saved in eeprom
+ * per riu chain (unintentionally) so we need to fetch it accordingly
+ */
+ if (eeprom_version < 3)
+ *phy = cl_hw_ant_to_riu_chain(cl_hw, ant);
+ else
+ *phy = ant;
+
+ /*
+ * Addr and idx fields need to be invalid to successfully interpolate the
+ * power info on the extended eeprom bitmap.
+ */
+ curr_point.addr = INVALID_ADDR;
+ curr_point.idx = INVALID_CHAN_IDX;
+
+ ret = cl_read_or_interpolate_point(cl_hw, bitmap, &curr_point);
+
+ /* Unable to calculate new value ==> DON'T overwrite default values */
+ if (unlikely(ret))
+ continue;
+
+ cl_calib_fill_power_info(cl_hw, ext_chan_idx, ant, &curr_point);
+ }
+ }
+calib_read_out:
+ cl_dbg_trace(cl_hw, "Created tx_pow_info\n");
+}
+
+void cl_calib_power_offset_fill(struct cl_hw *cl_hw, u8 channel,
+ u8 bw, u8 offset[MAX_ANTENNAS])
+{
+ u8 i, chain;
+ u8 chan_idx = cl_channel_to_index(cl_hw, cl_hw->channel);
+ s8 pow_offset;
+ s8 signed_offset;
+
+ if (chan_idx == INVALID_CHAN_IDX)
+ return;
+
+ for (i = 0; i < MAX_ANTENNAS; i++) {
+ if (!(cl_hw->mask_num_antennas & BIT(i)))
+ continue;
+
+ pow_offset = cl_hw->tx_pow_info[chan_idx][i].offset;
+ signed_offset = cl_power_offset_check_margin(cl_hw, bw, i, pow_offset);
+ chain = cl_hw_ant_to_riu_chain(cl_hw, i);
+ offset[chain] = cl_convert_signed_to_reg_value(signed_offset);
+ }
+}
+
+struct cl_runtime_work {
+ struct work_struct ws;
+ struct cl_hw *cl_hw;
+ u32 channel;
+ u8 bw;
+ u16 primary;
+ u16 center;
+};
+
+static int _cl_calib_runtime_and_switch_channel(struct cl_hw *cl_hw, u32 channel, u8 bw,
+ u16 primary, u16 center,
+ struct cl_calib_params calib_params)
+{
+ struct cl_chip *chip = cl_hw->chip;
+ struct cl_calib_iq_restore iq_restore_other_tcv;
+ struct cl_hw *cl_hw_other = cl_hw_other_tcv(cl_hw);
+ int ret = 0;
+ u8 fem_mode = cl_hw->fem_mode;
+ u8 save_other_tcv_needed = cl_chip_is_both_enabled(chip) && cl_hw_other &&
+ !!cl_hw_other->primary_freq;
+
+ cl_hw->calib_runtime_needed = false;
+
+ if (save_other_tcv_needed)
+ cl_calib_save_channel(cl_hw_other, &iq_restore_other_tcv);
+
+ if (cl_chip_is_both_enabled(chip)) {
+ ret = cl_calib_iq_set_idle(cl_hw, true);
+ if (ret)
+ return ret;
+ }
+
+ cl_fem_set_iq_bypass(cl_hw);
+ cl_afe_cfg_calib(chip);
+
+ /* Calibration by the default values */
+ if (cl_msg_tx_set_channel(cl_hw, channel, bw, primary, center, calib_params)) {
+ cl_dbg_chip_err(cl_hw, "Failed to calibrate channel %u, bw %u\n",
+ channel, bw);
+ ret = -1;
+ }
+
+ cl_fem_iq_restore(cl_hw, fem_mode);
+ cl_afe_cfg_restore(chip);
+
+ if (save_other_tcv_needed)
+ cl_calib_restore_channel(cl_hw_other, &iq_restore_other_tcv);
+
+ /* Set channel to load the new calib data */
+ ret += cl_msg_tx_set_channel(cl_hw, channel, bw, primary, center,
+ CL_CALIB_PARAMS_DEFAULT_STRUCT);
+
+ if (cl_chip_is_both_enabled(chip))
+ cl_calib_iq_set_idle(cl_hw, false);
+
+ return ret;
+}
+
+static void cl_calib_runtime_work_handler(struct work_struct *ws)
+{
+ struct cl_runtime_work *runtime_work = container_of(ws, struct cl_runtime_work, ws);
+
+ cl_calib_runtime_and_switch_channel(runtime_work->cl_hw, runtime_work->channel,
+ runtime_work->bw, runtime_work->primary,
+ runtime_work->center);
+
+ kfree(runtime_work);
+}
+
+static bool cl_calib_runtime_is_channel_calibrated(struct cl_hw *cl_hw, u8 channel, u8 bw)
+{
+ int chain, lna;
+ u8 chan_idx = cl_calib_dcoc_channel_bw_to_idx(cl_hw, channel, bw);
+ u8 tcv_idx = cl_hw->tcv_idx;
+ u8 sx = cl_hw->sx_idx;
+
+ riu_chain_for_each(chain) {
+ if (!cl_hw->chip->calib_db.iq_tx[tcv_idx][chan_idx][bw][sx][chain].gain &&
+ !cl_hw->chip->calib_db.iq_tx[tcv_idx][chan_idx][bw][sx][chain].coef0 &&
+ !cl_hw->chip->calib_db.iq_tx[tcv_idx][chan_idx][bw][sx][chain].coef1 &&
+ !cl_hw->chip->calib_db.iq_tx[tcv_idx][chan_idx][bw][sx][chain].coef2) {
+ cl_dbg_trace(cl_hw, "IQ TX calibration data is missing\n");
+ return false;
+ }
+
+ if (!cl_hw->chip->calib_db.iq_rx[tcv_idx][chan_idx][bw][sx][chain].gain &&
+ !cl_hw->chip->calib_db.iq_rx[tcv_idx][chan_idx][bw][sx][chain].coef0 &&
+ !cl_hw->chip->calib_db.iq_rx[tcv_idx][chan_idx][bw][sx][chain].coef1 &&
+ !cl_hw->chip->calib_db.iq_rx[tcv_idx][chan_idx][bw][sx][chain].coef2) {
+ cl_dbg_trace(cl_hw, "IQ RX calibration data is missing\n");
+ return false;
+ }
+
+ if (!cl_hw->chip->calib_db.iq_tx_lolc[tcv_idx][chan_idx][bw][sx][chain]) {
+ cl_dbg_trace(cl_hw, "LOLC calibration data is missing\n");
+ return false;
+ }
+ }
+
+ for (lna = 0; lna < DCOC_LNA_GAIN_NUM; lna++) {
+ riu_chain_for_each(chain) {
+ if (!cl_hw->chip->calib_db.dcoc[tcv_idx][chan_idx][bw][sx][chain][lna].i &&
+ !cl_hw->chip->calib_db.dcoc[tcv_idx][chan_idx][bw][sx][chain][lna].q) {
+ cl_dbg_trace(cl_hw, "DCOC calibration data is missing\n");
+ return false;
+ }
+ }
+ }
+
+ /* All the calibration data for this channel exists */
+ return true;
+}
+
+bool cl_calib_runtime_is_allowed(struct cl_hw *cl_hw)
+{
+ if (!cl_hw)
+ return false;
+
+ if (cl_hw->scanner && cl_is_scan_in_progress(cl_hw->scanner))
+ return false;
+
+ return true;
+}
+
+void cl_calib_runtime_work(struct cl_hw *cl_hw, u32 channel, u8 bw, u16 primary,
+ u16 center)
+{
+ struct cl_runtime_work *runtime_work = kzalloc(sizeof(*runtime_work), GFP_ATOMIC);
+
+ if (!runtime_work)
+ return;
+
+ runtime_work->cl_hw = cl_hw;
+ runtime_work->channel = channel;
+ runtime_work->bw = bw;
+ runtime_work->primary = primary;
+ runtime_work->center = center;
+ INIT_WORK(&runtime_work->ws, cl_calib_runtime_work_handler);
+ queue_work(cl_hw->drv_workqueue, &runtime_work->ws);
+}
+
+int cl_calib_runtime_and_switch_channel(struct cl_hw *cl_hw, u32 channel, u8 bw, u32 primary,
+ u32 center)
+{
+ struct cl_chip *chip = cl_hw->chip;
+ struct cl_hw *cl_hw_other = cl_hw_other_tcv(cl_hw);
+ struct cl_calib_params calib_params = {SET_CHANNEL_MODE_CALIB, false, SX_FREQ_OFFSET_Q2, 0};
+ int ret = 0;
+ bool calib_needed = (cl_hw->chip->conf->ci_calib_runtime_force ||
+ !cl_calib_runtime_is_channel_calibrated(cl_hw, channel, bw)) &&
+ !cl_hw->sw_scan_in_progress;
+
+ mutex_lock(&cl_hw->chip->calib_runtime_mutex);
+
+ if (!calib_needed || !cl_calib_runtime_is_allowed(cl_hw) ||
+ (cl_chip_is_both_enabled(chip) && !cl_calib_runtime_is_allowed(cl_hw_other))) {
+ if (calib_needed)
+ cl_hw->calib_runtime_needed = true;
+
+ /* Switch channel without calibration */
+ ret = cl_msg_tx_set_channel(cl_hw, channel, bw, primary, center,
+ CL_CALIB_PARAMS_DEFAULT_STRUCT);
+ mutex_unlock(&cl_hw->chip->calib_runtime_mutex);
+
+ return ret;
+ }
+
+ /* Convert ant to riu chain in the calib plan_bitmap */
+ calib_params.plan_bitmap = cl_hw_ant_mask_to_riu_chain_mask(cl_hw,
+ cl_hw->mask_num_antennas);
+
+ /* This mutex needs to be held during the whole calibration process */
+ mutex_lock(&cl_hw->chip->set_idle_mutex);
+ ret = _cl_calib_runtime_and_switch_channel(cl_hw, channel, bw, primary, center,
+ calib_params);
+ mutex_unlock(&cl_hw->chip->set_idle_mutex);
+
+ mutex_unlock(&cl_hw->chip->calib_runtime_mutex);
+
+ return ret;
+}
+
--
2.36.1


2022-05-25 09:02:38

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 61/96] cl8k: add recovery.h

From: Viktor Barna <[email protected]>

(Part of the split. Please, take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/recovery.h | 39 +++++++++++++++++++++
1 file changed, 39 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/recovery.h

diff --git a/drivers/net/wireless/celeno/cl8k/recovery.h b/drivers/net/wireless/celeno/cl8k/recovery.h
new file mode 100644
index 000000000000..303259d5d802
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/recovery.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause */
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#ifndef CL_RECOVERY_H
+#define CL_RECOVERY_H
+
+#include <linux/types.h>
+
+enum recovery_reason {
+ RECOVERY_WAIT4CFM,
+ RECOVERY_UNRECOVERABLE_ASSERT,
+ RECOVERY_UNRECOVERABLE_ASSERT_NO_DUMP,
+ RECOVERY_ASSERT_STORM_DETECT,
+ RECOVERY_DRV_FAILED,
+};
+
+enum cl_fw_wd_mode {
+ FW_WD_DISABLE,
+ FW_WD_INTERNAL_RECOVERY,
+ FW_WD_DRV_RELOAD,
+};
+
+struct cl_recovery_db {
+ unsigned long last_restart;
+ u32 restart_cnt;
+
+ u32 ela_en;
+ u32 ela_sel_a;
+ u32 ela_sel_b;
+ u32 ela_sel_c;
+
+ bool in_recovery;
+};
+
+bool cl_recovery_in_progress(struct cl_hw *cl_hw);
+void cl_recovery_reconfig_complete(struct cl_hw *cl_hw);
+void cl_recovery_start(struct cl_hw *cl_hw, int reason);
+
+#endif /* CL_RECOVERY_H */
--
2.36.1


2022-05-25 16:13:08

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 17/96] cl8k: add debug.c

From: Viktor Barna <[email protected]>

(Part of the split. Please, take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/debug.c | 442 +++++++++++++++++++++++
1 file changed, 442 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/debug.c

diff --git a/drivers/net/wireless/celeno/cl8k/debug.c b/drivers/net/wireless/celeno/cl8k/debug.c
new file mode 100644
index 000000000000..f8a438747ac3
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/debug.c
@@ -0,0 +1,442 @@
+// SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#include <linux/version.h>
+#include <linux/kernel.h>
+#include <linux/kmod.h>
+#include <linux/string.h>
+#include <linux/list.h>
+#include <linux/ctype.h>
+
+#include "chip.h"
+#include "hw.h"
+#include "utils.h"
+#include "debug.h"
+
+const char *cl_dbgfile_get_msg_txt(struct cl_dbg_data *dbg_data, u16 file_id, u16 line)
+{
+ /* Get the message text from the .dbg file by file_id & line number */
+ int remaining_bytes = dbg_data->size;
+ const char *str = dbg_data->str;
+ char id_str[32];
+ int idstr_len;
+
+ if (!str || remaining_bytes == 0)
+ return NULL;
+
+ idstr_len = snprintf(id_str, sizeof(id_str), "%u:%u:", file_id, line);
+
+ /* Skip hash */
+ while (*str++ != '\n')
+ ;
+
+ remaining_bytes -= (str - (char *)dbg_data->str);
+
+ while (remaining_bytes > 0) {
+ if (strncmp(id_str, str, idstr_len) == 0) {
+ str += idstr_len;
+ while (*str == ' ')
+ ++str;
+ return (const char *)str;
+ }
+
+ str += strnlen(str, 512) + 1;
+ remaining_bytes = dbg_data->size - (str - (char *)dbg_data->str);
+ }
+
+ /* No match found */
+ pr_err("error: file_id=%d line=%d not found in debug print file\n", file_id, line);
+ return NULL;
+}
+
+void cl_dbgfile_parse(struct cl_hw *cl_hw, void *edata, u32 esize)
+{
+ /* Parse & store the firmware debug file */
+ struct cl_dbg_data *dbg_data = &cl_hw->dbg_data;
+
+ dbg_data->size = esize;
+ dbg_data->str = edata;
+}
+
+void cl_dbgfile_release_mem(struct cl_dbg_data *dbg_data,
+ struct cl_str_offload_env *str_offload_env)
+{
+ dbg_data->str = NULL;
+
+ str_offload_env->enabled = false;
+ str_offload_env->block1 = NULL;
+ str_offload_env->block2 = NULL;
+}
+
+/*
+ * Store debug print offload data
+ * - part 1: offloaded block that does not exist on target
+ * - part 2: resident block that remains on target [optional]
+ */
+int cl_dbgfile_store_offload_data(struct cl_chip *chip, struct cl_hw *cl_hw,
+ void *data1, u32 size1, u32 base1,
+ void *data2, u32 size2, u32 base2,
+ void *data3, u32 size3, u32 base3)
+{
+ u32 u = size1;
+ struct cl_str_offload_env *str_offload_env = &cl_hw->str_offload_env;
+
+ if (u > 200000)
+ goto err_too_big;
+
+ /* TODO we modify offload data! if caller checks integrity, make a copy? */
+ str_offload_env->block1 = data1;
+ str_offload_env->size1 = size1;
+ str_offload_env->base1 = base1;
+
+ str_offload_env->block2 = data2;
+ str_offload_env->size2 = size2;
+ str_offload_env->base2 = base2;
+
+ str_offload_env->block3 = data3;
+ str_offload_env->size3 = size3;
+ str_offload_env->base3 = base3;
+
+ str_offload_env->enabled = true;
+
+ cl_dbg_info(cl_hw, "%cmac%u: FW prints offload memory use = %uK\n",
+ cl_hw->fw_prefix, chip->idx, (size1 + size2 + 1023) / 1024);
+
+ return 0;
+
+err_too_big:
+ pr_err("%s: size too big: %u\n", __func__, u);
+ return 1;
+}
+
+static void cl_fw_do_print_n(struct cl_hw *cl_hw, const char *str, int n)
+{
+ /* Print formatted string with "band" prefix */
+ if (n < 0 || n > 256) {
+ cl_dbg_err(cl_hw, "%cmac%u: *** FW PRINT - BAD SIZE: %d\n",
+ cl_hw->fw_prefix, cl_hw->chip->idx, n);
+ return;
+ }
+
+ cl_dbg_verbose(cl_hw, "%cmac%u: %.*s\n", cl_hw->fw_prefix, cl_hw->chip->idx, n, str);
+}
+
+static void cl_fw_do_hex_dump_bytes(struct cl_hw *cl_hw, u32 addr, void *data, u32 count)
+{
+ cl_dbg_verbose(cl_hw, "%cmac%u: hex dump:\n", cl_hw->fw_prefix, cl_hw->chip->idx);
+ cl_hex_dump(NULL, data, count, addr, true);
+}
+
+#define MAGIC_PRINT_OFFLOAD 0xFA /* 1st (low) byte of signature */
+/* 2nd signature byte */
+#define MAGIC_PRINT_OFF_XDUMP 0xD0 /* Hex dump, by bytes */
+#define MAGIC_PRINT_OFF_LIT 0x01 /* Literal/preformatted string */
+#define MAGIC_PRINT_OFF_PRINT 0x02 /* Print with 'virtual' format string */
+
+static int cl_fw_offload_print(struct cl_str_offload_env *str_offload_env,
+ char *fmt, const char *params)
+{
+ static char buf[1024] = {0};
+ const char *cur_prm = params;
+ char tmp;
+ char *fmt_end = fmt;
+ size_t size = sizeof(int);
+ int len = 0;
+
+ union v {
+ u32 val32;
+ u64 val64;
+ ptrdiff_t str;
+ } v;
+
+ while ((fmt_end = strchr(fmt_end, '%'))) {
+ fmt_end++;
+
+ /* Skip '%%'. */
+ if (*fmt_end == '%') {
+ fmt_end++;
+ continue;
+ }
+
+ /* Skip flags. */
+ while (strchr("-+ 0#", *fmt_end))
+ fmt_end++;
+
+ /* Skip width. */
+ while (isdigit(*fmt_end))
+ fmt_end++;
+
+ /* Skip precision. */
+ if (*fmt_end == '.') {
+ while (*fmt_end == '-' || *fmt_end == '+')
+ fmt_end++;
+
+ while (isdigit(*fmt_end))
+ fmt_end++;
+ }
+
+ /* Get size. */
+ if (*fmt_end == 'z') {
+ /* Remove 'z' from %zu, %zd, %zx and %zX,
+ * because sizeof(size_t) == 4 in the firmware.
+ * 'z' can only appear in front of 'd', 'u', 'x' or 'X'.
+ */
+ if (!strchr("duxX", *(fmt_end + 1)))
+ return -1;
+
+ fmt_end++;
+ size = 4;
+ } else if (*fmt_end == 'l') {
+ fmt_end++;
+
+ if (*fmt_end == 'l') {
+ fmt_end++;
+ size = sizeof(long long);
+ } else {
+ size = sizeof(long);
+ }
+
+ if (*fmt_end == 'p') /* %p can't get 'l' or 'll' modifiers. */
+ return -1;
+ } else {
+ size = 4;
+ }
+
+ /* Get parameter. */
+ switch (*fmt_end) {
+ case 'p': /* Replace %p with %x, because the firmware's pointers are 32 bit wide */
+ *fmt_end = 'x';
+ fallthrough;
+ case 'd':
+ case 'u':
+ case 'x':
+ case 'X':
+ if (size == 4)
+ v.val32 = __le32_to_cpu(*(__le32 *)cur_prm);
+ else
+ v.val64 = __le64_to_cpu(*(__le64 *)cur_prm);
+ cur_prm += size;
+ break;
+ case 's':
+ v.str = __le32_to_cpu(*(__le32 *)cur_prm);
+ cur_prm += 4;
+ size = sizeof(ptrdiff_t);
+
+ if (v.str >= str_offload_env->base3 &&
+ v.str < str_offload_env->base3 + str_offload_env->size3) {
+ v.str -= str_offload_env->base3;
+ v.str += (ptrdiff_t)str_offload_env->block3;
+ } else if (v.str >= str_offload_env->base2 &&
+ v.str < str_offload_env->base2 + str_offload_env->size2) {
+ v.str -= str_offload_env->base2;
+ v.str += (ptrdiff_t)str_offload_env->block2;
+ } else {
+ return -1;
+ }
+
+ break;
+ default:
+ return -1;
+ }
+
+ /* Print into buffer. */
+ fmt_end++;
+ tmp = *fmt_end; /* Truncate the format to the current point and then restore. */
+ *fmt_end = 0;
+ len += snprintf(buf + len, sizeof(buf) - len, fmt, size == 4 ? v.val32 : v.val64);
+ *fmt_end = tmp;
+ fmt = fmt_end;
+ }
+
+ snprintf(buf + len, sizeof(buf) - len, "%s", fmt);
+
+ pr_debug("%s", buf);
+
+ return 0;
+}
+
+struct cl_pr_off_desc {
+ u8 file_id;
+ u8 flag;
+ __le16 line_num;
+ char fmt[];
+};
+
+char *cl_fw_print_normalize(char *s, char old, char new)
+{
+ for (; *s; ++s)
+ if (*s == old)
+ *s = new;
+ return s;
+}
+
+static int cl_fw_do_dprint(struct cl_hw *cl_hw, u32 fmtaddr, u32 nparams, u32 *params)
+{
+ /*
+ * fmtaddr - virtual address of format descriptor in firmware,
+ * must be in the offloaded segment
+ * nparams - size of parameters array in u32; min=0, max=MAX_PRINT_OFF_PARAMS
+ * params - array of parameters[nparams]
+ */
+ struct cl_str_offload_env *str_offload_env = &cl_hw->str_offload_env;
+ struct cl_pr_off_desc *pfmt = NULL;
+
+ if (!str_offload_env->enabled)
+ return -1;
+
+ if (fmtaddr & 0x3)
+ cl_dbg_warn(cl_hw, "FW PRINT - format not aligned on 4? %8.8X\n", fmtaddr);
+
+ if (fmtaddr > str_offload_env->base1 &&
+ fmtaddr < (str_offload_env->base1 + str_offload_env->size1)) {
+ pfmt = (void *)((fmtaddr - str_offload_env->base1) + str_offload_env->block1);
+ } else {
+ cl_dbg_err(cl_hw, "FW PRINT - format not in allowed area %8.8X\n", fmtaddr);
+ return -1;
+ }
+
+ /*
+ * Current string sent by firmware is #mac@ where # is '253' and @ is '254'
+ * Replace '253' with 'l' or 's' according to the fw_prefix.
+ * Replace '254' with '0' or '1' according to chip index.
+ */
+ cl_fw_print_normalize(pfmt->fmt, (char)253, cl_hw->fw_prefix);
+ cl_fw_print_normalize(pfmt->fmt, (char)254, (cl_hw->chip->idx == CHIP0) ? '0' : '1');
+
+ if (cl_fw_offload_print(str_offload_env, pfmt->fmt, (char *)params) < 0) {
+ cl_dbg_err(cl_hw, "FW PRINT - ERROR in format! (file %u:%u)\n",
+ pfmt->file_id, le16_to_cpu(pfmt->line_num));
+ cl_hex_dump(NULL, (void *)pfmt, 48, fmtaddr, true); /* Dump the descriptor for debugging */
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int cl_fw_do_offload(struct cl_hw *cl_hw, u8 *data, int bytes_remaining)
+{
+ u8 magic2 = data[1];
+ u32 nb = data[2] + (data[3] << 8); /* Following size in bytes */
+ /* Note: data may be unaligned; revisit if alignment or big-endian support is required */
+ __le32 *dp = (__le32 *)data;
+ int bytes_consumed = 4; /* 1 + 1 + 2 */
+
+ /* Data: [0] u8 magic1, u8 magic2, u16 following size in bytes */
+ if (bytes_remaining < 8) {
+ cl_dbg_err(cl_hw, "*** FW PRINT - OFFLOAD PACKET TOO SHORT: %d\n",
+ bytes_remaining);
+ return bytes_remaining;
+ }
+
+ if (bytes_remaining < (nb + bytes_consumed)) {
+ cl_dbg_err(cl_hw, "*** FW PRINT - OFFLOAD PACKET %u > remainder %d??\n",
+ nb, bytes_remaining);
+ return bytes_remaining;
+ }
+
+ switch (magic2) {
+ case MAGIC_PRINT_OFF_PRINT: {
+ /*
+ * [1] u32 format descriptor ptr
+ * [2] u32[] parameters
+ */
+ u32 fmtp = le32_to_cpu(dp[1]);
+ u32 np = (nb - 4) / 4; /* Number of printf parameters */
+
+ if (nb < 4 || nb & 3) {
+ cl_dbg_err(cl_hw, "*** FW PRINT - bad pkt size: %u\n", nb);
+ goto err;
+ }
+
+ cl_fw_do_dprint(cl_hw, fmtp, np, (u32 *)&dp[2]);
+
+ bytes_consumed += nb; /* Already padded to 4 bytes */
+ }
+ break;
+
+ case MAGIC_PRINT_OFF_LIT: {
+ /* [1] Remaining bytes: literal string */
+ cl_fw_do_print_n(cl_hw, (char *)&dp[1], nb);
+ bytes_consumed += ((nb + 3) / 4) * 4; /* Padding to 4 bytes */
+ }
+ break;
+
+ case MAGIC_PRINT_OFF_XDUMP:
+ /* [1] bytes[nb] */
+ if (nb >= 1)
+ cl_fw_do_hex_dump_bytes(cl_hw, 0, &dp[1], nb);
+
+ bytes_consumed += ((nb + 3) / 4) * 4; /* Padding to 4 bytes */
+ break;
+
+ default:
+ cl_dbg_err(cl_hw, "*** FW PRINT - BAD TYPE: %4.4X\n", magic2);
+ goto err;
+ }
+
+ return bytes_consumed;
+
+err:
+ return bytes_remaining; /* Skip all */
+}
+
+void cl_dbgfile_print_fw_str(struct cl_hw *cl_hw, u8 *str, int max_size)
+{
+ /* Handler for firmware debug prints */
+ int bytes_remaining = max_size;
+ int i;
+ u8 delim = 0;
+
+ while (bytes_remaining > 0) {
+ /* Scan for normal print data: */
+ for (i = 0; i < bytes_remaining; i++) {
+ if (str[i] < ' ' || str[i] >= 0x7F) {
+ if (str[i] == '\t')
+ continue;
+ delim = str[i];
+ break;
+ }
+ }
+
+ if (i > 0) {
+ if (delim == '\n') {
+ bytes_remaining -= i + 1;
+ cl_fw_do_print_n(cl_hw, str, i);
+ str += i + 1;
+ continue;
+ }
+
+ if (delim != MAGIC_PRINT_OFFLOAD) {
+ cl_fw_do_print_n(cl_hw, str, i);
+ bytes_remaining -= i;
+ return; /* Better stop parsing this */
+ }
+ /* Found offload packet but previous string not terminated: */
+ cl_fw_do_print_n(cl_hw, str, i);
+ cl_dbg_err(cl_hw, "*** FW PRINT - NO LINE END2\n");
+ bytes_remaining -= i;
+ str += i;
+ continue;
+ }
+
+ /* Delimiter at offset 0 */
+ switch (delim) {
+ case '\n':
+ cl_fw_do_print_n(cl_hw, " ", 1); /* Print empty line */
+ str++;
+ bytes_remaining--;
+ continue;
+ case 0:
+ return;
+ case MAGIC_PRINT_OFFLOAD:
+ i = cl_fw_do_offload(cl_hw, str, bytes_remaining);
+ bytes_remaining -= i;
+ str += i;
+ break;
+ default:
+ cl_dbg_err(cl_hw, "*** FW PRINT - BAD BYTE=%2.2X ! rem=%d\n",
+ delim, bytes_remaining);
+ return; /* Better stop parsing this */
+ }
+ }
+}
--
2.36.1


2022-05-25 18:39:10

by Viktor Barna

[permalink] [raw]
Subject: [RFC v2 56/96] cl8k: add radio.c

From: Viktor Barna <[email protected]>

(Part of the split. Please, take a look at the cover letter for more
details).

Signed-off-by: Viktor Barna <[email protected]>
---
drivers/net/wireless/celeno/cl8k/radio.c | 1113 ++++++++++++++++++++++
1 file changed, 1113 insertions(+)
create mode 100644 drivers/net/wireless/celeno/cl8k/radio.c

diff --git a/drivers/net/wireless/celeno/cl8k/radio.c b/drivers/net/wireless/celeno/cl8k/radio.c
new file mode 100644
index 000000000000..416c2bdff07e
--- /dev/null
+++ b/drivers/net/wireless/celeno/cl8k/radio.c
@@ -0,0 +1,1113 @@
+// SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
+/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
+
+#include <linux/kthread.h>
+#include <linux/jiffies.h>
+#include <linux/list.h>
+
+#include "vif.h"
+#include "sta.h"
+#include "chip.h"
+#include "debug.h"
+#include "hw.h"
+#include "phy.h"
+#include "utils.h"
+#include "reg/reg_access.h"
+#include "reg/reg_defs.h"
+#include "mac_addr.h"
+#include "stats.h"
+#include "radio.h"
+
+static int cl_radio_off_kthread(void *arg)
+{
+ struct cl_hw *cl_hw = (struct cl_hw *)arg;
+ struct cl_vif *cl_vif;
+
+ cl_dbg_verbose(cl_hw, "Waiting for stations to disconnect\n");
+
+ if (wait_event_timeout(cl_hw->radio_wait_queue,
+ cl_sta_num_bh(cl_hw) == 0,
+ msecs_to_jiffies(5000)) == 0) {
+ cl_dbg_verbose(cl_hw,
+ "Failed to disconnect stations. %u stations still remaining\n",
+ cl_sta_num_bh(cl_hw));
+ }
+
+ cl_dbg_trace(cl_hw, "Stopping queues ...\n");
+
+ read_lock_bh(&cl_hw->vif_db.lock);
+ list_for_each_entry(cl_vif, &cl_hw->vif_db.head, list) {
+ cl_vif->tx_en = false;
+ cl_dbg_verbose(cl_hw, "Radio OFF vif_index = %u\n",
+ cl_vif->vif_index);
+ }
+ read_unlock_bh(&cl_hw->vif_db.lock);
+
+ cl_msg_tx_set_idle(cl_hw, MAC_IDLE_SYNC, true);
+
+ cl_dbg_trace(cl_hw, "Radio shut down successfully\n");
+
+ cl_hw->radio_status = RADIO_STATUS_OFF;
+ atomic_set(&cl_hw->radio_lock, 0);
+
+ return 0;
+}
+
+static int cl_radio_off(struct cl_hw *cl_hw)
+{
+ struct task_struct *k;
+
+ if (cl_hw->radio_status != RADIO_STATUS_ON ||
+ atomic_xchg(&cl_hw->radio_lock, 1))
+ return 1;
+
+ cl_hw->radio_status = RADIO_STATUS_GOING_DOWN;
+
+ /* Relegate the job to a kthread to free the system call. */
+ k = kthread_run(cl_radio_off_kthread, cl_hw, "cl_radio_off_kthread");
+ if (IS_ERR(k))
+ cl_dbg_verbose(cl_hw,
+ "Error: failed to run cl_radio_off_kthread\n");
+ return 0;
+}
+
+void cl_radio_on_start(struct cl_hw *cl_hw)
+{
+ struct cl_vif *cl_vif;
+
+ if (cl_calib_common_check_errors(cl_hw) != 0 || !cl_hw->conf->ce_num_antennas)
+ return;
+
+ read_lock_bh(&cl_hw->vif_db.lock);
+ list_for_each_entry(cl_vif, &cl_hw->vif_db.head, list) {
+ if (cl_vif->vif->type == NL80211_IFTYPE_AP) {
+ if (cl_hw_get_iface_conf(cl_hw) == CL_IFCONF_REPEATER &&
+ !test_bit(CL_DEV_REPEATER, &cl_hw->drv_flags))
+ continue;
+ if (cl_hw_get_iface_conf(cl_hw) == CL_IFCONF_MESH_AP &&
+ !test_bit(CL_DEV_MESH_AP, &cl_hw->drv_flags))
+ continue;
+ }
+
+ if (!cl_hw->conf->ce_listener_en)
+ cl_vif->tx_en = true;
+
+ cl_dbg_verbose(cl_hw, "Radio ON vif=%u\n", cl_vif->vif_index);
+ }
+ read_unlock_bh(&cl_hw->vif_db.lock);
+
+ cl_msg_tx_set_idle(cl_hw, MAC_ACTIVE, true);
+
+ cl_dbg_verbose(cl_hw, "Radio has been started\n");
+
+ cl_hw->radio_status = RADIO_STATUS_ON;
+ atomic_set(&cl_hw->radio_lock, 0);
+}
+
+int cl_radio_on(struct cl_hw *cl_hw)
+{
+ struct cl_hw *cl_hw_other = cl_hw_other_tcv(cl_hw);
+ bool both_enabled = cl_chip_is_both_enabled(cl_hw->chip);
+
+ if (cl_hw->radio_status != RADIO_STATUS_OFF ||
+ atomic_xchg(&cl_hw->radio_lock, 1))
+ return 1;
+
+ if (!both_enabled ||
+ (both_enabled && cl_hw_other && !cl_hw_other->conf->ce_radio_on) ||
+ (cl_hw_is_tcv1(cl_hw) && cl_hw->chip->conf->ci_tcv1_chains_sx0)) {
+ if (cl_calib_iq_calibration_needed(cl_hw)) {
+ cl_calib_common_start_work(cl_hw);
+
+ return 0;
+ }
+ } else if (cl_hw_other) {
+ if (cl_hw_other->iq_cal_ready) {
+ cl_hw_other->iq_cal_ready = false;
+ cl_calib_common_start_work(cl_hw);
+
+ return 0;
+ } else if (cl_calib_iq_calibration_needed(cl_hw)) {
+ cl_hw->iq_cal_ready = true;
+ cl_dbg_verbose(cl_hw, "IQ Calibration needed. Wait for both TCVs "
+ "to get to radio-on before turning both on.\n");
+ return 1;
+ }
+ }
+
+ cl_radio_on_start(cl_hw);
+
+ return 0;
+}
+
+void cl_radio_off_chip(struct cl_chip *chip)
+{
+ if (cl_chip_is_tcv0_enabled(chip))
+ cl_radio_off(chip->cl_hw_tcv0);
+
+ if (cl_chip_is_tcv1_enabled(chip))
+ cl_radio_off(chip->cl_hw_tcv1);
+}
+
+void cl_radio_on_chip(struct cl_chip *chip)
+{
+ if (cl_chip_is_tcv0_enabled(chip))
+ cl_radio_on(chip->cl_hw_tcv0);
+
+ if (cl_chip_is_tcv1_enabled(chip))
+ cl_radio_on(chip->cl_hw_tcv1);
+}
+
+bool cl_radio_is_off(struct cl_hw *cl_hw)
+{
+ return cl_hw->radio_status == RADIO_STATUS_OFF;
+}
+
+bool cl_radio_is_on(struct cl_hw *cl_hw)
+{
+ return cl_hw->radio_status == RADIO_STATUS_ON;
+}
+
+bool cl_radio_is_going_down(struct cl_hw *cl_hw)
+{
+ return cl_hw->radio_status == RADIO_STATUS_GOING_DOWN;
+}
+
+void cl_radio_off_wake_up(struct cl_hw *cl_hw)
+{
+ wake_up(&cl_hw->radio_wait_queue);
+}
+
+static void cl_clk_init(struct cl_chip *chip)
+{
+ cmu_clk_en_set(chip, CMU_MAC_ALL_CLK_EN);
+
+ if (cl_chip_is_both_enabled(chip)) {
+ cmu_phy_0_clk_en_set(chip, CMU_PHY_0_APB_CLK_EN_BIT | CMU_PHY_0_MAIN_CLK_EN_BIT);
+ cmu_phy_1_clk_en_set(chip, CMU_PHY_1_APB_CLK_EN_BIT | CMU_PHY_1_MAIN_CLK_EN_BIT);
+
+ cmu_phy_0_rst_ceva_0_global_rst_n_setf(chip, 0);
+ modem_gcu_ceva_ctrl_external_wait_setf(chip->cl_hw_tcv0, 1);
+ cmu_phy_0_rst_ceva_0_global_rst_n_setf(chip, 1);
+
+ cmu_phy_1_rst_ceva_1_global_rst_n_setf(chip, 0);
+ modem_gcu_ceva_ctrl_external_wait_setf(chip->cl_hw_tcv1, 1);
+ cmu_phy_1_rst_ceva_1_global_rst_n_setf(chip, 1);
+
+ cmu_phy_0_clk_en_ceva_0_clk_en_setf(chip, 1);
+ cmu_phy_1_clk_en_ceva_1_clk_en_setf(chip, 1);
+ } else if (cl_chip_is_tcv0_enabled(chip)) {
+ /* Even if only PHY0 is enabled we need to set CMU_PHY_1_MAIN_CLK_EN_BIT */
+ cmu_phy_0_clk_en_set(chip, CMU_PHY_0_APB_CLK_EN_BIT | CMU_PHY_0_MAIN_CLK_EN_BIT);
+ cmu_phy_1_clk_en_set(chip, CMU_PHY_1_MAIN_CLK_EN_BIT);
+
+ cmu_phy_0_rst_ceva_0_global_rst_n_setf(chip, 0);
+ modem_gcu_ceva_ctrl_external_wait_setf(chip->cl_hw_tcv0, 1);
+ cmu_phy_0_rst_ceva_0_global_rst_n_setf(chip, 1);
+
+ cmu_phy_0_clk_en_ceva_0_clk_en_setf(chip, 1);
+ } else {
+ /* Even if only PHY1 is enabled we need to set CMU_PHY_0_MAIN_CLK_EN_BIT */
+ cmu_phy_0_clk_en_set(chip, CMU_PHY_0_MAIN_CLK_EN_BIT);
+ cmu_phy_1_clk_en_set(chip, CMU_PHY_1_APB_CLK_EN_BIT | CMU_PHY_1_MAIN_CLK_EN_BIT);
+
+ cmu_phy_1_rst_ceva_1_global_rst_n_setf(chip, 0);
+ modem_gcu_ceva_ctrl_external_wait_setf(chip->cl_hw_tcv1, 1);
+ cmu_phy_1_rst_ceva_1_global_rst_n_setf(chip, 1);
+
+ cmu_phy_1_clk_en_ceva_1_clk_en_setf(chip, 1);
+ }
+}
+
+static int cl_pll1_init(struct cl_chip *chip)
+{
+ int retry = 0;
+
+ /* Verify pll is locked */
+ while (!cmu_pll_1_stat_pll_lock_getf(chip) && (++retry < 10)) {
+ cl_dbg_chip_verbose(chip, "retry=%d\n", retry);
+ usleep_range(100, 200);
+ }
+
+ /* Pll is not locked - fatal error */
+ if (retry == 10) {
+ cl_dbg_chip_err(chip, "retry limit reached - pll1 is not locked !!!\n");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int cl_cmu_init(struct cl_chip *chip)
+{
+ int ret = 0;
+
+ if (IS_REAL_PHY(chip)) {
+ ret = cl_pll1_init(chip);
+ if (ret)
+ return ret;
+ }
+
+ /* Set gl_mux_sel bit to work with pll1 instead of crystal */
+ cmu_control_gl_mux_sel_setf(chip, 1);
+
+ if (cl_chip_is_both_enabled(chip)) {
+ cmu_rst_n_ricurst_setf(chip, 1);
+ cmu_phy_0_rst_set(chip, CMU_PHY0_RST_EN);
+ cmu_phy_1_rst_set(chip, CMU_PHY1_RST_EN);
+ cmu_phy_0_rst_set(chip, 0x0);
+ cmu_phy_1_rst_set(chip, 0x0);
+ cmu_rst_n_ricurst_setf(chip, 1);
+ cmu_phy_0_rst_set(chip, CMU_PHY0_RST_EN);
+ cmu_phy_1_rst_set(chip, CMU_PHY1_RST_EN);
+ } else if (cl_chip_is_tcv0_enabled(chip)) {
+ cmu_rst_n_ricurst_setf(chip, 1);
+ cmu_phy_0_rst_set(chip, CMU_PHY0_RST_EN);
+ cmu_phy_0_rst_set(chip, 0x0);
+ cmu_rst_n_ricurst_setf(chip, 1);
+ cmu_phy_0_rst_set(chip, CMU_PHY0_RST_EN);
+ } else {
+ cmu_rst_n_ricurst_setf(chip, 1);
+ cmu_phy_1_rst_set(chip, CMU_PHY1_RST_EN);
+ cmu_phy_1_rst_set(chip, 0x0);
+ cmu_rst_n_ricurst_setf(chip, 1);
+ cmu_phy_1_rst_set(chip, CMU_PHY1_RST_EN);
+ }
+
+ return ret;
+}
+
+static void cl_riu_clk_bw_init(struct cl_hw *cl_hw)
+{
+ u32 regval;
+
+ if (!cl_hw)
+ return;
+
+ switch (cl_hw->conf->ci_cap_bandwidth) {
+ case CHNL_BW_20:
+ regval = 0x00000100;
+ break;
+ case CHNL_BW_40:
+ regval = 0x00000555;
+ break;
+ case CHNL_BW_160:
+ regval = 0x00000dff;
+ break;
+ case CHNL_BW_80:
+ default:
+ regval = 0x000009aa;
+ break;
+ }
+
+ /* Set RIU modules clock BW */
+ modem_gcu_riu_clk_bw_set(cl_hw, regval);
+}
+
+static int cl_load_riu_rsf_memory(struct cl_chip *chip, const char *filename)
+{
+ struct cl_hw *cl_hw_tcv0 = chip->cl_hw_tcv0;
+ struct cl_hw *cl_hw_tcv1 = chip->cl_hw_tcv1;
+ char *buf = NULL;
+ loff_t size, i = 0;
+ int ret = 0;
+
+ size = cl_file_open_and_read(chip, filename, &buf);
+
+ if (!buf)
+ return -ENOMEM;
+
+ if (size > RIU_RSF_FILE_SIZE) {
+ ret = -EFBIG;
+ goto out;
+ }
+
+ /* Enable re-sampling filter init. */
+ riu_rsf_control_rsf_init_en_setf(cl_hw_tcv0, 0x1);
+ if (cl_hw_tcv1)
+ riu_rsf_control_rsf_init_en_setf(cl_hw_tcv1, 0x1);
+
+ while (i < size) {
+ riu_rsf_init_set(cl_hw_tcv0, *(u32 *)&buf[i]);
+ if (cl_hw_tcv1)
+ riu_rsf_init_set(cl_hw_tcv1, *(u32 *)&buf[i]);
+ i += 4;
+ }
+
+out:
+ kfree(buf);
+ return ret;
+}
+
+static int cl_load_riu_rsf_memory_recovery(struct cl_hw *cl_hw, const char *filename)
+{
+ struct cl_chip *chip = cl_hw->chip;
+ char *buf = NULL;
+ loff_t size, i = 0;
+ int ret = 0;
+
+ size = cl_file_open_and_read(chip, filename, &buf);
+
+ if (!buf)
+ return -ENOMEM;
+
+ if (size > RIU_RSF_FILE_SIZE) {
+ ret = -EFBIG;
+ goto out;
+ }
+
+ /* Enable re-sampling filter init. */
+ riu_rsf_control_rsf_init_en_setf(cl_hw, 0x1);
+
+ while (i < size) {
+ riu_rsf_init_set(cl_hw, *(u32 *)&buf[i]);
+ i += 4;
+ }
+
+out:
+ kfree(buf);
+ return ret;
+}
+
+static int cl_load_agc_data(struct cl_hw *cl_hw, const char *filename)
+{
+ struct cl_chip *chip = cl_hw->chip;
+ char *buf = NULL;
+ loff_t size, i = 0;
+
+ size = cl_file_open_and_read(chip, filename, &buf);
+
+ if (!buf)
+ return -ENOMEM;
+
+ /* Enable AGC FSM ram init state */
+ riu_agcfsm_ram_init_1_agc_fsm_ram_init_en_setf(cl_hw, 0x1);
+ /* Start writing the firmware from WPTR=0 */
+ riu_agcfsm_ram_init_1_agc_fsm_ram_init_wptr_setf(cl_hw, 0x0);
+ /* Allow WPTR register to be writable */
+ riu_agcfsm_ram_init_1_agc_fsm_ram_init_wptr_set_setf(cl_hw, 0x1);
+ /* Enable auto increment WPTR by 1 upon any write */
+ riu_agcfsm_ram_init_1_agc_fsm_ram_init_ainc_1_setf(cl_hw, 0x1);
+
+ while (i < size) {
+ riu_agcfsm_ram_init_2_set(cl_hw, *(u32 *)&buf[i]);
+ i += 4;
+ }
+
+ /* Disable AGC FSM ram init state */
+ riu_agcfsm_ram_init_1_agc_fsm_ram_init_en_setf(cl_hw, 0x0);
+
+ kfree(buf);
+
+ return 0;
+}
+
+static int cl_load_agc_fw(struct cl_hw *cl_hw, const char *filename)
+{
+ int ret = 0;
+
+ if (!cl_hw)
+ goto out;
+
+ /* Switch AGC to programming mode */
+
+ /* Disable RIU Modules clocks (RC,LB,ModemB,FE,ADC,Regs,AGC,Radar) */
+ modem_gcu_riu_clk_set(cl_hw, 0);
+
+ /* Switch AGC MEM clock to 480MHz */
+ modem_gcu_riu_clk_bw_agc_mem_clk_bw_setf(cl_hw, 3);
+
+ /* Enable RIU Modules clocks (RC,LB,ModemB,FE,ADC,Regs,AGC,Radar) */
+ modem_gcu_riu_clk_set(cl_hw, 0xFFFFFFFF);
+
+ /* Assert AGC FSM reset */
+ riu_rwnxagccntl_agcfsmreset_setf(cl_hw, 1);
+
+ /* Load AGC RAM data */
+ ret = cl_load_agc_data(cl_hw, filename);
+ if (ret)
+ goto out;
+
+ /* Switch AGC back to operational mode */
+
+ /* Disable RIU Modules clocks (RC, LB, ModemB, FE, ADC, Regs, AGC, Radar) */
+ modem_gcu_riu_clk_set(cl_hw, 0);
+ /* Switch AGC MEM clock back to 80M */
+ modem_gcu_riu_clk_bw_agc_mem_clk_bw_setf(cl_hw, 1);
+ /* Enable RIU Modules clocks (RC, LB, ModemB, FE, ADC, Regs, AGC, Radar) */
+ modem_gcu_riu_clk_set(cl_hw, 0xFFFFFFFF);
+
+ /* Release AGC FSM reset */
+ riu_rwnxagccntl_agcfsmreset_setf(cl_hw, 0);
+
+out:
+ return ret;
+}
+
+int cl_radio_boot(struct cl_chip *chip)
+{
+ int ret = 0;
+ struct cl_hw *cl_hw_tcv0 = chip->cl_hw_tcv0;
+ struct cl_hw *cl_hw_tcv1 = chip->cl_hw_tcv1;
+ bool tcv0_enabled = cl_chip_is_tcv0_enabled(chip);
+ bool tcv1_enabled = cl_chip_is_tcv1_enabled(chip);
+
+ /* Call only once per chip after ASIC reset - configure both phys */
+ ret = cl_cmu_init(chip);
+ if (ret != 0)
+ goto out;
+
+ cl_clk_init(chip);
+ cmu_phase_sel_set(chip, (CMU_GP_CLK_PHASE_SEL_BIT |
+ CMU_DAC_CDB_CLK_PHASE_SEL_BIT |
+ CMU_DAC_CLK_PHASE_SEL_BIT) &
+ ~(CMU_ADC_CDB_CLK_PHASE_SEL_BIT |
+ CMU_ADC_CLK_PHASE_SEL_BIT));
+
+ if (tcv0_enabled) {
+ mac_hw_mac_cntrl_1_active_clk_gating_setf(cl_hw_tcv0, 1); /* Disable MPIF clock */
+ mac_hw_state_cntrl_next_state_setf(cl_hw_tcv0, 2); /* Move to doze */
+ }
+
+ if (tcv1_enabled) {
+ mac_hw_mac_cntrl_1_active_clk_gating_setf(cl_hw_tcv1, 1); /* Disable MPIF clock */
+ mac_hw_state_cntrl_next_state_setf(cl_hw_tcv1, 2); /* Move to doze */
+ }
+
+ /* Enable all PHY modules */
+ cl_phy_enable(cl_hw_tcv0);
+ cl_phy_enable(cl_hw_tcv1);
+
+ if (tcv0_enabled) {
+ mac_hw_doze_cntrl_2_wake_up_sw_setf(cl_hw_tcv0, 1); /* Exit from doze */
+ mac_hw_state_cntrl_next_state_setf(cl_hw_tcv0, 0); /* Move to idle */
+ }
+
+ if (tcv1_enabled) {
+ mac_hw_doze_cntrl_2_wake_up_sw_setf(cl_hw_tcv1, 1); /* Exit from doze */
+ mac_hw_state_cntrl_next_state_setf(cl_hw_tcv1, 0); /* Move to idle */
+ }
+
+ cl_riu_clk_bw_init(cl_hw_tcv0);
+ cl_riu_clk_bw_init(cl_hw_tcv1);
+
+ /* Load RIU re-sampling filter memory with coefficients */
+ ret = cl_load_riu_rsf_memory(chip, "riu_rsf.bin");
+ if (ret != 0) {
+ pr_err("cl_load_riu_rsf_memory failed with error code %d.\n", ret);
+ goto out;
+ }
+
+ /* Load AGC FW */
+ ret = cl_load_agc_fw(cl_hw_tcv0, "agcram.bin");
+ if (ret) {
+ pr_err("cl_load_agc_fw failed for tcv0 with error code %d.\n", ret);
+ goto out;
+ }
+
+ ret = cl_load_agc_fw(cl_hw_tcv1, "agcram.bin");
+ if (ret) {
+ pr_err("cl_load_agc_fw failed for tcv1 with error code %d.\n", ret);
+ goto out;
+ }
+
+ /* AFE Registers configuration */
+ ret = cl_afe_cfg(chip);
+
+out:
+ return ret;
+}
+
+static void cl_restore_ela_state(struct cl_hw *cl_hw)
+{
+ struct cl_recovery_db *recovery_db = &cl_hw->recovery_db;
+
+ /* Restore eLA state after MAC-HW reset */
+ if (recovery_db->ela_en) {
+ mac_hw_debug_port_sel_a_set(cl_hw, recovery_db->ela_sel_a);
+ mac_hw_debug_port_sel_b_set(cl_hw, recovery_db->ela_sel_b);
+ mac_hw_debug_port_sel_c_set(cl_hw, recovery_db->ela_sel_c);
+ }
+
+ mac_hw_debug_port_en_set(cl_hw, recovery_db->ela_en);
+}
+
+int cl_radio_boot_recovery(struct cl_hw *cl_hw)
+{
+ int ret = 0;
+
+ mac_hw_mac_cntrl_1_active_clk_gating_setf(cl_hw, 1); /* Disable MPIF clock */
+ mac_hw_state_cntrl_next_state_setf(cl_hw, 2); /* Move to doze */
+
+ /* Enable all PHY modules */
+ cl_phy_enable(cl_hw);
+
+ /* Restart LCU recording */
+ if (cl_hw_is_tcv0(cl_hw))
+ lcu_phy_lcu_ch_0_stop_set(cl_hw, 0);
+ else
+ lcu_phy_lcu_ch_1_stop_set(cl_hw, 0);
+
+ cl_restore_ela_state(cl_hw);
+
+ mac_hw_doze_cntrl_2_wake_up_sw_setf(cl_hw, 1); /* Exit from doze */
+ mac_hw_state_cntrl_next_state_setf(cl_hw, 0); /* Move to idle */
+
+ cl_riu_clk_bw_init(cl_hw);
+
+ /* Load RIU re-sampling filter memory with coefficients */
+ ret = cl_load_riu_rsf_memory_recovery(cl_hw, "riu_rsf.bin");
+ if (ret != 0) {
+ pr_err("cl_load_riu_rsf_memory failed with error code %d.\n", ret);
+ goto out;
+ }
+
+ /* Load AGC FW */
+ ret = cl_load_agc_fw(cl_hw, "agcram.bin");
+ if (ret) {
+ pr_err("cl_load_agc_fw failed for with error code %d.\n", ret);
+ goto out;
+ }
+
+out:
+ return ret;
+}
+
+#define NOISE_MAX_ANT_PER_REG 4
+
+void cl_noise_init(struct cl_hw *cl_hw)
+{
+ struct cl_noise_db *noise_db = &cl_hw->noise_db;
+
+ INIT_LIST_HEAD(&noise_db->reg_list);
+}
+
+void cl_noise_close(struct cl_hw *cl_hw)
+{
+ struct cl_noise_db *noise_db = &cl_hw->noise_db;
+ struct cl_noise_reg *elem = NULL;
+ struct cl_noise_reg *tmp = NULL;
+
+ list_for_each_entry_safe(elem, tmp, &noise_db->reg_list, list) {
+ list_del(&elem->list);
+ kfree(elem);
+ }
+}
+
+void cl_noise_maintenance(struct cl_hw *cl_hw)
+{
+ struct cl_noise_db *noise_db = &cl_hw->noise_db;
+ struct cl_noise_reg *reg = NULL;
+ u8 ch_bw = cl_hw->bw;
+
+ if (noise_db->sample_cnt == 0)
+ return;
+
+ reg = kzalloc(sizeof(*reg), GFP_ATOMIC);
+
+ if (!reg)
+ return;
+
+ /* Collect statistics */
+ reg->np_prim20_per_ant = riu_agcinbdpow_20_pnoisestat_get(cl_hw);
+ reg->np_sub20_per_chn = riu_agcinbdpownoiseper_20_stat_0_get(cl_hw);
+ reg->np_sec20_dens_per_ant = riu_agcinbdpowsecnoisestat_get(cl_hw);
+ reg->nasp_prim20_per_ant = riu_inbdpowformac_0_get(cl_hw);
+ reg->nasp_sub20_per_chn = riu_inbdpowformac_3_get(cl_hw);
+ reg->nasp_sec20_dens_per_ant = riu_inbdpowformac_2_get(cl_hw);
+
+ if (ch_bw == CHNL_BW_160) {
+ reg->np_sub20_per_chn2 = riu_agcinbdpownoiseper_20_stat_1_get(cl_hw);
+ reg->nasp_sub20_per_chn2 = riu_inbdpowformac_4_get(cl_hw);
+ }
+
+ if (cl_hw->num_antennas > NOISE_MAX_ANT_PER_REG) {
+ reg->np_prim20_per_ant2 = riu_agcinbdpow_20_pnoisestat_2_get(cl_hw);
+ reg->nasp_prim20_per_ant2 = riu_inbdpowformac_1_get(cl_hw);
+ }
+
+ list_add(&reg->list, &noise_db->reg_list);
+
+ noise_db->sample_cnt--;
+
+ if (noise_db->sample_cnt == 0)
+ pr_debug("record done\n");
+}
+
+/** DOC: RSSI tracking
+ *
+ * RSSI values of association packets (request in AP mode and response in STA mode)
+ * are not added to the rssi sample pool, because at that stage the station has not
+ * yet been added to the driver database.
+ * The RSSI of the association is important for WRS in order to select its initial rate.
+ * The goal of this code is to save the MAC address and RSSI values of all association
+ * packets and, once the station has fully connected, look up the matching RSSI and add
+ * it to the rssi sample pool.
+ */
+struct assoc_queue_elem {
+ struct list_head list;
+ u8 addr[ETH_ALEN];
+ s8 rssi[MAX_ANTENNAS];
+ unsigned long timestamp;
+};
+
+#define CL_RSSI_LIFETIME_MS 5000
+
+static void cl_rssi_add_to_wrs(struct cl_hw *cl_hw, struct cl_sta *cl_sta, s8 rssi[MAX_ANTENNAS])
+{
+ struct cl_wrs_rssi *wrs_rssi = &cl_sta->wrs_rssi;
+ int i = 0;
+
+ for (i = 0; i < cl_hw->num_antennas; i++)
+ wrs_rssi->sum[i] += rssi[i];
+
+ wrs_rssi->cnt++;
+}
+
+void cl_rssi_assoc_init(struct cl_hw *cl_hw)
+{
+ INIT_LIST_HEAD(&cl_hw->assoc_queue.list);
+ spin_lock_init(&cl_hw->assoc_queue.lock);
+}
+
+void cl_rssi_assoc_exit(struct cl_hw *cl_hw)
+{
+ /* Delete all remaining elements in list */
+ spin_lock_bh(&cl_hw->assoc_queue.lock);
+
+ if (!list_empty(&cl_hw->assoc_queue.list)) {
+ struct assoc_queue_elem *elem = NULL;
+ struct assoc_queue_elem *tmp = NULL;
+
+ list_for_each_entry_safe(elem, tmp, &cl_hw->assoc_queue.list, list) {
+ list_del(&elem->list);
+ kfree(elem);
+ }
+ }
+
+ spin_unlock_bh(&cl_hw->assoc_queue.lock);
+}
+
+void cl_rssi_assoc_handle(struct cl_hw *cl_hw, u8 *mac_addr, struct hw_rxhdr *rxhdr)
+{
+ /* Allocate new element and add to list */
+ struct assoc_queue_elem *elem = kmalloc(sizeof(*elem), GFP_ATOMIC);
+
+ if (elem) {
+ INIT_LIST_HEAD(&elem->list);
+ cl_mac_addr_copy(elem->addr, mac_addr);
+
+ elem->rssi[0] = (s8)rxhdr->rssi1;
+ elem->rssi[1] = (s8)rxhdr->rssi2;
+ elem->rssi[2] = (s8)rxhdr->rssi3;
+ elem->rssi[3] = (s8)rxhdr->rssi4;
+ elem->rssi[4] = (s8)rxhdr->rssi5;
+ elem->rssi[5] = (s8)rxhdr->rssi6;
+
+ elem->timestamp = jiffies;
+
+ spin_lock(&cl_hw->assoc_queue.lock);
+ list_add(&elem->list, &cl_hw->assoc_queue.list);
+ spin_unlock(&cl_hw->assoc_queue.lock);
+ }
+}
+
+void cl_rssi_assoc_find(struct cl_hw *cl_hw, struct cl_sta *cl_sta, u8 num_sta)
+{
+ /* Search for rssi of association according to mac address */
+ spin_lock_bh(&cl_hw->assoc_queue.lock);
+
+ if (!list_empty(&cl_hw->assoc_queue.list)) {
+ unsigned long lifetime = 0;
+ struct assoc_queue_elem *elem = NULL;
+ struct assoc_queue_elem *tmp = NULL;
+
+ list_for_each_entry_safe(elem, tmp, &cl_hw->assoc_queue.list, list) {
+ lifetime = jiffies_to_msecs(jiffies - elem->timestamp);
+
+ /* Check lifetime of rssi before using it */
+ if (lifetime > CL_RSSI_LIFETIME_MS) {
+ /* Delete element from list */
+ list_del(&elem->list);
+ kfree(elem);
+ continue;
+ }
+
+ if (ether_addr_equal(elem->addr, cl_sta->addr)) {
+ struct hw_rxhdr rxhdr;
+ s8 equivalent_rssi = cl_rssi_calc_equivalent(cl_hw, elem->rssi);
+
+ rxhdr.rssi1 = elem->rssi[0];
+ rxhdr.rssi2 = elem->rssi[1];
+ rxhdr.rssi3 = elem->rssi[2];
+ rxhdr.rssi4 = elem->rssi[3];
+ rxhdr.rssi5 = elem->rssi[4];
+ rxhdr.rssi6 = elem->rssi[5];
+
+ cl_rssi_rx_handler(cl_hw, cl_sta, &rxhdr, equivalent_rssi);
+
+ /* Delete element from list */
+ list_del(&elem->list);
+ kfree(elem);
+ }
+ }
+ }
+
+ spin_unlock_bh(&cl_hw->assoc_queue.lock);
+}
+
+void cl_rssi_sort_descending(s8 rssi[MAX_ANTENNAS], u8 num_ant)
+{
+ int i, j;
+
+ for (i = 0; i < num_ant - 1; i++)
+ for (j = 0; j < num_ant - i - 1; j++)
+ if (rssi[j] < rssi[j + 1])
+ swap(rssi[j], rssi[j + 1]);
+}
+
+static s8 cl_rssi_equivalent_2_phys(s8 rssi_max, s8 rssi_min)
+{
+ s8 rssi_diff = rssi_min - rssi_max;
+
+ if (rssi_diff > (-2))
+ return (rssi_max + 3);
+ else if (rssi_diff > (-5))
+ return (rssi_max + 2);
+ else if (rssi_diff > (-9))
+ return (rssi_max + 1);
+ else
+ return rssi_max;
+}
+
+s8 cl_rssi_calc_equivalent(struct cl_hw *cl_hw, s8 rssi[MAX_ANTENNAS])
+{
+ s8 rssi_tmp[MAX_ANTENNAS] = {0};
+ u8 rx_ant = cl_hw->num_antennas;
+ int i, j;
+
+ /* Copy to rssi_tmp */
+ memcpy(rssi_tmp, rssi, rx_ant);
+
+ /* Sort the RSSI values in descending order */
+ cl_rssi_sort_descending(rssi_tmp, rx_ant);
+
+ /*
+ * 1) Calc equivalent rssi between the two lowest values.
+ * 2) Sort again
+ * 3) Repeat steps 1 and 2 according to number of antennas.
+ */
+ for (i = 0; i < rx_ant - 1; i++) {
+ rssi_tmp[rx_ant - i - 2] = cl_rssi_equivalent_2_phys(rssi_tmp[rx_ant - i - 2],
+ rssi_tmp[rx_ant - i - 1]);
+
+ for (j = rx_ant - i - 2; j > 0; j--) {
+ if (rssi_tmp[j] > rssi_tmp[j - 1])
+ swap(rssi_tmp[j], rssi_tmp[j - 1]);
+ else
+ break;
+ }
+ }
+
+ return rssi_tmp[0];
+}
+
+s8 cl_rssi_get_strongest(struct cl_hw *cl_hw, s8 rssi[MAX_ANTENNAS])
+{
+ int i;
+ s8 strongest_rssi = S8_MIN;
+
+ for (i = 0; i < cl_hw->num_antennas; i++) {
+ if (rssi[i] > strongest_rssi)
+ strongest_rssi = rssi[i];
+ }
+
+ return strongest_rssi;
+}
+
+static void cl_update_sta_rssi(struct cl_hw *cl_hw, struct cl_sta *cl_sta,
+ s8 rssi[MAX_ANTENNAS], s8 equivalent_rssi)
+{
+ /* Last RSSI */
+ memcpy(cl_sta->last_rssi, rssi, cl_hw->num_antennas);
+
+ if (cl_sta->manual_alpha_rssi)
+ return;
+
+ /* Alpha RSSI - use alpha filter (87.5% current + 12.5% new) */
+ if (cl_sta->alpha_rssi)
+ cl_sta->alpha_rssi =
+ ((cl_sta->alpha_rssi << 3) - cl_sta->alpha_rssi + equivalent_rssi) >> 3;
+ else
+ cl_sta->alpha_rssi = equivalent_rssi;
+}
+
+static bool cl_rssi_is_valid_ru_type_factor(u8 ru_type)
+{
+ return (ru_type >= CL_MU_OFDMA_RU_TYPE_26 && ru_type <= CL_MU_OFDMA_RU_TYPE_106);
+}
+
+static void cl_rssi_agg_ind_rssi_values_fill(struct cl_hw *cl_hw,
+ struct cl_agg_tx_report *agg_report,
+ s8 *rssi_buf)
+{
+ /* Fill the rssi buffer with the correct rssi values */
+ switch (cl_hw->first_riu_chain) {
+ case 0:
+ rssi_buf[0] = agg_report->rssi1;
+ rssi_buf[1] = agg_report->rssi2;
+ rssi_buf[2] = agg_report->rssi3;
+ rssi_buf[3] = agg_report->rssi4;
+ rssi_buf[4] = agg_report->rssi5;
+ rssi_buf[5] = agg_report->rssi6;
+ break;
+ case 1:
+ rssi_buf[0] = agg_report->rssi2;
+ rssi_buf[1] = agg_report->rssi3;
+ rssi_buf[2] = agg_report->rssi4;
+ rssi_buf[3] = agg_report->rssi5;
+ rssi_buf[4] = agg_report->rssi6;
+ break;
+ case 2:
+ rssi_buf[0] = agg_report->rssi3;
+ rssi_buf[1] = agg_report->rssi4;
+ rssi_buf[2] = agg_report->rssi5;
+ rssi_buf[3] = agg_report->rssi6;
+ break;
+ case 3:
+ rssi_buf[0] = agg_report->rssi4;
+ rssi_buf[1] = agg_report->rssi5;
+ rssi_buf[2] = agg_report->rssi6;
+ break;
+ case 4:
+ rssi_buf[0] = agg_report->rssi5;
+ rssi_buf[1] = agg_report->rssi6;
+ break;
+ case 5:
+ rssi_buf[0] = agg_report->rssi6;
+ break;
+ default:
+ break;
+ }
+}
+
+void cl_rssi_block_ack_handler(struct cl_hw *cl_hw, struct cl_sta *cl_sta,
+ struct cl_agg_tx_report *agg_report)
+{
+ /* Handle RSSI of block-ack's */
+ union cl_rate_ctrl_info rate_ctrl_info = {
+ .word = le32_to_cpu(agg_report->rate_cntrl_info)};
+ u8 bw = rate_ctrl_info.field.bw;
+
+ s8 rssi[MAX_ANTENNAS];
+ s8 equivalent_rssi;
+ int i;
+
+ cl_rssi_agg_ind_rssi_values_fill(cl_hw, agg_report, rssi);
+
+ if (cl_hw->rssi_simulate)
+ for (i = 0; i < cl_hw->num_antennas; i++)
+ rssi[i] = cl_hw->rssi_simulate;
+
+ if (!cl_hw->rssi_simulate) {
+ union cl_rate_ctrl_info_he rate_ctrl_info_he = {
+ .word = le32_to_cpu(agg_report->rate_cntrl_info_he)};
+ u8 ru_type = rate_ctrl_info_he.field.ru_type;
+ u8 format_mode = rate_ctrl_info.field.format_mod;
+ s8 bw_factor = 0;
+
+ /*
+ * RSSI adjustment according to BW
+ * The BA is transmitted in the BW of the aggregation it acknowledges
+ */
+ if (format_mode == FORMATMOD_HE_MU &&
+ cl_rssi_is_valid_ru_type_factor(ru_type)) {
+ if (ru_type == CL_MU_OFDMA_RU_TYPE_26)
+ bw_factor = -9;
+ else if (ru_type == CL_MU_OFDMA_RU_TYPE_52)
+ bw_factor = -6;
+ else if (ru_type == CL_MU_OFDMA_RU_TYPE_106)
+ bw_factor = -3;
+ } else {
+ if (bw == CHNL_BW_160)
+ bw_factor = 9;
+ else if (bw == CHNL_BW_80)
+ bw_factor = 6;
+ else if (bw == CHNL_BW_40)
+ bw_factor = 3;
+ }
+
+ for (i = 0; i < cl_hw->num_antennas; i++)
+ rssi[i] += bw_factor;
+ }
+
+ equivalent_rssi = cl_rssi_calc_equivalent(cl_hw, rssi);
+
+ /* Handle RSSI after BW adjustment */
+ cl_rssi_add_to_wrs(cl_hw, cl_sta, rssi);
+ cl_stats_update_rx_rssi(cl_hw, cl_sta, rssi);
+ cl_vns_handle_rssi(cl_hw, cl_sta, rssi);
+ cl_update_sta_rssi(cl_hw, cl_sta, rssi, equivalent_rssi);
+ cl_motion_sense_rssi_ba(cl_hw, cl_sta, rssi);
+}
+
+void cl_rssi_rx_handler(struct cl_hw *cl_hw, struct cl_sta *cl_sta,
+ struct hw_rxhdr *rxhdr, s8 equivalent_rssi)
+{
+ /* Called after BW adjustment */
+ s8 rssi[MAX_ANTENNAS] = RX_HDR_RSSI(rxhdr);
+
+ if (!IS_REAL_PHY(cl_hw->chip) && rssi[0] == 0)
+ return;
+
+ cl_rssi_add_to_wrs(cl_hw, cl_sta, rssi);
+ cl_stats_update_rx_rssi(cl_hw, cl_sta, rssi);
+ cl_vns_handle_rssi(cl_hw, cl_sta, rssi);
+ cl_update_sta_rssi(cl_hw, cl_sta, rssi, equivalent_rssi);
+}
+
+void cl_rssi_bw_adjust(struct cl_hw *cl_hw, struct hw_rxhdr *rxhdr, s8 bw_factor)
+{
+ if (cl_hw->rssi_simulate)
+ return;
+
+ rxhdr->rssi1 += bw_factor;
+ rxhdr->rssi2 += bw_factor;
+ rxhdr->rssi3 += bw_factor;
+ rxhdr->rssi4 += bw_factor;
+ rxhdr->rssi5 += bw_factor;
+ rxhdr->rssi6 += bw_factor;
+}
+
+void cl_rssi_simulate(struct cl_hw *cl_hw, struct hw_rxhdr *rxhdr)
+{
+ rxhdr->rssi1 = cl_hw->rssi_simulate;
+ rxhdr->rssi2 = cl_hw->rssi_simulate;
+ rxhdr->rssi3 = cl_hw->rssi_simulate;
+ rxhdr->rssi4 = cl_hw->rssi_simulate;
+ rxhdr->rssi5 = cl_hw->rssi_simulate;
+ rxhdr->rssi6 = cl_hw->rssi_simulate;
+}
+
+#define CCA_CNT_RATE_40MHZ 3
+
+static void cl_cca_reset_phy_counters(struct cl_hw *cl_hw)
+{
+ riu_rwnxagccca_1_cca_cnt_clear_setf(cl_hw, 1);
+ riu_rwnxagccca_1_cca_cnt_clear_setf(cl_hw, 0);
+}
+
+void cl_cca_init(struct cl_hw *cl_hw)
+{
+ /* Set PHY registers rate */
+ riu_rwnxagccca_1_cca_cnt_rate_setf(cl_hw, CCA_CNT_RATE_40MHZ);
+}
+
+void cl_cca_maintenance(struct cl_hw *cl_hw)
+{
+ struct cl_cca_db *cca_db = &cl_hw->cca_db;
+ unsigned long time = jiffies_to_usecs(jiffies);
+ unsigned long diff_time = time - cca_db->time;
+
+ cca_db->time = time;
+ if (!diff_time)
+ return;
+
+ /* Reset PHY counters */
+ cl_cca_reset_phy_counters(cl_hw);
+}
+
+void cl_prot_mode_init(struct cl_hw *cl_hw)
+{
+ struct cl_prot_mode *prot_mode = &cl_hw->prot_mode;
+ u8 init = cl_hw->conf->ce_prot_mode;
+
+ prot_mode->current_val = init;
+ prot_mode->default_val = init;
+ prot_mode->dynamic_val = (init != TXL_NO_PROT) ? init : TXL_PROT_RTS_FW;
+}
+
+void cl_prot_mode_set(struct cl_hw *cl_hw, u8 prot_mode_new)
+{
+ struct cl_prot_mode *prot_mode = &cl_hw->prot_mode;
+ struct cl_tcv_conf *conf = cl_hw->conf;
+
+ if (prot_mode->current_val != prot_mode_new) {
+ prot_mode->current_val = prot_mode_new;
+ cl_msg_tx_prot_mode(cl_hw,
+ conf->ce_prot_log_nav_en,
+ prot_mode_new,
+ conf->ce_prot_rate_format,
+ conf->ce_prot_rate_mcs,
+ conf->ce_prot_rate_pre_type);
+ }
+}
+
+u8 cl_prot_mode_get(struct cl_hw *cl_hw)
+{
+ return cl_hw->prot_mode.current_val;
+}
+
+static u8 conv_to_fw_ac[EDCA_AC_MAX] = {
+ [EDCA_AC_BE] = AC_BE,
+ [EDCA_AC_BK] = AC_BK,
+ [EDCA_AC_VI] = AC_VI,
+ [EDCA_AC_VO] = AC_VO
+};
+
+static const char *edca_ac_str[EDCA_AC_MAX] = {
+ [EDCA_AC_BE] = "BE",
+ [EDCA_AC_BK] = "BK",
+ [EDCA_AC_VI] = "VI",
+ [EDCA_AC_VO] = "VO",
+};
+
+void cl_edca_hw_conf(struct cl_hw *cl_hw)
+{
+ u8 ac = 0;
+ struct cl_tcv_conf *conf = cl_hw->conf;
+
+ for (ac = 0; ac < AC_MAX; ac++) {
+ struct edca_params params = {
+ .aifsn = conf->ce_wmm_aifsn[ac],
+ .cw_min = conf->ce_wmm_cwmin[ac],
+ .cw_max = conf->ce_wmm_cwmax[ac],
+ .txop = conf->ce_wmm_txop[ac]
+ };
+
+ cl_edca_set(cl_hw, ac, &params, NULL);
+ }
+}
+
+void cl_edca_set(struct cl_hw *cl_hw, u8 ac, struct edca_params *params,
+ struct ieee80211_he_mu_edca_param_ac_rec *mu_edca)
+{
+ u32 edca_reg_val = 0;
+
+ if (ac >= AC_MAX) {
+ pr_err("%s: Invalid AC index\n", __func__);
+ return;
+ }
+
+ edca_reg_val = (u32)(params->aifsn);
+ edca_reg_val |= (u32)(params->cw_min << 4);
+ edca_reg_val |= (u32)(params->cw_max << 8);
+ edca_reg_val |= (u32)(params->txop << 12);
+
+ memcpy(&cl_hw->edca_db.hw_params[ac], params, sizeof(struct edca_params));
+
+ cl_msg_tx_set_edca(cl_hw, conv_to_fw_ac[ac], edca_reg_val, mu_edca);
+
+ cl_dbg_trace(cl_hw, "EDCA-%s: aifsn=%u, cw_min=%u, cw_max=%u, txop=%u\n",
+ edca_ac_str[ac], params->aifsn, params->cw_min,
+ params->cw_max, params->txop);
+}
+
+void cl_edca_recovery(struct cl_hw *cl_hw)
+{
+ u8 ac;
+
+ for (ac = 0; ac < AC_MAX; ac++)
+ cl_edca_set(cl_hw, ac, &cl_hw->edca_db.hw_params[ac], NULL);
+}
+
--
2.36.1


2022-05-26 22:05:43

by Johannes Berg

[permalink] [raw]
Subject: Re: [RFC v2 04/96] cl8k: add Makefile

On Tue, 2022-05-24 at 14:33 +0300, [email protected] wrote:
>
> +ccflags-y += -I$(src) -I$(srctree)/net/wireless -I$(srctree)/net/mac80211/
> +ccflags-y += -I$(src) -Werror

Neither of these should be here. *Maybe* -I$(src), but that's probably
not even needed.

> +
> +define cc_ver_cmp
> +$(shell [ "$$($(CC) -dumpversion | cut -d. -f1)" -$(1) "$(2)" ] && echo "true" || echo "false")
> +endef
> +
> +ifeq ($(call cc_ver_cmp,ge,8),true)
> +ccflags-y += -Wno-error=stringop-truncation
> +ccflags-y += -Wno-error=format-truncation
> +endif
> +
> +# Stop these C90 warnings. We use C99.
> +ccflags-y += -Wno-declaration-after-statement -g

No no, all of that needs to go, don't make up your own stuff here.

johannes

2022-05-26 22:12:54

by Johannes Berg

[permalink] [raw]
Subject: Re: [RFC v2 36/96] cl8k: add key.c

On Tue, 2022-05-24 at 14:34 +0300, [email protected] wrote:
>
> +static inline void cl_ccmp_hdr2pn(u8 *pn, u8 *hdr)
> +{
> + pn[0] = hdr[7];
> + pn[1] = hdr[6];
> + pn[2] = hdr[5];
> + pn[3] = hdr[4];
> + pn[4] = hdr[1];
> + pn[5] = hdr[0];
> +}
> +
> +static int cl_key_validate_pn(struct cl_hw *cl_hw, struct cl_sta *cl_sta, struct sk_buff *skb)
> +{
> + struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
> + int hdrlen = 0, res = 0;
> + u8 pn[IEEE80211_CCMP_PN_LEN];
> + u8 tid = 0;
> +
> + hdrlen = ieee80211_hdrlen(hdr->frame_control);
> + tid = ieee80211_get_tid(hdr);
> +
> + cl_ccmp_hdr2pn(pn, skb->data + hdrlen);
> + res = memcmp(pn, cl_sta->rx_pn[tid], IEEE80211_CCMP_PN_LEN);
> + if (res < 0) {
> + cl_hw->rx_info.pkt_drop_invalid_pn++;
> + return -1;
> + }
> +
> + memcpy(cl_sta->rx_pn[tid], pn, IEEE80211_CCMP_PN_LEN);
> +
> + return 0;
> +}

Why do you do this stuff in the driver if it's effectively the same as
in mac80211?
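
For reference, a minimal sketch of the alternative being hinted at -- let
mac80211 do the CCMP PN/replay check and only claim it in the RX status
when the hardware has really validated it. RX_FLAG_DECRYPTED and
RX_FLAG_PN_VALIDATED are existing mac80211 rx-status flags; the helper
name and the hw_pn_checked indication are made up for illustration:

#include <net/mac80211.h>

/* Sketch only: mark the frame as decrypted by hardware and, only when the
 * device has actually checked the packet number, tell mac80211 to skip its
 * own replay check.  Otherwise mac80211 validates the PN itself.
 */
static void cl_rx_fill_crypto_status(struct sk_buff *skb, bool hw_decrypted,
                                     bool hw_pn_checked)
{
        struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);

        if (!hw_decrypted)
                return;

        status->flag |= RX_FLAG_DECRYPTED;
        if (hw_pn_checked)
                status->flag |= RX_FLAG_PN_VALIDATED;
}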

johannes

2022-05-27 07:09:43

by Johannes Berg

[permalink] [raw]
Subject: Re: [RFC v2 38/96] cl8k: add mac80211.c

On Tue, 2022-05-24 at 14:34 +0300, [email protected] wrote:
>
> +static void cl_ops_tx_single(struct cl_hw *cl_hw,
> + struct sk_buff *skb,
> + struct ieee80211_tx_info *tx_info,
> + struct cl_sta *cl_sta,
> + struct ieee80211_sta *sta)
> +{

...

> + if (sta) {
> + u32 sta_vht_cap = sta->vht_cap.cap;
> + struct ieee80211_mgmt *mgmt = (struct ieee80211_mgmt *)skb->data;
> +
> + if (!(sta_vht_cap & IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_MASK))
> + goto out_tx;
> +
> + if (ieee80211_is_assoc_resp(mgmt->frame_control)) {
> + int len = skb->len - (mgmt->u.assoc_resp.variable - skb->data);
> + const u8 *vht_cap_addr = cfg80211_find_ie(WLAN_EID_VHT_CAPABILITY,
> + mgmt->u.assoc_resp.variable,
> + len);
> +
> + if (vht_cap_addr) {
> + struct ieee80211_vht_cap *vht_cap =
> + (struct ieee80211_vht_cap *)(2 + vht_cap_addr);
> +
> + vht_cap->vht_cap_info &=
> + ~(cpu_to_le32(IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_MASK |
> + IEEE80211_VHT_CAP_SHORT_GI_160));
> + }

Huh??


> +int cl_ops_config(struct ieee80211_hw *hw, u32 changed)
> +{
> + /*
> + * Handler for configuration requests. IEEE 802.11 code calls this
> + * function to change hardware configuration, e.g., channel.
> + * This function should never fail but returns a negative error code
> + * if it does. The callback can sleep
> + */
> + int error = 0;
> +
> + if (changed & IEEE80211_CONF_CHANGE_CHANNEL)
> + error = cl_ops_conf_change_channel(hw);

I'm really surprised to see this callback in a modern driver - wouldn't
you want to support some form of multi-channel operation? Even just
using the chanctx callbacks might make some of the DFS things you have
there easier?
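
For context, a rough sketch of what wiring up the channel-context hooks
looks like (op names and signatures as in mac80211 around 5.18, to the best
of my knowledge; the cl_chanctx_* handlers are placeholders that would
program the hardware per context):

#include <net/mac80211.h>

static int cl_chanctx_add(struct ieee80211_hw *hw,
                          struct ieee80211_chanctx_conf *ctx)
{
        /* Program PHY/RF for ctx->def (channel, width, center frequency) */
        return 0;
}

static void cl_chanctx_remove(struct ieee80211_hw *hw,
                              struct ieee80211_chanctx_conf *ctx)
{
        /* Release whatever cl_chanctx_add() set up */
}

static int cl_chanctx_assign_vif(struct ieee80211_hw *hw,
                                 struct ieee80211_vif *vif,
                                 struct ieee80211_chanctx_conf *ctx)
{
        /* Bind the vif to this context in the firmware */
        return 0;
}

static void cl_chanctx_unassign_vif(struct ieee80211_hw *hw,
                                    struct ieee80211_vif *vif,
                                    struct ieee80211_chanctx_conf *ctx)
{
        /* Undo cl_chanctx_assign_vif() */
}

static const struct ieee80211_ops cl_ops_sketch = {
        /* ... the existing cl_ops_* callbacks ... */
        .add_chanctx = cl_chanctx_add,
        .remove_chanctx = cl_chanctx_remove,
        .assign_vif_chanctx = cl_chanctx_assign_vif,
        .unassign_vif_chanctx = cl_chanctx_unassign_vif,
};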


>
> +void cl_ops_bss_info_changed(struct ieee80211_hw *hw,
> + struct ieee80211_vif *vif,
> + struct ieee80211_bss_conf *info,
> + u32 changed)
> +{

...

> + if (beacon) {
> + size_t ies_len = beacon->tail_len;
> + const u8 *ies = beacon->tail;
> + const u8 *cap = NULL;
> + int var_offset = offsetof(struct ieee80211_mgmt, u.beacon.variable);
> + int len = beacon->head_len - var_offset;
> + const u8 *var_pos = beacon->head + var_offset;
> + const u8 *rate_ie = NULL;
> +
> + cl_vif->wmm_enabled = cfg80211_find_vendor_ie(WLAN_OUI_MICROSOFT,
> + WLAN_OUI_TYPE_MICROSOFT_WMM,
> + ies,
> + ies_len);
> + cl_dbg_info(cl_hw, "vif=%d wmm_enabled=%d\n",
> + cl_vif->vif_index,
> + cl_vif->wmm_enabled);
> +
> + cap = cfg80211_find_ie(WLAN_EID_HT_CAPABILITY, ies, ies_len);
> + if (cap && cap[1] >= sizeof(*ht_cap)) {
> + ht_cap = (void *)(cap + 2);
> + sgi_en |= (le16_to_cpu(ht_cap->cap_info) &
> + IEEE80211_HT_CAP_SGI_20) ||
> + (le16_to_cpu(ht_cap->cap_info) &
> + IEEE80211_HT_CAP_SGI_40);
> + }
> +
> + cap = cfg80211_find_ie(WLAN_EID_VHT_CAPABILITY, ies, ies_len);
> + if (cap && cap[1] >= sizeof(*vht_cap)) {
> + vht_cap = (void *)(cap + 2);
> + sgi_en |= (le32_to_cpu(vht_cap->vht_cap_info) &
> + IEEE80211_VHT_CAP_SHORT_GI_80) ||
> + (le32_to_cpu(vht_cap->vht_cap_info) &
> + IEEE80211_VHT_CAP_SHORT_GI_160);
> + }
> +
> + cap = cfg80211_find_ext_ie(WLAN_EID_EXT_HE_CAPABILITY, ies, ies_len);
> + if (cap && cap[1] >= sizeof(*he_cap) + 1)
> + he_cap = (void *)(cap + 3);
> +
> + rate_ie = cfg80211_find_ie(WLAN_EID_SUPP_RATES, var_pos, len);
> + if (rate_ie) {
> + if (cl_band_is_24g(cl_hw))
> + if (cl_is_valid_g_rates(rate_ie))
> + hw_mode = cl_hw->conf->ci_cck_in_hw_mode ?
> + HW_MODE_BG : HW_MODE_G;
> + else
> + hw_mode = HW_MODE_B;
> + else
> + hw_mode = HW_MODE_A;
> + }
> + } else {
> + cl_dbg_warn(cl_hw, "beacon_data not set!\n");
> + }
> +

This feels ... odd. You really shouldn't have to look into the beacon to
figure out these things?

And SGI etc. are per-STA rate control parameters anyway? Hmm.
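
As a rough sketch of the per-station alternative (field and flag names as
in mac80211 around 5.18; the helper itself is made up), the capabilities
mac80211 has already parsed can be read straight from struct ieee80211_sta
instead of being dug out of the beacon IEs:

#include <net/mac80211.h>

/* Sketch only: short GI is a per-station property that mac80211 already
 * extracted from the association exchange.
 */
static bool cl_sta_supports_short_gi(const struct ieee80211_sta *sta)
{
        if (sta->ht_cap.ht_supported &&
            (sta->ht_cap.cap & (IEEE80211_HT_CAP_SGI_20 |
                                IEEE80211_HT_CAP_SGI_40)))
                return true;

        if (sta->vht_cap.vht_supported &&
            (sta->vht_cap.cap & (IEEE80211_VHT_CAP_SHORT_GI_80 |
                                 IEEE80211_VHT_CAP_SHORT_GI_160)))
                return true;

        return false;
}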

> +/* This function is required for PS flow - do not remove */
> +int cl_ops_set_tim(struct ieee80211_hw *hw, struct ieee80211_sta *sta, bool set)
> +{
> + return 0;
> +}

You have all this hardware/firmware and you implement this? Interesting
design choice. One that I'm sure you'll revisit for WiFi 7 ;-)

johannes

2022-05-27 12:13:37

by Jeff Johnson

[permalink] [raw]
Subject: Re: [RFC v2 42/96] cl8k: add main.c

On 5/24/2022 4:34 AM, [email protected] wrote:
> From: Viktor Barna <[email protected]>
>
> (Part of the split. Please, take a look at the cover letter for more
> details).
>
> Signed-off-by: Viktor Barna <[email protected]>
> ---
> drivers/net/wireless/celeno/cl8k/main.c | 603 ++++++++++++++++++++++++
> 1 file changed, 603 insertions(+)
> create mode 100644 drivers/net/wireless/celeno/cl8k/main.c
>
> diff --git a/drivers/net/wireless/celeno/cl8k/main.c b/drivers/net/wireless/celeno/cl8k/main.c
> new file mode 100644
> index 000000000000..08abb16987ef
> --- /dev/null
> +++ b/drivers/net/wireless/celeno/cl8k/main.c
> @@ -0,0 +1,603 @@
> +// SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
> +/* Copyright(c) 2019-2022, Celeno Communications Ltd. */
> +
> +#include "tx.h"
> +#include "reg/reg_access.h"
> +#include "reg/reg_defs.h"
> +#include "stats.h"
> +#include "maintenance.h"
> +#include "vns.h"
> +#include "traffic.h"
> +#include "sounding.h"
> +#include "recovery.h"
> +#include "rates.h"
> +#include "utils.h"
> +#include "phy.h"
> +#include "radio.h"
> +#include "dsp.h"
> +#include "dfs.h"
> +#include "tcv.h"
> +#include "mac_addr.h"
> +#include "bf.h"
> +#include "rfic.h"
> +#include "e2p.h"
> +#include "chip.h"
> +#include "regdom.h"
> +#include "platform.h"
> +#include "mac80211.h"
> +#include "main.h"
> +
> +MODULE_DESCRIPTION("Celeno 11ax driver for Linux");
> +MODULE_VERSION(CONFIG_CL8K_VERSION);
> +MODULE_AUTHOR("Copyright(c) 2022 Celeno Communications Ltd");
> +MODULE_LICENSE("Dual BSD/GPL");
> +
> +static struct ieee80211_ops cl_ops = {

const?

> + .tx = cl_ops_tx,
> + .start = cl_ops_start,
> + .stop = cl_ops_stop,
> + .add_interface = cl_ops_add_interface,
> + .remove_interface = cl_ops_remove_interface,
> + .config = cl_ops_config,
> + .bss_info_changed = cl_ops_bss_info_changed,
> + .start_ap = cl_ops_start_ap,
> + .stop_ap = cl_ops_stop_ap,
> + .prepare_multicast = cl_ops_prepare_multicast,
> + .configure_filter = cl_ops_configure_filter,
> + .set_key = cl_ops_set_key,
> + .sw_scan_start = cl_ops_sw_scan_start,
> + .sw_scan_complete = cl_ops_sw_scan_complete,
> + .sta_state = cl_ops_sta_state,
> + .sta_notify = cl_ops_sta_notify,
> + .conf_tx = cl_ops_conf_tx,
> + .sta_rc_update = cl_ops_sta_rc_update,
> + .ampdu_action = cl_ops_ampdu_action,
> + .post_channel_switch = cl_ops_post_channel_switch,
> + .flush = cl_ops_flush,
> + .tx_frames_pending = cl_ops_tx_frames_pending,
> + .reconfig_complete = cl_ops_reconfig_complete,
> + .get_txpower = cl_ops_get_txpower,
> + .set_rts_threshold = cl_ops_set_rts_threshold,
> + .event_callback = cl_ops_event_callback,
> + .set_tim = cl_ops_set_tim,
> + .get_antenna = cl_ops_get_antenna,
> + .get_expected_throughput = cl_ops_get_expected_throughput,
> + .sta_statistics = cl_ops_sta_statistics,
> + .get_survey = cl_ops_get_survey,
> + .hw_scan = cl_ops_hw_scan,
> + .cancel_hw_scan = cl_ops_cancel_hw_scan
> +};
> +
> +static void cl_drv_workqueue_create(struct cl_hw *cl_hw)
> +{
> + if (!cl_hw->drv_workqueue)
> + cl_hw->drv_workqueue = create_singlethread_workqueue("drv_workqueue");
> +}
> +
> +static void cl_drv_workqueue_destroy(struct cl_hw *cl_hw)
> +{
> + if (cl_hw->drv_workqueue) {
> + destroy_workqueue(cl_hw->drv_workqueue);
> + cl_hw->drv_workqueue = NULL;
> + }
> +}
> +
> +static int cl_main_alloc(struct cl_hw *cl_hw)
> +{
> + int ret = 0;

avoid initializers that are always overwritten, especially ones that are
overwritten by the very first line of code

> +
> + ret = cl_phy_data_alloc(cl_hw);
> + if (ret)
> + return ret;
> +
> + ret = cl_calib_common_tables_alloc(cl_hw);
> + if (ret)
> + return ret;
> +
> + ret = cl_power_table_alloc(cl_hw);
> + if (ret)
> + return ret;
> +
> + return ret;
> +}
> +
> +static void cl_main_free(struct cl_hw *cl_hw)
> +{
> + cl_phy_data_free(cl_hw);
> + cl_calib_common_tables_free(cl_hw);
> + cl_power_table_free(cl_hw);

hint: consider performing deinit steps in reverse order of init steps.
it may not always matter, but when it does, you'll be happy that you're
doing things consistently -- what happens when you try taking off your
socks before you take off your shoes? :)
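
A minimal sketch of the reordering being suggested, reusing the functions
quoted above (free in the reverse order of the allocations in
cl_main_alloc()):

static void cl_main_free(struct cl_hw *cl_hw)
{
        cl_power_table_free(cl_hw);
        cl_calib_common_tables_free(cl_hw);
        cl_phy_data_free(cl_hw);
}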

> +}
> +
> +static void cl_free_hw(struct cl_hw *cl_hw)
> +{
> + if (!cl_hw)
> + return;
> +
> + cl_temperature_wait_for_measurement(cl_hw->chip, cl_hw->tcv_idx);
> +
> + cl_tcv_config_free(cl_hw);
> +
> + if (cl_hw->hw->wiphy->registered)
> + ieee80211_unregister_hw(cl_hw->hw);
> +
> + cl_chip_unset_hw(cl_hw->chip, cl_hw);
> + ieee80211_free_hw(cl_hw->hw);

I'm paranoid so I always set pointers to NULL when I've freed the
underlying data so that nothing else can later dereference them, and
potentially lead to use-after-free or double-free
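
For illustration, a sketch of that habit (the helper is made up; the point
is clearing the pointer immediately after the free):

/* Sketch only: clear freed pointers so stale references cannot be
 * dereferenced or freed twice later.
 */
static void cl_free_hw_handle(struct cl_hw *cl_hw)
{
        ieee80211_free_hw(cl_hw->hw);
        cl_hw->hw = NULL;
}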

> +}
> +
> +static void cl_free_chip(struct cl_chip *chip)
> +{
> + cl_free_hw(chip->cl_hw_tcv0);
> + cl_free_hw(chip->cl_hw_tcv1);
> +}
> +
> +static int cl_prepare_hw(struct cl_chip *chip, u8 tcv_idx,
> + const struct cl_driver_ops *drv_ops)
> +{
> + struct cl_hw *cl_hw = NULL;

another initializer that is immediately overwritten by the first line of
code

> + struct ieee80211_hw *hw;
> + int ret = 0;
> +
> + hw = ieee80211_alloc_hw(sizeof(struct cl_hw), &cl_ops);
> + if (!hw) {
> + cl_dbg_chip_err(chip, "ieee80211_alloc_hw failed\n");
> + return -ENOMEM;
> + }
> +
> + cl_hw_init(chip, hw->priv, tcv_idx);
> +
> + cl_hw = hw->priv;
> + cl_hw->hw = hw;
> + cl_hw->tcv_idx = tcv_idx;
> + cl_hw->sx_idx = chip->conf->ci_tcv1_chains_sx0 ? 0 : tcv_idx;
> + cl_hw->chip = chip;
> +
> + /*
> + * chip0, tcv0 --> 0
> + * chip0, tcv1 --> 1
> + * chip1, tcv0 --> 2
> + * chip1, tcv1 --> 3
> + */
> + cl_hw->idx = chip->idx * CHIP_MAX + tcv_idx;
> +
> + cl_hw->drv_ops = drv_ops;
> +
> + SET_IEEE80211_DEV(hw, chip->dev);
> +
> + ret = cl_tcv_config_alloc(cl_hw);
> + if (ret)
> + goto out_free_hw;
> +
> + ret = cl_tcv_config_read(cl_hw);
> + if (ret)
> + goto out_free_hw;
> +
> + cl_chip_set_hw(chip, cl_hw);
> +
> + ret = cl_mac_addr_set(cl_hw);
> + if (ret) {
> + cl_dbg_err(cl_hw, "cl_mac_addr_set failed\n");
> + goto out_free_hw;
> + }
> +
> + if (cl_band_is_6g(cl_hw))
> + cl_hw->nl_band = NL80211_BAND_6GHZ;
> + else if (cl_band_is_5g(cl_hw))
> + cl_hw->nl_band = NL80211_BAND_5GHZ;
> + else
> + cl_hw->nl_band = NL80211_BAND_2GHZ;
> +
> + if (cl_band_is_24g(cl_hw))
> + cl_hw->hw_mode = HW_MODE_BG;
> + else
> + cl_hw->hw_mode = HW_MODE_A;
> +
> + cl_hw->wireless_mode = WIRELESS_MODE_HT_VHT_HE;
> +
> + cl_cap_dyn_params(cl_hw);
> +
> + cl_dbg_verbose(cl_hw, "cl_hw created\n");
> +
> + return 0;
> +
> +out_free_hw:
> + cl_free_hw(cl_hw);
> +
> + return ret;
> +}
> +
> +void cl_main_off(struct cl_hw *cl_hw)
> +{
> + cl_irq_disable(cl_hw, cl_hw->ipc_e2a_irq.all);
> + cl_ipc_stop(cl_hw);
> +
> + if (!test_bit(CL_DEV_INIT, &cl_hw->drv_flags)) {
> + cl_tx_off(cl_hw);
> + cl_rx_off(cl_hw);
> + cl_msg_rx_flush_all(cl_hw);
> + }
> +
> + cl_fw_file_cleanup(cl_hw);
> +}
> +
> +static void _cl_main_deinit(struct cl_hw *cl_hw)
> +{
> + if (!cl_hw)
> + return;
> +
> + ieee80211_unregister_hw(cl_hw->hw);
> +
> + /* Send reset message to firmware */
> + cl_msg_tx_reset(cl_hw);
> +
> + cl_hw->is_stop_context = true;
> +
> + cl_drv_workqueue_destroy(cl_hw);
> +
> + cl_scanner_deinit(cl_hw);
> +
> + cl_noise_close(cl_hw);
> + cl_maintenance_close(cl_hw);
> + cl_vns_close(cl_hw);
> + cl_rssi_assoc_exit(cl_hw);
> + cl_radar_close(cl_hw);
> + cl_sounding_close(cl_hw);
> + cl_chan_info_deinit(cl_hw);
> + cl_wrs_api_close(cl_hw);
> + cl_main_off(cl_hw);
> + /* These 2 must be called after cl_tx_off() (which is called from cl_main_off) */
> + cl_tx_amsdu_txhdr_deinit(cl_hw);
> + cl_sw_txhdr_deinit(cl_hw);
> + cl_fw_dbg_trigger_based_deinit(cl_hw);
> + cl_stats_deinit(cl_hw);
> + cl_main_free(cl_hw);
> + cl_fw_file_release(cl_hw);
> +
> + cl_ipc_deinit(cl_hw);
> + cl_hw_deinit(cl_hw, cl_hw->tcv_idx);
> + vfree(cl_hw->tx_queues);
> +}
> +
> +void cl_main_deinit(struct cl_chip *chip)
> +{
> + struct cl_chip_conf *conf = chip->conf;
> + struct cl_hw *cl_hw_tcv0 = chip->cl_hw_tcv0;
> + struct cl_hw *cl_hw_tcv1 = chip->cl_hw_tcv1;
> +
> + if (cl_chip_is_tcv1_enabled(chip) && cl_hw_tcv1)
> + _cl_main_deinit(cl_hw_tcv1);
> +
> + if (cl_chip_is_tcv0_enabled(chip) && cl_hw_tcv0)
> + _cl_main_deinit(cl_hw_tcv0);
> +
> + if (conf->ci_phy_dev != PHY_DEV_DUMMY) {
> + if (!conf->ci_phy_load_bootdrv)
> + cl_phy_off(cl_hw_tcv1);
> +
> + cl_phy_off(cl_hw_tcv0);
> + }
> +
> + cl_platform_dealloc(chip);
> +
> + cl_free_chip(chip);
> +}
> +
> +static struct cl_controller_reg all_controller_reg = {

if this is read-only then consider making it const

> + .breset = XMAC_BRESET,
> + .debug_enable = XMAC_DEBUG_ENABLE,
> + .dreset = XMAC_DRESET,
> + .ocd_halt_on_reset = XMAC_OCD_HALT_ON_RESET,
> + .run_stall = XMAC_RUN_STALL
> +};
> +
> +void cl_main_reset(struct cl_chip *chip, struct cl_controller_reg *controller_reg)

would need to add const to 2nd param if you make the table const.
even if you didn't make the table const, it is good form to declare
pointer params as const if you don't write back into the struct
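
A sketch of the const-correct form being suggested (same identifiers as in
the quoted patch, body of cl_main_reset() unchanged):

static const struct cl_controller_reg all_controller_reg = {
        .breset = XMAC_BRESET,
        .debug_enable = XMAC_DEBUG_ENABLE,
        .dreset = XMAC_DRESET,
        .ocd_halt_on_reset = XMAC_OCD_HALT_ON_RESET,
        .run_stall = XMAC_RUN_STALL
};

void cl_main_reset(struct cl_chip *chip,
                   const struct cl_controller_reg *controller_reg);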

> +{
> + /* Release TRST & BReset to enable JTAG connection to FPGA A */
> + u32 regval;
> +
> + /* 1. return to reset value */
> + regval = macsys_gcu_xt_control_get(chip);
> + regval |= controller_reg->ocd_halt_on_reset;
> + regval &= ~(controller_reg->dreset | controller_reg->run_stall | controller_reg->breset);
> + macsys_gcu_xt_control_set(chip, regval);
> +
> + regval = macsys_gcu_xt_control_get(chip);
> + regval |= controller_reg->dreset;
> + macsys_gcu_xt_control_set(chip, regval);
> +
> + /* 2. stall xtensa & release ocd */
> + regval = macsys_gcu_xt_control_get(chip);
> + regval |= controller_reg->run_stall;
> + regval &= ~controller_reg->ocd_halt_on_reset;
> + macsys_gcu_xt_control_set(chip, regval);
> +
> + /* 3. breset release & debug enable */
> + regval = macsys_gcu_xt_control_get(chip);
> + regval |= (controller_reg->debug_enable | controller_reg->breset);
> + macsys_gcu_xt_control_set(chip, regval);
> +
> + msleep(100);
> +}
> +
> +int cl_main_on(struct cl_hw *cl_hw)
> +{
> + struct cl_chip *chip = cl_hw->chip;
> + int ret;
> + u32 regval;
> +
> + cl_hw->fw_active = false;
> +
> + cl_txq_init(cl_hw);
> +
> + cl_hw_assert_info_init(cl_hw);
> +
> + if (cl_recovery_in_progress(cl_hw))
> + cl_main_reset(chip, &cl_hw->controller_reg);
> +
> + ret = cl_fw_file_load(cl_hw);
> + if (ret) {
> + cl_dbg_err(cl_hw, "cl_fw_file_load failed %d\n", ret);
> + return ret;
> + }
> +
> + /* Clear CL_DEV_FW_ERROR after firmware loaded */
> + clear_bit(CL_DEV_FW_ERROR, &cl_hw->drv_flags);
> +
> + if (cl_recovery_in_progress(cl_hw))
> + cl_ipc_recovery(cl_hw);
> +
> + regval = macsys_gcu_xt_control_get(chip);
> +
> + /* Set fw to run */
> + if (cl_hw->fw_active)
> + regval &= ~cl_hw->controller_reg.run_stall;
> +
> + /* Set umac to run */
> + if (chip->umac_active)
> + regval &= ~UMAC_RUN_STALL;
> +
> + /* Ack all possibly pending IRQs */
> + ipc_xmac_2_host_ack_set(chip, cl_hw->ipc_e2a_irq.all);
> + macsys_gcu_xt_control_set(chip, regval);
> + cl_irq_enable(cl_hw, cl_hw->ipc_e2a_irq.all);
> + /*
> + * cl_irq_status_sync will set CL_DEV_FW_SYNC when fw raises IPC_IRQ_E2A_SYNC
> + * (indicate its ready to accept interrupts)
> + */
> + ret = wait_event_interruptible_timeout(cl_hw->fw_sync_wq,
> + test_and_clear_bit(CL_DEV_FW_SYNC,
> + &cl_hw->drv_flags),
> + msecs_to_jiffies(5000));
> +
> + if (ret == 0) {
> + pr_err("[%s]: FW synchronization timeout.\n", __func__);
> + cl_hw_assert_check(cl_hw);
> + ret = -ETIMEDOUT;
> + goto out_free_cached_fw;
> + } else if (ret == -ERESTARTSYS) {
> + goto out_free_cached_fw;
> + }
> +
> + return 0;
> +
> +out_free_cached_fw:
> + cl_irq_disable(cl_hw, cl_hw->ipc_e2a_irq.all);
> + cl_fw_file_release(cl_hw);
> + return ret;
> +}
> +
> +static int __cl_main_init(struct cl_hw *cl_hw)
> +{
> + int ret;
> +
> + if (!cl_hw)
> + return 0;
> +
> + if (cl_regd_init(cl_hw, cl_hw->hw->wiphy))
> + cl_dbg_err(cl_hw, "regulatory failed\n");
> +
> + /*
> + * ieee80211_register_hw() will take care of calling wiphy_register() and
> + * also ieee80211_if_add() (because IFTYPE_STATION is supported)
> + * which will internally call register_netdev()
> + */
> + ret = ieee80211_register_hw(cl_hw->hw);
> + if (ret) {
> + cl_dbg_err(cl_hw, "ieee80211_register_hw failed\n");
> + cl_main_deinit(cl_hw->chip);
> +
> + return ret;
> + }
> +
> + return ret;
> +}
> +
> +static int _cl_main_init(struct cl_hw *cl_hw)
> +{
> + int ret = 0;
> +
> + if (!cl_hw)
> + return 0;
> +
> + set_bit(CL_DEV_INIT, &cl_hw->drv_flags);
> +
> + /* By default, set FEM mode to operational mode. */
> + cl_hw->fem_mode = FEM_MODE_OPERETIONAL;
> +
> + cl_vif_init(cl_hw);
> +
> + cl_drv_workqueue_create(cl_hw);
> +
> + init_waitqueue_head(&cl_hw->wait_queue);
> + init_waitqueue_head(&cl_hw->fw_sync_wq);
> + init_waitqueue_head(&cl_hw->radio_wait_queue);
> +
> + mutex_init(&cl_hw->dbginfo.mutex);
> + mutex_init(&cl_hw->msg_tx_mutex);
> + mutex_init(&cl_hw->set_channel_mutex);
> +
> + spin_lock_init(&cl_hw->tx_lock_agg);
> + spin_lock_init(&cl_hw->tx_lock_cfm_agg);
> + spin_lock_init(&cl_hw->tx_lock_single);
> + spin_lock_init(&cl_hw->tx_lock_bcmc);
> + spin_lock_init(&cl_hw->channel_info_lock);
> +
> + ret = cl_ipc_init(cl_hw);
> + if (ret) {
> + cl_dbg_err(cl_hw, "cl_ipc_init failed %d\n", ret);
> + return ret;
> + }
> +
> + cl_chip_set_rfic_version(cl_hw);
> +
> + /* Validating calib params must be done after setting the rfic version */
> + cl_tcv_config_validate_calib_params(cl_hw);
> +
> + cl_hw->tx_queues = vzalloc(sizeof(*cl_hw->tx_queues));
> + if (!cl_hw->tx_queues) {
> + cl_ipc_deinit(cl_hw);
> + return -ENOMEM;
> + }
> +
> + ret = cl_main_on(cl_hw);
> + if (ret) {
> + cl_dbg_err(cl_hw, "cl_main_on failed %d\n", ret);
> + cl_ipc_deinit(cl_hw);
> + vfree(cl_hw->tx_queues);
> +
> + return ret;
> + }
> +
> + ret = cl_main_alloc(cl_hw);
> + if (ret)
> + goto out_free;
> +
> + /* Reset firmware */
> + ret = cl_msg_tx_reset(cl_hw);
> + if (ret)
> + goto out_free;
> +
> + cl_calib_power_read(cl_hw);
> + cl_sta_init(cl_hw);
> + cl_sw_txhdr_init(cl_hw);
> + cl_tx_amsdu_txhdr_init(cl_hw);
> + cl_rx_init(cl_hw);
> + cl_prot_mode_init(cl_hw);
> + cl_radar_init(cl_hw);
> + cl_sounding_init(cl_hw);
> + cl_traffic_init(cl_hw);
> + ret = cl_vns_init(cl_hw);
> + if (ret)
> + goto out_free;
> +
> + cl_maintenance_init(cl_hw);
> + cl_rssi_assoc_init(cl_hw);
> + cl_agg_cfm_init(cl_hw);
> + cl_single_cfm_init(cl_hw);
> + cl_bcmc_cfm_init(cl_hw);
> +#ifdef CONFIG_CL8K_DYN_MCAST_RATE
> + cl_dyn_mcast_rate_init(cl_hw);
> +#endif /* CONFIG_CL8K_DYN_MCAST_RATE */
> +#ifdef CONFIG_CL8K_DYN_BCAST_RATE
> + cl_dyn_bcast_rate_init(cl_hw);
> +#endif /* CONFIG_CL8K_DYN_BCAST_RATE */
> + cl_wrs_api_init(cl_hw);
> + cl_dfs_init(cl_hw);
> + cl_noise_init(cl_hw);
> + ret = cl_fw_dbg_trigger_based_init(cl_hw);
> + if (ret)
> + goto out_free;
> +
> + cl_stats_init(cl_hw);
> + cl_cca_init(cl_hw);
> + cl_bf_init(cl_hw);
> +
> + ret = cl_scanner_init(cl_hw);
> + if (ret)
> + goto out_free;
> +
> + /* Start firmware */
> + ret = cl_msg_tx_start(cl_hw);
> + if (ret)
> + goto out_free;
> +
> + return 0;
> +
> +out_free:
> + cl_main_free(cl_hw);
> + vfree(cl_hw->tx_queues);
> +
> + return ret;
> +}
> +
> +int cl_main_init(struct cl_chip *chip, const struct cl_driver_ops *drv_ops)
> +{
> + int ret = 0;
> + struct cl_chip_conf *conf = chip->conf;
> +
> + /* All cores need to be reset first (once per chip) */
> + cl_main_reset(chip, &all_controller_reg);
> +
> + /* Prepare HW for TCV0 */
> + if (cl_chip_is_tcv0_enabled(chip)) {
> + ret = cl_prepare_hw(chip, TCV0, drv_ops);
> +
> + if (ret) {
> + cl_dbg_chip_err(chip, "Prepare HW for TCV0 failed %d\n", ret);
> + return ret;
> + }
> + }
> +
> + /* Prepare HW for TCV1 */
> + if (cl_chip_is_tcv1_enabled(chip)) {
> + ret = cl_prepare_hw(chip, TCV1, drv_ops);
> +
> + if (ret) {
> + cl_dbg_chip_err(chip, "Prepare HW for TCV1 failed %d\n", ret);
> + cl_free_hw(chip->cl_hw_tcv0);
> + return ret;
> + }
> + }
> +
> + if (!conf->ci_phy_load_bootdrv &&
> + conf->ci_phy_dev != PHY_DEV_DUMMY) {
> + ret = cl_radio_boot(chip);
> + if (ret) {
> + cl_dbg_chip_err(chip, "RF boot failed %d\n", ret);
> + return ret;
> + }
> +
> + ret = cl_dsp_load_regular(chip);
> + if (ret) {
> + cl_dbg_chip_err(chip, "DSP load failed %d\n", ret);
> + return ret;
> + }
> + }
> +
> + ret = _cl_main_init(chip->cl_hw_tcv0);
> + if (ret) {
> + cl_free_chip(chip);
> + return ret;
> + }
> +
> + ret = _cl_main_init(chip->cl_hw_tcv1);
> + if (ret) {
> + _cl_main_deinit(chip->cl_hw_tcv0);
> + cl_free_chip(chip);
> + return ret;
> + }
> +
> + ret = __cl_main_init(chip->cl_hw_tcv0);
> + if (ret)
> + return ret;
> +
> + ret = __cl_main_init(chip->cl_hw_tcv1);
> + if (ret)
> + return ret;
> +
> +#ifdef CONFIG_CL8K_EEPROM_STM24256
> + if (conf->ci_calib_eeprom_en && conf->ce_production_mode && conf->ce_calib_runtime_en)
> + cl_e2p_read_eeprom_start_work(chip);
> +#endif
> +
> + return ret;
> +}


2022-05-27 12:52:45

by Johannes Berg

[permalink] [raw]
Subject: Re: [RFC v2 39/96] cl8k: add mac80211.h

On Tue, 2022-05-24 at 14:34 +0300, [email protected] wrote:
>
> +#define PPE_0US 0
> +#define PPE_8US 1
> +#define PPE_16US 2
> +
> +/*
> + * 20/40 BSS Coexistence Management Support capability, to be set in the
> + * 1st byte of the @WLAN_EID_EXT_CAPABILITY information element
> + */
> +#define WLAN_EXT_CAPA1_2040_BSS_COEX_MGMT_ENABLED BIT(0)
> +
> +/* WLAN_EID_BSS_COEX_2040 = 72 */
> +/* 802.11n 7.3.2.61 */
> +struct ieee80211_bss_coex_20_40_ie {
> + u8 element_id;
> + u8 len;
> + u8 info_req : 1;
> + /* Inter-BSS: set to 1 to prohibit a receiving BSS from operating as a 20/40 MHz BSS */
> + u8 intolerant40 : 1;
> + /* Intra-BSS: set to 1 to prohibit a receiving AP from operating its BSS as a 20/40 MHz BSS */
> + u8 bss20_width_req : 1;
> + u8 obss_scan_exemp_req : 1;
> + u8 obss_scan_exemp_grant : 1;
> + u8 rsv : 3;
> +} __packed;

You should add these kinds of things to ieee80211.h, but of course they
should be endian safe and not use bitfields.
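
Single-byte fields are already endian safe; anything wider would need
__le16/__le32. A rough sketch of the same element without bitfields (the
WLAN_BSS_COEX_* flag names below are made up for illustration, only
WLAN_EID_BSS_COEX_2040 exists in ieee80211.h today):

	#define WLAN_BSS_COEX_INFO_REQ			BIT(0)
	#define WLAN_BSS_COEX_40MHZ_INTOLERANT		BIT(1)
	#define WLAN_BSS_COEX_20MHZ_BSS_WIDTH_REQ	BIT(2)
	#define WLAN_BSS_COEX_OBSS_SCAN_EXEMPT_REQ	BIT(3)
	#define WLAN_BSS_COEX_OBSS_SCAN_EXEMPT_GRANT	BIT(4)

	/* WLAN_EID_BSS_COEX_2040 = 72 */
	struct ieee80211_bss_coex_20_40_ie {
		u8 element_id;
		u8 len;
		u8 coex_flags;	/* WLAN_BSS_COEX_* */
	} __packed;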


> +/* WLAN_EID_BSS_INTOLERANT_CHL_REPORT = 73 */
> +/* 802.11n 7.3.2.59 */
> +struct ieee80211_bss_intolerant_chl_report_ie {
> + u8 element_id;
> + u8 len;
> + u8 regulatory_class;
> + u8 ch_list[0];

use [] not [0]
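
i.e. a C99 flexible array member instead of the old GNU zero-length array:

	u8 ch_list[];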

> +} __packed;
> +
> +/* Union options that are not included in 'struct ieee80211_mgmt' */

just add them

johannes

2022-07-11 23:23:23

by Viktor Barna

[permalink] [raw]
Subject: Re: [RFC v2 36/96] cl8k: add key.c

On Thu, 26 May 2022 21:38:08 +0200, [email protected] wrote:
> Why do you do this stuff in the driver if it's effectively the same as
> in mac80211?

Indeed, thanks, we will remove that. We just wanted to have replay protection
in our RX path. Another reason to rework our version is the most recent
WPA3-SAE mesh test results, where this custom implementation broke
multicast/broadcast frames.

Best regards,
Viktor Barna

2022-07-11 23:23:29

by Viktor Barna

[permalink] [raw]
Subject: Re: [RFC v2 38/96] cl8k: add mac80211.c

On Thu, 26 May 2022 21:49:17 +0200, [email protected] wrote:
> I'm really surprised to see this callback in a modern driver - wouldn't
> you want to support some form of multi-channel operation? Even just
> using the chanctx callbacks might make some of the DFS things you have
> there easier?

That is a good point. We picked the “old” channel API because it took less
time to implement; the benefits of channel contexts were not clear enough at
the time.
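
For reference, moving to the channel-context API would mostly mean filling in
the chanctx callbacks in the driver's struct ieee80211_ops; a rough sketch
(cl_ops and the cl_ops_* handler names are placeholders, and the exact
prototypes should be checked against the target kernel):

	static const struct ieee80211_ops cl_ops = {
		...
		.add_chanctx		= cl_ops_add_chanctx,
		.remove_chanctx		= cl_ops_remove_chanctx,
		.change_chanctx		= cl_ops_change_chanctx,
		.assign_vif_chanctx	= cl_ops_assign_vif_chanctx,
		.unassign_vif_chanctx	= cl_ops_unassign_vif_chanctx,
		...
	};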

> This feels ... odd. You really shouldn't have to look into the beacon to
> figure out these things?
>
> And SGI etc. are per-STA rate control parameters anyway? Hmm.

Information from this dynamic parsing is required for our driver and FW to
function properly.

> You have all this hardware/firmware and you implement this? Interesting
> design choice. One that I'm sure you'll revisit for WiFi 7 ;-)

Actually, the driver is doing that in the cl_tx_handle_beacon_tim function as
a result of the TBTT interrupt. However, the next version of the RFC will not
include that code; we are moving the routine to the FW side due to timing
issues in multi-client setups.

Best regards,
Viktor Barna

2022-07-13 07:42:50

by Kalle Valo

[permalink] [raw]
Subject: Re: [RFC v2 04/96] cl8k: add Makefile

[email protected] writes:

> +cl-objs += \
> + wrs.o \
> + phy.o \
> + key.o \
> + sta.o \
> + hw.o \
> + chip.o \
> + fw.o \
> + utils.o \
> + channel.o \
> + rx.o \
> + tx.o \
> + main.o \
> + mac_addr.o \
> + ampdu.o \
> + dfs.o \
> + enhanced_tim.o \
> + e2p.o \
> + calib.o \
> + stats.o \
> + power.o \
> + motion_sense.o \
> + bf.o \
> + sounding.o \
> + debug.o \
> + temperature.o \
> + recovery.o \
> + rates.o \
> + radio.o \
> + config.o \
> + tcv.o \
> + traffic.o \
> + vns.o \
> + maintenance.o \
> + ela.o \
> + rfic.o \
> + vif.o \
> + dsp.o \
> + pci.o \
> + version.o \
> + regdom.o \
> + mac80211.o \
> + platform.o \
> + scan.o
> +
> +ifneq ($(CONFIG_CL8K),)
> +cl8k-y += $(cl-objs)
> +endif

I don't understand why you need ifneq here. Please check how other
drivers (like iwlwifi) do it; the Makefile can be really simple.
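
For example, the usual kbuild pattern is just (assuming the module is still
called cl8k.o):

	obj-$(CONFIG_CL8K) += cl8k.o

	cl8k-y := \
		wrs.o \
		phy.o \
		key.o \
		...
		scan.o

with the object list written out in full and no ifneq needed.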

--
https://patchwork.kernel.org/project/linux-wireless/list/

https://wireless.wiki.kernel.org/en/developers/documentation/submittingpatches