2022-11-02 22:53:02

by Alex Elder

Subject: [PATCH net-next v2 0/9] net: ipa: support more endpoints

This series adds support for more than 32 IPA endpoints. To do
this, five registers whose bits represent endpoint state are
replicated as needed to represent endpoints beyond 32. For existing
platforms, the number of endpoints is never greater than 32, so
there is just one of each register. IPA v5.0+ supports more than
that though; these changes prepare the code for that.

Beyond that, the IPA fields that represent endpoints in a 32-bit
bitmask are updated so that an arbitrary number of endpoints can be
represented. (There is one exception, explained in patch 7.)

The first two patches are somewhat unrelated cleanups that make use
of a helper function introduced recently.

The third and fourth parameterize the registers that represent
endpoints, so the proper register instance (and offset) can be
determined for any endpoint ID.

The last five convert fields representing endpoints to allow more
than 32 endpoints to be represented.
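
As a rough illustration only (these helpers are hypothetical, not
part of the series), the number of 32-bit register instances and of
bitmap words needed simply scales with the endpoint count:

#include <linux/bitops.h>	/* BITS_TO_LONGS() */
#include <linux/kernel.h>	/* DIV_ROUND_UP() */
#include <linux/types.h>

/* Hypothetical: how many 32-bit endpoint registers a platform needs */
static u32 example_endpoint_reg_count(u32 endpoint_count)
{
	return DIV_ROUND_UP(endpoint_count, 32);	/* e.g. 36 endpoints -> 2 */
}

/* Hypothetical: how many longs back an endpoint bitmap */
static u32 example_endpoint_bitmap_longs(u32 endpoint_count)
{
	return BITS_TO_LONGS(endpoint_count);		/* e.g. 36 endpoints -> 1 on 64-bit */
}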

Since v1, I have implemented Jakub's suggestions:
- Don't print a message on (bitmap) memory allocation failure
- Do not do "mass null checks" when allocating bitmaps
- Rework some code to ensure error path is sane

-Alex

Alex Elder (9):
net: ipa: reduce arguments to ipa_table_init_add()
net: ipa: use ipa_table_mem() in ipa_table_reset_add()
net: ipa: add a parameter to aggregation registers
net: ipa: add a parameter to suspend registers
net: ipa: use a bitmap for defined endpoints
net: ipa: use a bitmap for available endpoints
net: ipa: support more filtering endpoints
net: ipa: use a bitmap for set-up endpoints
net: ipa: use a bitmap for enabled endpoints

drivers/net/ipa/ipa.h | 26 ++--
drivers/net/ipa/ipa_endpoint.c | 185 ++++++++++++++++-----------
drivers/net/ipa/ipa_endpoint.h | 2 +-
drivers/net/ipa/ipa_interrupt.c | 34 +++--
drivers/net/ipa/ipa_main.c | 7 +-
drivers/net/ipa/ipa_table.c | 91 +++++++------
drivers/net/ipa/ipa_table.h | 6 +-
drivers/net/ipa/reg/ipa_reg-v3.1.c | 13 +-
drivers/net/ipa/reg/ipa_reg-v3.5.1.c | 13 +-
drivers/net/ipa/reg/ipa_reg-v4.11.c | 13 +-
drivers/net/ipa/reg/ipa_reg-v4.2.c | 13 +-
drivers/net/ipa/reg/ipa_reg-v4.5.c | 13 +-
drivers/net/ipa/reg/ipa_reg-v4.9.c | 13 +-
13 files changed, 241 insertions(+), 188 deletions(-)

--
2.34.1



2022-11-02 22:53:25

by Alex Elder

Subject: [PATCH net-next v2 9/9] net: ipa: use a bitmap for enabled endpoints

Replace the 32-bit unsigned used to track enabled endpoints with a
Linux bitmap, to allow an arbitrary number of endpoints to be
represented.
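
For reference, a minimal sketch of the Linux bitmap pattern adopted
here (illustrative function and variable names, not driver code):

#include <linux/bitmap.h>
#include <linux/errno.h>
#include <linux/gfp.h>

static int example_bitmap_usage(unsigned int endpoint_count, unsigned int id)
{
	unsigned long *enabled;

	enabled = bitmap_zalloc(endpoint_count, GFP_KERNEL);	/* all bits clear */
	if (!enabled)
		return -ENOMEM;

	__set_bit(id, enabled);			/* record the endpoint as enabled */
	if (test_bit(id, enabled))		/* is it currently enabled? */
		__clear_bit(id, enabled);	/* record it as disabled again */

	bitmap_free(enabled);

	return 0;
}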

Signed-off-by: Alex Elder <[email protected]>
---
drivers/net/ipa/ipa.h | 4 ++--
drivers/net/ipa/ipa_endpoint.c | 32 ++++++++++++++++++++------------
2 files changed, 22 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ipa/ipa.h b/drivers/net/ipa/ipa.h
index f14d1bd34e7e5..5372db58b5bdc 100644
--- a/drivers/net/ipa/ipa.h
+++ b/drivers/net/ipa/ipa.h
@@ -67,7 +67,7 @@ struct ipa_interrupt;
* @available: Bitmap of endpoints supported by hardware
* @filtered: Bitmap of endpoints that support filtering
* @set_up: Bitmap of endpoints that are set up for use
- * @enabled: Bit mask indicating endpoints enabled
+ * @enabled: Bitmap of currently enabled endpoints
* @modem_tx_count: Number of defined modem TX endpoints
* @endpoint: Array of endpoint information
* @channel_map: Mapping of GSI channel to IPA endpoint
@@ -125,7 +125,7 @@ struct ipa {
unsigned long *available; /* Supported by hardware */
u64 filtered; /* Support filtering (AP and modem) */
unsigned long *set_up;
- u32 enabled;
+ unsigned long *enabled;

u32 modem_tx_count;
struct ipa_endpoint endpoint[IPA_ENDPOINT_MAX];
diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
index 3fe20b4d9c90b..136932464261c 100644
--- a/drivers/net/ipa/ipa_endpoint.c
+++ b/drivers/net/ipa/ipa_endpoint.c
@@ -1666,6 +1666,7 @@ static void ipa_endpoint_program(struct ipa_endpoint *endpoint)

int ipa_endpoint_enable_one(struct ipa_endpoint *endpoint)
{
+ u32 endpoint_id = endpoint->endpoint_id;
struct ipa *ipa = endpoint->ipa;
struct gsi *gsi = &ipa->gsi;
int ret;
@@ -1675,37 +1676,35 @@ int ipa_endpoint_enable_one(struct ipa_endpoint *endpoint)
dev_err(&ipa->pdev->dev,
"error %d starting %cX channel %u for endpoint %u\n",
ret, endpoint->toward_ipa ? 'T' : 'R',
- endpoint->channel_id, endpoint->endpoint_id);
+ endpoint->channel_id, endpoint_id);
return ret;
}

if (!endpoint->toward_ipa) {
- ipa_interrupt_suspend_enable(ipa->interrupt,
- endpoint->endpoint_id);
+ ipa_interrupt_suspend_enable(ipa->interrupt, endpoint_id);
ipa_endpoint_replenish_enable(endpoint);
}

- ipa->enabled |= BIT(endpoint->endpoint_id);
+ __set_bit(endpoint_id, ipa->enabled);

return 0;
}

void ipa_endpoint_disable_one(struct ipa_endpoint *endpoint)
{
- u32 mask = BIT(endpoint->endpoint_id);
+ u32 endpoint_id = endpoint->endpoint_id;
struct ipa *ipa = endpoint->ipa;
struct gsi *gsi = &ipa->gsi;
int ret;

- if (!(ipa->enabled & mask))
+ if (!test_bit(endpoint_id, ipa->enabled))
return;

- ipa->enabled ^= mask;
+ __clear_bit(endpoint_id, endpoint->ipa->enabled);

if (!endpoint->toward_ipa) {
ipa_endpoint_replenish_disable(endpoint);
- ipa_interrupt_suspend_disable(ipa->interrupt,
- endpoint->endpoint_id);
+ ipa_interrupt_suspend_disable(ipa->interrupt, endpoint_id);
}

/* Note that if stop fails, the channel's state is not well-defined */
@@ -1713,7 +1712,7 @@ void ipa_endpoint_disable_one(struct ipa_endpoint *endpoint)
if (ret)
dev_err(&ipa->pdev->dev,
"error %d attempting to stop endpoint %u\n", ret,
- endpoint->endpoint_id);
+ endpoint_id);
}

void ipa_endpoint_suspend_one(struct ipa_endpoint *endpoint)
@@ -1722,7 +1721,7 @@ void ipa_endpoint_suspend_one(struct ipa_endpoint *endpoint)
struct gsi *gsi = &endpoint->ipa->gsi;
int ret;

- if (!(endpoint->ipa->enabled & BIT(endpoint->endpoint_id)))
+ if (!test_bit(endpoint->endpoint_id, endpoint->ipa->enabled))
return;

if (!endpoint->toward_ipa) {
@@ -1742,7 +1741,7 @@ void ipa_endpoint_resume_one(struct ipa_endpoint *endpoint)
struct gsi *gsi = &endpoint->ipa->gsi;
int ret;

- if (!(endpoint->ipa->enabled & BIT(endpoint->endpoint_id)))
+ if (!test_bit(endpoint->endpoint_id, endpoint->ipa->enabled))
return;

if (!endpoint->toward_ipa)
@@ -1971,6 +1970,8 @@ void ipa_endpoint_exit(struct ipa *ipa)
for_each_set_bit(endpoint_id, ipa->defined, ipa->endpoint_count)
ipa_endpoint_exit_one(&ipa->endpoint[endpoint_id]);

+ bitmap_free(ipa->enabled);
+ ipa->enabled = NULL;
bitmap_free(ipa->set_up);
ipa->set_up = NULL;
bitmap_free(ipa->defined);
@@ -2003,6 +2004,10 @@ int ipa_endpoint_init(struct ipa *ipa, u32 count,
if (!ipa->set_up)
goto err_free_defined;

+ ipa->enabled = bitmap_zalloc(ipa->endpoint_count, GFP_KERNEL);
+ if (!ipa->enabled)
+ goto err_free_set_up;
+
filtered = 0;
for (name = 0; name < count; name++, data++) {
if (ipa_gsi_endpoint_data_empty(data))
@@ -2027,6 +2032,9 @@ int ipa_endpoint_init(struct ipa *ipa, u32 count,

return 0;

+err_free_set_up:
+ bitmap_free(ipa->set_up);
+ ipa->set_up = NULL;
err_free_defined:
bitmap_free(ipa->defined);
ipa->defined = NULL;
--
2.34.1


2022-11-02 22:53:40

by Alex Elder

Subject: [PATCH net-next v2 2/9] net: ipa: use ipa_table_mem() in ipa_table_reset_add()

Similar to the previous commit, pass flags rather than a memory
region ID to ipa_table_reset_add(), and there use ipa_table_mem() to
look up the memory region affected based on those flags.

Currently all eight of these table memory regions are assumed to
exist, because they all have canaries within them. Stop assuming
that will always be the case, and in ipa_table_reset_add() allow
these memory regions to be non-existent.
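
For context, a sketch of how a (filter, hashed, ipv6) tuple could
map to the memory region IDs visible in the hunks below; the real
ipa_table_mem() may differ in detail:

static const struct ipa_mem *
example_table_mem(struct ipa *ipa, bool filter, bool hashed, bool ipv6)
{
	enum ipa_mem_id mem_id;

	if (filter)
		mem_id = ipv6 ? (hashed ? IPA_MEM_V6_FILTER_HASHED
					: IPA_MEM_V6_FILTER)
			      : (hashed ? IPA_MEM_V4_FILTER_HASHED
					: IPA_MEM_V4_FILTER);
	else
		mem_id = ipv6 ? (hashed ? IPA_MEM_V6_ROUTE_HASHED
					: IPA_MEM_V6_ROUTE)
			      : (hashed ? IPA_MEM_V4_ROUTE_HASHED
					: IPA_MEM_V4_ROUTE);

	/* ipa_mem_find() returns NULL if the region isn't defined */
	return ipa_mem_find(ipa, mem_id);
}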

Signed-off-by: Alex Elder <[email protected]>
---
drivers/net/ipa/ipa_table.c | 31 +++++++++++++++----------------
1 file changed, 15 insertions(+), 16 deletions(-)

diff --git a/drivers/net/ipa/ipa_table.c b/drivers/net/ipa/ipa_table.c
index 94bb7611e574b..3a14465bf8a64 100644
--- a/drivers/net/ipa/ipa_table.c
+++ b/drivers/net/ipa/ipa_table.c
@@ -200,16 +200,17 @@ static dma_addr_t ipa_table_addr(struct ipa *ipa, bool filter_mask, u16 count)
}

static void ipa_table_reset_add(struct gsi_trans *trans, bool filter,
- u16 first, u16 count, enum ipa_mem_id mem_id)
+ bool hashed, bool ipv6, u16 first, u16 count)
{
struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
- const struct ipa_mem *mem = ipa_mem_find(ipa, mem_id);
+ const struct ipa_mem *mem;
dma_addr_t addr;
u32 offset;
u16 size;

- /* Nothing to do if the table memory region is empty */
- if (!mem->size)
+ /* Nothing to do if the memory region doesn't exist or is empty */
+ mem = ipa_table_mem(ipa, filter, hashed, ipv6);
+ if (!mem || !mem->size)
return;

if (filter)
@@ -227,7 +228,7 @@ static void ipa_table_reset_add(struct gsi_trans *trans, bool filter,
* for the IPv4 and IPv6 non-hashed and hashed filter tables.
*/
static int
-ipa_filter_reset_table(struct ipa *ipa, enum ipa_mem_id mem_id, bool modem)
+ipa_filter_reset_table(struct ipa *ipa, bool hashed, bool ipv6, bool modem)
{
u32 ep_mask = ipa->filter_map;
u32 count = hweight32(ep_mask);
@@ -253,7 +254,7 @@ ipa_filter_reset_table(struct ipa *ipa, enum ipa_mem_id mem_id, bool modem)
if (endpoint->ee_id != ee_id)
continue;

- ipa_table_reset_add(trans, true, endpoint_id, 1, mem_id);
+ ipa_table_reset_add(trans, true, hashed, ipv6, endpoint_id, 1);
}

gsi_trans_commit_wait(trans);
@@ -269,18 +270,18 @@ static int ipa_filter_reset(struct ipa *ipa, bool modem)
{
int ret;

- ret = ipa_filter_reset_table(ipa, IPA_MEM_V4_FILTER, modem);
+ ret = ipa_filter_reset_table(ipa, false, false, modem);
if (ret)
return ret;

- ret = ipa_filter_reset_table(ipa, IPA_MEM_V4_FILTER_HASHED, modem);
+ ret = ipa_filter_reset_table(ipa, true, false, modem);
if (ret)
return ret;

- ret = ipa_filter_reset_table(ipa, IPA_MEM_V6_FILTER, modem);
+ ret = ipa_filter_reset_table(ipa, false, true, modem);
if (ret)
return ret;
- ret = ipa_filter_reset_table(ipa, IPA_MEM_V6_FILTER_HASHED, modem);
+ ret = ipa_filter_reset_table(ipa, true, true, modem);

return ret;
}
@@ -312,13 +313,11 @@ static int ipa_route_reset(struct ipa *ipa, bool modem)
count = ipa->route_count - modem_route_count;
}

- ipa_table_reset_add(trans, false, first, count, IPA_MEM_V4_ROUTE);
- ipa_table_reset_add(trans, false, first, count,
- IPA_MEM_V4_ROUTE_HASHED);
+ ipa_table_reset_add(trans, false, false, false, first, count);
+ ipa_table_reset_add(trans, false, true, false, first, count);

- ipa_table_reset_add(trans, false, first, count, IPA_MEM_V6_ROUTE);
- ipa_table_reset_add(trans, false, first, count,
- IPA_MEM_V6_ROUTE_HASHED);
+ ipa_table_reset_add(trans, false, false, true, first, count);
+ ipa_table_reset_add(trans, false, true, true, first, count);

gsi_trans_commit_wait(trans);

--
2.34.1


2022-11-02 22:53:42

by Alex Elder

Subject: [PATCH net-next v2 3/9] net: ipa: add a parameter to aggregation registers

Starting with IPA v5.0, a single IPA instance can have more than 32
endpoints defined. To handle this, each register that holds a
bitmap of IPA endpoints is replicated as needed to represent the
available endpoints.

To prepare for this, registers that represent endpoint IDs in a bit
mask will be defined to have a parameter, with a stride value of 4
bytes. The first 32 endpoints are represented in the first 32-bit
register, then the next (up to) 32 endpoints at an offset 4 bytes
higher. When accessing such a register, the endpoint ID divided
by 32 determines the offset, and the endpoint ID modulo 32 defines
the endpoint's bit position within the register.
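
Put another way (a sketch of the arithmetic only; the helper below
is illustrative, not driver code), endpoint 37 lands in unit 1,
i.e. at the base offset plus 4, at bit position 5:

#include <linux/bits.h>		/* BIT() */
#include <linux/types.h>

static void example_endpoint_reg_position(u32 endpoint_id, u32 *unit, u32 *mask)
{
	*unit = endpoint_id / 32;	/* which replicated register instance */
	*mask = BIT(endpoint_id % 32);	/* bit within that 32-bit instance */
}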

The first two registers we'll update for this are STATE_AGGR_ACTIVE
and AGGR_FORCE_CLOSE.

Until more than 32 endpoints are supported, this change has no
practical effect.

Signed-off-by: Alex Elder <[email protected]>
---
drivers/net/ipa/ipa_endpoint.c | 14 ++++++++++----
drivers/net/ipa/reg/ipa_reg-v3.1.c | 4 ++--
drivers/net/ipa/reg/ipa_reg-v3.5.1.c | 4 ++--
drivers/net/ipa/reg/ipa_reg-v4.11.c | 4 ++--
drivers/net/ipa/reg/ipa_reg-v4.2.c | 4 ++--
drivers/net/ipa/reg/ipa_reg-v4.5.c | 4 ++--
drivers/net/ipa/reg/ipa_reg-v4.9.c | 4 ++--
7 files changed, 22 insertions(+), 16 deletions(-)

diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
index 2a6184ea8f5ca..32559ed498c19 100644
--- a/drivers/net/ipa/ipa_endpoint.c
+++ b/drivers/net/ipa/ipa_endpoint.c
@@ -350,29 +350,35 @@ ipa_endpoint_program_delay(struct ipa_endpoint *endpoint, bool enable)

static bool ipa_endpoint_aggr_active(struct ipa_endpoint *endpoint)
{
- u32 mask = BIT(endpoint->endpoint_id);
+ u32 endpoint_id = endpoint->endpoint_id;
+ u32 mask = BIT(endpoint_id % 32);
struct ipa *ipa = endpoint->ipa;
+ u32 unit = endpoint_id / 32;
const struct ipa_reg *reg;
u32 val;

+ /* This works until we actually have more than 32 endpoints */
WARN_ON(!(mask & ipa->available));

reg = ipa_reg(ipa, STATE_AGGR_ACTIVE);
- val = ioread32(ipa->reg_virt + ipa_reg_offset(reg));
+ val = ioread32(ipa->reg_virt + ipa_reg_n_offset(reg, unit));

return !!(val & mask);
}

static void ipa_endpoint_force_close(struct ipa_endpoint *endpoint)
{
- u32 mask = BIT(endpoint->endpoint_id);
+ u32 endpoint_id = endpoint->endpoint_id;
+ u32 mask = BIT(endpoint_id % 32);
struct ipa *ipa = endpoint->ipa;
+ u32 unit = endpoint_id / 32;
const struct ipa_reg *reg;

+ /* This works until we actually have more than 32 endpoints */
WARN_ON(!(mask & ipa->available));

reg = ipa_reg(ipa, AGGR_FORCE_CLOSE);
- iowrite32(mask, ipa->reg_virt + ipa_reg_offset(reg));
+ iowrite32(mask, ipa->reg_virt + ipa_reg_n_offset(reg, unit));
}

/**
diff --git a/drivers/net/ipa/reg/ipa_reg-v3.1.c b/drivers/net/ipa/reg/ipa_reg-v3.1.c
index 0d002c3c38a26..0b6edc2912bd3 100644
--- a/drivers/net/ipa/reg/ipa_reg-v3.1.c
+++ b/drivers/net/ipa/reg/ipa_reg-v3.1.c
@@ -103,7 +103,7 @@ static const u32 ipa_reg_filt_rout_hash_flush_fmask[] = {
IPA_REG_FIELDS(FILT_ROUT_HASH_FLUSH, filt_rout_hash_flush, 0x0000090);

/* Valid bits defined by ipa->available */
-IPA_REG(STATE_AGGR_ACTIVE, state_aggr_active, 0x0000010c);
+IPA_REG_STRIDE(STATE_AGGR_ACTIVE, state_aggr_active, 0x0000010c, 0x0004);

IPA_REG(IPA_BCR, ipa_bcr, 0x000001d0);

@@ -116,7 +116,7 @@ static const u32 ipa_reg_local_pkt_proc_cntxt_fmask[] = {
IPA_REG_FIELDS(LOCAL_PKT_PROC_CNTXT, local_pkt_proc_cntxt, 0x000001e8);

/* Valid bits defined by ipa->available */
-IPA_REG(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec);
+IPA_REG_STRIDE(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec, 0x0004);

static const u32 ipa_reg_counter_cfg_fmask[] = {
[EOT_COAL_GRANULARITY] = GENMASK(3, 0),
diff --git a/drivers/net/ipa/reg/ipa_reg-v3.5.1.c b/drivers/net/ipa/reg/ipa_reg-v3.5.1.c
index 6e2f939b18f19..10f62f6aaf7a4 100644
--- a/drivers/net/ipa/reg/ipa_reg-v3.5.1.c
+++ b/drivers/net/ipa/reg/ipa_reg-v3.5.1.c
@@ -108,7 +108,7 @@ static const u32 ipa_reg_filt_rout_hash_flush_fmask[] = {
IPA_REG_FIELDS(FILT_ROUT_HASH_FLUSH, filt_rout_hash_flush, 0x0000090);

/* Valid bits defined by ipa->available */
-IPA_REG(STATE_AGGR_ACTIVE, state_aggr_active, 0x0000010c);
+IPA_REG_STRIDE(STATE_AGGR_ACTIVE, state_aggr_active, 0x0000010c, 0x0004);

IPA_REG(IPA_BCR, ipa_bcr, 0x000001d0);

@@ -121,7 +121,7 @@ static const u32 ipa_reg_local_pkt_proc_cntxt_fmask[] = {
IPA_REG_FIELDS(LOCAL_PKT_PROC_CNTXT, local_pkt_proc_cntxt, 0x000001e8);

/* Valid bits defined by ipa->available */
-IPA_REG(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec);
+IPA_REG_STRIDE(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec, 0x0004);

static const u32 ipa_reg_counter_cfg_fmask[] = {
/* Bits 0-3 reserved */
diff --git a/drivers/net/ipa/reg/ipa_reg-v4.11.c b/drivers/net/ipa/reg/ipa_reg-v4.11.c
index 8fd36569bb9f8..113a25c006da1 100644
--- a/drivers/net/ipa/reg/ipa_reg-v4.11.c
+++ b/drivers/net/ipa/reg/ipa_reg-v4.11.c
@@ -140,7 +140,7 @@ static const u32 ipa_reg_filt_rout_hash_flush_fmask[] = {
IPA_REG_FIELDS(FILT_ROUT_HASH_FLUSH, filt_rout_hash_flush, 0x000014c);

/* Valid bits defined by ipa->available */
-IPA_REG(STATE_AGGR_ACTIVE, state_aggr_active, 0x000000b4);
+IPA_REG_STRIDE(STATE_AGGR_ACTIVE, state_aggr_active, 0x000000b4, 0x0004);

static const u32 ipa_reg_local_pkt_proc_cntxt_fmask[] = {
[IPA_BASE_ADDR] = GENMASK(17, 0),
@@ -151,7 +151,7 @@ static const u32 ipa_reg_local_pkt_proc_cntxt_fmask[] = {
IPA_REG_FIELDS(LOCAL_PKT_PROC_CNTXT, local_pkt_proc_cntxt, 0x000001e8);

/* Valid bits defined by ipa->available */
-IPA_REG(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec);
+IPA_REG_STRIDE(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec, 0x0004);

static const u32 ipa_reg_ipa_tx_cfg_fmask[] = {
/* Bits 0-1 reserved */
diff --git a/drivers/net/ipa/reg/ipa_reg-v4.2.c b/drivers/net/ipa/reg/ipa_reg-v4.2.c
index f8e78e1907c83..c93f2da9290fc 100644
--- a/drivers/net/ipa/reg/ipa_reg-v4.2.c
+++ b/drivers/net/ipa/reg/ipa_reg-v4.2.c
@@ -132,7 +132,7 @@ static const u32 ipa_reg_filt_rout_hash_flush_fmask[] = {
IPA_REG_FIELDS(FILT_ROUT_HASH_FLUSH, filt_rout_hash_flush, 0x000014c);

/* Valid bits defined by ipa->available */
-IPA_REG(STATE_AGGR_ACTIVE, state_aggr_active, 0x000000b4);
+IPA_REG_STRIDE(STATE_AGGR_ACTIVE, state_aggr_active, 0x000000b4, 0x0004);

IPA_REG(IPA_BCR, ipa_bcr, 0x000001d0);

@@ -145,7 +145,7 @@ static const u32 ipa_reg_local_pkt_proc_cntxt_fmask[] = {
IPA_REG_FIELDS(LOCAL_PKT_PROC_CNTXT, local_pkt_proc_cntxt, 0x000001e8);

/* Valid bits defined by ipa->available */
-IPA_REG(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec);
+IPA_REG_STRIDE(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec, 0x0004);

static const u32 ipa_reg_counter_cfg_fmask[] = {
/* Bits 0-3 reserved */
diff --git a/drivers/net/ipa/reg/ipa_reg-v4.5.c b/drivers/net/ipa/reg/ipa_reg-v4.5.c
index d32b805abb11a..1615c5ead8cc1 100644
--- a/drivers/net/ipa/reg/ipa_reg-v4.5.c
+++ b/drivers/net/ipa/reg/ipa_reg-v4.5.c
@@ -134,7 +134,7 @@ static const u32 ipa_reg_filt_rout_hash_flush_fmask[] = {
IPA_REG_FIELDS(FILT_ROUT_HASH_FLUSH, filt_rout_hash_flush, 0x000014c);

/* Valid bits defined by ipa->available */
-IPA_REG(STATE_AGGR_ACTIVE, state_aggr_active, 0x000000b4);
+IPA_REG_STRIDE(STATE_AGGR_ACTIVE, state_aggr_active, 0x000000b4, 0x0004);

static const u32 ipa_reg_local_pkt_proc_cntxt_fmask[] = {
[IPA_BASE_ADDR] = GENMASK(17, 0),
@@ -145,7 +145,7 @@ static const u32 ipa_reg_local_pkt_proc_cntxt_fmask[] = {
IPA_REG_FIELDS(LOCAL_PKT_PROC_CNTXT, local_pkt_proc_cntxt, 0x000001e8);

/* Valid bits defined by ipa->available */
-IPA_REG(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec);
+IPA_REG_STRIDE(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec, 0x0004);

static const u32 ipa_reg_ipa_tx_cfg_fmask[] = {
/* Bits 0-1 reserved */
diff --git a/drivers/net/ipa/reg/ipa_reg-v4.9.c b/drivers/net/ipa/reg/ipa_reg-v4.9.c
index eabbc5451937b..4efc890d31589 100644
--- a/drivers/net/ipa/reg/ipa_reg-v4.9.c
+++ b/drivers/net/ipa/reg/ipa_reg-v4.9.c
@@ -139,7 +139,7 @@ static const u32 ipa_reg_filt_rout_hash_flush_fmask[] = {
IPA_REG_FIELDS(FILT_ROUT_HASH_FLUSH, filt_rout_hash_flush, 0x000014c);

/* Valid bits defined by ipa->available */
-IPA_REG(STATE_AGGR_ACTIVE, state_aggr_active, 0x000000b4);
+IPA_REG_STRIDE(STATE_AGGR_ACTIVE, state_aggr_active, 0x000000b4, 0x0004);

static const u32 ipa_reg_local_pkt_proc_cntxt_fmask[] = {
[IPA_BASE_ADDR] = GENMASK(17, 0),
@@ -150,7 +150,7 @@ static const u32 ipa_reg_local_pkt_proc_cntxt_fmask[] = {
IPA_REG_FIELDS(LOCAL_PKT_PROC_CNTXT, local_pkt_proc_cntxt, 0x000001e8);

/* Valid bits defined by ipa->available */
-IPA_REG(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec);
+IPA_REG_STRIDE(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec, 0x0004);

static const u32 ipa_reg_ipa_tx_cfg_fmask[] = {
/* Bits 0-1 reserved */
--
2.34.1


2022-11-02 22:54:36

by Alex Elder

Subject: [PATCH net-next v2 7/9] net: ipa: support more filtering endpoints

Prior to IPA v5.0, there could be no more than 32 endpoints.

A filter table begins with a bitmap indicating which endpoints have
a filter defined. That bitmap is currently assumed to fit in a
32-bit value.

Starting with IPA v5.0, more than 32 endpoints are supported, so
it's conceivable that a TX endpoint could have an ID of 32 or more.
Increase the size of the field representing endpoints that support
filtering to 64 bits, and rename it "filtered".

Unlike other similar fields, we do not use an (arbitrarily long)
Linux bitmap for this purpose. The reason is that if a filter table
ever *did* need to support more than 64 TX endpoints, its format
would change in ways we can't anticipate.
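
For context, a sketch of the sizing this format implies
(illustrative helper, not driver code): slot 0 of a filter table
holds the endpoint bitmap, followed by one slot per endpoint that
has a filter.

#include <linux/bitops.h>	/* hweight64() */
#include <linux/types.h>

static u32 example_filter_table_slots(u64 filtered)
{
	return 1 + hweight64(filtered);	/* bitmap slot + one per filtering endpoint */
}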

Have ipa_endpoint_init() return zero or a negative errno rather than
a mask of the endpoints that support filtering, and have that
function assign the "filtered" field directly.

Signed-off-by: Alex Elder <[email protected]>
---
v2: - Don't print a message on bitmap allocation failure
- Clean up error handling in ipa_endpoint_init()

drivers/net/ipa/ipa.h | 4 ++--
drivers/net/ipa/ipa_endpoint.c | 22 +++++++++++++---------
drivers/net/ipa/ipa_endpoint.h | 2 +-
drivers/net/ipa/ipa_main.c | 7 ++-----
drivers/net/ipa/ipa_table.c | 23 +++++++++++------------
drivers/net/ipa/ipa_table.h | 6 +++---
6 files changed, 32 insertions(+), 32 deletions(-)

diff --git a/drivers/net/ipa/ipa.h b/drivers/net/ipa/ipa.h
index c603575e2a58b..557101c2d5838 100644
--- a/drivers/net/ipa/ipa.h
+++ b/drivers/net/ipa/ipa.h
@@ -65,7 +65,7 @@ struct ipa_interrupt;
* @available_count: Number of defined bits in the available bitmap
* @defined: Bitmap of endpoints defined in config data
* @available: Bitmap of endpoints supported by hardware
- * @filter_map: Bit mask indicating endpoints that support filtering
+ * @filtered: Bitmap of endpoints that support filtering
* @set_up: Bit mask indicating endpoints set up
* @enabled: Bit mask indicating endpoints enabled
* @modem_tx_count: Number of defined modem TX endpoints
@@ -123,7 +123,7 @@ struct ipa {
u32 available_count;
unsigned long *defined; /* Defined in configuration data */
unsigned long *available; /* Supported by hardware */
- u32 filter_map;
+ u64 filtered; /* Support filtering (AP and modem) */
u32 set_up;
u32 enabled;

diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
index a7932e8d0b2bf..03811871dc4aa 100644
--- a/drivers/net/ipa/ipa_endpoint.c
+++ b/drivers/net/ipa/ipa_endpoint.c
@@ -1973,6 +1973,8 @@ void ipa_endpoint_exit(struct ipa *ipa)
{
u32 endpoint_id;

+ ipa->filtered = 0;
+
for_each_set_bit(endpoint_id, ipa->defined, ipa->endpoint_count)
ipa_endpoint_exit_one(&ipa->endpoint[endpoint_id]);

@@ -1984,25 +1986,25 @@ void ipa_endpoint_exit(struct ipa *ipa)
}

/* Returns a bitmask of endpoints that support filtering, or 0 on error */
-u32 ipa_endpoint_init(struct ipa *ipa, u32 count,
+int ipa_endpoint_init(struct ipa *ipa, u32 count,
const struct ipa_gsi_endpoint_data *data)
{
enum ipa_endpoint_name name;
- u32 filter_map;
+ u32 filtered;

BUILD_BUG_ON(!IPA_REPLENISH_BATCH);

/* Number of endpoints is one more than the maximum ID */
ipa->endpoint_count = ipa_endpoint_max(ipa, count, data) + 1;
if (!ipa->endpoint_count)
- return 0; /* Error */
+ return -EINVAL;

/* Initialize the defined endpoint bitmap */
ipa->defined = bitmap_zalloc(ipa->endpoint_count, GFP_KERNEL);
if (!ipa->defined)
- return 0; /* Error */
+ return -ENOMEM;

- filter_map = 0;
+ filtered = 0;
for (name = 0; name < count; name++, data++) {
if (ipa_gsi_endpoint_data_empty(data))
continue; /* Skip over empty slots */
@@ -2010,18 +2012,20 @@ u32 ipa_endpoint_init(struct ipa *ipa, u32 count,
ipa_endpoint_init_one(ipa, name, data);

if (data->endpoint.filter_support)
- filter_map |= BIT(data->endpoint_id);
+ filtered |= BIT(data->endpoint_id);
if (data->ee_id == GSI_EE_MODEM && data->toward_ipa)
ipa->modem_tx_count++;
}

- if (!ipa_filter_map_valid(ipa, filter_map))
+ if (!ipa_filtered_valid(ipa, filtered))
goto err_endpoint_exit;

- return filter_map; /* Non-zero bitmask */
+ ipa->filtered = filtered;
+
+ return 0;

err_endpoint_exit:
ipa_endpoint_exit(ipa);

- return 0; /* Error */
+ return -EINVAL;
}
diff --git a/drivers/net/ipa/ipa_endpoint.h b/drivers/net/ipa/ipa_endpoint.h
index d8dfa24f52140..4a5c3bc549df5 100644
--- a/drivers/net/ipa/ipa_endpoint.h
+++ b/drivers/net/ipa/ipa_endpoint.h
@@ -195,7 +195,7 @@ void ipa_endpoint_deconfig(struct ipa *ipa);
void ipa_endpoint_default_route_set(struct ipa *ipa, u32 endpoint_id);
void ipa_endpoint_default_route_clear(struct ipa *ipa);

-u32 ipa_endpoint_init(struct ipa *ipa, u32 count,
+int ipa_endpoint_init(struct ipa *ipa, u32 count,
const struct ipa_gsi_endpoint_data *data);
void ipa_endpoint_exit(struct ipa *ipa);

diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
index 8f6a6890697e8..ebb6c9b311eb9 100644
--- a/drivers/net/ipa/ipa_main.c
+++ b/drivers/net/ipa/ipa_main.c
@@ -788,12 +788,9 @@ static int ipa_probe(struct platform_device *pdev)
goto err_mem_exit;

/* Result is a non-zero mask of endpoints that support filtering */
- ipa->filter_map = ipa_endpoint_init(ipa, data->endpoint_count,
- data->endpoint_data);
- if (!ipa->filter_map) {
- ret = -EINVAL;
+ ret = ipa_endpoint_init(ipa, data->endpoint_count, data->endpoint_data);
+ if (ret)
goto err_gsi_exit;
- }

ret = ipa_table_init(ipa);
if (ret)
diff --git a/drivers/net/ipa/ipa_table.c b/drivers/net/ipa/ipa_table.c
index 3a14465bf8a64..cc9349a1d4df9 100644
--- a/drivers/net/ipa/ipa_table.c
+++ b/drivers/net/ipa/ipa_table.c
@@ -161,20 +161,20 @@ ipa_table_mem(struct ipa *ipa, bool filter, bool hashed, bool ipv6)
return ipa_mem_find(ipa, mem_id);
}

-bool ipa_filter_map_valid(struct ipa *ipa, u32 filter_map)
+bool ipa_filtered_valid(struct ipa *ipa, u64 filtered)
{
struct device *dev = &ipa->pdev->dev;
u32 count;

- if (!filter_map) {
+ if (!filtered) {
dev_err(dev, "at least one filtering endpoint is required\n");

return false;
}

- count = hweight32(filter_map);
+ count = hweight64(filtered);
if (count > ipa->filter_count) {
- dev_err(dev, "too many filtering endpoints (%u, max %u)\n",
+ dev_err(dev, "too many filtering endpoints (%u > %u)\n",
count, ipa->filter_count);

return false;
@@ -230,12 +230,11 @@ static void ipa_table_reset_add(struct gsi_trans *trans, bool filter,
static int
ipa_filter_reset_table(struct ipa *ipa, bool hashed, bool ipv6, bool modem)
{
- u32 ep_mask = ipa->filter_map;
- u32 count = hweight32(ep_mask);
+ u64 ep_mask = ipa->filtered;
struct gsi_trans *trans;
enum gsi_ee_id ee_id;

- trans = ipa_cmd_trans_alloc(ipa, count);
+ trans = ipa_cmd_trans_alloc(ipa, hweight64(ep_mask));
if (!trans) {
dev_err(&ipa->pdev->dev,
"no transaction for %s filter reset\n",
@@ -405,7 +404,7 @@ static void ipa_table_init_add(struct gsi_trans *trans, bool filter, bool ipv6)
* to hold the bitmap itself. The size of the hashed filter
* table is either the same as the non-hashed one, or zero.
*/
- count = 1 + hweight32(ipa->filter_map);
+ count = 1 + hweight64(ipa->filtered);
hash_count = hash_mem && hash_mem->size ? count : 0;
} else {
/* The size of a route table region determines the number
@@ -503,7 +502,7 @@ static void ipa_filter_tuple_zero(struct ipa_endpoint *endpoint)
static void ipa_filter_config(struct ipa *ipa, bool modem)
{
enum gsi_ee_id ee_id = modem ? GSI_EE_MODEM : GSI_EE_AP;
- u32 ep_mask = ipa->filter_map;
+ u64 ep_mask = ipa->filtered;

if (!ipa_table_hash_support(ipa))
return;
@@ -615,7 +614,7 @@ bool ipa_table_mem_valid(struct ipa *ipa, bool filter)
/* Filter tables must be able to hold the endpoint bitmap plus
* an entry for each endpoint that supports filtering
*/
- if (count < 1 + hweight32(ipa->filter_map))
+ if (count < 1 + hweight64(ipa->filtered))
return false;
} else {
/* Routing tables must be able to hold all modem entries,
@@ -720,9 +719,9 @@ int ipa_table_init(struct ipa *ipa)
* that option, so there's no shifting required.
*/
if (ipa->version < IPA_VERSION_5_0)
- *virt++ = cpu_to_le64((u64)ipa->filter_map << 1);
+ *virt++ = cpu_to_le64(ipa->filtered << 1);
else
- *virt++ = cpu_to_le64((u64)ipa->filter_map);
+ *virt++ = cpu_to_le64(ipa->filtered);

/* All the rest contain the DMA address of the zero rule */
le_addr = cpu_to_le64(addr);
diff --git a/drivers/net/ipa/ipa_table.h b/drivers/net/ipa/ipa_table.h
index 8a4dcd7df4c0f..7cc951904bb48 100644
--- a/drivers/net/ipa/ipa_table.h
+++ b/drivers/net/ipa/ipa_table.h
@@ -11,13 +11,13 @@
struct ipa;

/**
- * ipa_filter_map_valid() - Validate a filter table endpoint bitmap
+ * ipa_filtered_valid() - Validate a filter table endpoint bitmap
* @ipa: IPA pointer
- * @filter_mask: Filter table endpoint bitmap to check
+ * @filtered: Filter table endpoint bitmap to check
*
* Return: true if all regions are valid, false otherwise
*/
-bool ipa_filter_map_valid(struct ipa *ipa, u32 filter_mask);
+bool ipa_filtered_valid(struct ipa *ipa, u64 filtered);

/**
* ipa_table_hash_support() - Return true if hashed tables are supported
--
2.34.1


2022-11-02 22:54:52

by Alex Elder

Subject: [PATCH net-next v2 1/9] net: ipa: reduce arguments to ipa_table_init_add()

Recently ipa_table_mem() was added as a way to look up one of 8
possible memory regions by indicating whether it was a filter or
route table, hashed or not, and IPv6 or not.

We can simplify the interface to ipa_table_init_add() by passing two
flags to it instead of the opcode and the hashed and non-hashed
memory region IDs. The "filter" and "ipv6" flags are sufficient to
determine the opcode to use, and ipa_table_mem() can use them to
look up the correct memory regions as well.

Hashed tables might not be supported, but we already verify that the
number of entries in a (non-hashed) filter or route table is nonzero.
Stop assuming in ipa_table_init_add() that the hashed table memory
regions exist.

Signed-off-by: Alex Elder <[email protected]>
---
drivers/net/ipa/ipa_table.c | 37 ++++++++++++++++++-------------------
1 file changed, 18 insertions(+), 19 deletions(-)

diff --git a/drivers/net/ipa/ipa_table.c b/drivers/net/ipa/ipa_table.c
index cf3a3de239dc3..94bb7611e574b 100644
--- a/drivers/net/ipa/ipa_table.c
+++ b/drivers/net/ipa/ipa_table.c
@@ -376,14 +376,12 @@ int ipa_table_hash_flush(struct ipa *ipa)
return 0;
}

-static void ipa_table_init_add(struct gsi_trans *trans, bool filter,
- enum ipa_cmd_opcode opcode,
- enum ipa_mem_id mem_id,
- enum ipa_mem_id hash_mem_id)
+static void ipa_table_init_add(struct gsi_trans *trans, bool filter, bool ipv6)
{
struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
- const struct ipa_mem *hash_mem = ipa_mem_find(ipa, hash_mem_id);
- const struct ipa_mem *mem = ipa_mem_find(ipa, mem_id);
+ const struct ipa_mem *hash_mem;
+ enum ipa_cmd_opcode opcode;
+ const struct ipa_mem *mem;
dma_addr_t hash_addr;
dma_addr_t addr;
u32 zero_offset;
@@ -393,6 +391,14 @@ static void ipa_table_init_add(struct gsi_trans *trans, bool filter,
u16 count;
u16 size;

+ opcode = filter ? ipv6 ? IPA_CMD_IP_V6_FILTER_INIT
+ : IPA_CMD_IP_V4_FILTER_INIT
+ : ipv6 ? IPA_CMD_IP_V6_ROUTING_INIT
+ : IPA_CMD_IP_V4_ROUTING_INIT;
+
+ mem = ipa_table_mem(ipa, filter, false, ipv6);
+ hash_mem = ipa_table_mem(ipa, filter, true, ipv6);
+
/* Compute the number of table entries to initialize */
if (filter) {
/* The number of filtering endpoints determines number of
@@ -401,13 +407,13 @@ static void ipa_table_init_add(struct gsi_trans *trans, bool filter,
* table is either the same as the non-hashed one, or zero.
*/
count = 1 + hweight32(ipa->filter_map);
- hash_count = hash_mem->size ? count : 0;
+ hash_count = hash_mem && hash_mem->size ? count : 0;
} else {
/* The size of a route table region determines the number
* of entries it has.
*/
count = mem->size / sizeof(__le64);
- hash_count = hash_mem->size / sizeof(__le64);
+ hash_count = hash_mem ? hash_mem->size / sizeof(__le64) : 0;
}
size = count * sizeof(__le64);
hash_size = hash_count * sizeof(__le64);
@@ -458,17 +464,10 @@ int ipa_table_setup(struct ipa *ipa)
return -EBUSY;
}

- ipa_table_init_add(trans, false, IPA_CMD_IP_V4_ROUTING_INIT,
- IPA_MEM_V4_ROUTE, IPA_MEM_V4_ROUTE_HASHED);
-
- ipa_table_init_add(trans, false, IPA_CMD_IP_V6_ROUTING_INIT,
- IPA_MEM_V6_ROUTE, IPA_MEM_V6_ROUTE_HASHED);
-
- ipa_table_init_add(trans, true, IPA_CMD_IP_V4_FILTER_INIT,
- IPA_MEM_V4_FILTER, IPA_MEM_V4_FILTER_HASHED);
-
- ipa_table_init_add(trans, true, IPA_CMD_IP_V6_FILTER_INIT,
- IPA_MEM_V6_FILTER, IPA_MEM_V6_FILTER_HASHED);
+ ipa_table_init_add(trans, false, false);
+ ipa_table_init_add(trans, false, true);
+ ipa_table_init_add(trans, true, false);
+ ipa_table_init_add(trans, true, true);

gsi_trans_commit_wait(trans);

--
2.34.1


2022-11-02 23:10:00

by Alex Elder

Subject: [PATCH net-next v2 6/9] net: ipa: use a bitmap for available endpoints

Similar to the previous patch, replace the 32-bit unsigned used to
track endpoints supported by hardware with a Linux bitmap, to allow
an arbitrary number of endpoints to be represented.

Move ipa_endpoint_deconfig() above ipa_endpoint_config() and use
it in the error path of the latter function.

Signed-off-by: Alex Elder <[email protected]>
---
v2: - Use ipa_endpoint_deconfig() in ipa_endpoint_config() cleanup

drivers/net/ipa/ipa.h | 8 ++++--
drivers/net/ipa/ipa_endpoint.c | 49 ++++++++++++++++++++++-----------
drivers/net/ipa/ipa_interrupt.c | 8 ++++--
3 files changed, 43 insertions(+), 22 deletions(-)

diff --git a/drivers/net/ipa/ipa.h b/drivers/net/ipa/ipa.h
index 261c7263f9e31..c603575e2a58b 100644
--- a/drivers/net/ipa/ipa.h
+++ b/drivers/net/ipa/ipa.h
@@ -61,9 +61,10 @@ struct ipa_interrupt;
* @zero_addr: DMA address of preallocated zero-filled memory
* @zero_virt: Virtual address of preallocated zero-filled memory
* @zero_size: Size (bytes) of preallocated zero-filled memory
- * @endpoint_count: Number of endpoints represented by bit masks below
+ * @endpoint_count: Number of defined bits in most bitmaps below
+ * @available_count: Number of defined bits in the available bitmap
* @defined: Bitmap of endpoints defined in config data
- * @available: Bit mask indicating endpoints hardware supports
+ * @available: Bitmap of endpoints supported by hardware
* @filter_map: Bit mask indicating endpoints that support filtering
* @set_up: Bit mask indicating endpoints set up
* @enabled: Bit mask indicating endpoints enabled
@@ -119,8 +120,9 @@ struct ipa {

/* Bitmaps indicating endpoint state */
u32 endpoint_count;
+ u32 available_count;
unsigned long *defined; /* Defined in configuration data */
- u32 available; /* Supported by hardware */
+ unsigned long *available; /* Supported by hardware */
u32 filter_map;
u32 set_up;
u32 enabled;
diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
index abc939c272b5a..a7932e8d0b2bf 100644
--- a/drivers/net/ipa/ipa_endpoint.c
+++ b/drivers/net/ipa/ipa_endpoint.c
@@ -351,19 +351,17 @@ ipa_endpoint_program_delay(struct ipa_endpoint *endpoint, bool enable)
static bool ipa_endpoint_aggr_active(struct ipa_endpoint *endpoint)
{
u32 endpoint_id = endpoint->endpoint_id;
- u32 mask = BIT(endpoint_id % 32);
struct ipa *ipa = endpoint->ipa;
u32 unit = endpoint_id / 32;
const struct ipa_reg *reg;
u32 val;

- /* This works until we actually have more than 32 endpoints */
- WARN_ON(!(mask & ipa->available));
+ WARN_ON(!test_bit(endpoint_id, ipa->available));

reg = ipa_reg(ipa, STATE_AGGR_ACTIVE);
val = ioread32(ipa->reg_virt + ipa_reg_n_offset(reg, unit));

- return !!(val & mask);
+ return !!(val & BIT(endpoint_id % 32));
}

static void ipa_endpoint_force_close(struct ipa_endpoint *endpoint)
@@ -374,8 +372,7 @@ static void ipa_endpoint_force_close(struct ipa_endpoint *endpoint)
u32 unit = endpoint_id / 32;
const struct ipa_reg *reg;

- /* This works until we actually have more than 32 endpoints */
- WARN_ON(!(mask & ipa->available));
+ WARN_ON(!test_bit(endpoint_id, ipa->available));

reg = ipa_reg(ipa, AGGR_FORCE_CLOSE);
iowrite32(mask, ipa->reg_virt + ipa_reg_n_offset(reg, unit));
@@ -1841,6 +1838,13 @@ void ipa_endpoint_teardown(struct ipa *ipa)
ipa->set_up = 0;
}

+void ipa_endpoint_deconfig(struct ipa *ipa)
+{
+ ipa->available_count = 0;
+ bitmap_free(ipa->available);
+ ipa->available = NULL;
+}
+
int ipa_endpoint_config(struct ipa *ipa)
{
struct device *dev = &ipa->pdev->dev;
@@ -1863,7 +1867,13 @@ int ipa_endpoint_config(struct ipa *ipa)
* assume the configuration is valid.
*/
if (ipa->version < IPA_VERSION_3_5) {
- ipa->available = ~0;
+ ipa->available = bitmap_zalloc(IPA_ENDPOINT_MAX, GFP_KERNEL);
+ if (!ipa->available)
+ return -ENOMEM;
+ ipa->available_count = IPA_ENDPOINT_MAX;
+
+ bitmap_set(ipa->available, 0, IPA_ENDPOINT_MAX);
+
return 0;
}

@@ -1885,8 +1895,15 @@ int ipa_endpoint_config(struct ipa *ipa)
return -EINVAL;
}

+ /* Allocate and initialize the available endpoint bitmap */
+ ipa->available = bitmap_zalloc(limit, GFP_KERNEL);
+ if (!ipa->available)
+ return -ENOMEM;
+ ipa->available_count = limit;
+
/* Mark all supported RX and TX endpoints as available */
- ipa->available = GENMASK(limit - 1, rx_base) | GENMASK(tx_count - 1, 0);
+ bitmap_set(ipa->available, 0, tx_count);
+ bitmap_set(ipa->available, rx_base, rx_count);

for_each_set_bit(endpoint_id, ipa->defined, ipa->endpoint_count) {
struct ipa_endpoint *endpoint;
@@ -1894,13 +1911,13 @@ int ipa_endpoint_config(struct ipa *ipa)
if (endpoint_id >= limit) {
dev_err(dev, "invalid endpoint id, %u > %u\n",
endpoint_id, limit - 1);
- return -EINVAL;
+ goto err_free_bitmap;
}

- if (!(BIT(endpoint_id) & ipa->available)) {
+ if (!test_bit(endpoint_id, ipa->available)) {
dev_err(dev, "unavailable endpoint id %u\n",
endpoint_id);
- return -EINVAL;
+ goto err_free_bitmap;
}

/* Make sure it's pointing in the right direction */
@@ -1913,15 +1930,15 @@ int ipa_endpoint_config(struct ipa *ipa)
}

dev_err(dev, "endpoint id %u wrong direction\n", endpoint_id);
- return -EINVAL;
+ goto err_free_bitmap;
}

return 0;
-}

-void ipa_endpoint_deconfig(struct ipa *ipa)
-{
- ipa->available = 0; /* Nothing more to do */
+err_free_bitmap:
+ ipa_endpoint_deconfig(ipa);
+
+ return -EINVAL;
}

static void ipa_endpoint_init_one(struct ipa *ipa, enum ipa_endpoint_name name,
diff --git a/drivers/net/ipa/ipa_interrupt.c b/drivers/net/ipa/ipa_interrupt.c
index a62bc667bda0e..a49f66efacb87 100644
--- a/drivers/net/ipa/ipa_interrupt.c
+++ b/drivers/net/ipa/ipa_interrupt.c
@@ -132,14 +132,13 @@ static void ipa_interrupt_suspend_control(struct ipa_interrupt *interrupt,
u32 endpoint_id, bool enable)
{
struct ipa *ipa = interrupt->ipa;
- u32 mask = BIT(endpoint_id % 32);
u32 unit = endpoint_id / 32;
const struct ipa_reg *reg;
u32 offset;
+ u32 mask;
u32 val;

- /* This works until we actually have more than 32 endpoints */
- WARN_ON(!(mask & ipa->available));
+ WARN_ON(!test_bit(endpoint_id, ipa->available));

/* IPA version 3.0 does not support TX_SUSPEND interrupt control */
if (ipa->version == IPA_VERSION_3_0)
@@ -148,10 +147,13 @@ static void ipa_interrupt_suspend_control(struct ipa_interrupt *interrupt,
reg = ipa_reg(ipa, IRQ_SUSPEND_EN);
offset = ipa_reg_n_offset(reg, unit);
val = ioread32(ipa->reg_virt + offset);
+
+ mask = BIT(endpoint_id % 32);
if (enable)
val |= mask;
else
val &= ~mask;
+
iowrite32(val, ipa->reg_virt + offset);
}

--
2.34.1


2022-11-04 11:38:14

by patchwork-bot+netdevbpf

Subject: Re: [PATCH net-next v2 0/9] net: ipa: support more endpoints

Hello:

This series was applied to netdev/net-next.git (master)
by David S. Miller <[email protected]>:

On Wed, 2 Nov 2022 17:11:30 -0500 you wrote:
> This series adds support for more than 32 IPA endpoints. To do
> this, five registers whose bits represent endpoint state are
> replicated as needed to represent endpoints beyond 32. For existing
> platforms, the number of endpoints is never greater than 32, so
> there is just one of each register. IPA v5.0+ supports more than
> that though; these changes prepare the code for that.
>
> [...]

Here is the summary with links:
- [net-next,v2,1/9] net: ipa: reduce arguments to ipa_table_init_add()
https://git.kernel.org/netdev/net-next/c/5cb76899fb47
- [net-next,v2,2/9] net: ipa: use ipa_table_mem() in ipa_table_reset_add()
https://git.kernel.org/netdev/net-next/c/6337b147828b
- [net-next,v2,3/9] net: ipa: add a parameter to aggregation registers
https://git.kernel.org/netdev/net-next/c/1d8f16dbdf36
- [net-next,v2,4/9] net: ipa: add a parameter to suspend registers
(no matching commit)
- [net-next,v2,5/9] net: ipa: use a bitmap for defined endpoints
(no matching commit)
- [net-next,v2,6/9] net: ipa: use a bitmap for available endpoints
https://git.kernel.org/netdev/net-next/c/88de7672404d
- [net-next,v2,7/9] net: ipa: support more filtering endpoints
https://git.kernel.org/netdev/net-next/c/0f97fbd47858
- [net-next,v2,8/9] net: ipa: use a bitmap for set-up endpoints
https://git.kernel.org/netdev/net-next/c/ae5108e9b7fa
- [net-next,v2,9/9] net: ipa: use a bitmap for enabled endpoints
https://git.kernel.org/netdev/net-next/c/9b7a00653651

You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html