2022-06-10 10:48:46

by Sriram R

Subject: [PATCH 0/3] Mesh Fast xmit support

Fast xmit is currently supported for AP, STA and other interface types
where the destination does not change for the lifetime of the
association: the static parts of the header, such as the addresses, are
cached and reused directly for every Tx, while only the mutable fields,
such as the PN, are updated per frame. This technique is not directly
applicable to a mesh interface because of the dynamic nature of the
topology and the protocol. The header is built based on the mesh
destination that proxies a given external device, and the next hop
depends on that mesh destination. Moreover, the RA/A1, i.e. the next
hop towards the destination, can change at runtime as the best route is
re-selected based on airtime. To accommodate these changes while still
avoiding the header generation overhead, the complete header comprising
the MAC, Mesh and LLC parts is cached when data is first sent to a
given external destination. This cached header is then reused for every
subsequent frame sent to that external destination.
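
For illustration, a condensed sketch of what one cache entry holds; the
actual struct mhdr_cache_entry added in patch 3/3 also carries the
rhashtable/list nodes, a timestamp and a path change counter:

struct mhdr_cache_entry {
	u8 addr_key[ETH_ALEN];      /* lookup key: external (proxied) DA */
	u8 hdr[68];                 /* cached MAC + mesh + LLC header */
	u16 machdr_len;             /* length of the 802.11 MAC part */
	u16 hdrlen;                 /* total cached header length */
	struct ieee80211_key *key;  /* key of the current next hop */
	u8 pn_offs;                 /* where the PN is patched on each tx */
	struct mesh_path *mpath;    /* path to the proxying mesh STA */
	struct mesh_path *mppath;   /* proxy (MPP) entry for this DA */
	/* ... remaining bookkeeping fields omitted ... */
};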

To ensure that changes in the network are reflected in these cached
headers, changes to the mesh proxy path table and the mesh path table
are monitored, and the corresponding headers are updated or flushed as
applicable, so that the header used for a frame towards a given
destination is always valid.

Old headers are flushed by the mesh housekeeping timer, and least
recently used entries are evicted when the cache approaches its size
limit.

Only 6-address (6addr) frame headers are cached currently.
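
As background, the address layout assumed for the cached 6-address
(AE=10) mesh data frames is roughly the following (standard 802.11s
addressing, not something introduced by this series):

/*
 * addr1 (RA)      - next hop towards the mesh destination (may change
 *                   at runtime as routes are re-selected)
 * addr2 (TA)      - own address
 * addr3 (mesh DA) - mesh STA proxying the external destination
 * addr4 (mesh SA) - originating mesh STA
 * eaddr1 (a5)     - external destination address; used as the cache key
 * eaddr2 (a6)     - external source address; rewritten on every tx
 */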

Tested with ath11k driver.

Sriram R (3):
cfg80211: increase mesh config attribute bitmask size
cfg80211: Add provision for changing mesh header cache size
mac80211: Mesh Fast xmit support

include/net/cfg80211.h | 5 +-
include/uapi/linux/nl80211.h | 4 +
net/mac80211/cfg.c | 6 +-
net/mac80211/debugfs_netdev.c | 3 +
net/mac80211/ieee80211_i.h | 20 +++
net/mac80211/mesh.c | 2 +
net/mac80211/mesh.h | 45 +++++
net/mac80211/mesh_hwmp.c | 8 +-
net/mac80211/mesh_pathtbl.c | 396 ++++++++++++++++++++++++++++++++++++++++++
net/mac80211/rx.c | 9 +-
net/mac80211/tx.c | 90 ++++++++++
net/wireless/mesh.c | 3 +
net/wireless/nl80211.c | 12 +-
net/wireless/rdev-ops.h | 2 +-
net/wireless/trace.h | 6 +-
15 files changed, 596 insertions(+), 15 deletions(-)

--
2.7.4


2022-06-10 10:49:20

by Sriram R

Subject: [PATCH 3/3] mac80211: Mesh Fast xmit support

Support fast xmit for the mesh interface type by caching the header
corresponding to the Ethernet DA and reusing the cached header (MAC,
mesh, LLC) every time a packet is sent to that DA. This avoids multiple
path table lookups during header generation for each mesh packet tx.

Freshness of the header is verified before use by checking for changes
in the corresponding mesh paths, and the header and cache entry are
updated on the fly when a change is detected.

Mutable fields of the header, such as eaddr2/SA, TID, mesh SN and PN,
are updated for each xmit.

Each cache entry is ~100 bytes; least recently used and expired entries
are periodically removed when the cache gets almost full.
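
A condensed view of the cache-hit path, simplified from
mesh_fill_cached_hdr() below (hdr/meshhdr point into the rebuilt frame;
locking, path refresh and error handling omitted):

entry = rhashtable_lookup(&cache->rhead, eth_da, mesh_hdr_rht_params);
memcpy(skb_push(skb, entry->hdrlen), entry->hdr, entry->hdrlen);
*ieee80211_get_qos_ctl(hdr) = skb->priority & IEEE80211_QOS_CTL_TAG1D_MASK;
put_unaligned(cpu_to_le32(ifmsh->mesh_seqnum++), &meshhdr->seqnum);
memcpy(meshhdr->eaddr2, eth_sa, ETH_ALEN);
/* the PN at entry->pn_offs is filled in by ieee80211_xmit_fast_finish() */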

Signed-off-by: Sriram R <[email protected]>
---
net/mac80211/cfg.c | 2 +
net/mac80211/debugfs_netdev.c | 3 +
net/mac80211/ieee80211_i.h | 20 +++
net/mac80211/mesh.c | 2 +
net/mac80211/mesh.h | 45 +++++
net/mac80211/mesh_hwmp.c | 8 +-
net/mac80211/mesh_pathtbl.c | 396 ++++++++++++++++++++++++++++++++++++++++++
net/mac80211/rx.c | 9 +-
net/mac80211/tx.c | 90 ++++++++++
9 files changed, 571 insertions(+), 4 deletions(-)

diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
index a3d7950..38b718f 100644
--- a/net/mac80211/cfg.c
+++ b/net/mac80211/cfg.c
@@ -2383,6 +2383,8 @@ static int ieee80211_update_mesh_config(struct wiphy *wiphy,
if (_chg_mesh_attr(NL80211_MESHCONF_CONNECTED_TO_AS, mask))
conf->dot11MeshConnectedToAuthServer =
nconf->dot11MeshConnectedToAuthServer;
+ if (_chg_mesh_attr(NL80211_MESHCONF_HEADER_CACHE_SIZE, mask))
+ conf->hdr_cache_size = nconf->hdr_cache_size;
ieee80211_mbss_info_change_notify(sdata, BSS_CHANGED_BEACON);
return 0;
}
diff --git a/net/mac80211/debugfs_netdev.c b/net/mac80211/debugfs_netdev.c
index cf71484..9262699 100644
--- a/net/mac80211/debugfs_netdev.c
+++ b/net/mac80211/debugfs_netdev.c
@@ -663,6 +663,8 @@ IEEE80211_IF_FILE(dot11MeshConnectedToMeshGate,
IEEE80211_IF_FILE(dot11MeshNolearn, u.mesh.mshcfg.dot11MeshNolearn, DEC);
IEEE80211_IF_FILE(dot11MeshConnectedToAuthServer,
u.mesh.mshcfg.dot11MeshConnectedToAuthServer, DEC);
+IEEE80211_IF_FILE(hdr_cache_size,
+ u.mesh.mshcfg.hdr_cache_size, DEC);
#endif

#define DEBUGFS_ADD_MODE(name, mode) \
@@ -786,6 +788,7 @@ static void add_mesh_config(struct ieee80211_sub_if_data *sdata)
MESHPARAMS_ADD(dot11MeshConnectedToMeshGate);
MESHPARAMS_ADD(dot11MeshNolearn);
MESHPARAMS_ADD(dot11MeshConnectedToAuthServer);
+ MESHPARAMS_ADD(hdr_cache_size);
#undef MESHPARAMS_ADD
}
#endif
diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
index 86ef0a4..3efb845 100644
--- a/net/mac80211/ieee80211_i.h
+++ b/net/mac80211/ieee80211_i.h
@@ -671,6 +671,25 @@ struct mesh_table {
atomic_t entries; /* Up to MAX_MESH_NEIGHBOURS */
};

+/**
+ * struct mesh_hdr_cache - mesh fast xmit header cache
+ *
+ * @enabled: Flag to denote whether header caching is enabled
+ * @rhead: the rhashtable containing struct mhdr_cache_entry, keyed by addr_key.
+ * For the 6addr format, which is the only format currently supported by
+ * the cache, the key is the external destination address (a5)
+ * @walk_head: linked list containing all mhdr_cache_entry objects
+ * @walk_lock: lock protecting walk_head
+ * @size: number of entries in the cache
+ */
+struct mesh_hdr_cache {
+ bool enabled;
+ struct rhashtable rhead;
+ struct hlist_head walk_head;
+ spinlock_t walk_lock; /* protects cache entries */
+ u16 size;
+};
+
struct ieee80211_if_mesh {
struct timer_list housekeeping_timer;
struct timer_list mesh_path_timer;
@@ -749,6 +768,7 @@ struct ieee80211_if_mesh {
struct mesh_table mpp_paths; /* Store paths for MPP&MAP */
int mesh_paths_generation;
int mpp_paths_generation;
+ struct mesh_hdr_cache hdr_cache;
};

#ifdef CONFIG_MAC80211_MESH
diff --git a/net/mac80211/mesh.c b/net/mac80211/mesh.c
index 5275f4f..ba90e4a 100644
--- a/net/mac80211/mesh.c
+++ b/net/mac80211/mesh.c
@@ -782,6 +782,8 @@ static void ieee80211_mesh_housekeeping(struct ieee80211_sub_if_data *sdata)
changed = mesh_accept_plinks_update(sdata);
ieee80211_mbss_info_change_notify(sdata, changed);

+ mesh_hdr_cache_manage(sdata);
+
mod_timer(&ifmsh->housekeeping_timer,
round_jiffies(jiffies +
IEEE80211_MESH_HOUSEKEEPING_INTERVAL));
diff --git a/net/mac80211/mesh.h b/net/mac80211/mesh.h
index b2b717a..ffeef14 100644
--- a/net/mac80211/mesh.h
+++ b/net/mac80211/mesh.h
@@ -127,6 +127,44 @@ struct mesh_path {
u32 path_change_count;
};

+#define MESH_HDR_CACHE_TIMEOUT 8000 /* msecs */
+
+#define MESH_HDR_MAX_LEN 68 /* mac+mesh+rfc1042 hdr */
+
+/**
+ * struct mhdr_cache_entry - Cached Mesh header entry
+ * @addr_key: The Ethernet DA which is the key for this entry
+ * @hdr: The cached header
+ * @machdr_len: Total length of the MAC header, including crypto header space if any
+ * @hdrlen: Total length of the cached header (MAC + mesh + LLC)
+ * @key: Key corresponding to the nexthop stored in the header
+ * @pn_offs: Offset to PN which is updated for every xmit
+ * @band: band used for tx
+ * @walk_list: list containing all the cached header entries
+ * @rhash: rhashtable head node
+ * @mpath: The Mesh path corresponding to the Mesh DA
+ * @mppath: The MPP entry corresponding to this DA
+ * @timestamp: Last used time of this entry
+ * @rcu: rcu to free this entry
+ * @path_change_count: Stored path change value corresponding to the mpath
+ */
+struct mhdr_cache_entry {
+ u8 addr_key[ETH_ALEN];
+ u8 hdr[MESH_HDR_MAX_LEN];
+ u16 machdr_len;
+ u16 hdrlen;
+ struct ieee80211_key *key;
+ u8 pn_offs;
+ u8 band;
+ struct hlist_node walk_list;
+ struct rhash_head rhash;
+ struct mesh_path *mpath;
+ struct mesh_path *mppath;
+ unsigned long timestamp;
+ struct rcu_head rcu;
+ u32 path_change_count;
+};
+
/* Recent multicast cache */
/* RMC_BUCKETS must be a power of 2, maximum 256 */
#define RMC_BUCKETS 256
@@ -299,6 +337,13 @@ void mesh_path_tx_root_frame(struct ieee80211_sub_if_data *sdata);

bool mesh_action_is_path_sel(struct ieee80211_mgmt *mgmt);

+struct mhdr_cache_entry *mesh_fill_cached_hdr(struct ieee80211_sub_if_data *sdata,
+ struct sk_buff *skb);
+void mesh_cache_hdr(struct ieee80211_sub_if_data *sdata,
+ struct sk_buff *skb, struct mesh_path *mpath);
+void mesh_hdr_cache_manage(struct ieee80211_sub_if_data *sdata);
+void mesh_hdr_cache_flush(struct mesh_path *mpath, bool is_mpp);
+void mesh_queue_preq(struct mesh_path *mpath, u8 flags);
#ifdef CONFIG_MAC80211_MESH
static inline
u32 mesh_plink_inc_estab_count(struct ieee80211_sub_if_data *sdata)
diff --git a/net/mac80211/mesh_hwmp.c b/net/mac80211/mesh_hwmp.c
index 58ebdcd..7910ba5 100644
--- a/net/mac80211/mesh_hwmp.c
+++ b/net/mac80211/mesh_hwmp.c
@@ -18,8 +18,6 @@

#define MAX_PREQ_QUEUE_LEN 64

-static void mesh_queue_preq(struct mesh_path *, u8);
-
static inline u32 u32_field_get(const u8 *preq_elem, int offset, bool ae)
{
if (ae)
@@ -972,7 +970,7 @@ void mesh_rx_path_sel_frame(struct ieee80211_sub_if_data *sdata,
* Locking: the function must be called from within a rcu read lock block.
*
*/
-static void mesh_queue_preq(struct mesh_path *mpath, u8 flags)
+void mesh_queue_preq(struct mesh_path *mpath, u8 flags)
{
struct ieee80211_sub_if_data *sdata = mpath->sdata;
struct ieee80211_if_mesh *ifmsh = &sdata->u.mesh;
@@ -1250,6 +1248,10 @@ int mesh_nexthop_lookup(struct ieee80211_sub_if_data *sdata,
memcpy(hdr->addr1, next_hop->sta.addr, ETH_ALEN);
memcpy(hdr->addr2, sdata->vif.addr, ETH_ALEN);
ieee80211_mps_set_frame_flags(sdata, next_hop, hdr);
+ /* Cache the whole header so it can be reused next time rather than
+ * resolving and building it every time
+ */
+ mesh_cache_hdr(sdata, skb, mpath);
return 0;
}

diff --git a/net/mac80211/mesh_pathtbl.c b/net/mac80211/mesh_pathtbl.c
index acc1c29..3e79534 100644
--- a/net/mac80211/mesh_pathtbl.c
+++ b/net/mac80211/mesh_pathtbl.c
@@ -14,6 +14,7 @@
#include "wme.h"
#include "ieee80211_i.h"
#include "mesh.h"
+#include <linux/rhashtable.h>

static void mesh_path_free_rcu(struct mesh_table *tbl, struct mesh_path *mpath);

@@ -32,6 +33,56 @@ static const struct rhashtable_params mesh_rht_params = {
.hashfn = mesh_table_hash,
};

+static const struct rhashtable_params mesh_hdr_rht_params = {
+ .nelem_hint = 10,
+ .automatic_shrinking = true,
+ .key_len = ETH_ALEN,
+ .key_offset = offsetof(struct mhdr_cache_entry, addr_key),
+ .head_offset = offsetof(struct mhdr_cache_entry, rhash),
+ .hashfn = mesh_table_hash,
+};
+
+static void mesh_hdr_cache_entry_free(void *ptr, void *tblptr)
+{
+ struct mhdr_cache_entry *mhdr = ptr;
+
+ kfree_rcu(mhdr, rcu);
+}
+
+static void mesh_hdr_cache_deinit(struct ieee80211_sub_if_data *sdata)
+{
+ struct mesh_hdr_cache *cache;
+
+ cache = &sdata->u.mesh.hdr_cache;
+
+ if (!cache->enabled)
+ return;
+
+ rhashtable_free_and_destroy(&cache->rhead,
+ mesh_hdr_cache_entry_free, NULL);
+
+ cache->enabled = false;
+}
+
+static void mesh_hdr_cache_init(struct ieee80211_sub_if_data *sdata)
+{
+ struct ieee80211_local *local = sdata->local;
+ struct mesh_hdr_cache *cache;
+
+ cache = &sdata->u.mesh.hdr_cache;
+
+ cache->enabled = false;
+
+ if (!ieee80211_hw_check(&local->hw, SUPPORT_FAST_XMIT))
+ return;
+
+ rhashtable_init(&cache->rhead, &mesh_hdr_rht_params);
+ INIT_HLIST_HEAD(&cache->walk_head);
+ spin_lock_init(&cache->walk_lock);
+ cache->size = 0;
+ cache->enabled = true;
+}
+
static inline bool mpath_expired(struct mesh_path *mpath)
{
return (mpath->flags & MESH_PATH_ACTIVE) &&
@@ -381,6 +432,343 @@ struct mesh_path *mesh_path_new(struct ieee80211_sub_if_data *sdata,
return new_mpath;
}

+struct mhdr_cache_entry *mesh_fill_cached_hdr(struct ieee80211_sub_if_data *sdata,
+ struct sk_buff *skb)
+{
+ struct mesh_hdr_cache *cache;
+ struct mhdr_cache_entry *entry;
+ struct mesh_path *mpath, *mppath;
+ struct ieee80211s_hdr *meshhdr;
+ struct ieee80211_hdr *hdr;
+ struct sta_info *new_nhop;
+ struct ieee80211_key *key;
+ struct ethhdr *eth;
+ u8 sa[ETH_ALEN];
+ u8 tid;
+
+ cache = &sdata->u.mesh.hdr_cache;
+
+ if (!cache->enabled)
+ return NULL;
+
+ entry = rhashtable_lookup(&cache->rhead, skb->data,
+ mesh_hdr_rht_params);
+ if (!entry)
+ return NULL;
+
+ /* Avoid extra work in this path */
+ if (skb_headroom(skb) < (entry->hdrlen - ETH_HLEN + 2))
+ return NULL;
+
+ mpath = rcu_dereference(entry->mpath);
+ if (!mpath)
+ return NULL;
+
+ /* This check assumes that only 6addr frames are currently
+ * supported for caching
+ */
+ mppath = rcu_dereference(entry->mppath);
+ if (!mppath)
+ return NULL;
+
+ if (!(mpath->flags & MESH_PATH_ACTIVE))
+ return NULL;
+
+ if (mpath_expired(mpath))
+ return NULL;
+
+ /* If the skb is shared we need to obtain our own copy */
+ if (skb_shared(skb)) {
+ struct sk_buff *tmp_skb = skb;
+
+ skb = skb_clone(skb, GFP_ATOMIC);
+ kfree_skb(tmp_skb);
+
+ if (!skb)
+ return NULL;
+ }
+
+ /* In case there was a path refresh and next hop update after we
+ * last used this entry, update the next hop address.
+ */
+ spin_lock_bh(&mpath->state_lock);
+ if (entry->path_change_count != mpath->path_change_count) {
+ new_nhop = rcu_dereference(mpath->next_hop);
+ if (!new_nhop) {
+ spin_unlock_bh(&mpath->state_lock);
+ return NULL;
+ }
+ memcpy(&entry->hdr[4], new_nhop->sta.addr, ETH_ALEN);
+
+ /* update key. pn_offs will be same */
+ if (entry->key) {
+ key = rcu_access_pointer(new_nhop->ptk[new_nhop->ptk_idx]);
+ if (!key)
+ key = rcu_access_pointer(sdata->default_unicast_key);
+ rcu_assign_pointer(entry->key, key);
+ }
+ entry->path_change_count = mpath->path_change_count;
+ }
+ spin_unlock_bh(&mpath->state_lock);
+
+ /* backup eth SA to copy as eaddr2/SA in the mesh header */
+ eth = (struct ethhdr *)skb->data;
+ ether_addr_copy(sa, eth->h_source);
+
+ /* Pull DA:SA */
+ skb_pull(skb, ETH_ALEN * 2);
+
+ memcpy(skb_push(skb, entry->hdrlen), entry->hdr, entry->hdrlen);
+
+ meshhdr = (struct ieee80211s_hdr *)(skb->data + entry->machdr_len);
+ hdr = (struct ieee80211_hdr *)skb->data;
+
+ /* Update mutables */
+ tid = skb->priority & IEEE80211_QOS_CTL_TAG1D_MASK;
+ *ieee80211_get_qos_ctl(hdr) = tid;
+
+ put_unaligned(cpu_to_le32(sdata->u.mesh.mesh_seqnum), &meshhdr->seqnum);
+ sdata->u.mesh.mesh_seqnum++;
+
+ memcpy(meshhdr->eaddr2, sa, ETH_ALEN);
+ meshhdr->ttl = sdata->u.mesh.mshcfg.dot11MeshTTL;
+
+ if (mpath->flags & (MESH_PATH_REQ_QUEUED | MESH_PATH_FIXED))
+ goto out;
+
+ /* Refresh the path; if the next hop changes after the refresh, the
+ * cached hdr will be updated on the next lookup
+ */
+ if (time_after(jiffies,
+ mpath->exp_time -
+ msecs_to_jiffies(sdata->u.mesh.mshcfg.path_refresh_time)) &&
+ !(mpath->flags & MESH_PATH_RESOLVING) &&
+ !(mpath->flags & MESH_PATH_FIXED)) {
+ mesh_queue_preq(mpath, PREQ_Q_F_START | PREQ_Q_F_REFRESH);
+ }
+
+out:
+ mppath->exp_time = jiffies;
+ entry->timestamp = jiffies;
+
+ return entry;
+}
+
+void mesh_cache_hdr(struct ieee80211_sub_if_data *sdata,
+ struct sk_buff *skb, struct mesh_path *mpath)
+{
+ struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+ struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+ struct mesh_hdr_cache *cache;
+ struct mhdr_cache_entry *mhdr, *old_mhdr;
+ struct ieee80211s_hdr *meshhdr;
+ struct sta_info *next_hop;
+ struct ieee80211_key *key;
+ u8 band, pn_offs = 0, crypto_len = 0;
+ struct mesh_path *mppath;
+ u16 mshhdr_len;
+ int hdrlen;
+
+ if (sdata->noack_map)
+ return;
+
+ cache = &sdata->u.mesh.hdr_cache;
+
+ if (!cache->enabled)
+ return;
+
+ hdrlen = ieee80211_hdrlen(hdr->frame_control);
+
+ meshhdr = (struct ieee80211s_hdr *)(skb->data + hdrlen);
+
+ /* Currently supporting only 6addr hdr */
+ if (!(meshhdr->flags & MESH_FLAGS_AE_A5_A6))
+ return;
+
+ mshhdr_len = ieee80211_get_mesh_hdrlen(meshhdr);
+
+ spin_lock_bh(&cache->walk_lock);
+ if (cache->size > sdata->u.mesh.mshcfg.hdr_cache_size) {
+ spin_unlock_bh(&cache->walk_lock);
+ return;
+ }
+ spin_unlock_bh(&cache->walk_lock);
+
+ next_hop = rcu_dereference(mpath->next_hop);
+ if (!next_hop)
+ return;
+
+ /* This is required to keep the mppath alive */
+ mppath = mpp_path_lookup(sdata, meshhdr->eaddr1);
+
+ if (!mppath)
+ return;
+
+ band = info->band;
+
+ pn_offs = 0;
+ key = rcu_access_pointer(next_hop->ptk[next_hop->ptk_idx]);
+ if (!key)
+ key = rcu_access_pointer(sdata->default_unicast_key);
+
+ if (key) {
+ bool gen_iv, iv_spc;
+
+ gen_iv = key->conf.flags & IEEE80211_KEY_FLAG_GENERATE_IV;
+ iv_spc = key->conf.flags & IEEE80211_KEY_FLAG_PUT_IV_SPACE;
+
+ if (!(key->flags & KEY_FLAG_UPLOADED_TO_HARDWARE))
+ return;
+
+ if (key->flags & KEY_FLAG_TAINTED)
+ return;
+
+ switch (key->conf.cipher) {
+ case WLAN_CIPHER_SUITE_CCMP:
+ case WLAN_CIPHER_SUITE_CCMP_256:
+ if (gen_iv)
+ pn_offs = hdrlen;
+ if (gen_iv || iv_spc)
+ crypto_len = IEEE80211_CCMP_HDR_LEN;
+ break;
+ default:
+ /* Limiting supported ciphers for testing */
+ return;
+ }
+ hdr->frame_control |= cpu_to_le16(IEEE80211_FCTL_PROTECTED);
+ }
+
+ if ((hdrlen + crypto_len + mshhdr_len + sizeof(rfc1042_header)) >
+ MESH_HDR_MAX_LEN) {
+ WARN_ON_ONCE(1);
+ return;
+ }
+
+ mhdr = kzalloc(sizeof(*mhdr), GFP_KERNEL);
+ if (!mhdr)
+ return;
+
+ memcpy(mhdr->addr_key, meshhdr->eaddr1, ETH_ALEN);
+
+ mhdr->machdr_len = hdrlen + crypto_len;
+ mhdr->hdrlen = mhdr->machdr_len + mshhdr_len + sizeof(rfc1042_header);
+ rcu_assign_pointer(mhdr->mpath, mpath);
+ rcu_assign_pointer(mhdr->mppath, mppath);
+ rcu_assign_pointer(mhdr->key, key);
+ mhdr->timestamp = jiffies;
+ mhdr->band = band;
+ mhdr->pn_offs = pn_offs;
+
+ if (pn_offs) {
+ /* the invalid data copied to the PN location can be ignored since
+ * it will be overwritten during tx
+ */
+ memcpy(mhdr->hdr, skb->data, mhdr->machdr_len);
+
+ /* copy remaining hdr */
+ memcpy(mhdr->hdr + mhdr->machdr_len,
+ skb->data + mhdr->machdr_len - crypto_len,
+ mhdr->hdrlen - mhdr->machdr_len);
+ } else {
+ memcpy(mhdr->hdr, skb->data, mhdr->hdrlen);
+ }
+
+ if (key) {
+ hdr = (struct ieee80211_hdr *)mhdr->hdr;
+ hdr->frame_control |= cpu_to_le16(IEEE80211_FCTL_PROTECTED);
+ }
+
+ spin_lock_bh(&cache->walk_lock);
+ old_mhdr = rhashtable_lookup_get_insert_fast(&cache->rhead,
+ &mhdr->rhash,
+ mesh_hdr_rht_params);
+ if (old_mhdr) {
+ spin_unlock_bh(&cache->walk_lock);
+ kfree(mhdr);
+ return;
+ }
+
+ hlist_add_head(&mhdr->walk_list, &cache->walk_head);
+
+ cache->size++;
+ spin_unlock_bh(&cache->walk_lock);
+}
+
+void mesh_hdr_cache_manage(struct ieee80211_sub_if_data *sdata)
+{
+ struct mesh_hdr_cache *cache;
+ struct mhdr_cache_entry *entry;
+ struct hlist_node *n;
+
+ cache = &sdata->u.mesh.hdr_cache;
+
+ if (!cache->enabled)
+ return;
+
+ spin_lock_bh(&cache->walk_lock);
+ if (cache->size < ((sdata->u.mesh.mshcfg.hdr_cache_size * 2) / 3)) {
+ spin_unlock_bh(&cache->walk_lock);
+ return;
+ }
+
+ hlist_for_each_entry_safe(entry, n, &cache->walk_head, walk_list) {
+ if (time_before(jiffies,
+ entry->timestamp +
+ msecs_to_jiffies(MESH_HDR_CACHE_TIMEOUT)))
+ continue;
+
+ hlist_del_rcu(&entry->walk_list);
+ rhashtable_remove_fast(&cache->rhead, &entry->rhash, mesh_hdr_rht_params);
+ kfree_rcu(entry, rcu);
+ cache->size--;
+ }
+ spin_unlock_bh(&cache->walk_lock);
+}
+
+void mesh_hdr_cache_flush(struct mesh_path *mpath, bool is_mpp)
+{
+ struct ieee80211_sub_if_data *sdata = mpath->sdata;
+ struct mesh_hdr_cache *cache;
+ struct mhdr_cache_entry *entry;
+ struct hlist_node *n;
+ struct mesh_path *entry_mpath;
+
+ cache = &sdata->u.mesh.hdr_cache;
+
+ if (!cache->enabled)
+ return;
+
+ spin_lock_bh(&cache->walk_lock);
+ /* Only one header per mpp address is expected in the header cache */
+ if (is_mpp) {
+ entry = rhashtable_lookup(&cache->rhead, mpath->dst, mesh_hdr_rht_params);
+ if (entry) {
+ hlist_del_rcu(&entry->walk_list);
+ rhashtable_remove_fast(&cache->rhead, &entry->rhash, mesh_hdr_rht_params);
+ kfree_rcu(entry, rcu);
+ cache->size--;
+ }
+ spin_unlock_bh(&cache->walk_lock);
+ return;
+ }
+
+ hlist_for_each_entry_safe(entry, n, &cache->walk_head, walk_list) {
+ entry_mpath = rcu_dereference(entry->mpath);
+
+ if (!entry_mpath)
+ continue;
+
+ if (ether_addr_equal(entry_mpath->dst, mpath->dst)) {
+ hlist_del_rcu(&entry->walk_list);
+ rhashtable_remove_fast(&cache->rhead, &entry->rhash, mesh_hdr_rht_params);
+ kfree_rcu(entry, rcu);
+ cache->size--;
+ }
+ }
+ spin_unlock_bh(&cache->walk_lock);
+}
+
/**
* mesh_path_add - allocate and add a new path to the mesh path table
* @dst: destination address of the path (ETH_ALEN length)
@@ -521,6 +909,7 @@ static void mesh_path_free_rcu(struct mesh_table *tbl,

static void __mesh_path_del(struct mesh_table *tbl, struct mesh_path *mpath)
{
+ mesh_hdr_cache_flush(mpath, tbl == &mpath->sdata->u.mesh.mpp_paths);
hlist_del_rcu(&mpath->walk_list);
rhashtable_remove_fast(&tbl->rhead, &mpath->rhash, mesh_rht_params);
mesh_path_free_rcu(tbl, mpath);
@@ -739,7 +1128,10 @@ void mesh_path_flush_pending(struct mesh_path *mpath)
*/
void mesh_path_fix_nexthop(struct mesh_path *mpath, struct sta_info *next_hop)
{
+ struct sta_info *old_next_hop;
+
spin_lock_bh(&mpath->state_lock);
+ old_next_hop = rcu_dereference(mpath->next_hop);
mesh_path_assign_nexthop(mpath, next_hop);
mpath->sn = 0xffff;
mpath->metric = 0;
@@ -747,6 +1139,8 @@ void mesh_path_fix_nexthop(struct mesh_path *mpath, struct sta_info *next_hop)
mpath->exp_time = 0;
mpath->flags = MESH_PATH_FIXED | MESH_PATH_SN_VALID;
mesh_path_activate(mpath);
+ if (!old_next_hop || !ether_addr_equal(old_next_hop->addr, next_hop->addr))
+ mpath->path_change_count++;
spin_unlock_bh(&mpath->state_lock);
ewma_mesh_fail_avg_init(&next_hop->mesh->fail_avg);
/* init it at a low value - 0 start is tricky */
@@ -758,6 +1152,7 @@ void mesh_pathtbl_init(struct ieee80211_sub_if_data *sdata)
{
mesh_table_init(&sdata->u.mesh.mesh_paths);
mesh_table_init(&sdata->u.mesh.mpp_paths);
+ mesh_hdr_cache_init(sdata);
}

static
@@ -785,6 +1180,7 @@ void mesh_path_expire(struct ieee80211_sub_if_data *sdata)

void mesh_pathtbl_unregister(struct ieee80211_sub_if_data *sdata)
{
+ mesh_hdr_cache_deinit(sdata);
mesh_table_free(&sdata->u.mesh.mesh_paths);
mesh_table_free(&sdata->u.mesh.mpp_paths);
}
diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
index 3c08ae04..65557a9 100644
--- a/net/mac80211/rx.c
+++ b/net/mac80211/rx.c
@@ -2891,6 +2891,7 @@ ieee80211_rx_h_mesh_fwding(struct ieee80211_rx_data *rx)
struct mesh_path *mppath;
char *proxied_addr;
char *mpp_addr;
+ bool update = false;

if (is_multicast_ether_addr(hdr->addr1)) {
mpp_addr = hdr->addr3;
@@ -2910,12 +2911,18 @@ ieee80211_rx_h_mesh_fwding(struct ieee80211_rx_data *rx)
mpp_path_add(sdata, proxied_addr, mpp_addr);
} else {
spin_lock_bh(&mppath->state_lock);
- if (!ether_addr_equal(mppath->mpp, mpp_addr))
+ if (!ether_addr_equal(mppath->mpp, mpp_addr)) {
+ update = true;
memcpy(mppath->mpp, mpp_addr, ETH_ALEN);
+ }
mppath->exp_time = jiffies;
spin_unlock_bh(&mppath->state_lock);
}
rcu_read_unlock();
+
+ /* Flush any cached hdr if the external device moved to a new gate */
+ if (update)
+ mesh_hdr_cache_flush(mppath, true);
}

/* Frame has reached destination. Don't forward */
diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
index 0e4efc0..98b5a1d 100644
--- a/net/mac80211/tx.c
+++ b/net/mac80211/tx.c
@@ -2691,6 +2691,7 @@ static struct sk_buff *ieee80211_build_hdr(struct ieee80211_sub_if_data *sdata,
skb->data + ETH_ALEN);

}
+
chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
if (!chanctx_conf) {
ret = -ENOTCONN;
@@ -3497,6 +3498,91 @@ ieee80211_xmit_fast_finish(struct ieee80211_sub_if_data *sdata,
return TX_CONTINUE;
}

+static bool ieee80211_mesh_xmit_fast(struct ieee80211_sub_if_data *sdata,
+ struct sk_buff *skb, u32 ctrl_flags)
+{
+ struct ieee80211_local *local = sdata->local;
+ struct ieee80211_if_mesh *ifmsh = &sdata->u.mesh;
+ struct ieee80211_tx_data tx;
+ struct ieee80211_tx_info *info;
+ struct mhdr_cache_entry *entry;
+ u16 ethertype;
+ struct ieee80211_key *key;
+ struct sta_info *sta;
+
+ if (ctrl_flags & IEEE80211_TX_CTRL_SKIP_MPATH_LOOKUP)
+ return false;
+
+ if (ifmsh->mshcfg.dot11MeshNolearn)
+ return false;
+
+ if (!ieee80211_hw_check(&local->hw, SUPPORT_FAST_XMIT))
+ return false;
+
+ /* Add support for these cases later */
+ if (ifmsh->ps_peers_light_sleep || ifmsh->ps_peers_deep_sleep)
+ return false;
+
+ if (is_multicast_ether_addr(skb->data))
+ return false;
+
+ ethertype = (skb->data[12] << 8) | skb->data[13];
+
+ if (ethertype < ETH_P_802_3_MIN)
+ return false;
+
+ if (skb->sk && skb_shinfo(skb)->tx_flags & SKBTX_WIFI_STATUS)
+ return false;
+
+ if (skb->ip_summed == CHECKSUM_PARTIAL) {
+ skb_set_transport_header(skb,
+ skb_checksum_start_offset(skb));
+ if (skb_checksum_help(skb))
+ return false;
+ }
+
+ /* Fill cached header for this eth data */
+ entry = mesh_fill_cached_hdr(sdata, skb);
+
+ if (!entry)
+ return false;
+
+ sk_pacing_shift_update(skb->sk, sdata->local->hw.tx_sk_pacing_shift);
+
+ info = IEEE80211_SKB_CB(skb);
+ memset(info, 0, sizeof(*info));
+ info->band = entry->band;
+ info->control.vif = &sdata->vif;
+ info->flags = IEEE80211_TX_CTL_FIRST_FRAGMENT |
+ IEEE80211_TX_CTL_DONTFRAG;
+
+ info->control.flags = IEEE80211_TX_CTRL_FAST_XMIT;
+
+#ifdef CONFIG_MAC80211_DEBUGFS
+ if (local->force_tx_status)
+ info->flags |= IEEE80211_TX_CTL_REQ_TX_STATUS;
+#endif
+
+ sta = entry->mpath->next_hop;
+ key = entry->key;
+
+ __skb_queue_head_init(&tx.skbs);
+
+ tx.flags = IEEE80211_TX_UNICAST;
+ tx.local = local;
+ tx.sdata = sdata;
+ tx.sta = sta;
+ tx.key = key;
+ tx.skb = skb;
+
+ ieee80211_xmit_fast_finish(sdata, sta, entry->pn_offs,
+ key, &tx);
+
+ __skb_queue_tail(&tx.skbs, skb);
+ ieee80211_tx_frags(local, &sdata->vif, sta, &tx.skbs, false);
+ return true;
+}
+
static bool ieee80211_xmit_fast(struct ieee80211_sub_if_data *sdata,
struct sta_info *sta,
struct ieee80211_fast_tx *fast_tx,
@@ -4175,6 +4261,10 @@ void __ieee80211_subif_start_xmit(struct sk_buff *skb,

rcu_read_lock();

+ if (ieee80211_vif_is_mesh(&sdata->vif) &&
+ ieee80211_mesh_xmit_fast(sdata, skb, ctrl_flags))
+ goto out;
+
if (ieee80211_lookup_ra_sta(sdata, skb, &sta))
goto out_free;

--
2.7.4

2022-06-10 10:52:00

by Sriram R

Subject: [PATCH 2/3] cfg80211: Add provision for changing mesh header cache size

Add provision to update the header cache size. The default cache size
is 50 header entries, each corresponding to a different external
destination. If a bigger cache is needed for a given network topology,
the hdr_cache_size config can be updated.
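
For reference, a condensed sketch of how the configured value flows
once set, mirroring the nl80211 hunk below and the cache cap check
added in patch 3/3 (the BIT_ULL() form assumes the changed-attribute
mask is widened as done in patch 1/3):

/* nl80211: copy the attribute and mark it as changed */
if (tb[NL80211_MESHCONF_HEADER_CACHE_SIZE]) {
	cfg->hdr_cache_size =
		nla_get_u16(tb[NL80211_MESHCONF_HEADER_CACHE_SIZE]);
	mask |= BIT_ULL(NL80211_MESHCONF_HEADER_CACHE_SIZE - 1);
}

/* mac80211: apply the new value, then use it as the cache cap */
conf->hdr_cache_size = nconf->hdr_cache_size;
...
if (cache->size > sdata->u.mesh.mshcfg.hdr_cache_size)
	return; /* cache full, fall back to normal header generation */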

Signed-off-by: Sriram R <[email protected]>
---
include/net/cfg80211.h | 3 +++
include/uapi/linux/nl80211.h | 4 ++++
net/wireless/mesh.c | 3 +++
net/wireless/nl80211.c | 6 +++++-
4 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h
index 34bdf1d..ec19c62 100644
--- a/include/net/cfg80211.h
+++ b/include/net/cfg80211.h
@@ -2155,6 +2155,8 @@ struct bss_parameters {
* not be the optimal decision as a multi-hop route might be better. So
* if using this setting you will likely also want to disable
* dot11MeshForwarding and use another mesh routing protocol on top.
+ * @hdr_cache_size: Maximum number of entries the mesh header cache will
+ * hold before flushing old entries.
*/
struct mesh_config {
u16 dot11MeshRetryTimeout;
@@ -2188,6 +2190,7 @@ struct mesh_config {
u16 dot11MeshAwakeWindowDuration;
u32 plink_timeout;
bool dot11MeshNolearn;
+ u16 hdr_cache_size;
};

/**
diff --git a/include/uapi/linux/nl80211.h b/include/uapi/linux/nl80211.h
index d9490e3..b22c497 100644
--- a/include/uapi/linux/nl80211.h
+++ b/include/uapi/linux/nl80211.h
@@ -4556,6 +4556,9 @@ enum nl80211_mesh_power_mode {
* will advertise that it is connected to a authentication server
* in the mesh formation field.
*
+ * @NL80211_MESHCONF_HEADER_CACHE_SIZE: Maximum size of the header cache
+ * used for caching headers corresponding to an external destination.
+ *
* @__NL80211_MESHCONF_ATTR_AFTER_LAST: internal use
*/
enum nl80211_meshconf_params {
@@ -4591,6 +4594,7 @@ enum nl80211_meshconf_params {
NL80211_MESHCONF_CONNECTED_TO_GATE,
NL80211_MESHCONF_NOLEARN,
NL80211_MESHCONF_CONNECTED_TO_AS,
+ NL80211_MESHCONF_HEADER_CACHE_SIZE,

/* keep last */
__NL80211_MESHCONF_ATTR_AFTER_LAST,
diff --git a/net/wireless/mesh.c b/net/wireless/mesh.c
index e4e3631..f606777 100644
--- a/net/wireless/mesh.c
+++ b/net/wireless/mesh.c
@@ -21,6 +21,8 @@
#define MESH_ROOT_CONFIRMATION_INTERVAL 2000
#define MESH_DEFAULT_PLINK_TIMEOUT 1800 /* timeout in seconds */

+#define MESH_DEFAULT_HEADER_CACHE_SIZE 50
+
/*
* Minimum interval between two consecutive PREQs originated by the same
* interface
@@ -79,6 +81,7 @@ const struct mesh_config default_mesh_config = {
.dot11MeshAwakeWindowDuration = MESH_DEFAULT_AWAKE_WINDOW,
.plink_timeout = MESH_DEFAULT_PLINK_TIMEOUT,
.dot11MeshNolearn = false,
+ .hdr_cache_size = MESH_DEFAULT_HEADER_CACHE_SIZE,
};

const struct mesh_setup default_mesh_setup = {
diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
index dee0fa9..eae69ea 100644
--- a/net/wireless/nl80211.c
+++ b/net/wireless/nl80211.c
@@ -7672,7 +7672,9 @@ static int nl80211_get_mesh_config(struct sk_buff *skb,
nla_put_u8(msg, NL80211_MESHCONF_NOLEARN,
cur_params.dot11MeshNolearn) ||
nla_put_u8(msg, NL80211_MESHCONF_CONNECTED_TO_AS,
- cur_params.dot11MeshConnectedToAuthServer))
+ cur_params.dot11MeshConnectedToAuthServer) ||
+ nla_put_u16(msg, NL80211_MESHCONF_HEADER_CACHE_SIZE,
+ cur_params.hdr_cache_size))
goto nla_put_failure;
nla_nest_end(msg, pinfoattr);
genlmsg_end(msg, hdr);
@@ -7888,6 +7890,8 @@ do { \
NL80211_MESHCONF_PLINK_TIMEOUT, nla_get_u32);
FILL_IN_MESH_PARAM_IF_SET(tb, cfg, dot11MeshNolearn, mask,
NL80211_MESHCONF_NOLEARN, nla_get_u8);
+ FILL_IN_MESH_PARAM_IF_SET(tb, cfg, hdr_cache_size, mask,
+ NL80211_MESHCONF_HEADER_CACHE_SIZE, nla_get_u16);
if (mask_out)
*mask_out = mask;

--
2.7.4

2022-06-10 10:52:00

by Sriram R

Subject: [PATCH 1/3] cfg80211: increase mesh config attribute bitmask size

Increase the size of the bitmask used for indicating changed mesh
config attributes from 32 bit to 64 bit.

This is required for the subsequent patch, which adds a new mesh config
attribute.
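
For context, a small sketch of why the wider mask is needed: the
changed-attribute bitmask is indexed by attribute number minus one, and
with roughly 32 mesh config attributes already defined, the bit for the
next attribute can no longer be set safely with a plain 32-bit shift.
Illustration only, using the _chg_mesh_attr() helper updated in the
cfg.c hunk below and the attribute added in patch 2/3:

static inline bool _chg_mesh_attr(enum nl80211_meshconf_params parm, u64 mask)
{
	return (mask >> (parm - 1)) & 0x1;
}

/* setting the bit for a new attribute near/beyond bit 31: */
mask |= BIT_ULL(NL80211_MESHCONF_HEADER_CACHE_SIZE - 1);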

Signed-off-by: Sriram R <[email protected]>
---
include/net/cfg80211.h | 2 +-
net/mac80211/cfg.c | 4 ++--
net/wireless/nl80211.c | 6 +++---
net/wireless/rdev-ops.h | 2 +-
net/wireless/trace.h | 6 +++---
5 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h
index cc8a988..34bdf1d 100644
--- a/include/net/cfg80211.h
+++ b/include/net/cfg80211.h
@@ -4237,7 +4237,7 @@ struct cfg80211_ops {
struct net_device *dev,
struct mesh_config *conf);
int (*update_mesh_config)(struct wiphy *wiphy,
- struct net_device *dev, u32 mask,
+ struct net_device *dev, u64 mask,
const struct mesh_config *nconf);
int (*join_mesh)(struct wiphy *wiphy, struct net_device *dev,
const struct mesh_config *conf,
diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
index f7896f2..a3d7950 100644
--- a/net/mac80211/cfg.c
+++ b/net/mac80211/cfg.c
@@ -2206,7 +2206,7 @@ static int ieee80211_get_mesh_config(struct wiphy *wiphy,
return 0;
}

-static inline bool _chg_mesh_attr(enum nl80211_meshconf_params parm, u32 mask)
+static inline bool _chg_mesh_attr(enum nl80211_meshconf_params parm, u64 mask)
{
return (mask >> (parm-1)) & 0x1;
}
@@ -2269,7 +2269,7 @@ static int copy_mesh_setup(struct ieee80211_if_mesh *ifmsh,
}

static int ieee80211_update_mesh_config(struct wiphy *wiphy,
- struct net_device *dev, u32 mask,
+ struct net_device *dev, u64 mask,
const struct mesh_config *nconf)
{
struct mesh_config *conf;
diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
index 740b294..dee0fa9 100644
--- a/net/wireless/nl80211.c
+++ b/net/wireless/nl80211.c
@@ -7750,10 +7750,10 @@ static const struct nla_policy

static int nl80211_parse_mesh_config(struct genl_info *info,
struct mesh_config *cfg,
- u32 *mask_out)
+ u64 *mask_out)
{
struct nlattr *tb[NL80211_MESHCONF_ATTR_MAX + 1];
- u32 mask = 0;
+ u64 mask = 0;
u16 ht_opmode;

#define FILL_IN_MESH_PARAM_IF_SET(tb, cfg, param, mask, attr, fn) \
@@ -7957,7 +7957,7 @@ static int nl80211_update_mesh_config(struct sk_buff *skb,
struct net_device *dev = info->user_ptr[1];
struct wireless_dev *wdev = dev->ieee80211_ptr;
struct mesh_config cfg;
- u32 mask;
+ u64 mask;
int err;

if (wdev->iftype != NL80211_IFTYPE_MESH_POINT)
diff --git a/net/wireless/rdev-ops.h b/net/wireless/rdev-ops.h
index 439bcf5..e0fcaf12 100644
--- a/net/wireless/rdev-ops.h
+++ b/net/wireless/rdev-ops.h
@@ -330,7 +330,7 @@ rdev_get_mesh_config(struct cfg80211_registered_device *rdev,

static inline int
rdev_update_mesh_config(struct cfg80211_registered_device *rdev,
- struct net_device *dev, u32 mask,
+ struct net_device *dev, u64 mask,
const struct mesh_config *nconf)
{
int ret;
diff --git a/net/wireless/trace.h b/net/wireless/trace.h
index 228079d..bb4ce97d 100644
--- a/net/wireless/trace.h
+++ b/net/wireless/trace.h
@@ -1053,14 +1053,14 @@ TRACE_EVENT(rdev_return_int_mesh_config,
);

TRACE_EVENT(rdev_update_mesh_config,
- TP_PROTO(struct wiphy *wiphy, struct net_device *netdev, u32 mask,
+ TP_PROTO(struct wiphy *wiphy, struct net_device *netdev, u64 mask,
const struct mesh_config *conf),
TP_ARGS(wiphy, netdev, mask, conf),
TP_STRUCT__entry(
WIPHY_ENTRY
NETDEV_ENTRY
MESH_CFG_ENTRY
- __field(u32, mask)
+ __field(u64, mask)
),
TP_fast_assign(
WIPHY_ASSIGN;
@@ -1068,7 +1068,7 @@ TRACE_EVENT(rdev_update_mesh_config,
MESH_CFG_ASSIGN;
__entry->mask = mask;
),
- TP_printk(WIPHY_PR_FMT ", " NETDEV_PR_FMT ", mask: %u",
+ TP_printk(WIPHY_PR_FMT ", " NETDEV_PR_FMT ", mask: %llu",
WIPHY_PR_ARG, NETDEV_PR_ARG, __entry->mask)
);

--
2.7.4

2022-07-01 08:50:41

by Johannes Berg

Subject: Re: [PATCH 0/3] Mesh Fast xmit support

>
> Sriram R (3):
> cfg80211: increase mesh config attribute bitmask size
> cfg80211: Add provision for changing mesh header cache size
>

Is there really that much point in making that configurable? I have no
idea how a user could possibly set this to a reasonable value?

Maybe it would make more sense to auto-size it somehow depending on
memory? Or just pick a reasonable upper bound and leave it at that?

johannes

2022-07-01 09:22:26

by Sriram R

Subject: RE: [PATCH 0/3] Mesh Fast xmit support

>-----Original Message-----
>From: Johannes Berg <[email protected]>
>Sent: Friday, July 1, 2022 2:19 PM
>To: Sriram R (QUIC) <[email protected]>
>Cc: [email protected]
>Subject: Re: [PATCH 0/3] Mesh Fast xmit support
>
>>
>> Sriram R (3):
>> cfg80211: increase mesh config attribute bitmask size
>> cfg80211: Add provision for changing mesh header cache size
>>
>
>Is there really that much point in making that configurable? I have no idea how a
>user could possibly set this to a reasonable value?
Hi Johannes,
Initially it was set to a default size of 50 when the RFC was sent. There was a suggestion to
make it configurable so that users could configure this cache size proportional
to the required/anticipated network capacity.
>
>Maybe it would make more sense to auto-size it somehow depending on
>memory? Or just pick a reasonable upper bound and leave it at that?
>
Right, setting a good upper bound should be a relatively easy option, if we
don’t need this to be configurable.
Thanks,
Sriram.R

2022-07-01 09:33:53

by Johannes Berg

Subject: Re: [PATCH 0/3] Mesh Fast xmit support


>   Initially it was set to a default size of 50 when the RFC was sent.
> There was a suggestion to
> make it configurable so that users could configure this cache size
> proportional to the required/anticipated network capacity.

Oh, right, I missed that this was in the discussion earlier.

The question is what are you afraid of? I mean, even setting it to 500
wouldn't be a huge amount of memory use (~50k), and probably mostly
sufficient regardless of the network? And if you never see all those
nodes, then it wouldn't use all that memory either.

Timing out old entries will also keep memory usage down.

So are you worried about worst-case behaviour in attacks, e.g. somebody
attempting to join the mesh? But then if you're worried about that I
guess you have bigger problems (and should be using secure mesh), such
as the number of station entries?

Or an attacker mutating their Ethernet address behind some gateway? But
they still need to convince the station to even want to send traffic
there...

But even then, setting a much higher limit than 50 should cope with
these cases, while giving enough breathing room for the real usage?

johannes

2022-07-01 09:59:03

by Sriram R

Subject: RE: [PATCH 0/3] Mesh Fast xmit support

>-----Original Message-----
>From: Johannes Berg <[email protected]>
>Sent: Friday, July 1, 2022 2:57 PM
>To: Sriram R (QUIC) <[email protected]>; [email protected]
>Cc: [email protected]
>Subject: Re: [PATCH 0/3] Mesh Fast xmit support
>
>> Initially it was set it to a default size of 50 when RFC was sent.
>> There was a suggestion to
>> make it configurable where users could configure this cache size
>> proportional to the required/anticipated network capacity.
>
>Oh, right, I missed that this was in the discussion earlier.
>
>The question is what are you afraid of? I mean, even setting it to 500 wouldn't
>be a huge amount of memory use (~50k), and probably mostly sufficient
>regardless of the network? And if you never see all those nodes, then it wouldn't
>use all that memory either.
>
>Timing out old entries will also keep memory usage down.
>
>So are you worried about worst-case behaviour in attacks, e.g. somebody
>attempting to join the mesh? But then if you're worried about that I guess you
>have bigger problems (and should be using secure mesh), such as the number of
>station entries?
>
>Or an attacker mutating their Ethernet address behind some gateway? But they
>still need to convince the station to even want to send traffic there...
>
>But even then, setting a much higher limit than 50 should cope with these cases,
>while giving enough breathing room for the real usage?
>
Hi Johannes,

The only concern/reason is to not silently increase the memory requirement of Mesh
support with this patch. So was skeptical on having a higher cache size(like 250 or 500 max).
Hence had a value of 50 and left the configuration part for devices which needed higher
cache.
But as you mentioned, this is only runtime max memory and not default.
So we should be fine to set some high limit, If above is not a concern could we stick to
an upper limit of ~150-200 ?

Apart from that, though the points you mentioned are quite possible, the cache
Management logic will ensure to cleanup stale entries and in worst case will
use regular header generation process if cache is full. So I feel that should ensure
things work as normal even under attack.

Thanks,
Sriram.R

2022-07-01 10:00:42

by Johannes Berg

Subject: Re: [PATCH 0/3] Mesh Fast xmit support

Hi,

>    The only concern/reason is to not silently increase the memory
> requirement of Mesh
> support with this patch.

OK.

> So was skeptical on having a higher cache size(like 250 or 500 max).
> Hence had a value of 50 and left the configuration part for devices
> which needed higher
> cache.
> But as you mentioned, this is only runtime max memory and not default.
>  So we should be fine to set some high limit, If above is not a
> concern could we stick to
> an upper limit of ~150-200 ?

Right, I'm fine with that. I was just throwing out 500 as a random
number to show that it's not really a huge memory requirement.

> Apart from that, though the points you mentioned are quite possible,
> the cache
> Management logic will ensure to cleanup stale entries and in worst
> case will
> use regular header generation process if cache is full. So I feel that
> should ensure
> things work as normal even under attack.

Right.

johannes

2022-07-01 10:01:22

by Johannes Berg

Subject: Re: [PATCH 0/3] Mesh Fast xmit support

On Fri, 2022-07-01 at 11:59 +0200, Johannes Berg wrote:
>
> > So was skeptical on having a higher cache size(like 250 or 500 max).
> > Hence had a value of 50 and left the configuration part for devices
> > which needed higher
> > cache.
> > But as you mentioned, this is only runtime max memory and not default.
> >  So we should be fine to set some high limit, If above is not a
> > concern could we stick to
> > an upper limit of ~150-200 ?
>
> Right, I'm fine with that. I was just throwing out 500 as a random
> number to show that it's not really a huge memory requirement.
>

But maybe Felix wants to comment? Felix?

johannes

2022-07-15 02:44:43

by Sriram R

Subject: RE: [PATCH 0/3] Mesh Fast xmit support

>-----Original Message-----
>From: Johannes Berg <[email protected]>
>Sent: Friday, July 1, 2022 3:30 PM
>To: Sriram R (QUIC) <[email protected]>; [email protected]
>Cc: [email protected]
>Subject: Re: [PATCH 0/3] Mesh Fast xmit support
>
>On Fri, 2022-07-01 at 11:59 +0200, Johannes Berg wrote:
>>
>> > So was skeptical on having a higher cache size(like 250 or 500 max).
>> > Hence had a value of 50 and left the configuration part for devices
>> > which needed higher cache.
>> > But as you mentioned, this is only runtime max memory and not default.
>> > So we should be fine to set some high limit, If above is not a
>> > concern could we stick to an upper limit of ~150-200 ?
>>
>> Right, I'm fine with that. I was just throwing out 500 as a random
>> number to show that it's not really a huge memory requirement.
>>
>
>But maybe Felix wants to comment? Felix?
Hi Felix,

Could you kindly share your comments on this.

Thanks,
Sriram.R

2022-07-17 05:04:09

by Felix Fietkau

Subject: Re: [PATCH 0/3] Mesh Fast xmit support


On 15.07.22 04:16, Sriram R (QUIC) wrote:
>>-----Original Message-----
>>From: Johannes Berg <[email protected]>
>>Sent: Friday, July 1, 2022 3:30 PM
>>To: Sriram R (QUIC) <[email protected]>; [email protected]
>>Cc: [email protected]
>>Subject: Re: [PATCH 0/3] Mesh Fast xmit support
>>
>>On Fri, 2022-07-01 at 11:59 +0200, Johannes Berg wrote:
>>>
>>> > So was skeptical on having a higher cache size(like 250 or 500 max).
>>> > Hence had a value of 50 and left the configuration part for devices
>>> > which needed higher cache.
>>> > But as you mentioned, this is only runtime max memory and not default.
>>> > So we should be fine to set some high limit, If above is not a
>>> > concern could we stick to an upper limit of ~150-200 ?
>>>
>>> Right, I'm fine with that. I was just throwing out 500 as a random
>>> number to show that it's not really a huge memory requirement.
>>>
>>
>>But maybe Felix wants to comment? Felix?
> Hi Felix,
>
> Could you kindly share your comments on this.
I agree with making it big enough so that almost nobody has to tune it.
I think 512 would be a reasonable default.
By the way, if I'm counting correctly, you might be able to reduce the
size of the cache entries a bit by moving the 'key' field below the
'band' field, getting rid of some padding.

- Felix

2022-07-17 07:35:10

by Sriram R

Subject: RE: [PATCH 0/3] Mesh Fast xmit support

>>>> > So was skeptical on having a higher cache size(like 250 or 500 max).
>>>> > Hence had a value of 50 and left the configuration part for
>>>> > devices which needed higher cache.
>>>> > But as you mentioned, this is only runtime max memory and not default.
>>>> > So we should be fine to set some high limit, If above is not a
>>>> > concern could we stick to an upper limit of ~150-200 ?
>>>>
>>>> Right, I'm fine with that. I was just throwing out 500 as a random
>>>> number to show that it's not really a huge memory requirement.
>>>>
>>>
>>>But maybe Felix wants to comment? Felix?
>> Hi Felix,
>>
>> Could you kindly share your comments on this.
>I agree with making it big enough so that almost nobody has to tune it.
>I think 512 would be a reasonable default.
Sure.
>By the way, if I'm counting correctly, you might be able to reduce the size of the
>cache entries a bit by moving the 'key' field below the 'band' field, getting rid of
>some padding.
Oh okay, thanks for checking; let me revisit this packing.

Regards,
Sriram.R

>
>- Felix
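
For reference, the reordering Felix suggests would look roughly like
the following; on a typical 64-bit build it removes the two bytes of
padding before the pointer and the six bytes after 'band', shrinking
each entry by 8 bytes (sketch only, not a posted patch):

struct mhdr_cache_entry {
	u8 addr_key[ETH_ALEN];
	u8 hdr[MESH_HDR_MAX_LEN];
	u16 machdr_len;
	u16 hdrlen;
	u8 pn_offs;
	u8 band;
	struct ieee80211_key *key;   /* moved below 'band'; now naturally
				      * 8-byte aligned with no padding */
	struct hlist_node walk_list;
	struct rhash_head rhash;
	struct mesh_path *mpath;
	struct mesh_path *mppath;
	unsigned long timestamp;
	struct rcu_head rcu;
	u32 path_change_count;
};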