The blamed commit broke tc-taprio schedules such as this one:
tc qdisc replace dev $swp1 root taprio \
num_tc 8 \
map 0 1 2 3 4 5 6 7 \
queues 1@0 1@1 1@2 1@3 1@4 1@5 1@6 1@7 \
base-time 0 \
sched-entry S 0x7f 990000 \
sched-entry S 0x80 10000 \
flags 0x2
because the gate entry for TC 7 (S 0x80 10000 ns) now has a static guard
band added earlier than its 'gate close' event, such that packet
overruns won't occur in the worst case of the largest packet possible.
Since guard bands are statically determined based on the per-tc
QSYS_QMAXSDU_CFG_* with a fallback on the port-based QSYS_PORT_MAX_SDU,
we need to discuss what happens with TC 7 depending on kernel version,
since the driver, prior to commit 55a515b1f5a9 ("net: dsa: felix: drop
oversized frames with tc-taprio instead of hanging the port"), did not
touch QSYS_QMAXSDU_CFG_*, and therefore relied on QSYS_PORT_MAX_SDU.
1 (before vsc9959_tas_guard_bands_update): QSYS_PORT_MAX_SDU defaults to
1518, and at gigabit this introduces a static guard band (independent
of packet sizes) of 12144 ns, plus QSYS::HSCH_MISC_CFG.FRM_ADJ (bit
time of 20 octets => 160 ns). But this is larger than the time window
itself, of 10000 ns. So, the queue system never considers a frame with
TC 7 as eligible for transmission, since the gate practically never
opens, and these frames are forever stuck in the TX queues and hang
the port.
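Put into numbers (a back-of-the-envelope sketch only, assuming gigabit speed,
i.e. 8 ns per octet; the helper below is purely illustrative and not part of
the driver):

#include <stdint.h>

/* Static guard band as described above: the per-tc QSYS_QMAXSDU_CFG_n
 * limit if set, otherwise the port-wide QSYS_PORT_MAX_SDU, plus the 20
 * octets of L1 overhead from QSYS::HSCH_MISC_CFG.FRM_ADJ.
 */
static uint64_t static_guard_band_ns(uint32_t qmaxsdu_cfg, uint32_t port_max_sdu)
{
	uint32_t octets = qmaxsdu_cfg ? qmaxsdu_cfg : port_max_sdu;

	return (uint64_t)(octets + 20) * 8;	/* 8 ns per octet at gigabit */
}

/* Case 1: static_guard_band_ns(0, 1518) == 12304 ns, already larger than
 * the entire 10000 ns gate interval of TC 7.
 */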
2 (after vsc9959_tas_guard_bands_update): Under the sole goal of
enabling oversized frame dropping, we make an effort to set
QSYS_QMAXSDU_CFG_7 to 1230 bytes. But QSYS_QMAXSDU_CFG_7 plays
one more role, which we did not take into account: per-tc static guard
band, expressed in L2 byte time (auto-adjusted for FCS and L1 overhead).
There is a discrepancy between what the driver thinks (that there is
no guard band, and 100% of min_gate_len[tc] is available for egress
scheduling) and what the hardware actually does (crops the equivalent
of QSYS_QMAXSDU_CFG_7 ns out of min_gate_len[tc]). In practice, this
means that the hardware thinks it has exactly 0 ns for scheduling tc 7.
In both cases, even minimum sized Ethernet frames are stuck on egress
rather than being considered for scheduling on TC 7, even if they would
fit given a proper configuration. Considering the current situation,
with vsc9959_tas_guard_bands_update(), frames between 60 octets and 1230
octets in size are not eligible for oversized dropping (because they are
smaller than QSYS_QMAXSDU_CFG_7), but won't be considered as eligible
for scheduling either, because the min_gate_len[7] (10000 ns) minus the
guard band determined by QSYS_QMAXSDU_CFG_7 (1230 octets * 8 ns per
octet == 9840 ns) minus the guard band auto-added for L1 overhead by
QSYS::HSCH_MISC_CFG.FRM_ADJ (20 octets * 8 ns per octet == 160 ns)
leaves 0 ns for scheduling in the queue system proper.
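The same dead end, spelled out as a sketch (again assuming gigabit, 8 ns per
octet; the helper is only for illustration, not driver code):

#include <stdint.h>

/* Case 2: the guard band derived from QSYS_QMAXSDU_CFG_7 plus the FRM_ADJ
 * overhead consume the entire gate interval of TC 7.
 */
static uint64_t tc7_usable_ns(void)
{
	uint64_t gate_ns = 10000;	/* sched-entry S 0x80 10000 */
	uint64_t qmaxsdu_ns = 1230 * 8;	/* 9840 ns of per-tc guard band */
	uint64_t frm_adj_ns = 20 * 8;	/* 160 ns of L1 overhead */

	return gate_ns - qmaxsdu_ns - frm_adj_ns;	/* == 0 ns */
}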
Investigating the hardware behavior, it becomes apparent that the queue
system needs precisely 33 ns of 'gate open' time in order to consider a
frame as eligible for scheduling to a tc. So the solution to this
problem is to amend vsc9959_tas_guard_bands_update() to shrink the
per-tc guard bands by exactly 33 ns, just enough for one
frame to be scheduled in that interval. This allows the queue system to
make forward progress for that port-tc, and prevents it from hanging.
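With the 33 ns reservation in place, the example above works out roughly as
below (back-of-the-envelope sketch only; the 1225 octet figure matches the
v1->v2 changelog, the helper itself is hypothetical):

#include <stdint.h>

/* After the fix: reserve 33 ns of the 10000 ns interval for scheduling,
 * and size QSYS_QMAXSDU_CFG_7 (and therefore the guard band) from what
 * remains. Gigabit speed assumed (8 ns per octet).
 */
static uint32_t tc7_max_sdu(void)
{
	uint64_t gate_ns = 10000;
	uint64_t reserved_ns = 33;	/* VSC9959_TAS_MIN_GATE_LEN_NS */
	uint64_t frm_adj_ns = 20 * 8;	/* 160 ns of L1 overhead */

	/* (10000 - 33 - 160) / 8 == 1225 octets */
	return (gate_ns - reserved_ns - frm_adj_ns) / 8;
}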
Fixes: 297c4de6f780 ("net: dsa: felix: re-enable TAS guard band mode")
Reported-by: Xiaoliang Yang <[email protected]>
Signed-off-by: Vladimir Oltean <[email protected]>
---
v1->v2:
- add Xiaoliang's Reported-by: tag, since he did report the correct
issue after all (my bad)
- reword commit message and explanations
- reserve just 33 ns of min_gate_len[tc] as useful space for scheduling,
rather than one whole max MTU frame worth of time. In the given
example of 10 us tc-taprio interval, this raises our useful max SDU
to 1225, compared to 601 octets in v1.
drivers/net/dsa/ocelot/felix_vsc9959.c | 35 +++++++++++++++++++++++---
1 file changed, 31 insertions(+), 4 deletions(-)
diff --git a/drivers/net/dsa/ocelot/felix_vsc9959.c b/drivers/net/dsa/ocelot/felix_vsc9959.c
index 1cdce8a98d1d..262be2cf4e4b 100644
--- a/drivers/net/dsa/ocelot/felix_vsc9959.c
+++ b/drivers/net/dsa/ocelot/felix_vsc9959.c
@@ -22,6 +22,7 @@
#define VSC9959_NUM_PORTS 6
#define VSC9959_TAS_GCL_ENTRY_MAX 63
+#define VSC9959_TAS_MIN_GATE_LEN_NS 33
#define VSC9959_VCAP_POLICER_BASE 63
#define VSC9959_VCAP_POLICER_MAX 383
#define VSC9959_SWITCH_PCI_BAR 4
@@ -1478,6 +1479,23 @@ static void vsc9959_mdio_bus_free(struct ocelot *ocelot)
mdiobus_free(felix->imdio);
}
+/* The switch considers any frame (regardless of size) as eligible for
+ * transmission if the traffic class gate is open for at least 33 ns.
+ * Overruns are prevented by cropping an interval at the end of the gate time
+ * slot for which egress scheduling is blocked, but we need to still keep 33 ns
+ * available for one packet to be transmitted, otherwise the port tc will hang.
+ * This function returns the size of a gate interval that remains available for
+ * setting the guard band, after reserving the space for one egress frame.
+ */
+static u64 vsc9959_tas_remaining_gate_len_ps(u64 gate_len_ns)
+{
+ /* Gate always open */
+ if (gate_len_ns == U64_MAX)
+ return U64_MAX;
+
+ return (gate_len_ns - VSC9959_TAS_MIN_GATE_LEN_NS) * PSEC_PER_NSEC;
+}
+
/* Extract shortest continuous gate open intervals in ns for each traffic class
* of a cyclic tc-taprio schedule. If a gate is always open, the duration is
* considered U64_MAX. If the gate is always closed, it is considered 0.
@@ -1596,10 +1614,13 @@ static void vsc9959_tas_guard_bands_update(struct ocelot *ocelot, int port)
vsc9959_tas_min_gate_lengths(ocelot_port->taprio, min_gate_len);
for (tc = 0; tc < OCELOT_NUM_TC; tc++) {
+ u64 remaining_gate_len_ps;
u32 max_sdu;
- if (min_gate_len[tc] == U64_MAX /* Gate always open */ ||
- min_gate_len[tc] * PSEC_PER_NSEC > needed_bit_time_ps) {
+ remaining_gate_len_ps =
+ vsc9959_tas_remaining_gate_len_ps(min_gate_len[tc]);
+
+ if (remaining_gate_len_ps > needed_bit_time_ps) {
/* Setting QMAXSDU_CFG to 0 disables oversized frame
* dropping.
*/
@@ -1612,9 +1633,15 @@ static void vsc9959_tas_guard_bands_update(struct ocelot *ocelot, int port)
/* If traffic class doesn't support a full MTU sized
* frame, make sure to enable oversize frame dropping
* for frames larger than the smallest that would fit.
+ *
+ * However, the exact same register, QSYS_QMAXSDU_CFG_*,
+ * controls not only oversized frame dropping, but also
+ * per-tc static guard band lengths, so it reduces the
+ * useful gate interval length. Therefore, be careful
+ * to calculate a guard band (and therefore max_sdu)
+ * that still leaves 33 ns available in the time slot.
*/
- max_sdu = div_u64(min_gate_len[tc] * PSEC_PER_NSEC,
- picos_per_byte);
+ max_sdu = div_u64(remaining_gate_len_ps, picos_per_byte);
/* A TC gate may be completely closed, which is a
* special case where all packets are oversized.
* Any limit smaller than 64 octets accomplishes this
--
2.34.1
On 2022-09-05 19:01, Vladimir Oltean wrote:
> [...]
I haven't looked at the overall code, but the solution described
above sounds good.
FWIW, I don't think such a schedule, where exactly one frame
can be sent, is very likely in the wild though. Imagine a piece
of software is generating one frame per cycle. It might happen
that during one (hardware) cycle there is no frame ready (because
it is software and it jitters), but then in the next cycle, there
are now two frames ready. In that case you'll always lag one frame
behind and you'll never recover from it.
Either I'd make sure I can send at least two frames in one cycle, or
my software would only send a frame every other cycle.
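To make that concrete, here's a toy model of what I mean (made-up numbers,
nothing to do with the driver):

#include <stdio.h>

/* Software aims at one frame per cycle but jitters: one cycle produces 0
 * frames, the next produces 2. The gate only lets one frame through per
 * cycle, so once the burst happens the backlog never drains again.
 */
int main(void)
{
	int backlog = 0;

	for (int cycle = 0; cycle < 8; cycle++) {
		int produced = (cycle == 3) ? 0 : (cycle == 4) ? 2 : 1;

		backlog += produced;
		if (backlog > 0)
			backlog--;	/* at most one frame sent per cycle */

		printf("cycle %d: backlog %d\n", cycle, backlog);
	}

	return 0;
}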
Thanks for taking care of this!
-michael
On Tue, Sep 06, 2022 at 12:53:20AM +0200, Michael Walle wrote:
> I haven't looked at the overall code, but the solution described
> above sounds good.
>
> FWIW, I don't think such a schedule, where exactly one frame
> can be sent, is very likely in the wild though. Imagine a piece
> of software is generating one frame per cycle. It might happen
> that during one (hardware) cycle there is no frame ready (because
> it is software and it jitters), but then in the next cycle, there
> are now two frames ready. In that case you'll always lag one frame
> behind and you'll never recover from it.
>
> Either I'd make sure I can send at least two frames in one cycle, or
> my software would only send a frame every other cycle.
A 10 us interval is a 10 us interval; it shouldn't matter whether you slice
it up as one 1250B frame, or two 500B frames, or four 200B frames, etc.
Except with the Microchip hardware implementation, it does. In v1, we
were slicing the 10 us interval in half for useful traffic and half for
the guard band. So we could fit more small packets in 5 us. In v2, at
your proposal, we are slicing it in 33 ns for the useful traffic, and
10 us - 33 ns for the guard band. This indeed allows for a single
packet, be it big or small. It's how the hardware works; without any
other input data point, a slicing point needs to be put somewhere.
Somehow it's just as arbitrary in v2 as it was in v1, just
optimized for a different metric which you're now saying is less practical.
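As a rough model of that slicing (my own back-of-the-envelope sketch, assuming
gigabit, 8 ns per octet and 20 octets of L1 overhead per frame; not driver
code):

#include <stdint.h>

/* Roughly how many frames of a given size can start transmission before
 * the guard band begins: one right at the window open, plus however many
 * more fit back-to-back in the usable time in front of the guard band.
 */
static uint64_t frames_per_window(uint64_t usable_ns, uint32_t octets)
{
	uint64_t frame_ns = (uint64_t)(octets + 20) * 8;

	return 1 + usable_ns / frame_ns;
}

/* v1 split of the 10 us window (~5000 ns usable): frames_per_window(5000, 64)
 * == 8 minimum-sized frames. v2 split (33 ns usable): frames_per_window(33, 64)
 * == 1 frame, of any size up to the max SDU.
 */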
By the way, I was a fool in last year's discussion on guard bands for
saying that there isn't any way for the user to control per-tc MTU.
IEEE 802.1Qbv, later standardized as IEEE 802.1Q clause 8.6.8.4
Enhancements for scheduled traffic, does contain a queueMaxSDUTable
structure with queueMaxSDU elements. I guess I have no choice except to
add this to the tc-taprio UAPI in a net-next patch, because as I've
explained above, even though I've solved the port hanging issue, this
hardware needs more fine tuning to obtain a differentiation between many
small packets vs few large packets per interval.
On 2022-09-06 02:11, Vladimir Oltean wrote:
> On Tue, Sep 06, 2022 at 12:53:20AM +0200, Michael Walle wrote:
>> [...]
>
> A 10 us interval is a 10 us interval; it shouldn't matter whether you slice
> it up as one 1250B frame, or two 500B frames, or four 200B frames, etc.
> Except with the Microchip hardware implementation, it does. In v1, we
> were slicing the 10 us interval in half for useful traffic and half for
> the guard band. So we could fit more small packets in 5 us. In v2, at
> your proposal, we are slicing it in 33 ns for the useful traffic, and
> 10 us - 33 ns for the guard band. This indeed allows for a single
> packet, be it big or small. It's how the hardware works; without any
> other input data point, a slicing point needs to be put somewhere.
> Somehow it's just as arbitrary in v2 as it was in v1, just
> optimized for a different metric which you're now saying is less
> practical.
I actually checked the code before writing and saw that one could
change the guard band by setting the MTU of the interface. I thought,
"ah ok, then there is no issue". After sleeping on it, I noticed that you'd
restrict the size of all the frames on the interface. Doh ;)
-michael