Test Configuration
            This test was performed using iPerf2 with UDP and L4S (Low Latency, Low Loss, Scalable Throughput) enabled. The test was run on May 17, 2025, at 16:14:43 PDT.
            
            
                Command:
                src/iperf -c 10.19.85.66%eth1 -i 1 -e --udp-l4s
            
            
            Command Parameters Explained:
            
- -c 10.19.85.66%eth1: Connect to the server at IP 10.19.85.66 via interface eth1
- -i 1: Report interval of 1 second
- -e: Enhanced output format with additional metrics
- --udp-l4s: Enable UDP with L4S (Low Latency, Low Loss, Scalable Throughput)
- No -b parameter: Since no bandwidth limit was specified, the test runs in capacity-seeking mode (a scripted version of this invocation is sketched below)
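For repeated or automated runs, the same invocation can be wrapped in a short Python sketch. This is a minimal, hypothetical automation example: the binary path and addresses are taken from this report, and the 60-second timeout is an added assumption, not part of the test.

    import subprocess

    # Wrapper around the exact command used in this test.
    cmd = [
        "src/iperf",
        "-c", "10.19.85.66%eth1",  # server IP, sent via interface eth1
        "-i", "1",                 # 1-second report interval
        "-e",                      # enhanced output
        "--udp-l4s",               # UDP with L4S; capacity seeking since -b is absent
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=60)
    print(result.stdout)
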
Connection Details:
            
- Client IP: 10.19.85.56 (via eth1)
- Server IP: 10.19.85.66
- Client Port: 38047
- Server Port: 5001 (standard iPerf port)
- Protocol: UDP with L4S
- Datagram Size: 1470 bytes
- UDP Buffer Size: 208 KByte (default)
- Test Mode: Capacity-seeking mode (no bandwidth limit specified with -b)
                Note: A UDP-L4S test runs in capacity-seeking mode when no bandwidth limit is specified with the -b parameter; the test then automatically tries to utilize as much bandwidth as possible, adapting to network conditions.
             
        
        
            Test Results
            [ ID] Interval        Transfer     Bandwidth   Write/Err/Timeo  PPS  CE=cnt(%) Duration=avg/min/max ms (cnt)
[  1] 0.00-1.00 sec  1.21 MBytes  10.1 Mbits/sec   863/0/0     863 pps 0(0.0%) 0.000/0.000/0.000 0
[  1] 1.00-2.00 sec  3.73 MBytes  31.3 Mbits/sec   2663/0/0    2663 pps 0(0.0%) 0.000/0.000/0.000 0
[  1] 2.00-3.00 sec  9.87 MBytes  82.8 Mbits/sec   7040/0/0    7040 pps 1338(19.0%) 9.371/0.009/99.623 17
[  1] 3.00-4.00 sec  11.1 MBytes  93.3 Mbits/sec   7932/0/0    7932 pps 944(11.9%) 3.736/0.005/50.186 29
[  1] 4.00-5.00 sec  10.7 MBytes  89.5 Mbits/sec   7610/0/0    7610 pps 1918(25.2%) 6.866/0.004/98.972 33
[  1] 5.00-6.00 sec  11.3 MBytes  94.6 Mbits/sec   8041/0/0    8041 pps 1024(12.7%) 2.447/0.005/24.823 47
[  1] 6.00-7.00 sec  11.1 MBytes  93.1 Mbits/sec   7916/0/0    7916 pps 1820(23.0%) 1.953/0.004/49.521 102
[  1] 7.00-8.00 sec  10.5 MBytes  88.4 Mbits/sec   7518/0/0    7518 pps 973(12.9%) 4.335/0.004/49.998 26
[  1] 8.00-9.00 sec  24.3 MBytes   204 Mbits/sec   17350/0/0   17350 pps 6253(36.0%) 1.115/0.002/154.320 354
[  1] 9.00-10.00 sec  10.3 MBytes  86.2 Mbits/sec   7333/0/0    7333 pps 1181(16.1%) 2.411/0.005/94.360 63
[  1] 0.00-10.00 sec   104 MBytes  87.3 Mbits/sec   74268/0/0    6752 pps 15451(20.8%) 2.188/0.002/154.320 671
[  1] Sent 74269 datagrams
            
            
                Output Columns Explained:
                
- Interval: Time interval in seconds since the beginning of the test
- Transfer: Amount of data transferred during the interval
- Bandwidth: Throughput rate during the interval
- Write/Err/Timeo: Number of writes / errors / timeouts
- PPS: Packets per second
- CE=cnt(%): Count and percentage of packets marked CE (Congestion Experienced)
- Duration=avg/min/max ms (cnt): Average/minimum/maximum duration, in milliseconds, of the congestion-active state, plus a count of congestion-active episodes; the state toggles when the CE count goes from zero to non-zero or vice versa (a short parsing sketch for these columns follows this list)
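To make these columns concrete, here is a minimal parsing sketch in Python. It targets exactly the whitespace-separated layout shown in the results above; other iperf versions or option combinations format interval lines differently, so treat the pattern as an assumption tied to this output.

    import re

    # One interval line copied from the results above.
    line = "[  1] 2.00-3.00 sec  9.87 MBytes  82.8 Mbits/sec   7040/0/0    7040 pps 1338(19.0%) 9.371/0.009/99.623 17"

    fields = re.match(
        r"\[\s*\d+\]\s+(?P<start>[\d.]+)-(?P<end>[\d.]+) sec\s+"
        r"(?P<transfer>[\d.]+) MBytes\s+(?P<bw>[\d.]+) Mbits/sec\s+"
        r"(?P<writes>\d+)/(?P<errs>\d+)/(?P<timeo>\d+)\s+(?P<pps>\d+) pps\s+"
        r"(?P<ce>\d+)\((?P<ce_pct>[\d.]+)%\)\s+"
        r"(?P<dur_avg>[\d.]+)/(?P<dur_min>[\d.]+)/(?P<dur_max>[\d.]+)\s+(?P<dur_cnt>\d+)",
        line,
    )
    if fields:
        print(fields["bw"], "Mbit/s,", fields["ce_pct"], "% CE,",
              fields["dur_avg"], "ms avg congestion-active duration,",
              fields["dur_cnt"], "episodes")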
 
        
        
        
            Analysis
            
            Bandwidth Performance
            The test shows a ramp-up period during the first two seconds, followed by relatively stable bandwidth around 90 Mbps, with a significant spike to 204 Mbps during the 8-9 second interval. The average bandwidth across the entire 10-second test was 87.3 Mbps.
            
            This bandwidth pattern is consistent with capacity-seeking behavior, which occurs when no bandwidth limit (-b parameter) is specified for a UDP-L4S test. We can observe the following (a quick arithmetic check of the reported averages follows this list):
            
- Initial ramp-up from 10.1 Mbps to 82.8 Mbps during the first 3 seconds
- Stabilization around 90 Mbps as the test finds network capacity
- Probing for additional capacity at 8-9 seconds (204 Mbps spike)
- Return to stable operation after the probe
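The reported averages can be reproduced from the summary line with simple arithmetic over the numbers above (no new measurements), assuming iperf's usual conventions of binary MBytes and decimal Mbits:

    # Reproduce the 0.00-10.00 sec summary from its own raw counts.
    datagrams = 74268        # total writes in the summary line
    datagram_bytes = 1470    # datagram size from the connection details
    duration_s = 10.0

    total_bytes = datagrams * datagram_bytes
    mbytes = total_bytes / (1024 * 1024)               # iperf's MBytes are 2^20 bytes
    mbits_per_s = total_bytes * 8 / duration_s / 1e6   # iperf's Mbits are 10^6 bits

    print(f"{mbytes:.1f} MBytes, {mbits_per_s:.1f} Mbit/s average")
    # -> about 104 MBytes and 87.3 Mbit/s, matching the summary line
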
Congestion Control (L4S)
            The L4S protocol shows active congestion management through CE (Congestion Experienced) markings. CE marking first appears in the 2-3 second interval, indicating that the network had reached the point where congestion management became necessary. The CE marking rate varies significantly throughout the test:
            
- No congestion (0%) in the first two seconds during ramp-up
- Moderate congestion (11.9%-25.2%) during steady-state operation
- High congestion (36.0%) during the bandwidth spike at 8-9 seconds
- Overall average congestion marking of 20.8% across the entire test (reproduced from the summary counts in the check below)
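The overall marking rate follows directly from the summary counts; this is arithmetic only, not additional data:

    # Overall CE marking rate from the 0.00-10.00 sec summary counts.
    ce_marked = 15451
    datagrams = 74268
    print(f"{100 * ce_marked / datagrams:.1f}% of datagrams were CE-marked")
    # -> 20.8%, matching the CE column of the summary line
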
Latency Performance
            The latency characteristics show:
            
- No congestion state activation during the first two seconds
- When the congestion state becomes active (CE > 0), the duration measurements begin
- Average congestion state duration generally between 1 and 9 ms
- Maximum congestion state duration spike of 154.32 ms during the high-bandwidth period
- Overall average congestion state duration of 2.188 ms across the entire test
- The count values (17, 29, 33, etc.) represent the number of times the congestion state switched from non-active to active (an illustrative estimate of total congestion-active time follows this list)
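One rough way to read these figures is to multiply the average episode duration by the episode count to approximate the total time spent in the congestion-active state. This is an illustrative estimate under the assumption that episodes do not overlap; it is not a value iperf reports:

    avg_ms = 2.188    # average congestion-active episode duration (summary line)
    episodes = 671    # number of congestion-active episodes (summary line)
    test_ms = 10_000

    active_ms = avg_ms * episodes
    print(f"~{active_ms:.0f} ms congestion-active, "
          f"~{100 * active_ms / test_ms:.0f}% of the 10-second test")
    # -> roughly 1468 ms, i.e. about 15% of the test duration (estimate only)
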
Packet Delivery
            The sender-side statistics show excellent reliability, with no write errors or timeouts reported (the Err and Timeo fields of the Write/Err/Timeo column are 0 in every interval). A total of 74,269 datagrams were sent over the 10-second period.
            
            
                Key Takeaways
                
- L4S effectively managed congestion, maintaining a low average congestion-state duration (2.188 ms) despite high throughput
- The bandwidth spike to 204 Mbps at 8-9 seconds shows the network's ability to utilize available capacity
- The CE marking percentage (20.8% average) indicates active congestion signaling
- The congestion state was activated 671 times during the 10-second test, showing frequent adjustments
- Zero write errors or timeouts on the sender side indicate reliable transmission despite varying network conditions
- The protocol shows good responsiveness to network conditions, adjusting to maintain performance
 
        
        
            About L4S (Low Latency, Low Loss, Scalable Throughput)
            L4S is an advanced congestion control approach designed to provide low latency and high throughput simultaneously, which traditional congestion control mechanisms struggle to achieve together.
            
            Key L4S Features:
            
- ECN (Explicit Congestion Notification): Uses CE markings instead of packet drops to signal congestion
- Scalable Congestion Control: Response to congestion scales with the congestion level
- Low Queuing Delay: Maintains minimal queue sizes in network devices
- High Responsiveness: Quickly adapts to changing network conditions
In this test, we can see L4S in action through the CE markings, which show how the sender detects congestion signals and responds to them while keeping delay low. The combination of 87.3 Mbps average throughput with an average congestion-state duration of only 2.188 ms demonstrates L4S's ability to deliver high performance with minimal queuing delay.
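For readers who want to see the ECN mechanics in code, the sketch below marks outgoing UDP datagrams with the ECT(1) codepoint that identifies L4S traffic (RFC 9331). This is a minimal, hypothetical illustration of the marking itself, not a description of how iperf implements --udp-l4s; the destination address, port, and payload size are simply reused from this report.

    import socket

    ECT_1 = 0x01  # ECN codepoint 01 = ECT(1), the L4S identifier (RFC 9331)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Set the two ECN bits of the IP TOS byte to ECT(1) so that L4S-aware
    # queues can CE-mark these datagrams instead of dropping them.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, ECT_1)
    # Placeholder destination and payload size, taken from this report.
    sock.sendto(b"x" * 1470, ("10.19.85.66", 5001))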