Gigabit performance on Apalis iMX6Q

We have run some Ethernet performance tests on the Apalis iMX6Q on the Ixora Carrier Board. With the Ixora board connected to a gigabit switch, we see a large difference in performance between the transmit and the receive direction. We used iperf for the transmit test:

On the host:

iperf -s -i 1
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.168.99.145 port 5001 connected with 192.168.99.81 port 44828
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0- 1.0 sec  51.2 MBytes   430 Mbits/sec
[  4]  1.0- 2.0 sec  51.9 MBytes   435 Mbits/sec
[  4]  2.0- 3.0 sec  51.8 MBytes   435 Mbits/sec
[  4]  3.0- 4.0 sec  51.5 MBytes   432 Mbits/sec
[  4]  4.0- 5.0 sec  52.0 MBytes   436 Mbits/sec
[  4]  5.0- 6.0 sec  51.9 MBytes   436 Mbits/sec
[  4]  6.0- 7.0 sec  51.9 MBytes   436 Mbits/sec
[  4]  7.0- 8.0 sec  51.9 MBytes   436 Mbits/sec
[  4]  8.0- 9.0 sec  51.9 MBytes   436 Mbits/sec
[  4]  9.0-10.0 sec  51.9 MBytes   436 Mbits/sec
[  4]  0.0-10.0 sec   519 MBytes   435 Mbits/sec

On the iMX6Q:

iperf -c 192.168.99.145
------------------------------------------------------------
Client connecting to 192.168.99.145, TCP port 5001
TCP window size: 43.8 KByte (default)
------------------------------------------------------------
[  3] local 192.168.99.81 port 44828 connected with 192.168.99.145 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   519 MBytes   435 Mbits/sec

I know that due to erratum ERR004512 of the i.MX6Q the overall speed is limited to 470 Mbit/s, so the above results seem fine. But if we run the same test in the receive direction, the results are:

On the host:

iperf -c 192.168.99.81
------------------------------------------------------------
Client connecting to 192.168.99.81, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.99.145 port 55250 connected with 192.168.99.81 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   124 MBytes   104 Mbits/sec

On the iMX6Q:

iperf -s -i 1
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.168.99.81 port 5001 connected with 192.168.99.145 port 55250
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0- 1.0 sec  12.5 MBytes   105 Mbits/sec
[  4]  1.0- 2.0 sec  12.5 MBytes   104 Mbits/sec
[  4]  2.0- 3.0 sec  12.3 MBytes   104 Mbits/sec
[  4]  3.0- 4.0 sec  12.4 MBytes   104 Mbits/sec
[  4]  4.0- 5.0 sec  12.5 MBytes   105 Mbits/sec
[  4]  5.0- 6.0 sec  12.3 MBytes   103 Mbits/sec
[  4]  6.0- 7.0 sec  12.3 MBytes   104 Mbits/sec
[  4]  7.0- 8.0 sec  12.4 MBytes   104 Mbits/sec
[  4]  8.0- 9.0 sec  12.4 MBytes   104 Mbits/sec
[  4]  9.0-10.0 sec  12.5 MBytes   105 Mbits/sec
[  4]  0.0-10.0 sec   124 MBytes   104 Mbits/sec

After a lot of searching on the web, we found that enabling pause frames should improve this:

ethtool -A eth0 rx on tx on
iperf -s -i 1
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.168.99.81 port 5001 connected with 192.168.99.145 port 55294
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0- 1.0 sec  16.8 MBytes   141 Mbits/sec
[  4]  1.0- 2.0 sec  18.5 MBytes   156 Mbits/sec
[  4]  2.0- 3.0 sec  18.3 MBytes   154 Mbits/sec
[  4]  3.0- 4.0 sec  18.1 MBytes   152 Mbits/sec
[  4]  4.0- 5.0 sec  18.4 MBytes   154 Mbits/sec
[  4]  5.0- 6.0 sec  18.7 MBytes   157 Mbits/sec
[  4]  6.0- 7.0 sec  18.7 MBytes   157 Mbits/sec
[  4]  7.0- 8.0 sec  18.6 MBytes   156 Mbits/sec
[  4]  8.0- 9.0 sec  18.3 MBytes   153 Mbits/sec
[  4]  9.0-10.0 sec  18.4 MBytes   154 Mbits/sec
[  4]  0.0-10.0 sec   183 MBytes   153 Mbits/sec

Now we get a receive rate of about 150 Mbit/s, but that is still much slower than the transmit rate. In practice things are even worse: we run a program that receives HTTP streams from several IP cameras, and performance on a 100 Mbit switch is actually better than on gigabit switches.

We tried different switches from Cisco, D-Link and Netgear.

Is there any way to improve this situation?

Hi

This is not what I'm seeing with a cheap desktop gigabit switch. E.g. I get over 400 Mbit/s regardless of whether the Apalis iMX6 plays the server or the client role.

Did you configure some special low-power mode which makes the Apalis iMX6 slow?

With the cameras, the issue could be that they likely use UDP. I.e. the cameras may send more Ethernet packets than the i.MX 6 is able to take in, so packets get dropped.
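One quick way to check whether the module is actually dropping incoming packets is to look at the kernel's per-interface RX counters in sysfs. A minimal sketch (these statistics paths are standard on Linux; eth0 is assumed to be the FEC interface, and the script falls back to lo if eth0 does not exist, e.g. on a development host):

```shell
# Print RX packet/drop/error counters for a network interface from sysfs.
# Run before and after a streaming test; growing rx_dropped or
# rx_fifo_errors indicates the receiver cannot keep up.
iface=eth0
[ -d "/sys/class/net/$iface" ] || iface=lo
for stat in rx_packets rx_dropped rx_errors rx_fifo_errors; do
    printf '%s: %s\n' "$stat" "$(cat "/sys/class/net/$iface/statistics/$stat")"
done
```

The same counters are also visible via `ip -s link show eth0` if iproute2 is installed.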

As iperf uses TCP by default, flow control is in effect and slows the sender down to the usable bandwidth. So it would probably be better to run iperf in a UDP configuration and see what happens.

[pc ~]$ iperf3 -c 192.168.10.48 -u -b 1000000000 -t 2
Connecting to host 192.168.10.48, port 5201
[  4] local 192.168.10.1 port 56563 connected to 192.168.10.48 port 5201
[ ID] Interval           Transfer     Bandwidth       Total Datagrams
[  4]   0.00-1.00   sec  57.8 MBytes   485 Mbits/sec  7399  
[  4]   1.00-2.00   sec  56.1 MBytes   471 Mbits/sec  7182  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-2.00   sec   114 MBytes   478 Mbits/sec  0.118 ms  8083/14580 (55%)  
[  4] Sent 14580 datagrams

iperf Done.

root@apalis-imx6:~# iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.168.10.1, port 60038
[  5] local 192.168.10.48 port 5201 connected to 192.168.10.1 port 56563
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  5]   0.00-1.02   sec  8.05 MBytes  66.5 Mbits/sec  2.053 ms  5937/6967 (85%)
[  5]   1.02-2.00   sec  40.1 MBytes   341 Mbits/sec  0.112 ms  2146/7274 (30%)
[  5]   2.00-2.05   sec  2.65 MBytes   426 Mbits/sec  0.118 ms  0/339 (0%)
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  5]   0.00-2.05   sec  0.00 Bytes  0.00 bits/sec  0.118 ms  8083/14580 (55%)
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------

Does Wireshark shed some light? E.g. lots of collisions etc.?

Max

First of all, we do not use UDP; all traffic goes over TCP. We have to rely on receiving all data, and as you can see in your own results, 55% of the packets were lost. That is unusable for us. But that's not the point.

We have now found the solution: using a simple unmanaged switch, we see the full 470 Mbit/s on the receive side. Unfortunately, most of our customers use managed switches, where flow control is disabled by default because enabling it conflicts with QoS. Enabling flow control on these switches solves the problem.
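Note that the `ethtool -A` setting on the module side is not persistent across reboots. On images using ifupdown, one hypothetical way to reapply it automatically is a post-up hook in the interface stanza (the stanza below and the DHCP addressing method are assumptions, adjust to your setup):

```
auto eth0
iface eth0 inet dhcp
    # re-enable pause frame (flow control) negotiation on every ifup
    post-up ethtool -A eth0 rx on tx on
```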

Thank you very much for your help!