Slow Writes, Slower Reads (by Bob Brownell)

A question was posed on ask.wireshark.org by wdurand in September:

https://ask.wireshark.org/questions/55972/slow-writes-even-slower-reads-spanning-wan-to-netapp

It was accompanied by four capture files that recorded the writing and reading of a one-gigabyte file and later a 200-Mbyte file to and from a NetApp file server, using SMB2 over a WAN. Why was there such variation in the respective transfer speeds, well below what the bandwidth and window sizes should allow? In the absence of a published solution we present our own analysis, which finds different causes for the slow transfers. The analysis is interesting because it highlights a little-known characteristic of the Cisco ASA.

Here we present an analysis of the slow 200-MB file-read operation, illustrated with our analysis tool NetData. The solution depends not only on an understanding of packet contents, but also, critically, on an accurate grasp of the relative times of many packets, and it is almost impossible to achieve that by looking at numbers in columns.

[Chart One]


This NetData chart of the whole operation plots blue markers for the response times of every SMB2 transaction, and a black graph of the number of transactions in progress (changing as every transaction starts and finishes). The pink graph plots the position of the file pointer extracted from the read-request headers, and the green graph plots the payload throughput rate (also from the headers) that eventually reached 22 Mbytes/sec but averaged only 8 Mbytes/sec. It is a crowded chart but we want to correlate all these different aspects of the system’s behaviour. We see now that the large response times, ranging up to 2.5 seconds, tell us little – they are a consequence of up to 128 requests queuing in the server, and the waiting time is naturally longer when throughput is smaller.
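To make that first chart concrete, here is a minimal sketch of how the response times and the transactions-in-progress count can be derived from a capture. The record format (one entry per SMB2 packet with a timestamp, the SMB2 MessageId and a request/response flag, such as might be exported from the capture) is our assumption for illustration; NetData’s own calculation may well differ.

```python
# Sketch only: derive per-transaction response times and the number of SMB2
# transactions in progress from a time-ordered list of packet records.
# The (timestamp, message_id, is_response) record format is an assumption.

def transaction_stats(packets):
    """Return (response_times, in_progress): response_times is a list of
    (completion_time, seconds_waited); in_progress is a list of
    (timestamp, open_transaction_count) sampled at every packet."""
    pending = {}            # SMB2 MessageId -> request timestamp
    response_times = []
    in_progress = []
    for ts, msg_id, is_response in packets:
        if not is_response:
            pending[msg_id] = ts
        elif msg_id in pending:
            response_times.append((ts, ts - pending.pop(msg_id)))
        in_progress.append((ts, len(pending)))
    return response_times, in_progress


if __name__ == "__main__":
    # Two overlapping reads: the second queues behind the first, so its
    # response time is longer even though nothing went wrong on the wire.
    sample = [
        (0.000, 1, False), (0.001, 2, False),
        (0.050, 1, True),  (0.090, 2, True),
    ]
    times, depth = transaction_stats(sample)
    print(times)    # request-to-response delays
    print(depth)    # queue depth after each packet
```

The toy example shows why large response times by themselves tell us little: when many requests queue in the server, the later ones wait longer whatever the network is doing.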

The purple boxes enclosing some markers indicate which transactions might have been affected by a network abnormality while in progress. Because there were large numbers of concurrent transactions, one abnormality such as a retransmission could affect many transactions, and we will get a better idea when we examine packet-timing and data-sequence charts. However, throughput dropped severely at every appearance of an abnormality.

[Chart Two]

This data-sequence chart plots the TCP sliding window (against the left-hand scale) as it slides across more than 210 Mbytes of TCP data for the file and SMB headers. The green graph at the bottom of the chart plots the size of the send window – the congestion window, CWND – changing with every data and ack packet. Although the traffic was captured at the client, NetData by default plots this graph as the sender would view the window, to give us insight into the rules by which the sender controlled the flow. Throughput is plotted as a dashed green line and its triangular shape shows that it is proportional to the size of the send window – the height of the green graph at the bottom. Although throughput reached a peak of 180 Mbps it was severely throttled by abnormal network events that reduced the send-window size to zero or near zero. It is those events that we must examine.
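The “sender’s view” idea can be approximated quite simply: from a client-side capture, the server’s outstanding (in-flight) data at any moment is roughly the highest sequence number it has sent minus the highest acknowledgement it has seen. The sketch below illustrates that proxy; the event format is an assumption, and NetData’s sender-eye-view plot is more sophisticated than this (it also shifts the ack line by the path delay).

```python
# Sketch only: approximate the server's outstanding (in-flight) TCP data
# from a client-side capture. The event format below is an assumption.

def bytes_in_flight(events):
    """events: time-ordered ('data', ts, seq_end) for server data packets
    and ('ack', ts, ack_no) for client acks, in one relative sequence space.
    Yields (ts, outstanding_bytes) after every event."""
    highest_seq = 0
    highest_ack = 0
    for kind, ts, value in events:
        if kind == 'data':
            highest_seq = max(highest_seq, value)
        else:
            highest_ack = max(highest_ack, value)
        yield ts, highest_seq - highest_ack


if __name__ == "__main__":
    trace = [
        ('data', 0.000, 1460), ('data', 0.001, 2920),   # a two-packet burst
        ('ack',  0.023, 2920),                          # acked one RTT later
    ]
    for ts, outstanding in bytes_in_flight(trace):
        print(f"{ts:.3f}s  {outstanding} bytes outstanding")
```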

[Chart Three]

This data-sequence chart focuses on a typical event with many retransmissions and duplicate selective acks (D-SACKs). Each server packet is marked by a short vertical strip, plotted against the time-of-day scale at the bottom and the sequence scale on the left. The end of each packet strip is marked by a horizontal tick running left from the top of the strip. On the left of the chart is a very rapid burst of 79 data packets indicated by a stack of 79 black strips. It filled the send window allowed at that time. After a round-trip time (RTT) of 22.9 ms or more the server saw all of those packets acknowledged (the blue line marks the window-edge) and transmitted a new burst of only 39 packets. The send-window size had been halved as a proper congestion-avoidance measure because something in the network had just prompted the server to retransmit more than 20 packets (pink vertical strips, circled in red) of the first burst.
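The drop from a 79-packet burst to a 39-packet burst is classic multiplicative decrease. A toy illustration (not a model of the NetApp’s actual stack): every inferred loss cuts the congestion window to half, down to a floor of one segment.

```python
# Toy illustration of multiplicative decrease, not the NetApp's actual code:
# each inferred loss halves the congestion window (in segments), with a
# floor of one segment.

def on_loss(cwnd_segments):
    """Halve the congestion window on an inferred loss."""
    return max(1, cwnd_segments // 2)

burst = 79                                   # first burst seen in the chart
after_one_event = on_loss(burst)             # 39, matching the second burst
after_two_events = on_loss(after_one_event)  # a further halving to 19
print(burst, after_one_event, after_two_events)
```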

Why were packets retransmitted? We know the retransmissions were unnecessary for several reasons. We see that all the data in the first burst was acknowledged promptly (NetData has shifted the ack line to the time the acks would have been seen by the sender), and the client responded to every retransmission by issuing a D-SACK. Those D-SACKs (circled in orange) only compounded the performance problem because the NetApp server didn’t seem to understand them; it counted them as duplicate acks and because there were more than three (NetData has plotted the duplicate-ack count in the green circle) they prompted yet another retransmission (pink strip in the green circle) and another halving of the send window.
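For readers unfamiliar with D-SACKs, the sketch below shows the distinction the NetApp apparently missed. Under RFC 2883 the receiver reports an already-received segment in the first SACK block, at or below the cumulative ack, and a sender that checks for this can exclude such acks from its duplicate-ack count instead of letting them trigger another fast retransmit. This covers only the simplest D-SACK case, and the data layout is assumed for illustration.

```python
# Sketch only: recognising the simplest D-SACK case (RFC 2883). A first SACK
# block lying at or below the cumulative ack reports data the receiver had
# already received, and should not count towards fast retransmit.
# The (ack_no, sack_blocks) layout is an assumption for illustration.

def is_dsack(ack_no, sack_blocks):
    """True if the first SACK block only covers already-acknowledged data."""
    if not sack_blocks:
        return False
    first_left, first_right = sack_blocks[0]
    return first_right <= ack_no

def dupack_counts(acks):
    """acks: repeated acks for the same cumulative ack number, each with its
    SACK blocks. Returns (count if D-SACKs are recognised, naive count)."""
    aware = sum(1 for ack_no, blocks in acks if not is_dsack(ack_no, blocks))
    return aware, len(acks)

if __name__ == "__main__":
    # Four D-SACKs for the same cumulative ack: a naive count (4 > 3) would
    # trigger another fast retransmit and another halving of the window.
    acks = [(100000, [(95000, 96460)])] * 4
    print(dupack_counts(acks))   # (0, 4)
```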

Retransmissions are often the consequence of ack-packet loss, but not in these circumstances. There were a dozen or more acks and even if all were lost – unlikely in itself – the resulting retransmissions would not have appeared until after a timeout of 200 ms or more. Rather, the retransmissions were issued after waiting less than an RTT. We know the acks were seen by the server because many of the steps in the blue ack line have a little pink strip sitting on them (circled in blue). The arrival of each ack prompted yet another retransmission, in a behaviour that is quite consistent with the server regarding the acks as partial acks. However, this behaviour suggests another weakness in the NetApp’s TCP implementation – it shouldn’t expect a partial ack until a minimum RTT after the first retransmission.
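The timing argument can be stated very simply: a retransmission issued much sooner than the minimum retransmission timeout cannot be a timeout retransmission, so it must have been prompted by something that arrived in the meantime, namely the acks to earlier data. A minimal sketch, with the thresholds taken from the text rather than read from the capture:

```python
# Sketch only: thresholds are taken from the text (22.9 ms RTT and a typical
# minimum RTO of 200 ms); they are not derived from the capture here.

MIN_RTT = 0.0229    # seconds
MIN_RTO = 0.200     # seconds

def could_be_timeout_retransmission(original_ts, retrans_ts):
    """True only if the resend waited long enough to be an RTO expiry."""
    return (retrans_ts - original_ts) >= MIN_RTO

if __name__ == "__main__":
    delay = 0.015                     # resend seen ~15 ms after the original
    print(delay < MIN_RTT)            # True: even faster than one round trip
    print(could_be_timeout_retransmission(0.000, 0.015))   # False: not an RTO
    print(could_be_timeout_retransmission(0.000, 0.250))   # True: RTO-scale
```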

[Chart Four]

Every burst of retransmissions included the packets that began an original burst, and if the last packet of the preceding burst had not yet been acknowledged, as in the chart above, that packet was retransmitted too. These are significant clues, as we will see later.

An explanation for this slow file transfer depends entirely on an understanding of the mechanism that produced the redundant retransmissions. A capture from the other end of the network would help, but in its absence we will refer instead to a pair of captures from another network that displayed the same symptoms and allowed NetData charts to expose the retransmission mechanism.

Please look for our second article, on Cisco ASA behaviour, posting soon on www.lovemytool.com.

A super story on Wireshark and NetApp by Mike Brown - "Wireshark cannot tell a lie!"

https://virtuallymikebrown.com/2015/02/13/wireshark-cannot-tell-a-lie-a-tale-of-netapp-discovery/

 
