When we attend trade shows, like the IBC Show on the horizon, our FileCatalyst World Speed Chart always garners attention and starts conversations at our booth. The question people always ask about it is, “How?” And I can’t blame them. Taking a file that would usually take five hours to transfer and moving it in 8.4 seconds is quite a feat. So, to demystify our World Speed Chart, I thought I would answer this “how” question by highlighting the ways we achieve the numbers in the chart above.
There’s Nothing Wrong with Your Network – it’s Your Protocol
Before we get into it, we need to understand why large files can take a long time to transfer. Traditional file transfer protocols aren’t built from the ground up; they are layered on top of long-established networking protocols. Common standards such as the File Transfer Protocol (FTP) rely on these underlying protocols, TCP and IP, to move data between two endpoints.
When moving data across high-speed, high-latency networks, it’s the TCP protocol, not the network per se, that causes transfers to slow down, stall, or fail altogether.
What is TCP?
TCP, or the Transmission Control Protocol, is one of the underlying protocols that helps run the internet as we know it. Its purpose is to ensure that data arrives at its destination in one piece, even though that data is chopped up into multiple packets during a transfer. The principal roles TCP takes care of are packet retransmission (if a packet gets lost along the way), packet reordering (data should be reassembled the same way it was sent out), and congestion control (checking that the network isn’t flooded, and slowing down the transfer if it is).
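To make this concrete, here is a minimal sketch in Python (the host, port, and file name are hypothetical) of how an application simply hands data to TCP and lets the protocol worry about packetization, retransmission, and ordering:

```python
import socket

# Hypothetical endpoint for illustration. TCP itself splits this byte
# stream into packets, retransmits lost ones, and reassembles them in order.
HOST, PORT = "example.com", 5000

with socket.create_connection((HOST, PORT)) as sock:
    with open("large_file.bin", "rb") as f:
        while chunk := f.read(64 * 1024):
            sock.sendall(chunk)  # blocks whenever TCP is waiting on ACKs
```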
These features are so important that most common protocols used today (FTP, HTTP) utilize TCP as a core building block for network application development. And for the most part, it works very well. The protocol, however, was invented in the 1970s and has remained relatively unchanged since then. As data across a myriad of industries (Media & Broadcast, Live Sports, Natural Resources, Government, IT, and Healthcare to name a few) continues to grow into terabyte and petabyte scales, TCP is no longer suitable for large file transfers.
Latency & TCP
TCP uses “two-way” communication to transfer data across a network – packets and acknowledgments (which we will call ACKs). Packets are the broken-down pieces of data that make up the file, and ACKs are “statements” from the receiving end confirming that the last packets were received. Packets are sent in sequential order, and the sender can only keep a limited “window” of unacknowledged data in flight; once that window is full, it must stop and wait for ACKs before sending more.
The time spent sending a packet and receiving an ACK is measured as Round-Trip Time (RTT), or latency. All this time spent waiting instead of transferring data is one of the reasons TCP can be painfully slow for large file transfers. This isn’t so bad when sending files between computers in close proximity, since ACKs spend less time in flight. But as the geographic distance increases, so does the RTT, and the longer ACKs take to arrive, the more time is spent waiting instead of sending.
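The cost is easy to quantify: with a fixed window of unacknowledged data in flight, TCP throughput tops out at roughly the window size divided by the RTT, no matter how fast the link is. A quick back-of-the-envelope calculation (the window size and RTT values here are illustrative):

```python
# Max TCP throughput ≈ window size / round-trip time, regardless of link speed.
window_bytes = 64 * 1024  # a common default TCP window: 64 KB

for rtt_ms in (1, 20, 100, 250):  # LAN, regional, cross-country, satellite
    throughput_mbps = (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000
    print(f"RTT {rtt_ms:>3} ms -> max ~{throughput_mbps:,.1f} Mbps")

# RTT   1 ms -> max ~524.3 Mbps
# RTT 100 ms -> max ~5.2 Mbps   (even on a 10 Gbps link)
```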
Packet Loss & TCP
When routers face large amounts of data and are unable to handle the volume of packets flowing through them, they begin to suffer from congestion, and packets may be lost in transmission.
This isn’t the only form of packet loss, either. Physical structures in the “route” of a wireless connection can cause interference that also leads to packet loss. TCP can’t distinguish between packet loss caused by network congestion and packet loss caused by interference in wireless or satellite networks.
When packet loss is detected, TCP cuts the TCP window in half (ideally, that window is sized to the bandwidth-delay product: the end-to-end bandwidth multiplied by the RTT). This is an overly aggressive response when the loss comes from inherent interference rather than congestion, and it can be detrimental to transfer speed.
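A toy simulation makes the cost visible. This is an illustrative additive-increase, multiplicative-decrease model, not the behavior of any specific TCP implementation: the window climbs by one segment per RTT and is halved whenever a loss occurs.

```python
# Toy AIMD model: window grows by one segment per RTT, halves on any loss.
import random

random.seed(42)
window, capacity = 10.0, 1000.0  # segments; capacity ~ bandwidth-delay product
loss_rate = 0.02                 # 2% chance of a loss event per RTT

utilization = []
for rtt in range(500):
    if random.random() < loss_rate:
        window /= 2                          # multiplicative decrease on loss
    else:
        window = min(window + 1, capacity)   # additive increase otherwise
    utilization.append(window / capacity)

# Even at 2% loss, the window rarely climbs back near capacity.
print(f"average link utilization: {sum(utilization) / len(utilization):.0%}")
```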
How Does FileCatalyst Do It?
So how does FileCatalyst overcome latency and packet loss while maintaining high speeds? By using the User Datagram Protocol (UDP). UDP gets more performance out of a link than TCP because it’s a “connectionless” protocol. This means that UDP doesn’t depend on sequenced acknowledgments, resulting in faster transfers on high-latency networks. While this provides a speed boost, it also has the potential to become unreliable when any form of packet loss is present on the link.
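For contrast with the TCP sketch earlier, here is the UDP equivalent (same hypothetical endpoint and file): datagrams are simply fired at the receiver, with no connection setup and no waiting for ACKs:

```python
import socket

HOST, PORT = "example.com", 5001  # hypothetical receiver

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
with open("large_file.bin", "rb") as f:
    while chunk := f.read(1400):        # keep each datagram under a typical MTU
        sock.sendto(chunk, (HOST, PORT))
# No connection, no waiting for ACKs – and no detection of loss or reordering.
```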
FileCatalyst, however, has developed a patented data transfer algorithm that uses the speed of UDP while adding a layer of reliability and security (SSL certificates, AES encryption, MD5 checksums, and more) that guarantees file delivery.
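FileCatalyst’s actual algorithm is patented and proprietary, so the sketch below only illustrates the general “reliable UDP” idea it builds on: tag every datagram with a sequence number, keep the data buffered, and resend whatever the receiver reports as missing. The receiver side is not shown, and the NACK wire format here is an assumption for illustration.

```python
# Illustrative only: the generic "reliable UDP" pattern, NOT FileCatalyst's
# patented algorithm. Datagrams carry sequence numbers; the receiver reports
# missing sequence numbers (NACKs) and only those chunks are resent.
import socket

HOST, PORT = "example.com", 5001          # hypothetical receiver
CHUNK = 1400

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(1.0)

# 1. Blast every chunk without pausing for per-packet ACKs.
chunks = {}
with open("large_file.bin", "rb") as f:
    seq = 0
    while data := f.read(CHUNK):
        chunks[seq] = data
        sock.sendto(seq.to_bytes(4, "big") + data, (HOST, PORT))
        seq += 1

# 2. The receiver (not shown) replies with the sequence numbers it is still
#    missing; resend only those until an empty NACK arrives.
while True:
    try:
        nack, _ = sock.recvfrom(65536)    # assumed format: packed 4-byte ints
    except socket.timeout:
        continue                          # no report yet; keep waiting
    missing = [int.from_bytes(nack[i:i + 4], "big")
               for i in range(0, len(nack), 4)]
    if not missing:
        break                             # empty NACK: everything arrived
    for s in missing:
        sock.sendto(s.to_bytes(4, "big") + chunks[s], (HOST, PORT))
```

Because the sender never stops to wait for per-packet ACKs, latency no longer gates throughput; loss only costs the retransmission of the specific chunks that went missing.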
The FileCatalyst protocol is not only faster and more reliable than TCP; it also offers compression, congestion control, and other features missing from traditional protocols. FileCatalyst’s protocol can also transmit data at precise rates, an essential element for many real-time applications.
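Precise rate control is natural with UDP because the sender alone decides when each datagram leaves. A naive pacing sketch (illustrative, not FileCatalyst’s implementation) simply spaces datagrams evenly in time:

```python
import socket
import time

HOST, PORT = "example.com", 5001          # hypothetical receiver
TARGET_BPS = 100_000_000                  # pace the transfer at 100 Mbps
CHUNK = 1400
interval = (CHUNK * 8) / TARGET_BPS       # seconds between datagrams

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
next_send = time.monotonic()
with open("large_file.bin", "rb") as f:
    while chunk := f.read(CHUNK):
        now = time.monotonic()
        if now < next_send:
            time.sleep(next_send - now)   # wait for the next send slot
        sock.sendto(chunk, (HOST, PORT))
        next_send += interval             # fixed spacing -> precise average rate
```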
Summing it Up
So, to answer the question in a sentence: we achieve award-winning fast file transfer speeds by combining the performance of UDP with the reliability of TCP in our own protocol – the best of both worlds! If you would like a more in-depth comparison of TCP/FTP and our protocol, check out our white papers.