Why TCP/IP is No Longer Good Enough to Secure and Ensure Realtime IP Networking

By Special Guest Shrey Fadia, Special Correspondent
September 10, 2018

As the world moves rapidly forward with programmable networking, one of the fundamental protocols the public Internet relies on every millisecond of every day may be dying out, as “code” gets better and hyper-secure, hyper-fast private networks, built as overlays to the public Internet, mature.

Change takes time. Nearly four years ago, a little-known paper published by researchers at Aalborg University in Denmark, in association with MIT and Caltech, reported that mathematical equations can make Internet communication via computer, mobile phone or satellite many times faster and more secure than it was when the study was conducted.

They illustrated their work with a four-minute mobile video, with data from the lab showing the video downloading five times faster using the “programmable” approach than with traditional packet management methods.

The coded video also streamed without interruption; the original video faltered 13 times.

"This has the potential to change the entire market,” said Frank Fitzek, Professor in the Department of Electronic Systems and one of the pioneers in the development of network coding. “In experiments with our network coding of Internet traffic, equipment manufacturers experienced speeds that are five to ten times faster than usual. And this technology can be used in satellite communication, mobile communication and regular Internet communication from computers."

"With the old systems you would send packet 1, packet 2, packet 3 and so on,” Fitzek continued, in an interview with Science Daily. “We replace that with a mathematical equation. We don't send packets. We send a mathematical equation. You can compare it with cars on the road. Now we can do without red lights. We can send cars into the intersection from all directions without their having to stop for each other. This means that traffic flows much faster.”

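To make Fitzek’s idea concrete: the Aalborg work is based on random linear network coding, in which coded packets are linear combinations of the originals. The toy Python sketch below is an illustration of the concept, not the researchers’ code; it works over GF(2), sending XOR “equations” of random subsets of packets, and the receiver recovers everything by Gaussian elimination once it holds enough independent combinations, regardless of which ones arrived or in what order.

```python
# Toy network coding over GF(2): send random XOR 'equations' of the
# packets instead of the packets themselves. Illustrative only; real
# random linear network coding uses larger fields such as GF(2^8) and
# ships the coefficients in each packet header.
import random

def encode(packets):
    """One coded packet: a random XOR-combination of the originals."""
    coeffs = [random.randint(0, 1) for _ in packets]
    if not any(coeffs):                        # avoid a useless all-zero row
        coeffs[random.randrange(len(packets))] = 1
    combo = bytes(len(packets[0]))             # packets assumed equal length
    for c, pkt in zip(coeffs, packets):
        if c:
            combo = bytes(a ^ b for a, b in zip(combo, pkt))
    return coeffs, combo

def decode(received, n):
    """Recover the n originals by Gaussian elimination over GF(2)."""
    eqs = [(list(c), bytearray(p)) for c, p in received]
    for col in range(n):
        # Find a row with a 1 in this column and swap it into place.
        pivot = next(i for i in range(col, len(eqs)) if eqs[i][0][col])
        eqs[col], eqs[pivot] = eqs[pivot], eqs[col]
        pc, pp = eqs[col]
        for i, (c, p) in enumerate(eqs):
            if i != col and c[col]:            # cancel this column elsewhere
                eqs[i] = ([a ^ b for a, b in zip(c, pc)],
                          bytearray(x ^ y for x, y in zip(p, pp)))
    return [bytes(eqs[i][1]) for i in range(n)]

originals = [b"packet-1", b"packet-2", b"packet-3"]
received = []
while True:                                    # combinations arrive in any order
    received.append(encode(originals))
    try:
        if decode(received, len(originals)) == originals:
            break
    except StopIteration:                      # not enough independent rows yet
        pass
print(f"recovered all {len(originals)} packets from {len(received)} combinations")
```

This is the substance behind the red-light analogy: the receiver doesn’t care which specific combinations arrive, only that enough independent ones do, so nothing has to stop and wait at the intersection.
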
Programmable networking has since come into its own, with Internet overlay networking techniques that, like the approach explained above, treat packets differently.

NetFoundry, for example, a Tata Communications start-up, has been rolling out private, enterprise-grade networks that connect multiple clouds, applications, IoT and Industrial IoT systems, and more. It leverages technology developed by Dispersive Technologies, which holds multiple patents utilizing mathematical algorithms to deliver what it refers to as Split-Session Multipath: each session is split into multiple streams, each stream is encrypted using very fast, industry-standard ephemeral keys (known only to the source and destination), and each stream is sent on a different path through the use of waypoints called “deflects.”

This approach secures the stream from man-in-the-middle attackers who would have to know each key, each path, and how to reassemble the traffic.
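
Dispersive has not published its implementation, but the structure described above can be sketched in a few lines. In the hypothetical Python below, a payload is interleaved into sub-streams, each encrypted under a fresh ephemeral key and assigned its own path of deflects; the keystream function is a stand-in marking where a real system would use an industry-standard cipher.

```python
# Structural sketch only -- not Dispersive's code. A payload is interleaved
# into sub-streams; each sub-stream gets a fresh ephemeral key and its own
# path of "deflect" waypoints, so an attacker on any one path sees only
# one encrypted fragment.
import hashlib
import secrets

def keystream_xor(key, data):
    """Stand-in stream cipher (SHA-256 in counter mode). A real system
    would use an industry-standard AEAD cipher such as AES-GCM."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def split_session(payload, paths):
    """Split the payload round-robin into one encrypted stream per path."""
    n = len(paths)
    streams = [payload[i::n] for i in range(n)]     # byte-interleave the data
    sessions = []
    for stream, path in zip(streams, paths):
        key = secrets.token_bytes(32)               # ephemeral, per-stream key
        sessions.append({"path": path,              # the deflect waypoints
                         "key": key,                # known only to the endpoints
                         "ciphertext": keystream_xor(key, stream)})
    return sessions

paths = [["deflect-a1", "deflect-a2"], ["deflect-b1"], ["deflect-c1"]]
for s in split_session(b"sensitive application data", paths):
    print(s["path"], s["ciphertext"].hex())
```

Reassembly requires every key, every path, and the interleaving rule, which is exactly the attacker’s problem.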

While Dispersive’s technology was developed in parallel with security-related projects for government agencies with extremely high requirements for “unbreakable” networking, it, like the approach demonstrated by the academics at Aalborg, MIT and Caltech, also improves performance, enabling faster transmission of massive files.

Faster Internet without TCP/IP is already happening, as mathematicians, data scientists and analysts are funded by entrepreneurs who thrive on developing and commercializing disruptive Internet technologies, even as the world moves rapidly toward multi-access edge computing, itself designed to deliver ultra-low-latency performance across devices, including Internet of Things deployments with security requirements unlike any we have seen before.

Recently, Dispersive’s CTO published a semi-technical blog post on the deficiencies of TCP/IP, which sends data down a single path, compared with programmable approaches that use network coding to route data over multiple, always-differing paths, making transmission far more secure and performant.

“Times Have Changed,” Rick Conklin writes. “TCP Hasn’t.”

Conklin explains that Transmission Control Protocol was designed and implemented decades ago for an Internet that was exponentially smaller and less complex than today’s massive public network.

While he admires this “brilliantly designed and implemented protocol that has served us well,” he says the design principles established back in the 1980s may have outlived their usefulness.

“TCP congestion control assumes that any error – e.g., a lost ACK – is caused by congestion,” Conklin writes. “This allows TCP to be fair when sharing a critical resource.  Decades ago, that made sense; today, it doesn’t. Back then, the critical resource was limited bandwidth in the core of ARPANET.  Today, the core of the network is no longer a critical resource. It offers ludicrous capacity and speed. There is no need for TCP to be polite: bandwidth is plentiful.”
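
For readers who have not looked at TCP internals since the 1980s, the behavior Conklin describes is the classic additive-increase, multiplicative-decrease (AIMD) rule, modeled in miniature below: every acknowledged round trip grows the congestion window slightly, and every loss, whatever its actual cause, halves it.

```python
# Miniature model of classic TCP AIMD congestion control: every loss is
# treated as congestion, so the sender halves its window even when the
# drop was a random error on an uncongested link.

def on_ack(cwnd):
    """Additive increase: roughly one extra segment per round trip."""
    return cwnd + 1.0 / cwnd

def on_loss(cwnd):
    """Multiplicative decrease: back off by half, no questions asked."""
    return max(cwnd / 2.0, 1.0)

cwnd = 10.0
for event in ("ack", "ack", "loss", "ack", "loss", "ack"):
    cwnd = on_ack(cwnd) if event == "ack" else on_loss(cwnd)
    print(f"{event:>4}: cwnd = {cwnd:5.2f} segments")
```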

Conklin also tackles the first and last mile, where bandwidth constraints most often arise.

“Let’s use DOCSIS 3.1 as an example. In a typical system, the Cable Modem Termination System (CMTS) transmits to homes in a single channel while homes transmit to the CMTS using a Time Division Multiple Access strategy on multiple channels. The key here is that you have your own dedicated upstream time slot. Your neighbor also has a dedicated time slot.

“When your legacy TCP connection detects a lost packet, it assumes congestion and backs off in the name of fairness. That doesn’t make much sense.”

Conklin lists three places where the loss or congestion can occur:

  • An error occurred in the core of the network. If this happens, the protocol should retransmit immediately.
  • The congestion or error occurred in the upstream direction from your house to the CMTS. The pinch point – where your congestion can occur – is limited to your upstream time slot.
  • The congestion or error occurred in the downstream direction from the CMTS to your house. In this case, you are sharing downstream bandwidth with your neighbors.

While this example is specific to cable (as opposed to a fiber line), the same premise holds true for other access technologies.

“Over time, any protocol must determine the effective throughput rate,” Conklin writes. “However, a modern design should not assume errors are due to congestion over a shared resource. Instead, the protocol should determine the effective throughput, quickly assuming any drop outside of the effective throughput is due to an error and not congestion.”
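
Conklin’s post does not include code, but the policy he outlines might look something like the hypothetical sketch below: maintain a running estimate of the path’s effective throughput, and only treat a drop as congestion when the sender is actually exceeding that estimate; otherwise, retransmit immediately.

```python
# Hypothetical sketch of the policy Conklin describes; the class name,
# smoothing factor and thresholds are illustrative assumptions, not
# Dispersive's implementation.

class LossClassifier:
    """Estimate effective throughput; classify drops against it."""

    def __init__(self, alpha=0.1):
        self.alpha = alpha            # smoothing factor for the estimate
        self.effective_bps = None     # running throughput estimate

    def observe_delivery(self, bits, seconds):
        """Fold a delivery sample into the effective-throughput estimate
        (exponentially weighted moving average)."""
        sample = bits / seconds
        if self.effective_bps is None:
            self.effective_bps = sample
        else:
            self.effective_bps = ((1 - self.alpha) * self.effective_bps
                                  + self.alpha * sample)

    def on_loss(self, sending_bps):
        """Only back off when pushing beyond the estimate; otherwise
        assume a random error and retransmit immediately."""
        if self.effective_bps and sending_bps > self.effective_bps:
            return "congestion: reduce sending rate"
        return "error: retransmit immediately, hold the rate"

clf = LossClassifier()
clf.observe_delivery(bits=50_000_000, seconds=1.0)   # ~50 Mbps delivered
print(clf.on_loss(sending_bps=30_000_000))           # below estimate -> error
print(clf.on_loss(sending_bps=80_000_000))           # above estimate -> congestion
```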

More detail is included in Conklin’s post.

So where are we today with the transformation to programmable networking?

One step is the mass movement by Communications Service Providers (CSPs), which are forcing vendors to move to virtualized network functions.

Those same CSPs (and their challengers) are also moving away from traditional circuit-driven private networking, such as expensive and contractually heavy MPLS solutions, toward Software Defined Networking and SD-WANs.

While TCP/IP, with its four layers working together, is still widely used throughout the world to provide network communications, companies like NetFoundry, Dispersive and others continue to change the way data transmission is managed across all four layers.

When a user wants to transfer data across networks, the data is passed from the highest layer through intermediate layers down to the lowest layer, with each layer adding information. At each layer, the logical units are typically composed of a header and a payload. The payload consists of the information passed down from the layer above, while the header contains layer-specific information, including addresses.

At the application layer, the payload is the actual application data. 

The lowest layer sends the accumulated data through the physical network; at the destination, the data is passed back up through the layers.
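
A minimal sketch of that encapsulation (with simplified, human-readable stand-ins for real binary headers) shows how each layer wraps the unit handed down from above:

```python
# Simplified walk down the stack: each layer prepends its header to the
# payload handed down from the layer above. Real headers are binary;
# these readable stand-ins just show the wrapping.

LOWER_LAYERS = [
    ("transport", b"TCP|src=5000,dst=80|"),
    ("internet",  b"IP|10.0.0.1->10.0.0.2|"),
    ("link",      b"ETH|aa:bb:cc->dd:ee:ff|"),
]

def encapsulate(app_payload):
    unit = app_payload                       # application layer: the data itself
    for name, header in LOWER_LAYERS:
        unit = header + unit                 # header + payload from above
        print(f"{name:>9}: {unit!r}")
    return unit                              # what the physical network carries

encapsulate(b"GET / HTTP/1.1\r\n")
```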

That’s a lot of processing.

Add in security, with SSL/TLS operating above the transport layer to protect HTTP-based applications as well as protocols like SMTP, Post Office Protocol (POP), Internet Message Access Protocol (IMAP) and File Transfer Protocol (FTP), and networking has tremendous methods for control.
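
As one concrete example, here is how a plain TCP socket is wrapped in TLS (the successor to SSL) using Python’s standard library; the host name is illustrative:

```python
# Wrapping a TCP connection in TLS with Python's standard library.
import socket
import ssl

context = ssl.create_default_context()       # certificate verification and
                                             # modern protocol versions by default
with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls:
        print("negotiated:", tls.version(), tls.cipher()[0])
        tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
        print(tls.recv(200).decode(errors="replace"))
```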

And while TCP built on top of IP still delivers the bulk of Internet traffic today, there are many signs, and now many implementations, proving a better way: one that improves security without compromising performance, using software that scales and that, notably, is decoupled from specialized equipment, able to run on any commercial off-the-shelf server or rack of servers.

Whether they rely on traditional TCP or on advanced programmable networking built on mathematics and algorithms, developers and enterprises still need the basics:

  • Data transport they can rely on
  • Data integrity
  • A unified way to deal with and resolve failures
  • Security
  • Performance
  • Reliability
  • Affordability

Are we ready to go from basic to better? That’s an equation now being advanced from the halls of academia to the massive reconfiguration of the public Internet, and to the ability of service providers, enterprises, governments, universities and others to spin up private networks just as they “spin up” virtual machines: using software from start to finish, all the way to the first and last millimeter.

There may be no turning back a year or two from now.

Edited by Ken Briodagh