The Software Defined Networking (SDN) trend has been picking up momentum, and new use cases have continued to evolve since its initial launch. In this article, we will take a look at this evolution, from the very basics to carrier-class SDN, through the lens of fundamental SDN architectural characteristics. These basic ingredients ultimately define the type and scale of market applications, as well as the roadmap of products implementing them. Additionally, we will demonstrate how these ingredients are centered on the pivotal question of SDN control plane distribution.
To start, you may find it interesting that SDN for enterprises and carriers assumes a new model for networking, a model different from traditional IP. The basic IP network model consists of autonomous junctions directing packets based on address-prefix tables, a kind of location-based set of "area codes" (e.g., 1.1.x go left, 1.2.x go right). This model represents a clear way for anyone joining the World Wide Web to send and receive data packets from anyone else. This is true, of course, as long as we know the county.zip.town.street location coordinates, or the IP address. In this model, packets find their way and zero in on their target hop by hop, from source to destination.
The SDN model for networking is completely different, looking more like a programmable crossbar of sources and destinations, consumers and producers, subscribers and services or functions. Such a matrix assumes that you can physically get from any row to any column, recursively. But it also assumes that each specific "patch-paneling" is under strict programmable control for both tapping and actual connectivity. No entity can talk to any other unless explicitly provisioned to do so. This is achieved using the key SDN element – the controller software entity that allocates whole flows or network conversations. This is a significant shift from the model of the IP ant farm of packets making their way from one junction to the next.
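The crossbar model described above can be sketched in a few lines of code. This is a minimal illustration, not any product's actual API, and all names in it are hypothetical: the controller holds the set of provisioned flows, and the default answer for any unprovisioned source-destination pair is deny.

```python
# Minimal sketch (hypothetical names) of the "programmable crossbar" model:
# no entity may talk to any other unless the controller has explicitly
# provisioned that conversation. The default is deny.

class CrossbarController:
    def __init__(self):
        self._flows = set()  # provisioned (source, destination) pairs

    def provision(self, source, destination):
        """Explicitly allow a conversation between two endpoints."""
        self._flows.add((source, destination))

    def revoke(self, source, destination):
        """Tear down a previously provisioned conversation."""
        self._flows.discard((source, destination))

    def permits(self, source, destination):
        """Default-deny: only provisioned pairs may exchange traffic."""
        return (source, destination) in self._flows

ctl = CrossbarController()
ctl.provision("subscriber-a", "video-service")
print(ctl.permits("subscriber-a", "video-service"))  # True
print(ctl.permits("subscriber-b", "video-service"))  # False: never provisioned
```

The contrast with the IP model is visible in the last line: in IP, reachability is the default and must be restricted; in the SDN crossbar, isolation is the default and connectivity must be allocated.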
The straightforward implementation of the model outlined above was initially defined such that a centralized controller sets up each flow, from every source to every destination. However, each flow setup needs to be completed through every physical hop from source to destination. This is an implementation detail resulting from how networks are physically scaled, where not every endpoint is directly connected to every other endpoint.
But is this really just an implementation detail? Not exactly. It turns out that the job of the controller becomes exponentially more complex as the diameter of the network, or the average number of hops from source to destination, increases. This scaling aspect immediately identifies the first key distinction of carrier SDN – Federation. Non-carrier SDN products that stick with a centralized approach are restricted to networks with a diameter of 1 or 2, namely spine-leaf enterprise data centers or point-to-point site circuits. And even in these environments, due to the physical meshing factor, scaling to any significant size has to assume moderate dynamics in flow allocation for centralization to work.
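A back-of-the-envelope calculation makes the diameter argument concrete. Since a centralized controller must program a flow-table entry at every hop on a flow's path, its write load grows with both the flow-setup rate and the network diameter. The numbers below are purely illustrative assumptions, not measurements from any real deployment.

```python
# Illustrative sketch: flow-table writes a centralized controller must
# issue per second, as a function of flow-setup rate and network diameter.
# All numbers below are assumed for illustration only.

def controller_writes_per_second(flow_setups_per_sec, avg_hops):
    """Each new flow requires one flow-table write at every hop on its path."""
    return flow_setups_per_sec * avg_hops

# Spine-leaf data center: diameter ~2, modest setup rate.
print(controller_writes_per_second(10_000, 2))   # 20000 writes/sec

# Carrier metro/backbone: diameter ~8, much higher setup rate.
print(controller_writes_per_second(100_000, 8))  # 800000 writes/sec
```

Even this simplified model, which ignores path computation and failure handling, shows the controller's work compounding as diameter and flow dynamics grow together, which is why centralized designs stay confined to small-diameter networks.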
Most SDN architectures on the path to carrier grade – or to more generalized, less restricted SDN – choose federated distributed-overlay architectures. In this model, the network is surrounded by SDN edges, taking the hop-to-hop topology factor out of the scale equation. SDN edges control the mesh, allocate dynamic patch points in the flow tables, and link the "outer lay" identities, while letting traditional IP bridging and routing autonomous junctions connect the underlay locations. This federated SDN architecture opens up the possibilities for carrier-scale use cases, where SDN overlays surround and leverage the carrier distribution center networks, carrier metropolitan networks, and national backbones that are already in place. Carriers no longer need a greenfield network in order to deploy SDN applications.
Now that we have an SDN overlay in place, we can next look at global information propagation across the overlay network. This is becoming the most fundamental and mission-critical SDN requirement and distribution element. Why? Since we have federated SDN control at the edges of the network, how will each SDN edge node know what's behind every other SDN edge node, or where the logical row and column identities of the model physically reside? We could keep global information centralized and distribute just the flow setups, but that blocking point would cap the dynamics and overall flows per second, limiting the performance of the network. Because of this, most overlay architectures distribute global awareness by pre-pushing the global data records to all nodes on the edge. This information push allows SDN edges to enforce tenancy filtering and set up new flows concurrently.
This data-push approach is once again a clear demarcation from traditional approaches, distancing carrier SDN and carrier use-case categorization from SDN on enterprise networks. Carrier use cases that require subscriber awareness when setting up flows cannot pre-push all subscriber information to all locations; maintaining such massive replication and distribution of data consistently is simply not feasible. Subscriber-aware carrier SDN architectures have to allow for a non-blocking, pull and publish-subscribe method of sharing global information. This is typically done by leveraging the underlay IP network not only for hop-to-hop transport, but also for implementing a distributed, non-blocking IP hash table or IP directory, also termed Mapping.
Many carrier SDN use cases are subscriber-aware or content-aware; these include mobile function chaining, evolved packet core virtualization, multimedia services, content distribution, virtual customer premises equipment, virtual private networks and more. In general, most SDN use cases that connect subscriber flows to network functions require subscriber-aware lookup, pull and classification; otherwise, every possible service permutation must be managed statically, and every single new function may double the static permutation maintenance. This would be exponentially burdensome for global carrier network management and operations.
Lastly, we look at carrier SDN solution element density and element extensibility/programmability qualities. Just like information sharing, these qualities are a direct result of SDN model distribution, and they provide distinctions for carrier-class use cases and carrier SDN applicability. As far as SDN edge density goes, non-carrier-class SDN can potentially afford to push overlay edges all the way into the tenant system or host; such host-resident software is a matter of convenience and doesn't provide a real OpenFlow approach.
This is a possible low-density SDN edge option mainly for hosting, since we don't assume large scopes per software network; rather, we expect many small enterprise tenants. Hence we do not assume massive global information sharing on the order of a multi-million-subscriber database or a multi-billion machine-to-machine identity base. For enterprise SDN we also don't assume too many physical geographic locations or heavy-duty SDN edge device packaging.
This, of course, is not the case for carrier SDN applications. For those, global information sharing is massive, and the number of SDN edge nodes must be kept at hundreds to thousands, not hundreds of thousands; the density of each SDN edge node should therefore be quite high. Carrier SDN edge node capacity is a full rack's worth of bandwidth, with thousands of flow setups and mapping lookups per second per node. We also look at millions of concurrent flows and millions of concurrent publish-subscribe states kept per node.
Similarly, an additional direct result of SDN distribution is that if SDN programmability is no longer locked inside the controller, a clear method is needed to specify how distributed programmability and extensibility is delivered in each of the nodes, and how it is synced across the solution. If we refer to such distributed programmable logic as FlowHandlers, and to the variance of such logic as FlowHandler.Lib, then once again we see another immediate distinction of carrier SDN. While basic SDN solutions will have a very limited FlowHandler.Lib, basically for handling multi-tenant virtual L2/L3, carrier use cases will have a far more elaborate FlowHandler.Lib, with distinct flow mapping logic for protocols such as SIP, SCTP, GTP, GRE, NFS, etc.
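One way to picture a FlowHandler.Lib is as a registry of per-protocol handlers that an edge node consults after classifying a flow. The sketch below is a hypothetical illustration of that structure, not the article's actual implementation; the handler bodies and their return strings are invented for the example.

```python
# Hypothetical sketch of a FlowHandler.Lib: distinct, pluggable flow-mapping
# logic per protocol, selected by classifying the incoming flow.

FLOW_HANDLER_LIB = {}

def flow_handler(protocol):
    """Decorator that registers a handler for one protocol in the library."""
    def register(func):
        FLOW_HANDLER_LIB[protocol] = func
        return func
    return register

@flow_handler("SIP")
def handle_sip(flow):
    # Invented action text: map a SIP session onto a voice function chain.
    return f"map SIP session {flow['id']} to voice function chain"

@flow_handler("GTP")
def handle_gtp(flow):
    # Invented action text: map a GTP tunnel onto a virtualized packet core.
    return f"map GTP tunnel {flow['id']} to virtual packet core"

def classify_and_handle(flow):
    """Pick the protocol-specific logic from FlowHandler.Lib for this flow."""
    handler = FLOW_HANDLER_LIB.get(flow["protocol"])
    if handler is None:
        raise ValueError(f"no FlowHandler for {flow['protocol']}")
    return handler(flow)

print(classify_and_handle({"protocol": "SIP", "id": 42}))
```

Extending the library to SCTP, GRE or NFS is then a matter of registering another handler, which matches the article's point: the richness of FlowHandler.Lib, not the controller core, is where carrier differentiation lives.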
These protocols enable applications such as mobility, VoIP, VoLTE, content distribution, transcoding and header enrichment, to name a few. A carrier-class FlowHandler.Lib will also handle jitter buffers and TCP flow control when SDN overlays span large geo-distributions and/or connect very different mediums such as WANs and RANs. Carrier FlowHandlers, by law and by default, should also be able to account for each and every flow connected over the public network. Accounting for patterns is an important public network security consideration; for instance, network functions such as firewalls or tapping may be opted in and out dynamically per carrier FlowHandler decision. Additional important functions of a carrier-class FlowHandler include the ability to apply traffic engineering and segment flow routes in the underlay IP network, optimize long-lived backup and replication flows, and load-balance core-spine links and core routes.
To summarize, we touched upon the differences and distinctions of SDN in general, and of carrier SDN vs. enterprise SDN in particular. There are many differences in technology and use cases, but even more key are the architectural ones: Federation, Mapping, Density, and Programmability. As we saw, most of these qualitative architectural distinctions have to do with SDN control plane distribution: the distribution model, the sharing of information, the density or level of distribution, and the differentiated programmability and carrier logic in each node. Done correctly, carrier SDN can provide both multi-billion-endpoint carrier scale and five-nines-class availability for the Network Functions Virtualization (NFV) era.
Edited by Maurice Nagle