Layer 1 SFC with BigSwitch
by Curtis Collicutt on February 4, 2019
In previous posts, we have tried to define what Service Function Chaining (SFC) is, as well as how one can “fake” SFC at layer 3. However, based on our definition of SFC, we want to be moving packets from one virtual port pair to another, essentially at the first layer of the OSI model, mimicking, usually through virtualization and Software Defined Networking (SDN), a series of physical servers connected directly in a chain. This is what one would arguably call “pure” SFC.
However, while our destination is “pure” SFC, we have a few stops to make along the way. The first stop was layer 3, and our second stop is SFC on a single, physical switch.
SFC almost always requires a Software Defined Networking (SDN) system. Depending on your definition of SDN, OpenStack Neutron plus the networking-sfc plugin can achieve SFC. However, plain Neutron in OpenStack currently cannot. Nor can most standard network operating systems. So, practically speaking, SFC requires SDN. In this discussion, we’ll talk about BigSwitch’s SDN as one way to accomplish “pure” SFC.
Using the BigSwitch SDN system, specifically the Big Monitoring Fabric Inline product, we can easily create service chains, but instead of virtual ports in an Infrastructure as a Service (IaaS) system such as OpenStack, we will build them on physical ports on a single physical switch.
This single physical switch is a whitebox Edgecore 5712, and it’s running the BigSwitch Switch Light operating system. A BigSwitch Controller manages the switch. With these pieces, we have a small but powerful SDN implementation.
We can use the BigSwitch REST API available on the controller to create chains, services, and instances, and automatically (and near instantaneously) swap functions in and out without packet loss. We have written a small Python command line application that calls the BigSwitch APIs to perform the swap. We called this command line application simply sfc (you will see it in the demo video).
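As a rough sketch of how such a swap could be driven from Python, the snippet below builds and sends a REST call to a controller. Note that the controller hostname, endpoint path, and payload shape here are placeholders for illustration, not the actual Big Monitoring Fabric API schema; the request-building step is kept as a pure function so it can be inspected before anything is sent.

```python
import json

# Hypothetical controller address -- the real Big Monitoring Fabric
# endpoint paths and payload schema will differ.
CONTROLLER = "https://bmf-controller.example.com:8443"

def build_swap_request(chain, old_service, new_service):
    """Build the (url, body) for replacing one service in a chain.

    Pure function: returns what would be sent, without sending it,
    so the request can be reviewed or unit tested.
    """
    url = f"{CONTROLLER}/api/v1/chain/{chain}/service"
    body = json.dumps({"remove": old_service, "add": new_service})
    return url, body

def swap_service(session, chain, old_service, new_service):
    """Send the swap via an HTTP session (e.g. requests.Session)."""
    url, body = build_swap_request(chain, old_service, new_service)
    resp = session.put(url, data=body,
                       headers={"Content-Type": "application/json"})
    resp.raise_for_status()
    return resp
```

In our sfc tool, a call like `swap_service(session, "demo-chain", "linux-fw", "pa-fw")` is conceptually what happens when we swap the Linux firewall for the Palo Alto firewall.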
In our lab, we built a BigSwitch Controller, a physical switch, and a single physical hypervisor based on KVM and libvirt. The hypervisor, which we called sfc-libvirt, has several physical interfaces connected to the switch and hosts the four virtual machines we are using:
sfc-libvirt$ virsh list
 Id    Name                           State
----------------------------------------------------
 2     pa-fw                          running
 3     client                         running
 4     server                         running
 5     linux-fw                       running
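When scripting against the lab hypervisor, it is handy to parse the virsh list output into a structure. A minimal sketch, assuming the standard three-column layout shown above:

```python
def parse_virsh_list(output):
    """Parse `virsh list` output into a {name: state} dict.

    Assumes the standard columns: Id, Name, State. Header,
    separator, and blank lines are skipped.
    """
    domains = {}
    for line in output.splitlines():
        parts = line.split()
        # Data rows start with a numeric domain Id.
        if len(parts) < 3 or not parts[0].isdigit():
            continue
        name, state = parts[1], " ".join(parts[2:])
        domains[name] = state
    return domains

# Example with the lab's four guests:
sample = """\
 Id    Name      State
----------------------------------------------------
 2     pa-fw     running
 3     client    running
 4     server    running
 5     linux-fw  running
"""
print(parse_virsh_list(sample))
# -> {'pa-fw': 'running', 'client': 'running', 'server': 'running', 'linux-fw': 'running'}
```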
The chain will consist of three instances: the client node, a firewall, and the server node.
Via SFC, we can swap out the Linux firewall for the Palo Alto firewall. The Linux firewall is just a bridge, with two of its ports making up a port pair in the chain. The Palo Alto firewall is configured as a bridge as well, though in Palo Alto nomenclature this is called a “virtual wire”; either way, each firewall simply moves packets from one interface to the other as part of the chain.
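A Linux bridge like the one backing the linux-fw guest can be built with a few iproute2 commands. The helper below only generates the command strings rather than running them, so they can be reviewed first; the bridge and interface names are placeholders:

```python
def bridge_commands(bridge, ports):
    """Return the `ip` commands that create a bridge and enslave ports.

    Generating strings (instead of shelling out directly) keeps the
    function pure; the caller can feed them to subprocess if desired.
    """
    cmds = [
        f"ip link add name {bridge} type bridge",
        f"ip link set {bridge} up",
    ]
    for port in ports:
        cmds.append(f"ip link set {port} master {bridge}")
        cmds.append(f"ip link set {port} up")
    return cmds

# The two bridged ports form the port pair the chain steers through.
for cmd in bridge_commands("br-fw", ["eth1", "eth2"]):
    print(cmd)
```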
Essentially we can easily, and instantaneously, swap out members of the chain without loss of packets or increased latency: no truck rolls, no physical network switching, just cheap and fast API calls.
Below is a demonstration video in which we have a simple chain of client, firewall, and server. We start a ping running from the client to the server, which make up the start and end of the chain, and then, using our in-house command line application, swap between a Linux bridge/firewall and a Palo Alto firewall, showing when traffic flows over the Linux firewall and when it flows over the Palo Alto firewall.
The possibilities provided by SFC are endless, from easily upgraded customer services to self-healing systems and more. The power of SFC is readily apparent even when limited to physical ports and a single physical switch. The ability to automate these chains extends the capability even more.
Imagine what we could do with a large IaaS system of hundreds of hypervisors, where complex chains can be built across compute nodes, potentially even across geographies. Indeed, not all products and services will require “pure” SFC, but once an organization can implement port-to-port SFC, other models become easier to design and implement.