Day 20: Implementation of the architecture using Zodiac FX switches

Up until now, we have seen how to emulate the SDN controller architecture on software switches in Mininet. Now, we shall look at whether we can implement the same architecture on hardware switches, namely the Zodiac FX. Zodiac FX switches are among the most commonly used SDN-enabled switches in research test beds. Refer to the Zodiac FX user guide to get an idea of how it looks, what its ports can be connected to, and how to set it up.

To connect the Zodiac FX switch to your computer, follow the instructions below:
  1. Connect one end of the USB cable to the Zodiac FX and the other end to the computer
  2. Install PuTTY on your machine
    > sudo apt install putty
  3. Open a terminal and see which port your Zodiac FX switch is connected to
    > dmesg | grep tty
    The above command displays all devices connected to your tty ports. By default, the Zodiac FX shows up as ttyACM0 on a Linux-based machine
  4. Open PuTTY
    > sudo putty
  5. A window is displayed:
    1. select 'Serial' as the connection type
    2. leave the speed at 9600
    3. change Serial line to /dev/ttyACM0 (if that is the port your ACM device is connected to)
  6. A PuTTY window is displayed. If it appears blank, press Enter and the CLI should appear. If this does not happen, your connections are faulty and you will have to recheck them.
  7. In the command line interface, type 'help' to discover what commands are available
  8. Firstly, we shall set the IP address and other configuration parameters:
    1. > config
    2. > show config
    3. > set ip-address 10.1.0.10
    4. > set netmask 255.255.255.0 (this is equivalent to /24)
    5. > set of-controller 10.1.0.5
    6. > save
    7. > restart
  9. Open PuTTY with the same configuration again and check whether the changes are reflected.
  10. Now open a terminal and set the IP address of your system
    > sudo ip addr add 10.1.0.6/24 dev enp2s0 (you could instead make this machine the controller by assigning it the IP address 10.1.0.5)
  11. Connect one end of an Ethernet cable to the 4th port of the Zodiac FX and the other end to the computer (port 4 is reserved for the OpenFlow controller connection by default)
  12. Observe that the controller and the switch are on the same subnet; this is necessary for them to communicate. If this condition is not satisfied, they will not be reachable from each other.
  13. Let's test the communication between the switch and controller by pinging the switch from the controller or the machine to which the switch is connected:
    > ping 10.1.0.10
  14. If this ping test passes, everything is fine and you can continue ahead. Otherwise, recheck the connections and IP addresses.
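You can also verify the connection from the switch side. The Zodiac FX CLI has an OpenFlow context (described in the user guide); entering it and querying the status should report the connection state and the negotiated OpenFlow version once the controller is running:
    > openflow
    > show status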
Now we are ready to connect hosts and the controller and build a simple network. Refer to the video below to build a simple SDN network with a single controller and a single switch.
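Before the switch can complete its OpenFlow handshake, a controller must actually be listening on 10.1.0.5:6633. As a quick check, here is a minimal Ryu sketch (the file name handshake_logger.py and this particular app are my own illustration, not from the video) that just logs the datapath ID once the Zodiac FX connects:

    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls

    class HandshakeLogger(app_manager.RyuApp):
        # Fires when the switch answers the features request, i.e. the
        # OpenFlow session with the Zodiac FX is fully up.
        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def features_handler(self, ev):
            self.logger.info("switch connected, datapath id: %016x",
                             ev.msg.datapath.id)

Run it with 'ryu-manager --ofp-tcp-listen-port 6633 handshake_logger.py'; if the log line appears, the switch has reached the controller.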



But before doing this, let us analyze whether we can build the required architecture with Zodiac FX switches, given that each switch can be connected to only 1 controller. In the simulation phase we had the liberty of connecting each switch to more than 1 controller, but on the Zodiac FX you can observe that we manually set the IP address and port of the single controller the switch is to contact. Let us again go through the requirements of the architecture we are planning to build:
  • each switch needs to be connected to 3 controllers (1 hierarchy 2 controller and 2 hierarchy 1 controllers)
  • the two hierarchy 1 controllers should each be connected to at least 3 switches, and so should every other controller
  • some sort of communication needs to exist between controllers for synchronization
Since Zodiac FX switches cannot be connected to more than 1 controller, we need a workaround to build the required architecture.

Alternative 1:

The switch is connected to any one hierarchy 2 controller, and this controller takes care of forwarding all OpenFlow messages from its switches to the 2 master controllers in hierarchy 1. The challenge we would face in this approach is that switch migration cannot take place: even if the controller goes down, the switch has to be manually reconfigured to connect to another controller. (A sketch of the forwarding idea common to both alternatives appears after Alternative 2.)

Alternative 2: 

The switch can be connected to any one of the hierarchy 1 master controllers. The master controller acts as a load balancer: it receives OpenFlow messages from all switches and forwards them to the respective controller, and the controller it forwards to can change dynamically without manual configuration on the switch. In both these architectures, the single point of failure cannot be accounted for without manual configuration of switches; I have not been able to think of a workaround that completely takes care of the single point of failure. The one disadvantage of Alternative 2 over Alternative 1 is that the master controllers in hierarchy 1 must possess many interfaces, 6 at a minimum: 3 interfaces to connect to 3 hierarchy 2 controllers and the other 3 to connect to 3 switches. To ensure scalability, we have to consider at least 3 controllers in hierarchy 2, each connected to at least 1 switch.
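Here is a rough sketch of the forwarding idea behind both alternatives, written as a plain TCP relay. The backend controller addresses are assumed for illustration only, and each switch connection is pinned to one backend for its lifetime, since an established OpenFlow session cannot be handed between controllers mid-stream:

    import itertools
    import socket
    import threading

    # Backend controllers the forwarder can hand switches to; these
    # addresses are assumed for illustration only.
    BACKENDS = [("10.1.0.7", 6633), ("10.1.0.8", 6633)]
    _next_backend = itertools.cycle(BACKENDS)  # naive round-robin

    def pipe(src, dst):
        # Copy bytes one way until either side closes the connection.
        try:
            while True:
                data = src.recv(4096)
                if not data:
                    break
                dst.sendall(data)
        except OSError:
            pass  # the other direction has already torn things down
        finally:
            src.close()
            dst.close()

    def handle(switch_sock):
        # Pin this switch's OpenFlow session to a single backend.
        backend = socket.create_connection(next(_next_backend))
        threading.Thread(target=pipe, args=(switch_sock, backend), daemon=True).start()
        threading.Thread(target=pipe, args=(backend, switch_sock), daemon=True).start()

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("10.1.0.5", 6633))  # the address the Zodiac FX points at
    listener.listen(5)
    while True:
        sock, addr = listener.accept()
        handle(sock)

The next(_next_backend) call is where real selection logic (controller load, liveness, switch identity) would replace the naive round-robin.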

Refer to the diagram below for a pictorial view of how I want to achieve Alternative 2:



Figure: Alternative 2 Block Diagram

Single Point of Failure: Possible Solutions

  • If we run all our controllers in the cloud, we can dynamically create and destroy controllers that operate on the same IP address and port
  • We can treat each controller as a load balancer that runs on port 6633. Its only job is to forward OpenFlow messages from port 6633 to dynamically created ports on the same IP address. The dynamically created ports deal with the topology, which flow tables are to be installed, and so on. The process on port 6633 keeps track of the mapping between the various ports as they change dynamically. The single point of failure caused by DoS attacks can be addressed to a great extent by blocking traffic from a particular source, or to a particular destination, through a thresholding mechanism (a minimal sketch follows this list). But single points of failure arising from other factors, such as hardware issues, remain unresolved.
  • We can limit the architecture to emulation
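For the thresholding mechanism in the second point, a minimal sketch could look like the following. It counts connection attempts per source IP in a sliding window; the window and limit are arbitrary illustration values, and a real deployment would rather track OpenFlow message rates:

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 10   # sliding window length (arbitrary)
    MAX_IN_WINDOW = 20    # allowed attempts per source in the window

    _history = defaultdict(deque)  # source IP -> recent attempt times

    def allow(source_ip):
        # Record this attempt, drop timestamps older than the window,
        # and refuse the source once it exceeds the threshold.
        now = time.monotonic()
        q = _history[source_ip]
        q.append(now)
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()
        return len(q) <= MAX_IN_WINDOW

In the forwarder sketch above, allow(addr[0]) would be called right after accept(), closing the socket when it returns False.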
The first and second points proposed above need further reading. We shall discuss these approaches and their feasibility in tomorrow's article. If you find any loopholes in the approaches, feel free to leave a comment.

Author: Shravanya
Co-Author: Swati

Comments

  1. Some great approaches laid down! Kudos!
    I have a conceptual doubt about the second approach (load balancer). 'Its only job is to forward OpenFlow messages from port 6633 to dynamically created ports on the same IP address' - here I didn't quite understand why the controllers need to be on the same IP. I am assuming that all the controllers are running on the same host, and that is why they share the same IP. Is this what you meant?

    Replies
    1. Yes. I was thinking of an approach where the Ryu controller keeps running on 6633 and acts as a load balancer. We can have other ports on the system active to reply to the OpenFlow messages. In case any port becomes blocked due to too much traffic, we can activate another port to take care of some part of the traffic. This is very similar to how we would do things in the cloud: replace any VM with another once it goes down. The proposal is still in the ideation phase, though.

    2. Approach 2 would be desirable. You need to be clear on how an LB works. The LB need not be a controller at all; its job is just to forward the packet to the required controller. That is the typical job of an LB. There is no SPOF when the LB has its own standby: the two LBs would monitor each other, and the active (master) one would take over the IP address of the LB. You can look at 'keepalived' for this purpose. This is how an LB with failover support would typically work. Thus, the switch will be configured with just one IP (or domain name). The LB needs to ensure that all the packets/TCP connections from a switch reach the same backend controller. Whether the backend controller is a master controller (hierarchy 1) or a workhorse controller (hierarchy 2) does not matter to the switch. Typically such an IP address, e.g. 10.1.0.5, would be configured using the Virtual IP (VIP) mechanism.

    3. I shall look into 'keepalived' and how to realise the architecture in detail. We shall work on this from Monday and keep you updated. Right now I'm not too clear on how the architecture should work; hopefully, after going through a few sources, I will be in a position to start implementing.

