Day 33: Implementation of the Architecture Phase 2 - Load Balancing

Today, we shall get into the implementation details of the architecture that was explained in the last article. The load balancer (haproxy) configuration is as follows:

frontend ft_web
    bind 10.0.1.2:6633 
    default_backend bk_web

backend bk_web
    balance source  # for IP affinity
    server c1 10.0.0.7:6633 check
    server c2 10.0.0.8:6633 check
    server c3 10.0.0.9:6633 check
    server c0 10.0.0.6:6633 check backup

In the configuration above, we are asking the load balancer to listen on 10.0.1.2, an interface that belongs to the same machine as the load balancer, and we are specifying the port on which it should listen. Since the OpenFlow standard conventionally communicates over port 6633, we have used the same. We have listed the default backend servers that the switches can reach - c1, c2 and c3. c0 is also a backend server, marked as a backup controller in case one or more of the others fail. The backup mechanism isn't working in the current configuration; we shall figure out how to make it work in the following articles.
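The `balance source` line works by hashing each client's source IP address onto the set of healthy servers, which is why a given switch keeps landing on the same controller as long as the server set does not change. The toy script below is only an illustration of that idea - haproxy's real hash function is different, and the switch IPs here are made up:

```shell
# Toy illustration of source hashing: map each switch IP onto 3 controllers.
# HAProxy's actual hash differs; this only shows the "same IP -> same server" idea.
servers=(c1 c2 c3)
for ip in 10.0.0.20 10.0.0.21 10.0.0.20; do
    h=$(printf '%s' "$ip" | cksum | cut -d' ' -f1)
    echo "$ip -> ${servers[$((h % 3))]}"
done
```

Note that the first and third lines of output pick the same controller, since the same source IP always hashes to the same index.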

The reason it may not be working:
Unlike the usual scenario where a load balancer is used, we are using it only for OpenFlow communication between Zodiac FX switches and Ryu controllers. There is a difference: in the usual scenario, we would be load balancing traffic headed towards, say, a particular website. The DNS can be programmed to return the IP addresses of different servers on each lookup, so clients end up contacting different server addresses. In the Zodiac FX scenario, however, we have already configured a static controller IP address on the switch. So even if the load balancer redirects the packets to another backend controller, it may not work unless that controller's IP address is spoofed.

I have stated my assumptions in the above paragraph, and I may be wrong. I am hoping to learn the answer soon, so in case you happen to know it, please drop a comment.
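Whether the backup mechanism actually works can at least be observed directly rather than guessed at, by exposing haproxy's runtime state. This is a sketch, not part of the original setup: it assumes you can add a stats socket to the `global` section of `/etc/haproxy/haproxy.cfg`, and the path and permissions below are my own choices:

```
global
    # Runtime admin socket (path and mode are assumptions for this sketch)
    stats socket /run/haproxy/admin.sock mode 660 level admin
```

After restarting haproxy, something like `echo "show servers state bk_web" | sudo socat stdio /run/haproxy/admin.sock` (requires the socat package) should list the operational state of each server, so you can watch whether c0 is promoted when c1, c2 and c3 are marked DOWN by the health checks.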

We looked at the configuration for the haproxy load balancer. But it's more important to learn a debugging methodology for haproxy, since we shall be playing around with the configuration and trying to add more components to it in the upcoming classes. By default, haproxy was not maintaining any logs on my system, so I initially found it difficult to comprehend what was happening. To break open this black box, we shall first see how to store logs in a file we can refer to, to confirm whether everything is working fine and, if not, what the error is.

Perform the steps below to log the haproxy flow:

> sudo gedit /etc/rsyslog.conf

A file will open. We can observe a few commented lines which are responsible for UDP and TCP syslog reception. Uncomment these lines, then save and close the file.
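For reference, on a typical Ubuntu install the reception lines appear like this while still commented out (the exact syntax depends on the rsyslog version; older releases use `$ModLoad imudp` and `$UDPServerRun 514` instead):

```
# provides UDP syslog reception
#module(load="imudp")
#input(type="imudp" port="514")

# provides TCP syslog reception
#module(load="imtcp")
#input(type="imtcp" port="514")
```

Uncommented, they enable rsyslog to accept syslog messages over UDP and TCP on port 514, which is needed if haproxy is configured to log to 127.0.0.1 rather than to the local /dev/log socket.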

> sudo gedit /etc/rsyslog.d/haproxy.conf

This creates and opens the file. Type in the following one-line rule:

if ($programname == 'haproxy') then -/var/log/haproxy.log

Save and close the file.

> sudo service rsyslog restart
> sudo systemctl restart haproxy
> sudo gedit /var/log/haproxy.log

This file will give you all the logs required for debugging. New logs are appended to the file, so do not forget to reload it each time if you have kept it open.
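Once logging works, each TCP connection produces a line naming the backend and server that handled it, which is the quickest way to see which controller a switch actually landed on. The log line below is fabricated in the usual tcplog-style layout just to show the idea - the real fields vary with your log format settings:

```shell
# Fabricated haproxy tcplog-style line (fields vary with your log settings)
line='Oct 10 12:00:01 lb haproxy[123]: 10.0.0.20:5000 [10/Oct/2020:12:00:01.000] ft_web bk_web/c1 1/0/500 1024 -- 1/1/1/1/0 0/0'

# The 9th whitespace-separated field is backend/server, i.e. which controller was used
server=$(echo "$line" | awk '{print $9}')
echo "$server"   # -> bk_web/c1
```

Running the same extraction over the whole log (for example with `awk '{print $9}' /var/log/haproxy.log`) gives a quick picture of how connections are being distributed across c1, c2, c3 and c0.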


Now, the most exciting part: the video of the implementation. I have shown just the setup and how it works. Since I've given all the commands and instructions in the previous posts, I have not repeated them in the video. But you will get a fair idea of how to set up the network and what output to expect as an indication that the network you've built is actually working.


Refer to previous and next articles here.

Author: Shravanya
Co-Author: Swati
