Posts

Showing posts from February, 2019

Day 43: Completion of Phase 2 - Zodiac FX connected to multiple controllers

In the previous article, we discussed how to tackle DoS attacks on the controller initiated by the switch, and we saw the HAProxy configuration for the same. It is time for implementation now!!! In today's article, I have also included a video of how the network will be working by the end of the setup. To ensure that the Zodiac FX switch contacts multiple controllers in case one of them fails, we can simply configure the controller IP address on the Zodiac FX switch to be the load balancer's IP address. While going through a few articles yesterday, I realized that there are many kinds of load balancing mechanisms. I had been attempting DNS load balancing thus far, but after understanding the article, I realized that a server-side load balancer would suit my scenario better. And the only change I had to make to the setup already built, to convert it into a server-side load balancer, was changing the controller IP address on the switch to 10.0.0.5. Although the architecture is working...

Day 41: Detection of DoS attacks on Load Balancer using HAProxy

In the previous post, we built the basic network. Let's recall the problem statement we are working on: securing the distributed SDN controller architecture against DoS attacks, specifically SYN-flood and Smurf attacks. Now that the basic SDN controller architecture is ready, we need to concentrate on securing this network. Have a look at the architecture we have built so far (an implementation-oriented diagram): Let's start with detecting SYN-flood attacks on the load balancer. Why the load balancer? Because all packets have to pass through it, so if any controller is under a DoS attack, we can find out by observing the packets the load balancer has to forward to the controller pool. I have already launched SYN-flood attacks on the controllers and stored the tcpdump data in 2 files. ping.txt consists of packets generated in a normal scenario when the switch has not installed any flow tables and contacts the controller...
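As a rough illustration of the detection idea, here is a small Python sketch that scans tcpdump's text output (e.g. captures stored in files like ping.txt) and counts SYN packets per source address. The regular expression, function names and threshold are my own assumptions for illustration, not the exact tooling used in this project:

```python
import re
from collections import Counter

# Matches tcpdump text lines such as:
#   12:00:01.000001 IP 10.0.1.3.51234 > 10.0.0.5.6633: Flags [S], ...
# and captures the source IP of a SYN packet.
SYN_RE = re.compile(r"IP (\S+?)\.\d+ > \S+: Flags \[S\]")

def count_syns(lines):
    """Count SYN packets per source IP in tcpdump text output."""
    counts = Counter()
    for line in lines:
        match = SYN_RE.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts

def flag_attackers(lines, threshold=100):
    """Return sources whose SYN count exceeds the (assumed) threshold."""
    return [src for src, n in count_syns(lines).items() if n > threshold]
```

The threshold would in practice be calibrated against a normal-traffic baseline such as the ping.txt capture, so that ordinary switch-to-controller chatter is not flagged.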

Day 39: Trial 5 - Completion of Phase 1 of Implementation

As discussed in the previous article, we shall achieve what we wanted on Day 36 by manually assigning VIPs to our load balancers. Once I figure out how to use keepalived, I shall automate the process of switching between the master and slave load balancers. Please refer to the video for the architecture we shall be building. The configuration details are given below. The configuration of the switches, controller and hosts remains as given in previous posts; only the load balancer cum router configuration has changed from the last few posts:

> sudo ip addr add 10.0.0.10/24 dev enp2s0
> sudo ip addr add 20.0.0.10/24 dev eth2
> sudo ifconfig enp2s0 10.0.0.5 netmask 255.255.255.0
> sudo ifconfig eth2 20.0.0.5 netmask 255.255.255.0
> sudo sysctl -w net.ipv4.ip_forward=1
> sudo sysctl -w net.ipv4.conf.all.send_redirects=0

On the active load balancer:

> arping -q -U -c 3 -I enp2s0 10.0.0.5
> arping -q -U -c 3 -I eth2 20.0.0.5

My previous post explains ...

Day 38: Networking Trivia

More often than not, I realize that I need to refresh my basic networking knowledge when faced with problems in building my current architecture. This post is meant for exactly that, and also to share a few enlightening insights I picked up today. Did you know that ARP is a layer 3 protocol and not a layer 2 protocol? If not, you have been learning the wrong thing so far. ARP is indeed a layer 3 protocol. ARP serves the purpose of mapping an IP address to a device's MAC address; hence, it needs to know the IP mappings of the systems as well. How would a layer 2 protocol possibly know about IP address and device mappings? Thus ARP, which resolves the mapping between IP and MAC, is a layer 3 protocol. How do you build a basic network with routers, switches and hosts? We have worked on this subject over the last few posts. If you have been following the blog, you would know that we established the network and ran specific commands to achieve a similar network. We...

Day 36: Keeping it Alived

In the previous posts, we discussed the implementation architecture and built it to a fairly good extent. Now, it is time to consider what would happen if the load balancer becomes a single point of failure. We need a standby load balancer that can become active once the master load balancer goes down. As explained yesterday, we shall be looking into Keepalived for this. Today, I started implementing a new version of the architecture, keeping in mind the introduction of a new system that would act as the standby load balancer. So, the new architecture I started implementing today looks something like this: These were the difficulties I faced with the above architecture: The controller needs to be configured with one static route to reach the router subnet. This is possible only by giving the next hop as the load balancer that is active at that instant. Since it is a static route, this configuration was not possible. As a consequence to the...
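For reference, a minimal keepalived VRRP stanza for the master load balancer might look like the sketch below. The interface name and VIP are taken from earlier posts, while the router ID and priority are placeholder values chosen purely for illustration:

```conf
# /etc/keepalived/keepalived.conf (sketch; values are assumptions)
vrrp_instance VI_1 {
    state MASTER          # the standby machine would use state BACKUP
    interface enp2s0      # interface facing the switch subnet
    virtual_router_id 51  # must match on master and backup
    priority 100          # the backup gets a lower priority, e.g. 90
    advert_int 1          # VRRP advertisement interval in seconds
    virtual_ipaddress {
        10.0.0.5/24       # the VIP the switches point at
    }
}
```

Note that keepalived only moves the VIP between machines; it does not by itself resolve the static-route difficulty described above.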

Day 35: Implementation of Architecture Phase 3 - Adding Backup mechanisms

In the previous post, we discussed a few problems we were facing with the working of the architecture. To be precise, it was about why the switch was not accepting packets from the backup controller even after traffic redirection from the load balancer. Well, since we have already configured the controller IP address on the switch, the switch expects no packets from controllers bearing any other IP address. So, how do we deal with this problem? We can introduce NAT functionality on the load balancer that changes the IP address when packets go from the switch subnet to the controller pool subnet. This way, even though the request may not be answered by a controller with a specific IP address, it can appear to have that IP address. We shall take this up for implementation next week. Now, we first have to concentrate on securing our load balancer with a standby system that can act as backup when the load balancer goes down. There is something called ...
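As a sketch of the NAT idea, the rewrite could be expressed with iptables on the load balancer. The interface name and addresses below are assumptions based on the addressing used in earlier posts, not a tested configuration:

```shell
# Allow the load balancer to forward between the two subnets
sudo sysctl -w net.ipv4.ip_forward=1

# Packets from the backup controller (10.0.0.6, assumed) heading out
# towards the switch subnet are rewritten to carry the primary
# controller's address (10.0.0.7, assumed), so the switch still sees
# the controller IP it was configured with.
sudo iptables -t nat -A POSTROUTING -s 10.0.0.6 -o enp2s0 \
     -j SNAT --to-source 10.0.0.7
```

The kernel's connection tracking would then reverse the translation for the switch's replies automatically.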

Day 33: Implementation of the Architecture Phase 2 - Load Balancing

Today, we shall get into the implementation details of the architecture that was explained in the last article. The load balancer configuration is as follows:

frontend ft_web
    bind 10.0.1.2:6633
    default_backend bk_web

backend bk_web
    balance source # for IP affinity
    server c1 10.0.0.7:6633 check
    server c2 10.0.0.8:6633 check
    server c3 10.0.0.9:6633 check
    server c0 10.0.0.6:6633 check backup

In the above configuration, we are asking the load balancer to listen on 10.0.1.2, an address that belongs to the load balancer's own machine, and we specify the port on which listening should happen. Since the OpenFlow standard communicates over port 6633, we have used that port. We have listed the default backend servers that the switches can access - c1, c2 and c3. c0 is also a backend server, acting as a backup controller in case one or more of them fail. The bac...

Day 32: Implementation of Architecture Phase 1

After committing some mistakes, I have finally conquered a few of them by achieving some functionality in building the proposed architecture today! So, let's look into how to build a basic SDN network where my Zodiac FX switches are connected to a network of controllers. The architecture being built looks something like this: There is only one change from the previously proposed architecture: the router functionality is integrated with the load balancer on the same system. Let's take a look at the IP addresses of the systems:

H1: 192.168.1.10/24
H2: 192.168.2.10/24
H3: 192.168.3.10/24

For easier understanding of the system, I've introduced an extra host on Subnet 1 connected to my S1 Zodiac FX switch; let's call it H4.

H4: 192.168.1.20/24
S1: 10.0.1.3/24, Controller of S1: 10.0.0.7, Gateway for S1: 10.0.1.2
S2: 10.0.1.4/24, Controller of S2: 10.0.0.8, Gateway for S2: 10.0.1.2
S3: 10.0.1.5/24, Controller of S3: 10.0....

Day 30: Making new mistakes!!!

It is Day 30 already!!! I'm still making mistakes, but more importantly, learning from them! So today, I started to implement the HAProxy load balancer as we had discussed in yesterday's post. As stated in Day 26's post, we needed 8 hosts, and I connected each of them as mentioned in that day's post. While doing this, I came across many errors: a few gateway errors, and a few ICMP unreachable and host unreachable errors. So, to simplify the architecture, I introduced a D-Link switch between all the Zodiac FX switches and the load balancer. But this caused a problem earlier, when all my switches were connected to a common D-Link switch and to each other, since it formed a loop. So this time, I ensured that a loop doesn't occur by removing the connections between the switches (I will let you know if this approach fails at a later stage). Well... why did I do that? Since all my Zodiac FX switches are configured on the same subnet, I do not require a different ...

Day 29: HAProxy Implementation Details

HAProxy is a load balancer that can do both HTTP load balancing and TCP load balancing. We shall be using it for TCP load balancing, as discussed in the previous post. In this post, we shall look into a few implementation details of the load balancer, starting with how to install it:

sudo apt show haproxy
sudo add-apt-repository ppa:vbernat/haproxy-1.7
sudo apt update
sudo apt install -y haproxy

The above commands will install the software. Now, it's time to program what we need. There are three programmable components of HAProxy - ACLs, frontends and backends. The diagram below gives a better understanding of the same. Now, we need to configure the load balancer to act as an L4 load balancer. To do so, we need to add code to an already existing configuration file, which we can open with:

sudo nano /etc/haproxy/haproxy.cfg

A file would be displayed; append the below code t...
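To make the three components concrete, here is an illustrative sketch of what an L4 (mode tcp) configuration combining an ACL, a frontend and a backend can look like. The section names, subnet and server address are placeholders of my own, not the exact configuration used in this project:

```conf
frontend ft_openflow
    mode tcp                           # L4: balance raw TCP, not HTTP
    bind *:6633                        # OpenFlow's conventional TCP port
    acl from_switches src 10.0.1.0/24  # ACL: match traffic from the switch subnet
    use_backend bk_controllers if from_switches

backend bk_controllers
    mode tcp
    balance roundrobin
    server c1 10.0.0.7:6633 check      # 'check' enables health checking
```

The ACL decides which connections a frontend sends where; the backend then picks a server according to its balance algorithm.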

Day 28: How to Load Balance?

In Day 26's post, we discussed the architecture we are going to build. The first component that we need up and running for the entire network to work is the load balancer. Right now, the connections are made in such a way that my Zodiac FX switches are connected to the load balancer alone. The load balancer, in turn, is connected to the network of distributed controllers. Without the load balancer, the switches cannot access or communicate with their respective controllers. Let us first understand the properties of the load balancer that we need to concentrate on. The load balancer primarily forwards OpenFlow packets from the switch to the controller and vice versa. But as we have studied earlier, OpenFlow is an application layer protocol encapsulated in TCP packets. Thus we need to redirect these TCP packets at the load balancer, and so we shall be building an L4 load balancer that acts at the TCP level. The load balancer needs to act differently in different scen...

Day 27: Using Kafka to stream real time network data

Hey guys! Today we will explore how Kafka can be used to stream real-time network data. We learnt what Kafka is all about in the previous posts; today, we will do some hands-on work. To quickly summarize the Kafka architecture: Kafka messages are organized into topics. If you wish to send a message, you send it to a specific topic, and if you wish to read a message, you read it from a specific topic. A consumer pulls messages off a Kafka topic, while producers push messages into a Kafka topic. Lastly, Kafka, as a distributed system, runs in a cluster, and each node in the cluster is called a Kafka broker. To capture network traffic, we will use tcpdump. This utility is extremely helpful because of the wide variety of options it provides. To install Kafka on my system, I followed this tutorial. One of the important things to consider in this application is scalability. What will happen to your model if there is a huuuge amount of traffic suddenly flowing in? How will it react...
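To make the hands-on part concrete, here is a small Python sketch that pipes tcpdump's line output into a Kafka topic, keyed by source address so that one host's traffic stays in one partition. It uses the kafka-python package, which is an assumption on my part, and the topic name, broker address and field parsing are placeholders:

```python
import sys

def packet_to_record(line):
    """Turn one line of `tcpdump -l -nn` text output into a (key, value) pair.

    The field right after the 'IP' token (the source address.port) becomes
    the message key, so all traffic from one source maps to one partition.
    """
    fields = line.split()
    try:
        key = fields[fields.index("IP") + 1].encode()
    except (ValueError, IndexError):
        key = b"unknown"  # line did not look like an IP packet summary
    return key, line.strip().encode()

def stream(producer, topic, lines):
    """Push every captured line into the given Kafka topic."""
    for line in lines:
        key, value = packet_to_record(line)
        producer.send(topic, key=key, value=value)

def main(topic="network-traffic", servers="localhost:9092"):
    # Deferred import so the parsing helpers work without Kafka installed.
    from kafka import KafkaProducer  # pip install kafka-python
    producer = KafkaProducer(bootstrap_servers=servers)
    stream(producer, topic, sys.stdin)
    producer.flush()
```

It would be fed from the capture side with something like `sudo tcpdump -l -nn | python3 -c "import stream_capture; stream_capture.main()"`; tcpdump's `-l` flag matters here, since it line-buffers the output instead of batching it.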

Day 26 part 1: "Redirecting" back in the right direction

Welcome back!!! Sorry for not updating for the past few days. Days 23, 24 and 25 were spent in introspection, reading up on how to build Ryu applications and how to achieve the proposed architecture. To summarize the mistakes made so far, and where we should rectify ourselves: the architecture proposed in the previous post was wrong in one place. There is a D-Link switch that connected all the switches to the controller. This is catastrophic, and I shall explain why. Although my switches are designed to act as routers, they still exchange ARP, ICMP and a few other packets with each other and the controller. When ARP packets are sent, they get stuck in a loop; thus discovering the MAC addresses of the neighboring switch also leads to an infinite loop. The loop is formed by the Zodiac FX switches being connected to a common D-Link switch. I had committed a similar mistake previously, when my Zodiac FX switches were acting as layer-2 switches. I assumed that since they are layer-3 now, the loo...

Day 22: An alternate architecture for implementation

Looking at the second strategy proposed in Day 20's article, we shall explore its implementation details. The figures below give a glimpse of the implementation-specific details of the architecture, its features, and a few limitations or further clarifications needed. The above diagram depicts the block diagram explaining how the architecture can be built. Since each machine I'm using has only one port, I have built the architecture accordingly. Switch 1 and switch 2 could even be the same switch; they are shown separately for better presentation. From previous experimentation, I have found that the controller and the connected switch must be in the same subnet, otherwise the switch remains unreachable from the controller. This is the reason I have assigned the same subnet to the switches, the LB and all the controllers - both hierarchy 1 and hierarchy 2. Since the load balancer also runs a controller, we could as well make it the m...

Day 21: Cloud strategy

In this post, I shall be sharing my thought process on the cloud strategy that we could adopt to implement our distributed hierarchical SDN controller architecture. The block diagram summarizes the proposed way to build it (it is still very much under development). As we can see, I have proposed shifting the whole controller architecture to the cloud. Why? This way of implementation addresses many of the issues we stated in the previous article. Replacing the master controller with the backup controller is easier, since an instance can already be maintained; we only need to migrate VMs and take care of instance IP addresses and port numbers. The same applies to the controllers on the right-hand side of the diagram. Implementation-specific details like creating instances, assigning IP addresses, take-over mechanisms, synchronization logic and security algorithms are yet to be dealt with. Some time was also spent in critical thin...