ONOS SDCLOUD Cluster

ONOS (Open Network Operating System) is a software-defined networking operating system built for scalability, high availability, and high performance. ONOS is based on a solid architecture and is easily configured for setting up cloud services.

In this setup, ONOS is used as the SDN network controller to facilitate networking. The sections below explain the ONOS cluster configuration with the SD (software-defined) cloud, which includes the following:

Node Configuration Details

The following diagram indicates the ONOS node configuration details.

img-18

Components

  • Ethernet Interface – eth0
  • openstack-onos-ctrl (x1)
  • ONOS-ctrl (x3)
  • openstack-onos-cp (x2)
  • OVS (Open vSwitch x2) – ‘br-int’ and ‘br-ex’
  • Internal Connectivity
  • External Connectivity

Connection Details

  • All the nodes in the cluster are connected through a single ‘eth0’ interface.
  • There are two compute nodes, ‘openstack-onos-cp1’ and ‘openstack-onos-cp2’, and each has two OVS switches, ‘br-int’ and ‘br-ex’.
  • A VXLAN tunnel, ‘vxlan0’, connects the two compute nodes, providing L2 connectivity between user VMs on the compute nodes (see the sketch after this list).
  • The ‘openstack-onos-ctrl1’ node has a ‘subbr’ bridge that provides external connectivity for user VMs.
  • ‘vxlan+42’ is connected between ‘subbr’ and ‘br-ex’ on the openstack-onos-cp1 node, and ‘vxlan+43’ is connected between ‘subbr’ and ‘br-ex’ on the openstack-onos-cp2 node.
  • ‘subbr’ is connected to ‘eth0’, through which it can access the internet.
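
For reference, a minimal sketch of how a tunnel port like ‘vxlan0’ could be created by hand is shown below; in this setup ONOS programs the tunnels through OVSDB, and the bridge name and peer IP used here are assumptions.

# Illustration only: add a VXLAN tunnel port on br-int pointing at the peer compute node
ovs-vsctl add-port br-int vxlan0 -- set interface vxlan0 type=vxlan options:remote_ip=<peer-compute-node-IP>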

OpenStack Cluster Details

This section describes the OpenStack cluster details, which include the following:

  • Nova
  • Neutron
  • Keystone
  • Glance
  • Cinder
  • ONOS
  • Layer2/Layer3
  • External Access
  • Support Services

Nova

Nova provides the compute service in OpenStack and is used for hosting and managing cloud computing systems. Nova’s messaging-based architecture allows all of its components to run on several servers and to communicate through a message queue. Deferred objects are used to avoid blocking while a component waits on the message queue for a response. Nova and its associated components use a centralized SQL-based database.

The following diagram indicates the Nova OpenStack Cluster configuration.

img-2

Connection details

The following table indicates the connection details.

Service Name    Node name               Frontend port    Backend port
nova-api        openstack-odl-ctrl1     8775             18774
nova-compute    openstack-onos-ctrl1    8774             18775
  • Nova can be accessed through the ‘service-lb1’ node, which acts as a load-balancer front end, and then through an Apache server in reverse-proxy configuration.
  • Nova has various components such as nova-scheduler, nova-conductor, nova-cert, and nova-consoleauth. These components communicate with each other through the ZeroMQ messaging server. Nova maintains its database in MySQL.
  • Nova compute is deployed on the openstack-odl-cp1 and openstack-odl-cp2 nodes and is responsible for bringing up the user VMs with the help of the QEMU hypervisor.
  • Nova services can be accessed directly through the Horizon UI running on service-lb1 or through the Python client on the openstack-ctrl1 node (see the sketch after this list).
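
As a rough sketch, the Nova services and a test VM could be checked from the controller with the standard OpenStack CLI; the credentials file, image, flavor, and network names below are placeholders, not values from this setup.

# Load admin credentials (file name assumed) and list the Nova services and their hosts
source admin-openrc.sh
openstack compute service list
# Boot a test VM on one of the compute nodes (image, flavor, and network are placeholders)
openstack server create --image cirros --flavor m1.tiny --network demo-net test-vm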

Neutron

Neutron provides the networking service in OpenStack between interface devices (e.g., vNICs) managed by other OpenStack services (e.g., Nova).

img-12

Connection details

The following table indicates the connection details.

Service Name      Node name              Frontend port    Backend port
neutron-server    openstack-odl-ctrl1    9696             19696
  • In this setup, ONOS is used for networking in OpenStack.
  • Neutron can be accessed from the service-lb1 node, which acts as a load-balancer front end, and then through an Apache server in reverse-proxy configuration.
  • Neutron has various components such as the metadata agent, the DHCP agent, and the Neutron server, which communicate through the RabbitMQ messaging server. Neutron maintains its database in the MySQL server.
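
A minimal sketch of exercising the Neutron API in this setup is shown below; the network and subnet names are placeholders, and each call is relayed to the SDN controller through the service plugin.

# Create a tenant network and subnet; neutron-server passes these calls to the controller
openstack network create demo-net
openstack subnet create --network demo-net --subnet-range 10.0.0.0/24 demo-subnet
# Confirm that the Neutron API answers on the load-balanced endpoint
openstack network list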

Keystone

Keystone is the identity service used by OpenStack for authentication and high-level authorization. It provides API client authentication, service discovery, and distributed multi-tenant authorization. Several OpenStack services, such as Glance, Cinder, and Nova, are authenticated by Keystone.

The following diagram indicates Keystone OpenStack Cluster configuration.

img-4

Connection details

The following table indicates the connection details.

Service Name         Node name              Frontend port    Backend port
Keystone (public)    openstack-odl-ctrl1    5000             15000
Keystone (admin)     openstack-odl-ctrl1    35357            135357
  • Keystone can be accessed from the service-lb1 node, which acts as a load-balancer front end, and then through an Apache server in reverse-proxy configuration.
  • The Keystone database is maintained in the MySQL server.
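
A minimal sketch of authenticating against this Keystone endpoint is shown below; the host name, user, project, and password are assumptions.

# Point the client at the load-balanced public Keystone endpoint (port 5000)
export OS_AUTH_URL=http://service-lb1:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_PROJECT_NAME=admin OS_USERNAME=admin OS_PASSWORD=<password>
export OS_USER_DOMAIN_NAME=Default OS_PROJECT_DOMAIN_NAME=Default
# Request a token to verify that authentication works
openstack token issue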

Glance

Glance provides the image and metadata services in OpenStack. The Glance image services include discovering, registering, and retrieving virtual machine images. Glance exposes a RESTful API that allows querying of VM image metadata as well as retrieval of the actual image.

The following diagram indicates Glance OpenStack Cluster configuration.

img-5

Connection details

The following table indicates the connection details.

Service Name       Node name              Frontend port    Backend port
glance-api         openstack-odl-ctrl1    9292             19292
glance-registry    openstack-odl-ctrl1    9191             19191
  • The components of Glance include glance-api and glance-registry, which communicate with each other through the ZeroMQ messaging server.
  • Glance can be accessed from the service-lb1 node, which acts as a load-balancer front end, and then through an Apache server in reverse-proxy configuration.
  • Glance runs on the neutron-openstack-ctrl1 node.
  • The Glance database is maintained in the MySQL server.
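
As an example, an image could be registered with Glance as follows; the image file and name are placeholders.

# Upload a QCOW2 image through the glance-api endpoint
openstack image create --disk-format qcow2 --container-format bare \
  --file cirros-0.5.2-x86_64-disk.img --public cirros
# Verify that the image is listed
openstack image list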

Cinder

Cinder is the block storage service for OpenStack. Cinder virtualizes block storage devices and presents the storage resources to end users through the reference implementation (LVM); these resources are consumed by Nova in OpenStack.

The following diagram indicates Cinder OpenStack Cluster configuration.

img-13

Connection details

The following table indicates the connection details.

Service Name    Node name              Frontend port    Backend port
cinder-api      openstack-odl-ctrl1    8776             18776
  • Cinder is deployed on the openstack-odl-ctrl1 node.
  • Cinder can be accessed from the service-lb1 node, which acts as a load-balancer front end, and then through an Apache server in reverse-proxy configuration.
  • Users are provided with 20 GB of LVM-backed block storage, which is present on openstack-odl-ctrl1 (see the example after this list).
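
A short sketch of consuming the LVM-backed block storage is shown below; the volume and server names are placeholders, and the size is illustrative.

# Create a 1 GB volume from the LVM backend on openstack-odl-ctrl1
openstack volume create --size 1 test-vol
# Attach the volume to a running VM so that Nova can consume it
openstack server add volume test-vm test-vol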

ONOS

ONOS is the SDN network controller responsible for providing networking in this setup. OpenStack communicates with ONOS through the networking_onos service plugin.

The following diagram indicates ONOS Cluster configuration.

img-20

Connection details

The following table indicates the connection details.

Service Name     Node name                             Frontend port       Backend port
ONOS-gui         ONOS-ctrl1, ONOS-ctrl2, ONOS-ctrl3    8181                8181
ONOS-openflow    ONOS-ctrl1, ONOS-ctrl2, ONOS-ctrl3    6633, 6653, 6640    6633, 6653, 6640
  • To facilitate high availability, ONOS is deployed as a three-node cluster on onos-ctrl1, onos-ctrl2, and onos-ctrl3.
  • Any of these three nodes can act as ‘Primary’, based on the configuration.
  • The service-lb1 node runs HAProxy and acts as a load balancer for the ONOS controller nodes.
  • ONOS manages the OVS switches in the compute nodes through the OVSDB protocol and configures OpenFlow rules in them (see the example after this list).
  • All network calls, such as network creation, port updates, and so on, come from the Neutron service on the openstack-onos-ctrl1 node.
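
The cluster state can be inspected through the ONOS REST API on port 8181; the example below assumes the default onos/rocks credentials and direct access to one controller.

# Query the ONOS cluster membership (any of the three controllers can answer)
curl -u onos:rocks http://ONOS-ctrl1:8181/onos/v1/cluster
# List the OVS devices that ONOS manages through OVSDB and OpenFlow
curl -u onos:rocks http://ONOS-ctrl1:8181/onos/v1/devices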

Layer 2/Layer 3

The Layer 2/Layer 3 framework is a service plugin added to OpenStack that allows Neutron to simultaneously utilize the variety of networking technologies found in complex real-world data centers.

The following diagram indicates Layer 2 / 3 framework configuration.

img-21

Connection details

The following table indicates the connection details.

Type                         Connection
Virtualization technology    VXLAN
Virtual switch               br-int (OVS switch)
  • Layer 2 (L2) connectivity across the compute nodes is provided through the VXLAN tunnel.
  • The virtual machines (VMs) 1 and 2 are deployed by Nova and attached to ‘br-int’.
  • The primary ONOS controller configures OpenFlow rules in ‘br-int’ that define how the VMs communicate through the VXLAN tunnel (see the sketch after this list).
  • Distributed virtual routing is enabled in OpenStack, which makes each of the compute nodes, openstack-onos-cp1 and openstack-onos-cp2, act as a vRouter by itself.
  • The Layer 3 (L3) functionality is implemented in ONOS using the ‘networking_onos’ service plugin.
  • L3 connectivity for the user VMs is provided by the OpenFlow rules that the ONOS controller configures in ‘br-int’.
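
The resulting L2/L3 state can be inspected directly on a compute node with the standard OVS tools, for example:

# On openstack-onos-cp1: show the bridges and the vxlan0 tunnel port
ovs-vsctl show
# Dump the OpenFlow rules that ONOS has pushed into br-int (ONOS uses OpenFlow 1.3)
ovs-ofctl -O OpenFlow13 dump-flows br-int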

External Access

External access typically provides internet access to individual instances through a floating IP address with the required security rules. In OpenStack, a public router is used for external access and is connected to the ‘openstack-onos-ctrl1’ node, which acts as an external router.

The following diagram indicates External access configuration.

img-22

Connection details

The following table indicates the connection details.

Type                         Connection
Virtualization technology    VXLAN
External bridge              subbr
External interface           eth0
  • Using the external network, you can access the VMs through the floating IPs assigned to them.
  • Since this is an external flat network in our topology, the floating IPs are assigned from the external flat network pool to the ‘br0’ bridge on the ‘openstack-onos-ctrl1’ node.
  • The ‘br0’ bridge on the ‘openstack-onos-ctrl1’ node is connected to the ‘eth0’ interface, through which internet access is provided.
  • VMs are connected to this network through a private flat network tunnel between ‘br-ex’ on the compute node and ‘br0’ on the openstack-onos-ctrl1 node.
  • ARP (Address Resolution Protocol) and NAT (Network Address Translation) rules for the floating IPs are configured in ‘br-int’ by ONOS.
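
Floating IPs from the external flat network can be allocated and attached with the standard CLI, for example (the network and server names are placeholders):

# Allocate a floating IP from the external flat network pool
openstack floating ip create external-net
# Associate it with a user VM; ONOS then installs the ARP and NAT rules in br-int
openstack server add floating ip test-vm <allocated-floating-ip>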

The floating IP range is defined in another private network inside the cluster. Hence, iptables rules are added on the ‘openstack-onos-ctrl1’ node for VMs to reach the internet, as indicated below:

iptables --append FORWARD --in-interface subbr -j ACCEPT
iptables --table nat --append POSTROUTING --out-interface eth0 -j MASQUERADE

For accessing the VMs spawned on the sandbox, a destination NAT rule is added manually through port forwarding to redirect all communication requests. For example, to use SSH (Secure Shell) on a VM having an IP address of 100.1.0.12, the following NAT rule is added on the ‘openstack-onos-ctrl1’ node:
iptables -t nat -A PREROUTING -p tcp --dport 8000 -i vhost0 -j DNAT --to 100.1.0.12:22

Then SSH into the VM using the public IP of ‘openstack-onos-ctrl1’, as indicated below:
ssh ubuntu@<openstack-onos-ctrl1-public-IP> -p 8000

Support Services

OpenStack provides the following support services for user VMs:

  • Metadata service – provides information such as the hostname, SSH key, public IP, and user data for any particular user VM.
  • DHCP service – the DHCP (Dynamic Host Configuration Protocol) service provides dynamic IP addresses to user VMs from the defined pool of IP addresses.

The following diagram indicates the OpenStack support services framework.

img-23

  • In this setup, the ‘neutron-openstack-ctrl1’ node hosts the metadata and DHCP services.
  • The metadata service is connected directly on the ‘neutron-openstack-ctrl1’ node, and the DHCP service is connected within a qdhcp namespace on the same node.
  • User VMs connected to the compute nodes can access the DHCP service directly but can access the metadata service only via the qdhcp namespace (see the example after this list).
  • RabbitMQ is used as the messaging broker and acts as an intermediary for messaging.
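
From inside a user VM, both services can be exercised as shown below; the interface name is an assumption, and the metadata address is the standard OpenStack link-local endpoint.

# Request or renew a dynamic IP address from the DHCP service
sudo dhclient eth0
# Fetch instance metadata (hostname, SSH key, user data) from the metadata service
curl http://169.254.169.254/openstack/latest/meta_data.json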