
CORD SDCLOUD Cluster

CORD (Central Office Re-architected as a Datacenter) re-architects the Telco Central Office as a datacenter to bring in cloud-style economies of scale and agility. CORD mainly involves the virtualization of three legacy network devices: the Optical Line Termination (OLT), Customer Premises Equipment (CPE), and the Broadband Network Gateway (BNG). CORD is configured by XOS in such a way that it treats OpenStack as the control-plane controller and ONOS as the data-plane controller.

In this setup, CORD-XOS is used as the SDN network controller to facilitate networking. The sections below explain the CORD-XOS cluster configuration with the SD (software-defined) cloud, which includes the following:

  • Node Configuration
  • OpenStack Cluster details

Node Configuration Details

The following diagram indicates the node configuration details.
[Diagram: node configuration]

Components

  • Ethernet Interface – eth0
  • openstack-xos-ctrl (x1)
  • onos-xos-ctrl (x3)
  • openstack-xos-cp (x2)
  • OVS (Open vSwitch, x2) – ‘br-int’ and ‘br-ex’
  • Internal Connectivity
  • External Connectivity

Connection Details

  • All the nodes in the cluster are connected through a single ‘eth0’ interface.
  • There are two compute nodes, ‘openstack-xos-cp1’ and ‘openstack-xos-cp2’, each with two OVS bridges, ‘br-int’ and ‘br-ex’.
  • A VXLAN tunnel ‘vxlan0’ connects the two compute nodes, providing L2 connectivity between user VMs on the compute nodes (see the sketch after this list).
  • The ‘service-lb1’ node has a ‘subbr’ bridge that provides external connectivity for the user VMs.
  • ‘vxlan+42’ connects the ‘subbr’ bridge on the ‘service-lb1’ node to ‘br-ex’ on the openstack-xos-cp1 node.
  • ‘vxlan+43’ connects the ‘subbr’ bridge on the ‘service-lb1’ node to ‘br-ex’ on the openstack-xos-cp2 node.
  • ‘subbr’ is connected to ‘eth0’, through which it can access the internet.
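
The following sketch illustrates how a VXLAN port such as the ones described above could be created on a compute node’s ‘br-ex’ bridge with ovs-vsctl, driven from Python. It is only an illustration under assumptions: the remote IP address of the service-lb1 node, the port name, and the VNI key are placeholders, not values taken from this deployment.

```python
# Illustrative sketch only: add a VXLAN port on 'br-ex' pointing at the
# 'subbr' side on service-lb1. The remote IP and VNI key are assumptions.
import subprocess

SERVICE_LB1_IP = "10.0.0.10"   # assumed underlay address of service-lb1

def add_vxlan_port(bridge: str, port: str, remote_ip: str, key: int) -> None:
    """Create (idempotently) a VXLAN port on an OVS bridge via ovs-vsctl."""
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", bridge, port,
         "--", "set", "interface", port, "type=vxlan",
         f"options:remote_ip={remote_ip}", f"options:key={key}"],
        check=True,
    )

if __name__ == "__main__":
    # e.g. on openstack-xos-cp1: tunnel towards 'subbr' on service-lb1
    add_vxlan_port("br-ex", "vxlan42", SERVICE_LB1_IP, 42)
```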

OpenStack Cluster details

This section describes the OpenStack cluster details, which include the following:

  • Nova
  • Neutron
  • Keystone
  • Glance
  • Cinder
  • Opencontrail
  • Layer2/Layer3
  • External Access
  • Support Services

Nova

Nova provides the compute service in OpenStack and is used for hosting and managing cloud computing systems. Nova’s messaging architecture allows all of its components to run across several servers, with the components communicating through a message queue. Deferred objects are used to avoid blocking while a component waits for a response from the message queue. Nova and its associated components share a centralized SQL-based database.

The following diagram indicates the Nova OpenStack Cluster configuration.

[Diagram: Nova OpenStack cluster configuration]

Connection details

The following table indicates the connection details.

Service Name     Node name          Frontend port    Backend port
nova-api         openstack-ctrl1    8775             18775
nova-compute     openstack-ctrl1    8774             18774

  • Nova can be accessed through ‘service-lb1’, which acts as the load-balancer front end, and then through an Apache server in a reverse-proxy configuration.
  • Nova has various components such as nova-scheduler, nova-conductor, nova-cert, and nova-consoleauth. These components communicate with each other through the ZeroMQ messaging server. Nova maintains its database in MySQL.
  • nova-compute is deployed on the contrail-cp1 and contrail-cp2 nodes and is responsible for bringing up the user VMs with the help of the QEMU hypervisor.
  • The vRouter module on the openstack-ctrl1 node communicates with the vRouter modules on the ‘contrail-cp1’ and ‘contrail-cp2’ nodes over XMPP.
  • Nova services can be accessed directly through the Horizon UI running on service-lb1 or through the Python client on the openstack-ctrl1 node (see the sketch below).
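
As a rough illustration of accessing Nova through this front end, the sketch below uses the openstacksdk Python client. The Keystone endpoint, credentials, and project names are placeholders assumed for the example, not values from this cluster.

```python
# Illustrative sketch only: list hypervisors and user VMs through Nova,
# reached via the service-lb1 front end. All credentials are placeholders.
import openstack

conn = openstack.connect(
    auth_url="http://service-lb1:5000/v3",   # assumed Keystone endpoint behind the load balancer
    project_name="admin",
    username="admin",
    password="secret",
    user_domain_name="Default",
    project_domain_name="Default",
)

# Hypervisors correspond to the compute nodes (contrail-cp1 / contrail-cp2).
for hypervisor in conn.compute.hypervisors():
    print("hypervisor:", hypervisor.name)

# User VMs brought up by nova-compute on those nodes.
for server in conn.compute.servers():
    print("server:", server.name, server.status)
```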

Neutron

Neutron provides the networking service in OpenStack, connecting interface devices (e.g., vNICs) managed by other OpenStack services (e.g., Nova).

[Diagram: Neutron cluster configuration]

Connection details

The following table indicates the connection details.

Service Name     Node name          Frontend port    Backend port
neutron-server   openstack-ctrl1    9696             19696

  • Neutron can be accessed from the service-lb1 node, which acts as the load-balancer front end, and then through an Apache server in a reverse-proxy configuration.
  • Neutron provides the networking service for OpenStack in high-availability cluster mode on the contrail-ctrl1, contrail-ctrl2, and contrail-ctrl3 nodes.
  • The metadata agent is deployed on the openstack-ctrl1 node and communicates with the Neutron server through the ‘RabbitMQ’ messaging server.
  • Neutron communicates with the contrail-api module running on all three contrail-ctrl nodes to provide networking (see the sketch below).
  • OpenContrail maintains its database in ‘Cassandra’.
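
The sketch below illustrates driving this Neutron deployment from the openstacksdk Python client, reusing the placeholder credentials from the Nova example; the clouds.yaml entry name, network name, and CIDR are likewise assumptions made only for illustration.

```python
# Illustrative sketch only: create a tenant network and subnet through
# Neutron. The clouds.yaml entry name, network name, and CIDR are assumptions.
import openstack

conn = openstack.connect(cloud="xos-cloud")   # assumed entry in clouds.yaml

# Neutron forwards these requests to the contrail-api module, which keeps
# its state in Cassandra (see the bullets above).
net = conn.network.create_network(name="demo-net")
subnet = conn.network.create_subnet(
    name="demo-subnet",
    network_id=net.id,
    ip_version=4,
    cidr="192.168.10.0/24",
)
print("created", net.name, subnet.cidr)
```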