
CONTRAIL SDCLOUD Cluster

OpenContrail is an open source network virtualization platform for Software Defined Networking (SDN). The OpenContrail system consists of the following two main components:

  • OpenContrail Controller – The OpenContrail Controller is a logically centralized but physically distributed Software Defined Networking (SDN) controller that is responsible for providing the management, control, and analytics functions of the virtualized network.
  • OpenContrail vRouter – The OpenContrail vRouter is a forwarding plane that runs in the hypervisor of a virtualized server. It extends the network from the physical routers and switches in a data center into a virtual overlay network hosted in the virtualized servers.

In this setup, OpenContrail is used as the SDN controller to facilitate networking. The following sections explain the Contrail cluster configuration with the SD (software-defined) cloud, which includes the following:

  • Node Configuration
  • OpenStack Cluster details

Node Configuration Details

The following diagram indicates the Contrail node configuration details.

img-21

Components

  • Ethernet Interface – eth0
  • openstack-ctrl (x1)
  • Contrail-ctrl (x3)
  • Contrail-cp (x2)
  • OVS (Open vSwitch x2) – ‘br-int’ and ‘br-ex’
  • Internal Connectivity
  • External Connectivity

Connection Details

  • All the nodes in the cluster are connected through a single ‘eth0’ interface.

    OpenStack Cluster details

    This section describes the OpenStack cluster details, which include the following:

    • Nova
    • Neutron
    • Keystone
    • Glance
    • Cinder
    • Opencontrail
    • Layer2/Layer3
    • External Access
    • Support Services

    Nova

    Nova provides the compute service in OpenStack and is used for hosting and managing cloud computing systems. Nova’s messaging architecture allows all of its components to run on several servers and communicate through a message queue; deferred objects are used to avoid blocking while a component waits for a response from the queue. Nova and its associated components share a centralized SQL-based database.

    The following diagram indicates the Nova OpenStack Cluster configuration.

    img-2

    Connection details

    The following table indicates the connection details.

    Service Name   Node name         Frontend port   Backend port
    nova-api       openstack-ctrl1   8775            18774
    nova-compute   openstack-ctrl1   8774            18775
    • The Nova OpenStack service can be accessed through ‘service-lb1’, which acts as a load balancer front end, and then through an Apache server in reverse proxy configuration.
    • Nova has various components such as nova-scheduler, nova-conductor, nova-cert, and nova-consoleauth. These components communicate with each other through the ZeroMQ messaging server. Nova maintains its database in MySQL.
    • Nova compute is deployed on the contrail-cp1 and contrail-cp2 nodes, which are responsible for bringing up the user VMs with the help of the QEMU hypervisor.
    • Nova services can be accessed directly through the Horizon UI running on service-lb1 or the Python client on the openstack-ctrl1 node, as shown in the example below.
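
    As a minimal sketch of the last point, the Nova services can be queried with the python-novaclient CLI from the openstack-ctrl1 node. The credential values and the Keystone endpoint below are placeholders, assuming the public Keystone port listed later in this document:

    # Load placeholder Keystone credentials (adjust to the actual tenant and user)
    export OS_AUTH_URL=http://service-lb1:5000/v2.0
    export OS_TENANT_NAME=admin
    export OS_USERNAME=admin
    export OS_PASSWORD=<admin-password>

    # List the Nova services and the running instances
    nova service-list
    nova list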

    Neutron

    Neutron provides the networking service in OpenStack between interface devices (for example, vNICs) managed by other OpenStack services (for example, Nova).

    img-12

    Connection details

    The following table indicates the connection details.

    Service Name     Node name         Frontend port   Backend port
    neutron-server   openstack-ctrl1   9696            19696
    • Neutron can be accessed from the service-lb1 node, which acts as a load balancer front end, and then through an Apache server in reverse proxy configuration.
    • Neutron provides the networking service in OpenStack in high availability cluster mode on the contrail-ctrl1, contrail-ctrl2, and contrail-ctrl3 nodes.
    • The metadata agent is deployed on the openstack-ctrl1 node and communicates with the Neutron server through the ‘RabbitMQ’ messaging server.
    • Neutron communicates with the contrail-api module running on all three contrail-ctrl nodes to provide networking, as illustrated below.
    • OpenContrail maintains its database in ‘Cassandra’.
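
    As a minimal sketch of how the Neutron API is consumed in this setup, a network and subnet can be created through the Neutron CLI; the names and CIDR below are placeholders. The request is handled by the neutron-server and translated into contrail-api calls:

    # Create a tenant network and a subnet (placeholder name and CIDR)
    neutron net-create demo-net
    neutron subnet-create demo-net 10.10.10.0/24 --name demo-subnet

    # Verify that the network is visible through the Neutron API
    neutron net-list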

    Keystone

    Keystone is the identity service used by OpenStack for authentication and high-level authorization. It provides API client authentication, service discovery, and distributed multi-tenant authorization. Several OpenStack services, such as Glance, Cinder, and Nova, are authenticated by Keystone.

    The following diagram indicates Keystone OpenStack Cluster configuration.

    img-4

    Connection details

    The following table indicates the connection details.

    Service Name       Node name         Frontend port   Backend port
    Keystone(public)   openstack-ctrl1   5000            15000
    Keystone(admin)    openstack-ctrl1   35357           135357
    • Keystone can be accessed from the service-lb1 node, which acts as a load balancer front end, and then through an Apache server in reverse proxy configuration.
    • The Keystone database is maintained in the MySQL server. An example authentication request is shown below.
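
    As a hedged example of how other services authenticate against Keystone, a token can be requested from the public endpoint on service-lb1 using the Identity v2.0 API; the tenant, user, and password below are placeholders:

    # Request a token from the Keystone public endpoint (placeholder credentials)
    curl -s -X POST http://service-lb1:5000/v2.0/tokens \
      -H "Content-Type: application/json" \
      -d '{"auth": {"tenantName": "admin", "passwordCredentials": {"username": "admin", "password": "<admin-password>"}}}'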

    Glance

    Glance provides the image service and image metadata service in OpenStack. The Glance image services include discovering, registering, and retrieving virtual machine images. Glance exposes a RESTful API that allows querying VM image metadata as well as retrieving the actual image.

    The following diagram indicates Glance OpenStack Cluster configuration.

    img-5

    Connection details

    The following table indicates the connection details.

    Service Name      Node name         Frontend port   Backend port
    glance-api        openstack-ctrl1   9292            19292
    glance-registry   openstack-ctrl1   9191            19191
    • The components of Glance include glance-api and glance-registry, which communicate with each other through the ZeroMQ messaging server.
    • Glance can be accessed from the service-lb1 node, which acts as a load balancer front end, and then through an Apache server in reverse proxy configuration.
    • Glance runs on the openstack-ctrl1 node.
    • The Glance database is maintained in the MySQL server. A sample image upload is shown below.
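
    As a minimal sketch of the image workflow, an image can be uploaded and listed with the Glance CLI; the image name and file below are placeholders:

    # Upload a QCOW2 image (placeholder name and file) and list the registered images
    glance image-create --name cirros --disk-format qcow2 \
        --container-format bare --file cirros-0.3.4-x86_64-disk.img
    glance image-list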

    Cinder

    Cinder is the block storage service for OpenStack. Cinder virtualizes block storage devices and presents the storage resources to end users through the reference implementation (LVM); these resources are consumed by Nova in OpenStack.

    The following diagram indicates Cinder OpenStack Cluster configuration.

    img-13

    Connection details

    The following table indicates the connection details.

    Service Name   Node name         Frontend port   Backend port
    cinder-api     openstack-ctrl1   8776            18776
    • Cinder is deployed on the openstack-ctrl1 node.
    • Cinder can be accessed from the service-lb1 node, which acts as a load balancer front end, and then through an Apache server in reverse proxy configuration.
    • Users are provided with a 20 GB LVM volume as block storage, which is present on openstack-ctrl1; a sample volume workflow is shown below.
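
    As a hedged illustration of consuming the LVM-backed storage, a volume can be created with the Cinder CLI and attached to an instance through Nova; the names, size, and IDs below are placeholders:

    # Create a 5 GB volume backed by the LVM driver (placeholder name)
    cinder create --display-name demo-vol 5

    # Attach the volume to a running instance (placeholder instance and volume IDs)
    nova volume-attach <instance-id> <volume-id> /dev/vdb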

    Opencontrail

    OpenContrail is the SDN controller responsible for providing networking in this setup. The following diagram indicates the OpenContrail cluster configuration.

    img-22

    Connection details

    The following table indicates the connection details.

    Service Name             Node name                                        Frontend port   Backend port
    contrail-api             contrail-ctrl1, contrail-ctrl2, contrail-ctrl3   8082            9100
    contrail-discovery       contrail-ctrl1, contrail-ctrl2, contrail-ctrl3   5998            9110
    contrail-schema          contrail-ctrl1, contrail-ctrl2, contrail-ctrl3   8087            18087
    IF-MAP server            contrail-ctrl1, contrail-ctrl2, contrail-ctrl3   8443            18443
    contrail-svc-monitor     contrail-ctrl1, contrail-ctrl2, contrail-ctrl3   8088            18088
    RabbitMq                 contrail-ctrl1, contrail-ctrl2, contrail-ctrl3   5672            15672
    contrail-analytics-api   service-lb1                                      8081            8081
    contrail-collector       contrail-ctrl1, contrail-ctrl2, contrail-ctrl3   8086            18086
    contrail-control (BGP)   contrail-ctrl1, contrail-ctrl2, contrail-ctrl3   179             1179
    contrail-control (XMPP)  contrail-ctrl1, contrail-ctrl2, contrail-ctrl3   5269            15269
    contrail-vrouter-agent   contrail-cp1, contrail-cp2                       8085            18085
    • To facilitate high availability, the Neutron server is deployed as a three-node cluster across contrail-ctrl1, contrail-ctrl2, and contrail-ctrl3.
    • The service-lb1 node, on which HAProxy is running, acts as a load balancer for the three Contrail controller nodes.
    • The contrail-api service on all three nodes operates in active-active mode.
    • The Contrail controller manages the vRouters in the compute nodes through the XMPP protocol and configures routes in them.
    • Network calls such as network creation and port updates come from the Neutron server to the contrail-api module; the endpoints can be checked as shown below.
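
    A minimal, hedged way to verify the contrail-api and analytics endpoints is a plain HTTP query against the ports in the table above; the URL paths below are assumptions based on the standard OpenContrail REST API and may differ in a given release:

    # Check the overall Contrail service state on a controller node
    contrail-status

    # Query the contrail-api REST endpoint for the configured virtual networks
    curl http://contrail-ctrl1:8082/virtual-networks

    # Query the analytics API through the load balancer
    curl http://service-lb1:8081/analytics/uves/virtual-networks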

    Layer 2/Layer 3

    The Layer 2 / 3 framework is a service plugin added to OpenStack that allows Neutron to simultaneously utilize the variety of networking technologies found in complex real-world data centers.

    The following diagram indicates Layer 2 / 3 framework configuration.

    Connection details

    The following table indicates the connection details.

    Type                        Connection
    Virtualisation technology   MPLS GRE
    Virtual element             vRouter
    • Layer 2 (L2) connectivity across the compute nodes is provided through ‘MPLS GRE’ tunnels.
    • The Virtual Machines (VMs) 1 and 2 are deployed by Nova and connected through the ‘vRouter’.
    • The primary Contrail controller programs the forwarding rules in the ‘vRouter’ that determine how the VMs communicate through the ‘MPLS GRE’ tunnel.
    • Distributed virtual routing is enabled in OpenStack, which makes each of the compute nodes, ‘contrail-cp1’ and ‘contrail-cp2’, act as a vRouter by itself.
    • The Layer 3 (L3) functionality is implemented by the ‘vRouter’ agent in the compute nodes; its state can be inspected as shown below.
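
    As a hedged example of inspecting this L2/L3 state on a compute node, the utilities shipped with the contrail-vrouter-agent can list the VM interfaces and dump the routes programmed by the controller; VRF index 0 is used here purely as an example:

    # List the interfaces attached to the vRouter on a compute node
    vif --list

    # Dump the routes programmed in a VRF of the vRouter (VRF index 0 as an example)
    rt --dump 0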

    External Access

    External access typically provides Internet access to individual instances through a floating IP address with the required security rules. In OpenStack, a public router is used for external access and is connected to the ‘openstack-ctrl1’ node, which acts as an external router.

    The following diagram indicates External access configuration.

    img-26

    Connection details

    The following table indicates the connection details.

    Type                        Connection
    Virtualization technology   MPLS GRE Tunnel
    External bridge             vRouter
    External interface          eth0
    • Using the external network, you can access the VMs through the floating IPs assigned to them.
    • VMs can reach the external network through a virtual gateway in the openstack-ctrl1 node.

    The floating IP range is defined in another private network inside the cluster. Hence, iptables rules are added on the ‘openstack-ctrl1’ node for the VMs to reach the Internet, as indicated below:

    iptables --append FORWARD --in-interface subbr -j ACCEPT
    iptables --table nat --append POSTROUTING --out-interface eth0 -j MASQUERADE

    For accessing the VMs spawned on the sandbox, a destination NAT rule is added manually through port forwarding to redirect the communication requests. For example, to use the SSH (Secure Shell) protocol on a VM having the IP address 100.1.0.12, the following NAT rule is added on the ‘openstack-ctrl1’ node:
    iptables -t nat -A PREROUTING -p tcp --dport 8000 -i vhost0 -j DNAT --to 100.1.0.12:22

    Then SSH into the VM using the public IP of ‘openstack-ctrl1’ as indicated below:
    ssh ubuntu@<openstack-ctrl1-public-IP> -p 8000

    Support Services

    OpenStack provides the following support services for user VMs:

    Metadata service – The metadata service provides information such as the hostname, SSH key, public IP, and user data for any particular user VM.
    DHCP service – The DHCP (Dynamic Host Configuration Protocol) service provides dynamic IP addresses to user VMs from the defined pool of IP addresses.

    The following diagram indicates openstack support services framework.

    img-27

    • In this setup, the ‘openstack-ctrl1’ node hosts the metadata and DHCP services.
    • The metadata service runs directly on the ‘openstack-ctrl1’ node, and the DHCP service is connected within a qdhcp namespace on the same node.
    • User VMs connected to the compute nodes can access the DHCP service directly but can access the metadata service only through the qdhcp namespace, as illustrated in the example below.
    • RabbitMQ is used as the message broker, acting as an intermediary for messaging between the services.
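
    As a hedged example of how a user VM consumes these services, the metadata service can be queried from inside the VM at the standard link-local address; the exact fields available depend on the image and configuration:

    # From inside a user VM: fetch instance metadata from the metadata service
    curl http://169.254.169.254/latest/meta-data/hostname
    curl http://169.254.169.254/latest/meta-data/public-ipv4

    # Fetch the user data passed at boot time, if any
    curl http://169.254.169.254/latest/user-data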