OpenStack Cluster Details

This section describes the OpenStack cluster details, which include the following:

  • Nova
  • Neutron
  • Keystone
  • Glance
  • Cinder
  • OpenDaylight
  • Layer 2 / Layer 3
  • External Access
  • Support Services

Nova

Nova provides compute services in OpenStack and is used for hosting and managing cloud computing systems. Nova’s messaging architecture allows all of its components to run across several servers and communicate through a message queue. Deferred objects are used to avoid blocking while a component waits in the message queue for a response. Nova and its associated components share a centralized SQL-based database.

The following diagram indicates the Nova OpenStack Cluster configuration.

Connection details

The following table indicates the connection details.

Service Name | Node name           | Frontend port | Backend port
nova-api     | openstack-xos-ctrl1 | 8775          | 18775
nova-compute | openstack-xos-ctrl1 | 8774          | 18774
  • The Nova request is initiated by XOS and comes through haproxy on the service-lb1 node.
  • Nova can be accessed through service-lb1, which acts as a load-balancer front end, and then through an Apache server in reverse-proxy configuration.
  • Nova has various components such as nova-scheduler, nova-conductor, nova-cert, and nova-consoleauth. These components communicate with each other through the ZeroMQ messaging server. Nova maintains its database in MySQL.
  • nova-compute is deployed on the ‘openstack-xos-cp1’ and ‘openstack-xos-cp2’ nodes and is responsible for bringing up the user VMs with the help of the QEMU hypervisor.
  • Nova services can be directly accessed through the Horizon UI running on service-lb1 or through the Python client on the openstack-xos-ctrl1 node.
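
The connection-details table above suggests a simple haproxy convention: the frontend listens on the standard OpenStack service port, and the backend is bound on the same number with a leading 1 (e.g., 8775 → 18775). A minimal sketch of that mapping, using the ports from this document:

```python
# Frontend ports per service, as listed in the connection-details tables.
FRONTEND_PORTS = {
    "nova-api": 8775,
    "nova-compute": 8774,
    "neutron-server": 9696,
    "keystone-public": 5000,
    "keystone-admin": 35357,
    "glance-api": 9292,
    "glance-registry": 9191,
    "cinder-api": 8776,
}

def backend_port(frontend: int) -> int:
    """Backend port = frontend port with a leading '1' digit (inferred convention)."""
    return int("1" + str(frontend))

for name, port in FRONTEND_PORTS.items():
    print(f"{name}: frontend {port} -> backend {backend_port(port)}")
```

This holds for every service table in this section; the ONOS ports (8181, 6633/6653) are the exception, where frontend and backend are identical.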

Neutron

Neutron provides the networking service in OpenStack between interface devices (e.g., vNICs) managed by other OpenStack services (e.g., Nova).

Connection details

The following table indicates the connection details.

Service Name   | Node name           | Frontend port | Backend port
neutron-server | openstack-xos-ctrl1 | 9696          | 19696
  • The Neutron request is initiated by XOS and comes through haproxy on the service-lb1 node.
  • Neutron can be accessed from the service-lb1 node, which acts as a load-balancer front end, and then through an Apache server in reverse-proxy configuration.
  • Neutron has various components such as the metadata agent, the DHCP agent, and the Neutron server, and communicates through the RabbitMQ messaging server. Neutron maintains its database in the MySQL server.
  • ONOS communicates with neutron-server through a service plugin called ‘networking_onos’.
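
As a hedged illustration of the path described above, the sketch below builds a standard Neutron v2.0 network-create request as XOS might send it through the service-lb1 frontend on port 9696. The hostname, token, and network name are placeholders, not values from this deployment:

```python
import json

# Assumed endpoint layout: haproxy frontend on service-lb1, Neutron v2.0 API.
NEUTRON_URL = "http://service-lb1:9696/v2.0/networks"

# Standard Neutron network-create body; the name is a placeholder.
payload = {"network": {"name": "demo-net", "admin_state_up": True}}

headers = {
    "Content-Type": "application/json",
    "X-Auth-Token": "<token-from-keystone>",  # placeholder, obtained from Keystone
}

body = json.dumps(payload)
print(NEUTRON_URL)
print(body)
```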
Keystone

Keystone is the identity service used by OpenStack for authentication and high-level authorization. It provides API client authentication, service discovery, and distributed multi-tenant authorization. Several OpenStack services, such as Glance, Cinder, and Nova, are authenticated by Keystone.

The following diagram indicates the Keystone OpenStack Cluster configuration.

Connection details

The following table indicates the connection details.

Service Name      | Node name           | Frontend port | Backend port
Keystone (public) | openstack-xos-ctrl1 | 5000          | 15000
Keystone (admin)  | openstack-xos-ctrl1 | 35357         | 135357

  • Keystone is an OpenStack service running on the openstack-xos-ctrl1 node and can be accessed from XOS through haproxy on the service-lb1 node, which acts as a load-balancer front end, and then through an Apache server in reverse-proxy configuration.
  • The Keystone database is maintained in the MySQL server.
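
To make the authentication flow concrete, the sketch below constructs a standard Keystone v3 password-authentication request body, as services like Glance, Cinder, or Nova would send it to the public frontend on port 5000. Credentials, user, and project are placeholders:

```python
import json

# Assumed endpoint layout: Keystone v3 token API behind the service-lb1 frontend.
KEYSTONE_URL = "http://service-lb1:5000/v3/auth/tokens"

# Standard Keystone v3 password-auth body; names and password are placeholders.
auth_request = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "admin",              # placeholder user
                    "domain": {"id": "default"},
                    "password": "secret",         # placeholder password
                }
            },
        },
        "scope": {"project": {"name": "admin", "domain": {"id": "default"}}},
    }
}

body = json.dumps(auth_request)
print(KEYSTONE_URL)
print(body)
```

A successful response carries the token in the `X-Subject-Token` response header, which the caller then presents as `X-Auth-Token` to the other services.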

Glance

Glance provides the image service and metadata service in OpenStack. Glance image services include discovering, registering, and retrieving virtual machine images. Glance exposes a RESTful API that allows querying of VM image metadata as well as retrieval of the actual image.

The following diagram indicates the Glance OpenStack Cluster configuration.

Connection details

The following table indicates the connection details.

Service Name    | Node name           | Frontend port | Backend port
glance-api      | openstack-xos-ctrl1 | 9292          | 19292
glance-registry | openstack-xos-ctrl1 | 9191          | 19191

  • The components of Glance include glance-api and glance-registry, which communicate with each other through the ZeroMQ messaging server.
  • Glance can be accessed from XOS through haproxy on the service-lb1 node, which acts as a load-balancer front end, and then through an Apache server in reverse-proxy configuration.
  • Glance runs on the openstack-xos-ctrl1 node.
  • The Glance database is maintained in the MySQL server.
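
As an illustrative sketch of registering an image through this path, the snippet below builds a standard Glance v2 image-create body aimed at the service-lb1 frontend on port 9292. The image name and formats are placeholders, not images from this deployment:

```python
import json

# Assumed endpoint layout: Glance v2 API behind the service-lb1 frontend.
GLANCE_URL = "http://service-lb1:9292/v2/images"

# Standard Glance v2 image-create body; name and visibility are placeholders.
image_request = {
    "name": "cirros-test",          # placeholder image name
    "disk_format": "qcow2",
    "container_format": "bare",
    "visibility": "private",
}

body = json.dumps(image_request)
print(GLANCE_URL)
print(body)
```

After the metadata record is created, the actual image bits are uploaded in a second call against the returned image ID.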

Cinder

Cinder is the block storage service for OpenStack. Cinder virtualizes block storage devices and presents the storage resources to end users through the reference implementation (LVM); these resources are consumed by Nova in OpenStack.

The following diagram indicates the Cinder OpenStack Cluster configuration.

Connection details

The following table indicates the connection details.

Service Name | Node name           | Frontend port | Backend port
cinder-api   | openstack-xos-ctrl1 | 8776          | 18776

  • Cinder is deployed on the openstack-xos-ctrl1 node.
  • Cinder can be accessed from XOS through haproxy on the ‘service-lb1’ node, which acts as a load-balancer front end, and then through an Apache server in reverse-proxy configuration.
  • Users are provided with a 20GB LVM volume as block storage, which is present on openstack-xos-ctrl1.
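
A hedged sketch of requesting that storage: the snippet below builds a standard Cinder volume-create body matching the 20GB allocation described above, aimed at cinder-api on port 8776. The volume name is a placeholder, and the project ID in the URL is left as a template rather than a real value:

```python
import json

# Assumed endpoint layout: Cinder API behind the service-lb1 frontend.
# {project_id} is a template placeholder, filled in from the Keystone token scope.
CINDER_URL = "http://service-lb1:8776/v3/{project_id}/volumes"

# Standard Cinder volume-create body; size is in GB, name is a placeholder.
volume_request = {"volume": {"size": 20, "name": "user-volume"}}

body = json.dumps(volume_request)
print(CINDER_URL)
print(body)
```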

ONOS

ONOS is the SDN network controller responsible for providing networking in this setup. OpenStack communicates with ONOS through the networking_onos service plugin. The following diagram indicates the ONOS Cluster configuration.

Connection details

The following table indicates the connection details.

Service Name  | Node name                                      | Frontend port | Backend port
ONOS-gui      | ONOS-xos-ctrl1, ONOS-xos-ctrl2, ONOS-xos-ctrl3 | 8181          | 8181
ONOS-openflow | ONOS-xos-ctrl1, ONOS-xos-ctrl2, ONOS-xos-ctrl3 | 6633, 6653    | 6633, 6653

  • To facilitate high availability, ONOS is deployed as a three-node cluster across onos-xos-ctrl1, onos-xos-ctrl2, and onos-xos-ctrl3.
  • The service-lb1 node, on which HAProxy is running, acts as a load balancer for the three controller nodes.
  • Among the three nodes, any one node can be configured as primary.
  • ONOS manages the OVS switches on the compute nodes through the OVSDB protocol and configures OpenFlow rules in them.
  • All network calls, such as network creation and port update, are generated from the Neutron service on the openstack-xos-ctrl1 node.
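
For inspection, the ONOS REST API sits on the same port 8181 as the GUI. The sketch below constructs a request to the standard `/onos/v1/devices` endpoint through the service-lb1 load balancer, using HTTP basic auth. The credentials shown are ONOS's well-known defaults and are an assumption, not the credentials of this deployment:

```python
import base64

# Assumed endpoint layout: ONOS REST API behind the service-lb1 load balancer.
ONOS_DEVICES_URL = "http://service-lb1:8181/onos/v1/devices"

# "onos:rocks" are ONOS's stock default credentials (assumption for this sketch).
creds = base64.b64encode(b"onos:rocks").decode("ascii")
headers = {
    "Authorization": "Basic " + creds,
    "Accept": "application/json",
}

print(ONOS_DEVICES_URL)
print(headers["Authorization"])
```

The response would list the OVS switches on the compute nodes that ONOS manages via OVSDB.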

Datapath

A datapath is a collection of functional units that perform data processing operations. The following diagram indicates the datapath network configuration in OpenStack.

  • As indicated, a VSG (Virtual Security Gateway) VM is deployed on the ‘openstack-xos-cp1’ and ‘openstack-xos-cp2’ nodes.
  • External connectivity for subscribers is provided by the VSG VM, which implements a DHCP server and a vRouter.
  • Subscribers are simulated as Linux containers carrying a certain cTAG and sTAG to reach the appropriate VSG VM.
  • Each VSG VM in turn has a VLAN associated with it, which is mapped to a certain cTAG and sTAG.
  • For every new sTAG in the vOLT device, a VSG VM is created, and for every cTAG in the vOLT device, a Docker container running a DHCP server is created inside the VSG VM.
  • A subscriber with a suitable sTAG and cTAG can access the VSG VM, obtains an IP address, and can then connect to the internet through the vRouter in the VSG VM.
  • Subscribers simulated as Linux containers are deployed on the service-lb1 node over a ‘subbr’ bridge. They reach the VSG VMs on the compute nodes with a certain VLAN ID through a subscriber VXLAN tunnel. Based on the VLAN ID, the respective VSG VM responds, and the subscriber container gets an IP address.
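
The sTAG/cTAG dispatch described above can be sketched as a small mapping: one VSG VM per sTAG, one DHCP container per cTAG inside that VM. Everything below (tag values, VM and container names, the `provision` helper) is hypothetical and purely illustrative of the structure, not taken from the deployment:

```python
# sTAG -> {"vm": vsg_vm_name, "ctags": {cTAG: dhcp_container_name}}
vsg_map: dict[int, dict] = {}

def provision(stag: int, ctag: int) -> str:
    """Ensure a VSG VM exists for the sTAG and a DHCP container for the cTAG.

    Mirrors the rule above: a new sTAG spawns a VSG VM; a new cTAG spawns a
    Docker container running a DHCP server inside that VM. Names are made up.
    """
    vm = vsg_map.setdefault(stag, {"vm": f"vsg-vm-{stag}", "ctags": {}})
    return vm["ctags"].setdefault(ctag, f"dhcp-{stag}-{ctag}")

print(provision(222, 111))  # new sTAG: creates VM and container
print(provision(222, 112))  # same sTAG: reuses VM, adds a container
print(vsg_map)
```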