Minimal OPNFV Brahmaputra Configuration using Fuel

The OPNFV project is about to launch its Brahmaputra release, its second in a year. (OPNFV releases are named after rivers…) The installation documentation for a Fuel-based installation states that it supports a 3-node, non-redundant configuration, but it doesn't state what roles those nodes should be assigned.

The OPNFV community is extremely helpful. I emailed the community a question about this:

Hi, folks!

We’re deploying OPNFV for the first time. We’re using the latest Fuel 7 ISO for brahmaputra from artifacts.opnfv.org. We are interested in deploying the minimum configuration, 3 nodes according to the installation document (http://artifacts.opnfv.org/fuel/docs/docs/installation-instruction.html). The Fuel server is up, 3 nodes have PXE booted and are ready to have assignments. Network configuration tests correctly using the Fuel GUI. Each node has 16 cores, 16 GB RAM, and 0.5 TB disk.

What are the recommended assignments for those 3 nodes? If we want to use ODL, do we need a 4th node?

and received the following reply within 10 minutes!

Since you are using 3 nodes only, you will not have enough for a full HA Controller setup. In such situations in our lab, we usually go with something like the following (of course, this depends on the OpenStack feature set that you are looking for):

1 Controller with CEPH
2 Computes with CEPH
– Essentially, we set storage (whether CEPH or LVM) on all three nodes, so that you have the maximum amount of storage available for Glance, Cinder, etc.

In the current build you are using, ODL will be deployed on the first (primary) controller (assuming you do your deployment right "out of the box" with no changes). Currently (in that build), I don't believe ODL is deployed in HA mode even if you deploy an HA Controller setup (3 Controllers); whether you have 1 or 3 controllers, the plugin will deploy ODL on your primary controller.
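For the record, the same role assignment can be scripted from the Fuel master node instead of clicking through the GUI. Below is a minimal sketch in Python that shells out to the Fuel CLI. The environment ID (1) and node IDs (1–3) are assumptions for illustration; check the output of `fuel env` and `fuel nodes` on your own master before running anything like this.

```python
# Minimal sketch: assign the 1-controller / 2-compute layout, with Ceph
# storage on all three nodes, by driving the Fuel CLI from the master node.
# The environment ID and node IDs are assumptions -- verify them with
# `fuel env` and `fuel nodes` first.
import subprocess

ENV_ID = "1"  # assumed; take the real ID from `fuel env`

# Node ID -> Fuel roles. "ceph-osd" is the Fuel role name for Ceph storage.
ASSIGNMENTS = {
    "1": "controller,ceph-osd",
    "2": "compute,ceph-osd",
    "3": "compute,ceph-osd",
}

for node_id, roles in ASSIGNMENTS.items():
    subprocess.check_call(
        ["fuel", "node", "set",
         "--node", node_id,
         "--role", roles,
         "--env", ENV_ID]
    )

# Start the deployment once the role assignment looks right.
subprocess.check_call(["fuel", "deploy-changes", "--env", ENV_ID])
```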

I configured my 3 nodes as 1 controller and 2 computes, each with CEPH storage. The deployment was successful!
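To sanity-check the result from the master node, you can watch the status that `fuel nodes` reports; successfully deployed nodes should end up in the `ready` state. A small sketch, assuming the pipe-separated table format the Fuel 7 CLI prints:

```python
# Sketch: print each node's deployment status from `fuel nodes`.
# Assumes the Fuel 7 CLI table format: pipe-separated columns with the
# node ID first and the status second; deployed nodes show "ready".
import subprocess

output = subprocess.check_output(["fuel", "nodes"]).decode()

for line in output.splitlines():
    fields = [field.strip() for field in line.split("|")]
    if len(fields) >= 2 and fields[0].isdigit():
        node_id, status = fields[0], fields[1]
        print("node {}: {}".format(node_id, status))
```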
