
Using Ansible Validations With OpenStack (Part 1)


OpenStack Platform now ships with several scripts designed to help with validation. See them in action as they utilize Ansible for pre- and post-deployment work.
Ansible is helping to change the way admins look after their infrastructure. It is flexible, simple to use, and powerful. Ansible uses a modular structure to deploy controlled pieces of code against infrastructure, drawing on thousands of available modules that provide everything from server management to network switch configuration.
With recent releases of Red Hat OpenStack Platform, access to Ansible is included directly within the Red Hat OpenStack Platform subscription and installed by default with Red Hat OpenStack Platform director.
In this three-part series you’ll learn ways to use Ansible to perform powerful pre- and post-deployment validations against your Red Hat OpenStack environment, utilizing the special validation scripts that ship with recent Red Hat OpenStack Platform releases.
Ansible tasks, each invoking a module, are commonly grouped into concise, targeted sets of actions called playbooks. Playbooks allow you to create complex orchestrations using simple syntax and execute them against a targeted set of hosts. Operations use SSH, which removes the need for agents or complicated client installations. Ansible is easy to learn and allows you to replace most of your existing shell loops and one-off scripts with a structured language that is extensible and reusable.
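As a rough illustration of the playbook format (this is not one of the shipped validations), here is a minimal sketch that checks free space on the root filesystem of every node in an assumed inventory group named "overcloud"; the group name and the 10 GB threshold are placeholders you would adjust for your environment:

```yaml
# check-disk-space.yaml -- illustrative sketch only, not a shipped validation.
# Assumes an inventory group named "overcloud"; adjust to your environment.
- hosts: overcloud
  gather_facts: true
  tasks:
    - name: Fail if the root filesystem has less than 10 GB free
      assert:
        that:
          - item.size_available > 10 * 1024 * 1024 * 1024
        fail_msg: "Less than 10 GB free on {{ item.mount }}"
      loop: "{{ ansible_mounts | selectattr('mount', 'equalto', '/') | list }}"
```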
Red Hat ships a collection of pre-written Ansible playbooks to make cloud validation easier. These playbooks come from the OpenStack TripleO Validations project (upstream, GitHub). The project was created out of a desire to share a standard set of validations for TripleO-based OpenStack installs. Since most operators already have many of their own infrastructure tests, sharing them with the community in a uniform way was the next logical step.
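On a director node these playbooks are typically installed as plain YAML files by the validations package; the path below is an assumption that may differ between releases, but listing the directory is an easy way to see what is available:

```bash
# On the director (undercloud) node; the exact path may vary by release.
ls /usr/share/openstack-tripleo-validations/validations/
```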
We now have a dynamically generated inventory as required, including groups, using the director’s standard controller and compute node deployment roles.
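If you are following along, that inventory is typically produced by the tripleo-ansible-inventory script that ships alongside the validations, run after sourcing the director credentials; the exact flags and output format can vary by release, so treat this as a sketch:

```bash
# Run as the stack user on the director node.
source ~/stackrc

# Print the generated inventory; --list is the standard dynamic-inventory
# interface that Ansible calls behind the scenes.
tripleo-ansible-inventory --list
```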
We’re now ready to run the validations!
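A single validation can then be run with ansible-playbook, pointing -i at the dynamic inventory script; the script path and the playbook name below are examples and may differ between Red Hat OpenStack Platform releases:

```bash
# Run one of the shipped validations against the generated inventory.
# Paths and file names may vary by release.
ansible-playbook -i /usr/bin/tripleo-ansible-inventory \
    /usr/share/openstack-tripleo-validations/validations/undercloud-ram.yaml
```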
