Testing Ansible Roles with Molecule

A while ago I was drafting an official Ansible role for Icinga 2. From my previous experience with writing Puppet modules and helping shape Chef cookbooks I already knew roughly what the Ansible role should look like and what functionality should be included. At some point I came across the question: how should testing be done during development?

Of all the options available, two stood out:

KitchenCI with an Ansible provisioner
Molecule

I had used TestKitchen in the past to test Chef cookbooks, so this was an advantage. However, after some research I decided to go with Molecule, a tool designed especially for Ansible roles and playbooks. It seemed the better option to use a tool that explicitly aims to test Ansible code, rather than TestKitchen, which is usually used for Chef cookbooks and merely has an extension to support Ansible provisioning.

Molecule supports multiple virtualisation providers, including Vagrant, Docker, EC2, GCE, Azure, LXC, LXD and OpenStack. Additionally, various test frameworks are supported to run syntax checks, linting, unit and integration tests.

Getting started with Molecule

Molecule is written in Python and the only supported installation method is pip. I won’t go through all the requirements, since there’s great installation documentation available. Besides the requirements, you basically just install Molecule via pip:

pip install molecule

After Molecule is successfully installed, the next step is to initialise a new Ansible role:

molecule init role -d docker -r ansible-myrole

This will not only create the files and directories needed for testing, but also the whole Ansible role tree, including all directories and files to get started with a new role. I chose to use Docker as the virtualisation driver. With the init command you can also set the verifier to be used for integration tests. The default is testinfra and I stick with that. Other options are goss and inspec.

Molecule uses Ansible to provision the containers for testing. It automatically creates playbooks to prepare, create and delete those containers. One special playbook is created to actually run your role. The main configuration file, molecule.yml, includes some general options, such as which linter to use and on which platforms to test. By default centos:7 is the only platform used to test the role. Platforms can be added to run the tests on multiple operating systems and versions.
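For reference, a molecule.yml with a second platform added might look roughly like this (a sketch; the exact keys vary between Molecule versions, and the debian image is just an example):

driver:
  name: docker
lint:
  name: yamllint
platforms:
  - name: centos7
    image: centos:7
  - name: debian9
    image: debian:9
provisioner:
  name: ansible
verifier:
  name: testinfra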

I mentioned before that I use testinfra to write the integration tests. With the init command Molecule creates a default scenario, which we can use for the first steps. For example, checking whether a package is installed and the service is running would look like this:

import os

import testinfra.utils.ansible_runner

# Run the tests against all hosts in the inventory Molecule generated
testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
    os.environ['MOLECULE_INVENTORY_FILE']).get_hosts('all')


def test_icinga2_is_installed(host):
    # The icinga2 package must be installed
    i2_package = host.package("icinga2")

    assert i2_package.is_installed


def test_icinga2_running_and_enabled(host):
    # The icinga2 service must be running and enabled on boot
    i2_service = host.service("icinga2")

    assert i2_service.is_running
    assert i2_service.is_enabled

Running tests

There are multiple ways to run your tests, but there’s one command that does everything automatically: it runs the linting, provisions the containers, runs your playbook, runs the integration tests, shows failures and destroys the containers at the end.

molecule test

With other commands you can do each of these steps one by one, as sketched below. Of course there’s much more possible with Molecule, such as creating different scenarios and using multiple instances for more complex testing. The Molecule documentation is well written and has some examples of what else you can do.
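For reference, the step-by-step commands might look roughly like this (a sketch; the exact set of subcommands depends on your Molecule version):

molecule lint       # run the configured linters
molecule create     # provision the test containers
molecule converge   # run the playbook that applies the role
molecule verify     # run the testinfra tests
molecule destroy    # tear the containers down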


Author: Blerim Sheqa

Blerim has been with NETWAYS since 2013 and has gotten around quite a bit in the company since then. Besides support and various internal projects, he also played an active part in the infrastructure team. Every now and then he doesn’t pass up the odd consulting appointment either. These days Blerim mainly takes care of the technical partners and their integrations with Icinga 2 in the Icinga ecosystem.

#OSBConf 2018 sponsors & media partners

Open Source Backup Conference | September 26 | Cologne

Today we want to shout out a big thank you to those who support our effort to gather all the influential representatives, decision makers and developers from the international Open Source community! We look forward to one inspiring day full of backup, thanks to the OSBConf sponsors Bareos, dass IT, NETWAYS and Univention. We are also very glad to have IT Administrator, Linux Magazin and Admin Magazine on board as media partners.

Thank you so much!

More info on osbconf.org

#OSBConf


Author: Keya Kher

Keya started her marketing internship at NETWAYS in October 2017. Since then she has been busily learning German and already feels right at home at NETWAYS. She has plenty of experience in social media marketing and is on her way to becoming a graphic design pro. When she is not living out her creativity, she explores the city or browses through a book or two. Her favourite is “The Shiva Trilogy”.

OSBConf program is online

Open Source Backup Conference | Cologne | September 26, 2018

Be part of one inspiring day full of Backup!

OSBConf provides you with all you need to know about Open Source software solutions for data backup: best practices, success stories, technical know-how and top-level discussions. This year again we have received excellent papers from experts and researchers. After a thorough review the program is now fixed. Our selection for OSBConf 2018 includes high-level speakers such as:

Gratien D’haese (IT3 Consultants): „Relax-and-Recover Automated Testing with Bareos“
Daniel Neuberger (NETWAYS): „Restore and Backup Elasticsearch Indices“
Maik Außendorf & Philipp Storz (Bareos): „What’s new in Bareos 18“
Toshaan Bharvani (VanTosh): „Schroedingers Backup“

To see the whole conference program visit: osbconf.org/schedule

After each talk a Q&A session is scheduled. OSBConf provides an ideal opportunity for backup specialists to meet, exchange thoughts and establish new connections with fellow community members.

Register now!
Backup your seat!

osbconf.org/tickets


Author: Julia Hornung

Julia has been a member of the NETWAYS crew since June 2018. Before her time in our marketing team she worked as a journalist and production assistant in the independent theatre scene. Her passion is good storytelling. In her private life she devotes herself to climbing and her training as a yoga teacher.

The Icinga Config Compiler: An Overview

The Icinga configuration format was designed to be easy to use for novice users while at the same time providing more advanced features to expert users. At first glance it might look like a simple key-value configuration format:

object Host "example.icinga.com" {
	import "generic-host"

	address = "203.0.113.17"
	address6 = "2001:db8::17"
}

However, it features quite a bit of functionality that elevates it to the level of scripting languages: variables, functions, control flow (if, for, while) and a whole lot more.
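As a quick (made-up) taste of that, here is a sketch of a snippet using variables and a for loop; string() and log() are global functions from Icinga’s standard library:

var total = 0

for (i in [ 2, 3, 5 ]) {
	total += i
}

log("total is " + string(total))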

Icinga’s scripting language is used in several places:

  • configuration files
  • API filter expressions
  • auto-generated files used for keeping track of modified attributes

In this post I’d like to show how some of the config machinery works internally.

The vast majority of the config compiler is contained in the lib/config directory. It weighs in at about 4.5% of the entire code base (3059 out of 68851 lines of code as of July 2018). The compiler is made up of three major parts:

Lexer

The lexer (lib/config/config_lexer.ll, based on flex) takes the configuration source code in text form and breaks it up into tokens. In doing so the lexer recognizes all the basic building blocks that make up the language:

  • keywords (e.g. “object”, “for”, “break”) and operators (e.g. >, ==, in)
  • identifiers
  • literals (numbers, strings, booleans)
  • comments

However, it has no understanding of how these tokens fit together syntactically; for that it forwards them to the parser. For example, in the line address = "203.0.113.17" the lexer merely sees an identifier, an operator and a string literal. All in all the lexer is actually quite boring.

Parser/AST

The parser (lib/config/config_parser.yy, based on GNU Bison) takes the tokens from the lexer and tries to figure out whether they represent a valid program. In order to do so it has production rules which define the language’s syntax. As an example here’s one of those rules for “for” loops:

| T_FOR '(' identifier T_FOLLOWS identifier T_IN rterm ')'
{
        BeginFlowControlBlock(context, FlowControlContinue | FlowControlBreak, true);
}
rterm_scope_require_side_effect
{
        EndFlowControlBlock(context);

        $$ = new ForExpression(*$3, *$5, std::unique_ptr<Expression>($7), std::unique_ptr<Expression>($10), @$);
        delete $3;
        delete $5;
}

Here’s a list of some of the terms used in the production rule example:

  • T_FOR: the literal text “for”.
  • identifier: a valid identifier.
  • T_FOLLOWS: the literal text “=>”.
  • T_IN: the literal text “in”.
  • BeginFlowControlBlock, EndFlowControlBlock: these functions enable the use of certain flow control statements which would otherwise not be allowed in code blocks. In this case the “continue” and “break” keywords can be used in the loop’s body.
  • rterm_scope_require_side_effect: a code block for which Icinga can’t prove that the last statement doesn’t modify the program state.

An example of a side-effect-free code block would be { 3 }, because Icinga can prove that executing its last statement has no effect.

After matching the lexer tokens against its production rules the parser continues by constructing an abstract syntax tree (AST). The AST is an executable representation of the script’s code. Each node in the tree corresponds to an operation which Icinga can perform (e.g. “multiply two numbers”, “create an object”, “retrieve a variable’s value”). Here’s an example of an AST for the expression “2 + 3 * 5”:
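Rendered as a text tree, the structure looks roughly like this:

AddExpression
├── LiteralExpression (2)
└── MultiplyExpression
    ├── LiteralExpression (3)
    └── LiteralExpression (5)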

Note how the parser supports operator precedence by placing the AddExpression AST node on top of the MultiplyExpression node.

Icinga’s AST supports a total of 52 different AST node types. Here are some of the more interesting ones:

  • ArrayExpression: an array definition, e.g. [ varName, "3", true ]. Its interior values are also AST nodes.
  • BreakpointExpression: spawns the script debugger console if Icinga is started with the -X command-line option.
  • ImportExpression: corresponds to the import keyword which can be used to import another object or template.
  • LogicalOrExpression: implements the || operator. This is one of the AST node types which don’t necessarily evaluate all of their interior AST nodes, e.g. for true || func() the function call never happens.
  • SetExpression: essentially the = operator in its various forms, e.g. host_name = "localhost".

On their own the AST nodes just describe the semantic structure of the script. They don’t actually do anything in terms of performing real actions.

VM

Icinga contains a virtual machine (in the language sense) for executing AST expressions (mostly lib/config/expression.cpp and lib/config/vmops.hpp). Given a reference to an AST node – such as the root node from the example in the previous section – it attempts to evaluate that node. What that means exactly depends on the kind of AST node:

The LiteralExpression class represents bare values (e.g. strings, numbers and boolean literals). Evaluating a LiteralExpression AST node merely yields the value that is stored inside of that node, i.e. no calculation of any kind takes place. On the other hand, the AddExpression and MultiplyExpression AST nodes each have references to two other AST nodes. These are the operands which are used when asking an AddExpression or MultiplyExpression AST node for “their” value.
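As a conceptual sketch (in Python rather than Icinga’s actual C++; the class names mirror the real AST node types, everything else is made up for illustration), evaluation works along these lines:

class LiteralExpression:
    def __init__(self, value):
        self.value = value

    def evaluate(self, frame):
        # A literal merely yields its stored value; no calculation takes place.
        return self.value


class MultiplyExpression:
    def __init__(self, operand1, operand2):
        self.operand1, self.operand2 = operand1, operand2

    def evaluate(self, frame):
        # Ask both operand nodes for "their" values, then combine the results.
        return self.operand1.evaluate(frame) * self.operand2.evaluate(frame)


class AddExpression:
    def __init__(self, operand1, operand2):
        self.operand1, self.operand2 = operand1, operand2

    def evaluate(self, frame):
        return self.operand1.evaluate(frame) + self.operand2.evaluate(frame)


# "2 + 3 * 5": the AddExpression sits on top of the MultiplyExpression.
root = AddExpression(LiteralExpression(2),
                     MultiplyExpression(LiteralExpression(3),
                                        LiteralExpression(5)))
print(root.evaluate(frame=None))  # 17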

Some AST nodes require additional context in order to run. For example a script function needs a way to access its arguments. The VM provides these – and a way to store local variables for the duration of the script’s execution – through an instance of the StackFrame (lib/base/scriptframe.hpp) class.

Future Considerations

All in all the Icinga scripting language is actually fairly simple – at least compared to other, more sophisticated scripting engines like V8. In particular, Icinga does not implement any kind of optimization.

A first step would be to get rid of the AST and implement a bytecode interpreter. This would most likely result in a significant performance boost, because it would allow us to use the CPU cache much more effectively than with thousands, if not hundreds of thousands, of AST nodes scattered around the address space. It would also decrease memory usage, both directly and indirectly, by avoiding memory fragmentation.

However, for now the config compiler seems to be doing its job just fine and is probably one of the most solid parts of the Icinga codebase.


Author: Gunnar Beutner

Before joining NETWAYS, Gunnar worked at a large German hosting provider, where he gained a lot of experience in software development for server management. With us he mainly takes care of various customer projects, but also of our own tools such as inGraph or Icinga2.

Let’s Encrypt HTTP Proxy: Another approach

Yes, we all know that providing microservices with Docker is a very handy thing. It is easy to use, and of course there is a Docker image available for almost every application you need. The next step to master is how to expose these services to the public, and this seems to be where most people struggle. Browsing the web (generally GitHub) for HTTP proxies you’ll find an incredible number of images that people have built to fit their environments, especially when it comes to implementing a Let’s Encrypt / SNI encryption service. It raises the same questions every time: Is this really the right way to do it? Wrap custom APIs around conventional products like web servers and proxies, inject megabytes of JSON (or YAML or TOML) through environment variables, and build scripts to convert all this into the product-specific configuration language? My conscience nagged me every time I did it.

Some weeks ago I stumbled upon Træfik, which is obviously not a new Tool album but an HTTP proxy server that has everything a highly dynamic Docker platform needs to expose its services, and quietly includes Let’s Encrypt support. Such a thing doesn’t exist, you say?

A brief summary:

Træfik is a single-binary daemon, written in Go, lightweight, and can be used in virtually any modern environment. It is configured by choosing the backend you have. This could be an orchestrator like Swarm or Kubernetes, but you can also use a more “open” approach like etcd, REST APIs or a file backend (backends can be mixed, of course). For example, if you are using plain Docker or Docker Compose, Træfik uses Docker object labels to configure services. A simple configuration looks like this:

[docker]
endpoint = "unix:///var/run/docker.sock"
# endpoint = "tcp://127.0.0.1:2375"
domain = "docker.localhost"
watch = true

Træfik constantly watches for changes in your running Docker containers and automatically adds backends to its configuration. The containers themselves only need labels like this (configured via Docker Compose in this example):

whoami:
  image: emilevauge/whoami # A container that exposes an API to show its IP address
  labels:
    - "traefik.frontend.rule=Host:whoami.docker.localhost"

The point is that you can configure everything you’ll need, including things that are often pretty complex in conventional products: multiple domains, headers for APIs, redirects, permissions, containers that expose multiple ports and interfaces, and so on.
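A sketch of what that can look like (the comma-separated Host rule and traefik.port are Træfik 1.x label syntax; the domain names here are just examples):

whoami:
  image: emilevauge/whoami
  labels:
    # Route two domains to the same container
    - "traefik.frontend.rule=Host:whoami.docker.localhost,whoami.example.com"
    # Explicitly pick the container port Træfik forwards traffic to
    - "traefik.port=80"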

Whether your configuration succeeded can be checked in a web frontend that is included in Træfik.

However, the best thing everybody has been waiting for is the seamless Let’s Encrypt integration, which can be achieved with this snippet:

[acme]
email = "test@traefik.io"
storage = "acme.json"
entryPoint = "https"
[acme.httpChallenge]
entryPoint = "http"
# [[acme.domains]]
# main = "local1.com"
# sans = ["test1.local1.com", "test2.local1.com"]
# [[acme.domains]]
# main = "local2.com"
# [[acme.domains]]
# main = "*.local3.com"
# sans = ["local3.com", "test1.test1.local3.com"]

Træfik will create the certificates automatically. Of course, you get a lot of convenience here too, like wildcard certificates with DNS verification through different DNS providers’ APIs and things like that.
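As a sketch, in Træfik 1.6+ a DNS challenge can be configured roughly like this (the provider name is just an example; each provider expects its credentials in provider-specific environment variables):

[acme]
email = "test@traefik.io"
storage = "acme.json"
entryPoint = "https"
[acme.dnsChallenge]
provider = "route53"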

To wrap up the statements above: there is a tool that can meet the demands for a simple, API-driven reverse proxy in a dockerized world. Just give it a try and see how easy microservices – especially TLS-encrypted ones – can be.


Author: Marius Hein

Marius Hein has been with NETWAYS since 2003. He completed his training as an IT specialist here, then worked as an application developer and is now head of software development. He is also a member of the Icinga team, where he is responsible for Icinga Web.

OSDC 2018 Recap: The Archive is online

 

Open Source Data Conference | June 12 – 13, 2018 | Berlin

Hello Open Source Lovers,

As we know, all of you are focused most of the time on the things to come: an upcoming release or a future-oriented project you are working on. But as Plato said, “twice and thrice over, as they say, good is it to repeat and review what is good.”

The OSDC Archive is online, giving us the occasion to do so.

OSDC 2018 was a blast! Austria, Belgium, Germany, Great Britain, Italy, the Netherlands, Sweden, Switzerland, Spain, the Czech Republic and the USA: 130 participants from all over the world came to Berlin to exchange thoughts and discuss the future of open source data center solutions.

„Extending Terraform“, „Monitoring Kubernetes at scale“, „Puppet on the Road to Pervasive Automation“ and „Migrating to the Cloud“ were just a few of the many trailblazing topics: We are overwhelmed by the content and quality of the speakers, who presented a really interesting range of technologies and use cases, experience reports and different approaches. No matter the topic, all of the 27 high-level speakers had one goal: explain why and how, and share their lessons learned. And by doing so they all contributed to Simplifying Complex Infrastructures with Open Source.

Besides the lectures, the attendees joined in thrilling discussions and a phenomenal evening event with a mesmerizing view over the city of Berlin.

A big thank you to our speakers and sponsors and to all participants, who made the event special. We are already excited about next year’s event – OSDC 2019, on May 14 to 15 in Berlin. Save the date!

And now, take a moment, get yourself a cup of coffee, lean back and recap the 2018 conference. Have a look at the videos, slides and photos.


Author: Julia Hornung

Julia has been a member of the NETWAYS crew since June 2018. Before her time in our marketing team she worked as a journalist and production assistant in the independent theatre scene. Her passion is good storytelling. In her private life she devotes herself to climbing and her training as a yoga teacher.