Puppet 5 – how new is that!



Have you heard? In July this year Puppet released its new software stack of agent, server and db packages to manage, control and orchestrate your IT infrastructure. Enough time has passed for us to test, upgrade and implement it in our environment. With this series we want to introduce some of the new features and inform you about the tasks you need to know about should you wish to upgrade.

Let us start with the basic changes in the new version. One of them is the switch from PSON to JSON encoding when the agent downloads its information, catalogs or metadata. This will help you to integrate Puppet better into your setup when it needs to communicate with other tools. You will also see improved performance in the handling of facts and reports, especially when larger quantities have to be processed.

Another point to mention is the use of the newer Ruby version 2.4. Since version 4, Puppet ships everything it needs in its own packages, so you need to reinstall any gems you added manually to the Puppet agent.
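For example, reinstalling a gem into the agent's bundled Ruby could look like this (a sketch assuming the default AIO installation path; deep_merge is just a stand-in for your gem):

/opt/puppetlabs/puppet/bin/gem install deep_merge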

And the last point for today is the version numbering of the new packages. Puppet now keeps the major versions of agent, server and db on one counter and follows Semantic Versioning as closely as possible, which is one of the reasons for the huge jump in the Puppet server version.

Stay tuned to learn more. And if you need further help or want to learn more about Puppet, take a look at our products or training courses.


Author: Ronny Biering

Before NETWAYS, Ronny worked for one of the big German internet and hosting providers. Together with his colleagues from the Managed Services department, he looks after our customers' platforms. Contrary to the usual admin cliché, fitness is one of his favourite leisure activities.

Benchmarking Graphite

Before using Graphite in production, you should be aware of how much load it can handle. There is nothing worse than finding out your carefully planned and built setup is not good enough. If you are already using Graphite, you might want to know the facts about your current setup. Knowing the limits of a setup helps you to react to future requirements. Benchmarking is essential if you really want to know how many metrics you can send and how many requests you can make. To find this out, there are some tools that help you to run benchmarks.

Requirements

The most important thing you need when starting to benchmark Graphite is time. A typical Graphite stack has three to four different components. You need plenty of time to properly find out how much load each of them can handle, and even more to decide which settings you may want to tweak.

Haggar

Haggar is a tool that simulates plenty of agents sending generated metrics to the receiving endpoint of a Graphite stack (carbon-cache, carbon-relay, go-carbon, etc.). The number of agents and metrics is configurable, as are the flush intervals. Even though Haggar does not consume many resources, you should start it on a separate machine.

Haggar is written in Go and installed with the go get command. Ensure that Go (>= v1.8) is installed and working before installing Haggar.

Installing Haggar

go get github.com/gorsuch/haggar

You’ll find the binary in the GOPATH that you set while installing Go.
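If the binary is not found, make sure the GOPATH bin directory is on your PATH; the paths below assume Go's common default location:

export GOPATH="$HOME/go"
export PATH="$PATH:$GOPATH/bin"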

Running Haggar:

$GOPATH/bin/haggar -agents=100 -carbon=graphite-server.example.com:2003 -flush-interval=10s -metrics=1000

Each agent will send 1000 metrics every 10 seconds to graphite-server.example.com on port 2003. Once all agents are launched, the example above therefore produces 100 x 1000 = 100,000 data points every 10 seconds, i.e. roughly 10,000 per second. The metrics are prefixed with haggar. by default. The more agents and metrics you send, the more write operations your Graphite server has to perform.

Example output of Haggar:

root@graphite-server:/opt/graphite# /root/go/bin/haggar -agents=100 -carbon=graphite-server.example.com:2003 -flush-interval=10s -metrics=1000
2017/09/21 09:33:30 master: pid 16253
2017/09/21 09:33:30 agent 0: launched
2017/09/21 09:33:42 agent 1: launched
2017/09/21 09:33:46 agent 2: launched
2017/09/21 09:34:00 agent 0: flushed 1000 metrics
2017/09/21 09:34:02 agent 1: flushed 1000 metrics
2017/09/21 09:34:06 agent 2: flushed 1000 metrics

Testing the write performance is a good starting point, but it’s not the whole truth. In a production environment data is not only written but also read, for example by users staring at dashboards all day long. Reading the data is just as important as writing it, because it also produces load on the server. This is where JMeter comes into play.

JMeter

Apache JMeter is usually used to test the performance of web applications. It has many options to simulate requests against a web server, and it can also be used to simulate requests against Graphite-Web. You should do this while Haggar is sending data, so you have the “complete” simulation.

The easiest way to configure JMeter is the graphical interface. Running test plans is recommended on the command line, though; an example command follows further below. Here’s how I set up JMeter to run requests against Graphite-Web:

  • Add a Thread Group to the Test Plan
    • Set the Number of Threads (e.g. 5)
    • Set the loop count to Forever
  • Add a Random Variable to the Thread Group
    • Name the variable metric
    • Set the minimum to 1
    • Set the maximum to 1000
    • Set Per Thread to true
  • Add another Random Variable to the Thread Group
    • Name the variable agent
    • Set the minimum to 1
    • Set the maximum to 100
    • Set Per Thread to true

Haggar uses numbers to name its metrics. With these variables we can create dynamic requests.

  • Add an HTTP Request Defaults element to the Thread Group
    • Set the server name or IP where your Graphite-Web is running (e.g. graphite-server.example.com)
    • Add the path /render to access the rendering API of Graphite-Web
    • Add some parameters to the URL, for example:
      • width: 586
      • height: 308
      • from: -30min
      • format: png
      • target: haggar.agent.${agent}.metrics.${metric}

The most important part about the request defaults is the target parameter.
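With the two random variables in place, each sampler requests a different graph. A generated request might look like this (the values for agent and metric are illustrative):

http://graphite-server.example.com/render?width=586&height=308&from=-30min&format=png&target=haggar.agent.42.metrics.7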

  • Add an HTTP Request to the Thread Group
    • Set the request method to GET
    • Set the request path to /render
  • Add a View Results in Table listener to the Thread Group

The results table shows details of each request. At the bottom there is an overview with the count of all samples, the average time and the deviation. Optionally, you can add a Constant Throughput Timer to the Thread Group to limit the requests per minute.

If everything is working fine, you can now start the test plan and it should fire lots of requests against your Graphite-Web. For verification, you should also take a look at the access logs.
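Once the plan works, you can run it without the GUI. A sketch of such a run, assuming the plan was saved as graphite-web.jmx (the file name is an assumption):

jmeter -n -t graphite-web.jmx -l results.jtl

The -n flag runs JMeter in non-GUI mode, -t points to the test plan and -l writes the results to a file for later analysis.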

At this point your Graphite server is being hit by Haggar and JMeter at the same time. Change the settings to find out at which point your server goes down.

Interpretation

Obviously, killing your Graphite server is not the point of running benchmarks. What you actually want to know is how each component behaves under certain amounts of load. The good thing is that every Graphite component writes metrics about itself. This way you get insights into how many queries are running, how the cache is behaving and much more.

I usually create separate dashboards to get the information. For general information, I use collectd to monitor the following data:

  • Load, CPU, Processes, I/O Bytes, Disk Time, I/O Operations, Pending I/O Operations, Memory

The other dashboards depend on the components I am using. For carbon-cache the following metrics are very interesting:

  • Metrics Received, Cache Queues, Cache Size, Update Operations, Points per Update, Average Update Time, Queries, Creates, Dropped Creates, CPU Usage, Memory Usage

For carbon-relay you need to monitor at least the following graphs:

  • Metrics Received, Metrics Sent, Max Queue Length, Attempted Relays, CPU Usage, Memory Usage

All other Graphite alternatives like go-carbon or carbon-c-relay also write metrics about themselves. If you are using them instead of the default Graphite stack, you can create dashboards for them as well.
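With the default stack these self-metrics end up in the carbon namespace, so dashboard targets look roughly like the following sketch. The exact instance names depend on your host and configuration, and metric names may vary between versions:

carbon.agents.<instance>.metricsReceived
carbon.agents.<instance>.cache.size
carbon.agents.<instance>.avgUpdateTime
carbon.relays.<instance>.metricsReceived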

Observing the behaviour during a benchmark is the most crucial part of it. It is important to let the benchmark do its thing for a while before starting to draw conclusions. Most of the tests will peak at the start and then calm down after a while. That’s why you need a lot of time when benchmarking Graphite: every test you run takes its own time.

Performance Tweaks

Most setups can be tuned to handle more metrics than usual. This performance gain usually comes at the cost of data integrity. To increase the number of metrics handled per minute, the number of I/O operations must be reduced.

This can be done by forcing the writer (e.g. carbon-cache) to keep more metrics in memory and to write many data points per Whisper update operation. With the default carbon-cache this can be achieved by setting MAX_UPDATES_PER_SECOND to a value lower than the I/O operations per second your server can handle. Another approach is to let the kernel handle caching and allow it to combine write operations. The following settings define the behaviour of the kernel regarding dirty memory; a combined sketch of both approaches follows after the list.

  • vm.dirty_ratio (e.g. 80)
  • vm.dirty_background_ratio (e.g. 50)
  • vm.dirty_expire_centisecs (e.g. 60000)

Increasing the default values will lead to more data points in the memory and multiple data points per write operation.
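As a rough sketch, the two approaches could look like this. All values are illustrative and need to be tuned to your hardware:

# carbon.conf, [cache] section: throttle Whisper updates below the disk's IOPS
MAX_UPDATES_PER_SECOND = 500

# /etc/sysctl.d/99-graphite.conf: let the kernel cache and combine writes
vm.dirty_ratio = 80
vm.dirty_background_ratio = 50
vm.dirty_expire_centisecs = 60000

The sysctl settings can be applied with sysctl --system.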

The downside of these techniques is that data will be lost on hardware failure. Also, stopping or restarting the daemon(s) will take much longer, since all the data needs to be flushed to disk first.


Author: Blerim Sheqa

Blerim has been with NETWAYS since 2013 and has already gotten around quite a bit within the company. Besides support and various internal projects, he has also actively contributed to the infrastructure team. Every now and then he doesn't miss out on the odd consulting appointment either. These days Blerim mainly looks after the technical partners and their integrations with Icinga 2 in the Icinga environment.

Monthly Snap August > GitHub, Icinga 2, OSBConf, OSMC, Mac, Cloud Management, SSH, Braintower, Foreman

In August, Johannes Meyer began with an overview of some future GitHub topics and Lennart continued with part 6 of Icinga 2 Best Practice.
Julia said thank you to our OSBConf sponsors, while Georg wrote not one, not two, but three articles on simplifying SSL certificates!
Then Alex shared his happiness about being a Mac user now and Dirk reviewed the Foreman Birthday Bash!
On events, Julia announced the program for the Open Source Monitoring Conference, continued with the second reason to join the OSBConf and started the OSMC blog series “Stay tuned!”.
Later in August, Isabel presented the ConiuGo Modem and showed the advantages of the Braintower SMS Gateway.
Towards the end of August, Enrico started a blog series on Cloud Management and Gunnar told us about SSH authentication with GnuPG and smart cards.


Author: Julia Hackbarth

Julia completed her apprenticeship as an office manager at NETWAYS in 2017 and got to know and love the Events & Marketing team along the way. What she particularly enjoys at NETWAYS are the great teamwork and the varied challenges. In her spare time, Julia takes every opportunity to dash across the handball court and let off steam. In her new role as marketing manager she is looking forward to exciting tasks and hopes that she won't run out of ideas for creative texts any time soon.

SSH authentication with GnuPG and smart cards

Most system administrators know how to use key-based authentication with SSH. Some of the more obvious benefits include agent forwarding (i.e. being able to use your SSH key on a remote system) and not having to remember passwords. There are, however, a few issues with having your SSH key on a general-purpose computer: malware can obtain an unencrypted copy of your private SSH key fairly easily. Also, while migrating your key to another system is fairly easy, it’s virtually impossible to securely use your SSH key on another, untrusted system (e.g. at a customer's site).

This is where smart cards come in. A smart card stores certificates (such as your SSH key) and provides functionality for operating on those certificates (e.g. using their private key to sign or decrypt data). Smart cards come in various form factors: credit cards, SIM cards, etc. – which commonly require a separate card reader in order to be usable. However, there are also USB devices which implement all the usual smart card features in addition to other security features (e.g. requiring the user to press a key on the device before an authentication request is signed).

One such device is the Yubikey 4 which I’m personally using for SSH authentication.

The first step towards using a new Yubikey for SSH authentication is enabling the OpenPGP applet on it:

$ ykpersonalize -m82

I already had a PGP key; however, in order to use it for authentication I had to create an additional subkey with the key usage type “authentication”. Here’s how that can be done:

$ gpg --edit-key --expert info@example.org
gpg (GnuPG) 2.1.23; Copyright (C) 2017 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Secret key is available.

sec rsa2048/42330DF1CA650A40
created: 2017-08-24 expires: never usage: SC
trust: ultimate validity: ultimate
ssb rsa2048/56D8D1BBE7E720DB
created: 2017-08-24 expires: never usage: E
[ultimate] (1). NETWAYS Blog <info@example.org>

gpg> addkey
Please select what kind of key you want:
(3) DSA (sign only)
(4) RSA (sign only)
(5) Elgamal (encrypt only)
(6) RSA (encrypt only)
(7) DSA (set your own capabilities)
(8) RSA (set your own capabilities)
(10) ECC (sign only)
(11) ECC (set your own capabilities)
(12) ECC (encrypt only)
(13) Existing key
Your selection? 8

Possible actions for a RSA key: Sign Encrypt Authenticate
Current allowed actions: Sign Encrypt

(S) Toggle the sign capability
(E) Toggle the encrypt capability
(A) Toggle the authenticate capability
(Q) Finished

Your selection? s

Possible actions for a RSA key: Sign Encrypt Authenticate
Current allowed actions: Encrypt

(S) Toggle the sign capability
(E) Toggle the encrypt capability
(A) Toggle the authenticate capability
(Q) Finished

Your selection? e

Possible actions for a RSA key: Sign Encrypt Authenticate
Current allowed actions:

(S) Toggle the sign capability
(E) Toggle the encrypt capability
(A) Toggle the authenticate capability
(Q) Finished

Your selection? a

Possible actions for a RSA key: Sign Encrypt Authenticate
Current allowed actions: Authenticate

(S) Toggle the sign capability
(E) Toggle the encrypt capability
(A) Toggle the authenticate capability
(Q) Finished

Your selection? q
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048)
Requested keysize is 2048 bits
Please specify how long the key should be valid.
0 = key does not expire
<n>  = key expires in n days
<n>w = key expires in n weeks
<n>m = key expires in n months
<n>y = key expires in n years
Key is valid for? (0)
Key does not expire at all
Is this correct? (y/N) y
Really create? (y/N) y
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.

sec rsa2048/42330DF1CA650A40
created: 2017-08-24 expires: never usage: SC
trust: ultimate validity: ultimate
ssb rsa2048/56D8D1BBE7E720DB
created: 2017-08-24 expires: never usage: E
ssb rsa2048/5F43E49ED794BDEF
created: 2017-08-24 expires: never usage: A
[ultimate] (1). NETWAYS Blog <info@example.org>

gpg> save

Now that we’ve created a new subkey we can move its private key part to the smart card:

$ gpg --edit-key --expert info@example.org
gpg (GnuPG) 2.1.23; Copyright (C) 2017 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Secret key is available.

sec rsa2048/42330DF1CA650A40
created: 2017-08-24 expires: never usage: SC
trust: ultimate validity: ultimate
ssb rsa2048/56D8D1BBE7E720DB
created: 2017-08-24 expires: never usage: E
ssb rsa2048/5F43E49ED794BDEF
created: 2017-08-24 expires: never usage: A
[ultimate] (1). NETWAYS Blog <info@example.org>

gpg> toggle

sec rsa2048/42330DF1CA650A40
created: 2017-08-24 expires: never usage: SC
trust: ultimate validity: ultimate
ssb rsa2048/56D8D1BBE7E720DB
created: 2017-08-24 expires: never usage: E
ssb rsa2048/5F43E49ED794BDEF
created: 2017-08-24 expires: never usage: A
[ultimate] (1). NETWAYS Blog <info@example.org>

gpg> key 2

sec rsa2048/42330DF1CA650A40
created: 2017-08-24 expires: never usage: SC
trust: ultimate validity: ultimate
ssb rsa2048/56D8D1BBE7E720DB
created: 2017-08-24 expires: never usage: E
ssb* rsa2048/5F43E49ED794BDEF
created: 2017-08-24 expires: never usage: A
[ultimate] (1). NETWAYS Blog <info@example.org>

gpg> keytocard
Please select where to store the key:
(3) Authentication key
Your selection? 3
gpg> quit
Save changes? (y/N) y

The Yubikey 4 has three key slots which can be used for storing RSA keys with up to 4096 bits each. This might be an excellent opportunity to also move your signing and encryption keys to your smart card – assuming you have an encrypted backup somewhere in case you lose access to your Yubikey.
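Since keytocard replaces the on-disk private key with a stub, any backup has to be made before running it. A minimal sketch, assuming you store the exported file encrypted and offline:

gpg --armor --export-secret-keys info@example.org > secret-backup.asc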

The last step involves replacing ssh-agent with gpg-agent. This allows your SSH client to use your PGP certificates, including the authentication subkey we just created. In addition, gpg-agent also supports regular SSH keys, which might be useful if you have more than one SSH key and only plan to migrate one of them to your Yubikey.

I had to add the following snippet to my .profile file to start gpg-agent instead of ssh-agent:

# Load the connection info of a previously started gpg-agent, if any
[ -f ~/.gpg-agent-info ] && source ~/.gpg-agent-info
# Reuse the running agent if its socket still exists ...
if [ -S "${GPG_AGENT_INFO%%:*}" ]; then
  export GPG_AGENT_INFO
  export SSH_AUTH_SOCK
  export SSH_AGENT_PID
else
  # ... otherwise start a new agent and save its environment for later shells
  eval $(gpg-agent --daemon --write-env-file ~/.gpg-agent-info)
fi
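Two caveats: this relies on gpg-agent's SSH support being enabled (add enable-ssh-support to ~/.gnupg/gpg-agent.conf), and GnuPG 2.1 dropped the --write-env-file option. On 2.1 and later, a sketch like the following replaces the snippet above, since the agent is started on demand:

export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)
gpgconf --launch gpg-agent

To add your key to authorized_keys files, you can print its public part in SSH format with gpg --export-ssh-key info@example.org.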

With all of this in place, OpenSSH now prompts me for my smart card and PIN when connecting.

And that’s how you can literally put your PGP key on your keychain. 🙂


Author: Gunnar Beutner

Before joining NETWAYS, Gunnar worked for a large German hosting provider, where he gained plenty of experience in software development for server management. With us he mainly takes care of various customer projects, but also of our own tools such as inGraph or Icinga2.

A naturally grown .vimrc

Vim is pretty great, honestly. But since I started using Vim after growing tired of nano, a lot in my .vimrc has changed… to the point where colleagues who use Vim on their own machines rather use nano on mine than try to wrap their heads around my workflow. Every .vimrc is different, and a lot of them exist out there: over twelve thousand repositories of shared Vim configurations on GitHub alone, and many more on private laptops and computers.

Generally, creating a .vimrc works like creating a Makefile: copy and paste from different sources with small changes until you have an approximation of the desired result, then decide whether you want to go the extra mile of dealing with your configuration's and Vim's quirks to get it working properly, or just leave it be. This is followed by years of incremental tweaking until you have a Vim environment which works perfectly, but you don't know why. At least that’s how I do it ¯\_(ツ)_/¯

So I took a dive into my own to see what kind of madness lurks down there…

set nocompatible
set t_Co=16
set shiftwidth=4
set tabstop=4
set autoindent
set smartindent
filetype plugin on
filetype indent on

So far, so good. Nothing special. Other than the fact that indentation is set and overwritten three times, this has not led to any problems (yet). The next lines are a bit more interesting:

call pathogen#infect()
call pathogen#helptags()
syntax on
set background=dark " dark | light "
colorscheme solarized


pathogen is a runtime path manipulator which makes it easier to add plugins to Vim; in my case these are vim-solarized, a popular colour scheme, and vim-fugitive, a plugin that adds git commands within Vim. #infect() loads these plugins and #helptags() generates their documentation. The remaining lines of the snippet enable syntax highlighting and the solarized colour scheme.

set nu
set listchars=eol:$,tab:>-,trail:.,extends:>,precedes:<,nbsp:_
set list
let &colorcolumn="121"
set splitbelow
set splitright
set viminfo='20,<1000

This controls line numbering and the display of control characters; colorcolumn draws an ugly line at column 121 to keep up with coding styles. The split options tell Vim where to add new views when splitting (I rarely use these) and viminfo sets the copy buffer to up to 1000 lines.
Now we get to the really interesting bits: key remaps!

Artist's rendition of the author's .vimrc

"See :help map-commands
"n        = Normal only
"v        = Visual only
"i        = Insert only
"         = Normal+Visual+Select+Operator-pending
" nore    = disable recursiveness
"     map = Recursive map

"split window switching
nnoremap <C-J> <C-W><C-J>
nnoremap <C-K> <C-W><C-K>
nnoremap <C-L> <C-W><C-L>
nnoremap <C-H> <C-W><C-H>

"Make Up and Down Arrows move half a screen
"instead of 1 line
noremap <Up> <C-U>
noremap <Down> <C-D>

"BUFFERS Used to be F6,F7. Now used by flake
set hidden
"(the mapped keys were lost in publishing; <F5>/<F6> below are placeholders)
noremap <F5> :bprevious<CR>
noremap <F6> :bnext<CR>

"Because buffers change Flake to use F3 instead of F7
"autocmd FileType python map <F3> :call Flake8()
" DOES NOT WORK DAMN

"Smart Home key
noremap <expr> <Home> (col('.') == matchend(getline('.'), '^\s*')+1 ? '0' : '^')
imap <Home> <C-o><Home>

Switching windows can be a hassle without the first four remaps, but again, I rarely use split windows. Mapping the up and down arrows to jump half a screen instead of just one line is something I added to stop myself from using them instead of j and k; I have grown quite used to it, so it gets to stay ^_^
Buffer remaps are a must, because I do use buffers a lot. There have been problems with Flake, a Python lint tool, which I tried to avoid by remapping Flake's key… that didn't work out, so I got rid of Flake and call pylint when required instead.

The smart home mapping makes the <Home> key jump to the first non-whitespace character of a line instead of the beginning of the line, quite handy!

"Access commandwindow with ctrl+f or :cedit

"For dorks
map q: :q
command Wsudo w !sudo tee % > /dev/null
"Jump to next line > 120 chars
command Warnl :/\%>120v./+
"For easy copy-paste
command Cpm :set nonu paste cc=  nolist
"Catch regular caps failure
command WQ :wq

These are just a few quality-of-life improvements; I tend to mistype :q as q: and :wq as :WQ. The :Wsudo command lets me edit read-only files, and :Cpm is for easy mouse copy and paste. :Warnl jumps to the next line with more than 120 characters, which again is there to check for style problems.

Alright, that’s it: my current .vimrc. There were a few commented-out lines I omitted because I no longer have any clue what I was thinking at the time, but I hope there were a few little bits someone else might find useful to feed their own beast of a .vimrc.

Image sources:
Vim logo from vim.sexy
Shoggoth by twitter.com/nottsuo


Author: Jean-Marcel Flach

Born and raised in Bamberg, Jean (the "-Marcel" is silent) came to NETWAYS as a trainee after a detour to university. He has been part of the Icinga 2 core development team there since 2014.

Open Source Monitoring Conference 2017 – Stay tuned!

OSMC 2017 | The worldwide leading conference on open source monitoring solutions
November 21 to 24 | Nuremberg

In the 10 parts of this series we will give you some valuable information about the conference as well as some impressions from the past years.

What is the OSMC about?
The bilingual conference focuses on the latest trends and developments in open source monitoring solutions and offers a unique opportunity to meet the open source community and to gain valuable expertise. The thematic focus is broadly diversified and equally suited for beginners and advanced users.

What makes the OSMC unique?
The conference offers interesting talks with a real benefit for the attendees, as well as hands-on workshops to expand your knowledge and monitoring know-how.
Due to the great success of the past two years, we are again offering a hackathon, which can be booked as an add-on.

We’re very excited and would be happy to welcome you to Nuremberg in November!

And now, have a look at this high-class talk by the Icinga team!



Author: Julia Hackbarth

Julia completed her apprenticeship as an office manager at NETWAYS in 2017 and got to know and love the Events & Marketing team along the way. What she particularly enjoys at NETWAYS are the great teamwork and the varied challenges. In her spare time, Julia takes every opportunity to dash across the handball court and let off steam. In her new role as marketing manager she is looking forward to exciting tasks and hopes that she won't run out of ideas for creative texts any time soon.