Request Tracker 4.4 Security Update and UTF8 issues with Perl’s DBD::Mysql 4.042

Last week, Best Practical announced critical security fixes for Request Tracker. We therefore immediately pulled the patches from 4.4.2rc2 into our test stages and rolled them out to production.

 

Update == Broken Umlauts

One thing we noticed too late: German umlauts were broken on new ticket creation. Text was simply cut off, which rendered subjects and ticket bodies fairly unreadable.

Encoding issues are never nice, and they are hard to track down. We rolled back the security fix upgrade and hoped that would simply resolve the issue. It did not; even our production version 4.4.1 now ran into this error.

Our first idea was that our Docker image build had somehow changed the locale, but that would at least have produced “strange” text, not text that is entirely cut off. The same goes for the Apache web server encoding. We then started comparing the database schemas, but they had not been touched in this regard either.

 

DBD::Mysql UTF8 Encoding Changes

During our research we learned that there is a patch available which avoids Perl’s DBD::mysql version 4.042. Its description mentions changed behaviour with utf8 encoding. Moving from RT to DBD::mysql’s changelog, there is a clear indication that they fixed a long-standing bug with utf8 encoding, but that probably renders the existing workarounds in RT and other applications unusable.

2016-12-12 Patrick Galbraith, Michiel Beijen, DBI/DBD community (4.041_1)
* Unicode fixes: when using mysql_enable_utf8 or mysql_enable_utf8mb4,
previous versions of DBD::mysql did not properly encode input statements
to UTF-8 and retrieved columns were always UTF-8 decoded regardless of the
column charset.
Fix by Pali Rohár.
Reported and feedback on fix by Marc Lehmann
(https://rt.cpan.org/Public/Bug/Display.html?id=87428)
Also, the UTF-8 flag was not set for decoded data:
(https://rt.cpan.org/Public/Bug/Display.html?id=53130)
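The kind of damage such a change can cause is easy to sketch in Python (illustrative only: RT and DBD::mysql are Perl, and the exact failure mode depends on the connection and column charsets — this just shows the classic “double encoding” effect where one layer encodes text that another layer already encoded):

```python
# Illustrative sketch: already-encoded UTF-8 bytes get encoded a second time.
text = "Grüße"                       # German umlauts
once = text.encode("utf-8")          # correct single encoding

# If a driver layer wrongly treats those bytes as Latin-1 text and
# encodes them to UTF-8 again, the stored bytes change:
twice = once.decode("latin-1").encode("utf-8")

assert once != twice                 # the stored data is now mangled
# Decoding the mangled bytes yields mojibake, not the original text:
assert twice.decode("utf-8") != text
```

Depending on the MySQL SQL mode and column charset, invalid byte sequences can also cause the server to truncate the string at the first bad byte, which matches the “cut off” symptom we saw.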

 

Solution

Our build system for the RT container pulls in all required Perl dependencies via a cpanfile. Instead of always pulling the latest version of DBD::mysql, we have now pinned it to the last known working version, 4.041.

 # MySQL
-requires 'DBD::mysql', '2.1018';
+# Avoid bug with utf8 encoding: https://issues.bestpractical.com/Ticket/Display.html?id=32670
+requires 'DBD::mysql', '== 4.041';

Voilà, the NETWAYS and Icinga RT production instances are fixed.

 

Conclusion

RT 4.4.2 will fix this by explicitly avoiding the broken DBD::mysql version in its dependency checks, but older installations built from source may still run into the problem. Keep that in mind when updating your production instances. Hopefully a proper fix can be found to allow a smooth upgrade to newer library versions.

If you need assistance with building your own RT setup, or are having trouble fixing this exact issue, just contact us 🙂

Michael Friedrich

Author: Michael Friedrich

Michael has been an Icinga developer for many years and ventured into the NETWAYS adventure at the end of 2012. He moved from Vienna to Nuremberg, with a fondness for importing Austrian delicacies - more than one colleague has despaired over the addictive Dragee-Keksi, or simply over the Austrian dialect, which he likes to practise with Thomas in the office (“Jo eh.”). When Michael isn’t helping out in the monitoring portal, he is working on his next LEGO project or enjoying beautiful Nuremberg. Or - at an Icinga Camp near you 😉

Graphite Reporting with the Render API

Recently I had the chance to look into how to implement a quick and reasonably sane kind of reporting for Icinga 2 performance data. Preparing such data properly requires additional open source software, and in my opinion Graphite is the best fit for this. Today I would like to give a short insight into the possibilities.

Graphite is a tool consisting of three components: Carbon-Cache as the data collector, Whisper as the storage backend, and Graphite-Web to provide the graphs visually and as an API. Icinga 2 writes performance data with its “graphite” feature directly to the Carbon-Cache TCP socket. So which options do I have, as a user, to use this historical data for my reporting purposes?

It is actually quite simple: Graphite ships its own render API. It can return the data as a graph, or as csv, json, pdf and several other formats. You can fetch this data via a REST API call in the browser, or for example with curl in the shell. Different metrics and aggregations can be applied to graphs, and multiple values can be retrieved in one request.

 

Where do I start?

In general, the URL is structured as follows:

http://graphite.ip/render?target=icinga2.*.services.*.*.perfdata.*.value

As shown in the example, you can use wildcards for the “target” metrics, for instance to query several services at once. The target path corresponds to the file path in the Whisper backend. You can also specify lists or arrays to query specific values.

There are different options to further process the data for the desired output. Probably the most important ones are &from for the time range and &format for the data format.
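Those query parameters can also be assembled programmatically; here is a small Python sketch (the helper name `render_url` is my own, and host and target path are taken from the examples in this post):

```python
from urllib.parse import urlencode

def render_url(base, target, frm="-3hours", fmt="json", **extra):
    """Build a Graphite render API URL for the given target metric."""
    params = {"target": target, "from": frm, "format": fmt}
    params.update(extra)  # e.g. height=800, width=600
    return base + "/render?" + urlencode(params)

url = render_url(
    "http://192.168.100.101",
    "icinga2.icinga3op_foreman_local.services.load.load.perfdata.load1.value",
)
# The resulting URL can then be opened in a browser or fetched with curl.
```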

 

The metric “tree”

To query the values for the CPU load, the corresponding service must be defined in Icinga 2 and write performance data to Graphite. In the screenshot you can see “services” - “load” in the tree, the latter being the service name. The sub-node “load” tells you that the “load” CheckCommand was used here. A tip at this point: this can be used as a unique key for dashboard templates in Grafana, for example.

Performance data provides “perfdata”, which stores the individual metrics by name, for example “load1”, with “value” below it for the value, plus thresholds such as “crit” and “warn” if defined. In addition, Icinga 2 can also write metadata, which has to be enabled explicitly in the configuration (“metadata”).

 

Querying a single metric

In the following we are only interested in the value of the “load1” metric as a single value. To get a decent baseline, we choose the time range of the last 3 hours. Tip: both relative and absolute time specifications are possible.

http://192.168.100.101/render?target=icinga2.icinga3op_foreman_local.services.load.load.perfdata.load1.value&height=800&width=600&from=-3hours

 

Combining several metrics

You can extend this query by fetching several values at once:

http://192.168.100.101/render?target=icinga2.icinga3op_foreman_local.services.load.load.perfdata.{load1,load15,load5}.value&height=800&width=600&from=-3hours

 

Querying several hosts for the same metric

A query over several servers for the “load15” metric could look like this:

http://192.168.100.101/render?target=icinga2.icinga*op_foreman_local.services.load.load.perfdata.load15.value&height=800&width=600&from=-3hours

Formatting

Rendering the data as a graph always requires the &height and &width parameters. It is also possible to render pie charts with various functions for averages and sums.

You can also format the retrieved data as CSV values and save them, for example with curl. Alternatively, you can display these values directly in the browser.

[root@icinga1op ~]# curl 'http://192.168.100.101/render/?target=icinga2.icinga3op_foreman_local.services.load.load.perfdata.load5.value&from=-3hours&format=csv'

icinga2.icinga3op_foreman_local.services.load.load.perfdata.load5.value,2017-06-16 06:30:00,0.24
icinga2.icinga3op_foreman_local.services.load.load.perfdata.load5.value,2017-06-16 06:31:00,0.21
icinga2.icinga3op_foreman_local.services.load.load.perfdata.load5.value,2017-06-16 06:32:00,0.17
icinga2.icinga3op_foreman_local.services.load.load.perfdata.load5.value,2017-06-16 06:33:00,0.15
icinga2.icinga3op_foreman_local.services.load.load.perfdata.load5.value,2017-06-16 06:34:00,0.16
icinga2.icinga3op_foreman_local.services.load.load.perfdata.load5.value,2017-06-16 06:35:00,0.13
icinga2.icinga3op_foreman_local.services.load.load.perfdata.load5.value,2017-06-16 06:36:00,0.12
....
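CSV output in this shape is easy to post-process; a small Python sketch, using two sample lines taken from the output above (metric name, timestamp, value per row):

```python
import csv
import io

# Two lines copied from the CSV output above.
sample = """\
icinga2.icinga3op_foreman_local.services.load.load.perfdata.load5.value,2017-06-16 06:30:00,0.24
icinga2.icinga3op_foreman_local.services.load.load.perfdata.load5.value,2017-06-16 06:31:00,0.21
"""

values = []
for metric, ts, value in csv.reader(io.StringIO(sample)):
    if value:                        # Graphite emits empty values for gaps
        values.append(float(value))

print(max(values))                   # highest load5 value in the sample
```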

As already mentioned, the data can also be retrieved in JSON format, again via the browser or with curl.

The following example renders all retrieved datapoints as JSON:

[root@icinga1op ~]# curl 'http://192.168.100.101/render/?target=icinga2.icinga3op_foreman_local.services.load.load.perfdata.load5.value&from=-3hours&format=json'
[{"target": "icinga2.icinga3op_foreman_local.services.load.load.perfdata.load5.value", "datapoints": [[0.13, 1497595140], [0.11, 1497595200], [0.15, 1497595260], [0.13, 1497595320], [0.1, 1497595380], [0.08, 1497595440], [0.07, 1497595500], [0.06, 1497595560], [0.06, 1497595620], [0.05, 1497595680], [0.04, 1497595740], [0.07, 1497595800], [0.05, 1497595860], [0.04, 1497595920], [0.05, 1497595980], [0.04, 1497596040], [0.08, 1497596100], [0.06, 1497596160], [0.07, 1497596220], [0.06, 1497596280], [0.06, 1497596340], [0.05, 1497596400], [0.04, 1497596460], [0.04, 1497596520], [0.03, 1497596580], [0.05, 1497596640], [0.04, 1497596700], [0.05, 1497596760], [0.04, 1497596820], [0.09, 1497596880], [0.07, 1497596940], [0.06, 1497597000], [0.06, 1497597060], [0.05, 1497597120], [0.04, 1497597180], [0.04, 1497597240], [0.03, 1497597300], [0.02, 1497597360], [0.02, 1497597420], [0.01, 1497597480], [0.01, 1497597540], [0.01, 1497597600], [0.01, 1497597660], [0.01, 1497597720], [0.01, 1497597780], [0.07, 1497597840], [0.06, 1497597900], [0.07, 1497597960], [0.08, 1497598020], [0.07, 1497598080], [0.06, 1497598140], [0.04, 1497598200], [0.04, 1497598260], [0.03, 1497598320], [0.14, 1497598380], [0.14, 1497598440], [0.13, 1497598500], [0.12, 1497598560], [0.1, 1497598620], [0.09, 1497598680], [0.09, 1497598740], [0.07, 1497598800], [0.06, 1497598860], [0.07, 1497598920], [0.1, 1497598980], [0.1, 1497599040], [0.08, 1497599100], [0.06, 1497599160], [0.05, 1497599220], [0.04, 1497599280], [0.04, 1497599340], [0.03, 1497599400], [0.02, 1497599460], [0.05, 1497599520], [0.04, 1497599580], [0.03, 1497599640], [0.03, 1497599700], [0.04, 1497599760], [0.05, 1497599820], [0.04, 1497599880], [0.04, 1497599940], [0.04, 1497600000], [0.04, 1497600060], [0.03, 1497600120], [0.03, 1497600180], [0.02, 1497600240], [0.01, 1497600300], [0.01, 1497600360], [0.01, 1497600420], [0.04, 1497600480], [0.04, 1497600540], [0.03, 1497600600], [0.03, 1497600660], [0.02, 1497600720], [0.04, 1497600780], 
[0.04, 1497600840], [0.03, 1497600900], [0.03, 1497600960], [0.02, 1497601020], [0.01, 1497601080], [0.01, 1497601140], [0.01, 1497601200], [0.01, 1497601260], [0.01, 1497601320], [0.01, 1497601380], [0.01, 1497601440], [0.01, 1497601500], [0.01, 1497601560], [0.01, 1497601620], [0.63, 1497601680], [1.29, 1497601740], [1.87, 1497601800], [2.4, 1497601860], [2.74, 1497601920], [2.36, 1497601980], [1.93, 1497602040], [1.58, 1497602100], [1.29, 1497602160], [1.7, 1497602220], [2.26, 1497602280], [2.77, 1497602340], [3.1, 1497602400], [3.37, 1497602460], [3.09, 1497602520], [2.56, 1497602580], [2.09, 1497602640], [1.71, 1497602700], [1.4, 1497602760], [1.16, 1497602820], [0.95, 1497602880], [0.78, 1497602940], [0.63, 1497603000], [0.52, 1497603060], [0.42, 1497603120], [0.35, 1497603180], [0.28, 1497603240], [0.23, 1497603300], [0.19, 1497603360], [0.16, 1497603420], [0.13, 1497603480], [0.12, 1497603540], [0.1, 1497603600], [0.08, 1497603660], [0.07, 1497603720], [0.05, 1497603780], [0.07, 1497603840], [0.06, 1497603900], [0.05, 1497603960], [0.04, 1497604020], [0.03, 1497604080], [0.03, 1497604140], [0.02, 1497604200], [0.02, 1497604260], [0.01, 1497604320], [0.01, 1497604380], [0.01, 1497604440], [0.01, 1497604500], [0.01, 1497604560], [0.01, 1497604620], [0.01, 1497604680], [0.01, 1497604740], [0.01, 1497604800], [0.01, 1497604860], [0.04, 1497604920], [0.05, 1497604980], [0.06, 1497605040], [0.08, 1497605100], [0.06, 1497605160], [0.07, 1497605220], [0.06, 1497605280], [0.04, 1497605340], [0.05, 1497605400], [0.04, 1497605460], [0.03, 1497605520], [0.03, 1497605580], [0.02, 1497605640], [0.02, 1497605700], [0.01, 1497605760], [0.01, 149760

To pretty-print the data in the console, it is recommended to pipe it through “python -m json.tool” as a formatting helper.

curl 'http://192.168.100.101/render/?target=icinga2.icinga3op_foreman_local.services.load.load.perfdata.load5.value&from=-3hours&format=json' | python -m json.tool
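Instead of only pretty-printing, the JSON answer can also be consumed directly, for example to compute an average per target. A sketch (the HTTP fetch is stubbed here with a few datapoints in the format shown above; for real use you would fetch the render URL with urllib.request or curl):

```python
import json

# Response format as returned by format=json (see the output above):
# a list of targets, each with [value, unix_timestamp] datapoints.
# Stubbed instead of an HTTP fetch; null marks a gap in the series.
response = json.loads("""
[{"target": "icinga2.icinga3op_foreman_local.services.load.load.perfdata.load5.value",
  "datapoints": [[0.13, 1497595140], [0.11, 1497595200], [null, 1497595260]]}]
""")

for series in response:
    values = [v for v, ts in series["datapoints"] if v is not None]
    avg = sum(values) / len(values)
    print(series["target"], round(avg, 3))
```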

To visualise the JSON data in the console, you can use Zach Holman’s spark:

[root@graphite ~]# curl 'http://192.168.100.101/render/?target=icinga2.icinga3op_foreman_local.services.load.load.perfdata.load5.value&from=-3hours&format=json' | python -mjson.tool | grep ',' | grep -v '\]' | spark
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 3695 0 3695 0 0 230k 0 --:--:-- --:--:-- --:--:-- 240k
▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▃▃▅▅▅▃▃▃▃▅▅███▅▅▃▃▃▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁

 

Conclusion

This was only a small and rudimentary overview of what the render API can do, but it offers plenty of inspiration to do more with Graphite, Icinga 2 and metrics. The render API can also be accessed programmatically from scripts. It has certainly left me wanting more, and I think I will write further blog posts on this topic 🙂

If you can’t wait, here is the link to the documentation again. Or simply drop by our Graphite training and get to know the render API through hands-on examples.

Daniel Neuberger

Author: Daniel Neuberger

After his apprenticeship as an IT specialist for system integration and a stint as a system administrator, he moved to consulting in 2012. After more than 4 years of Linux and open source backup consulting, he is now drawn to the world of monitoring and system management. Since April 2017 he has strengthened the NETWAYS Professional Services team, consulting on Elastic, Icinga and Bareos. When he is not travelling the world to help others, he pursues his passion for nature with mountain biking and beekeeping, occasionally collecting a sting along the way.

Documentation matters or why OSMC made me write NSClient++ API docs

Last year I learned about the new HTTP API in NSClient++. I stumbled over it in a blog post with some details, but there were no docs. My instant thought: “software without docs is crap, throw it away”. I am a developer myself, and I know how hard it is to put your code and features into the right shape. And I am all for open source, so why not read the source code and reverse engineer the features?

I found out many things and decided to write a blog post about them. I just had no time, and the old NSClient++ documentation was written in AsciiDoc, a format which isn’t easy to write and requires a whole lot of knowledge to format and to generate readable previews. I skipped it, but I eagerly wanted to have API documentation - not only for myself when I look into NSClient++ API queries, but for anyone out there.

 

Join the conversation

I met Michael at OSMC 2016 again, and we somehow managed to talk about NSClient++. “The development environment is tricky to set up”, I told him, “and I struggle with writing documentation patches in AsciiDoc.” We especially talked about the API parts of the documentation I wanted to contribute.

So we made a deal: if I wrote documentation for the NSClient++ API, Michael would switch to Markdown as the documentation format and convert everything else. Michael was way too fast in December and greeted me with shiny Markdown. I wasn’t ready for it, and time went by. Lately I have been reviewing Jean’s check_nscp_api plugin to finally release it with Icinga 2 v2.7, and so I looked into NSClient++, its API and my long-standing TODO again.

Documentation provides facts, and ideally you can walk through it from top to bottom - and an API provides so many things to tell. The NSClient++ API has a bonus: it renders a web interface in your browser too. While thinking about the documentation structure, I could play around with the queries, the online configuration and more.

 

Write and test documentation

Markdown is a markup language. You don’t only put static text into it, but also use certain patterns and structures to render it in a beautiful representation, just like HTML.

A common approach to rendering Markdown can be seen on GitHub, which enriched the original specification and created “GitHub Flavored Markdown”. You can easily edit the documentation online on GitHub and render a preview. Once the work is done, you send a pull request in a couple of clicks. Very easy 🙂

If you are planning to write a massive amount of documentation with many images, a local checkout of the git repository and your favourite editor come in handy. vim already handles Markdown syntax highlighting. If you have seen GitHub’s Atom editor, you probably know it has many plugins and features; one of them highlights Markdown syntax and provides a live preview. If you want to do it in your browser, switch to dillinger.io.

NSClient++ uses MkDocs for rendering and building the docs. You can start an MkDocs instance locally and test your edits “live”, just as you would see them on https://docs.nsclient.org.

Since you always need someone to guide you, the first PR I sent over to Michael was exactly that: highlighting MkDocs inside the README.md 🙂

 

Already have a solution in mind?

Open the documentation and enhance it. Even fixing just a typo helps the developers and community members. Don’t slip into blaming the developer; that just makes everyone feel bad. And don’t wait until someone else fixes it - not many people love to write documentation.

I kept writing blog posts and wiki articles as my own documentation for things I discovered over the years. At some point I settled on GitHub and Markdown and decided to push my knowledge upstream. Every time I visit the Grafana module for Icinga Web 2, for example, I can use the docs to copy-paste configs. Guess what my first contribution to this community project was? 🙂

I gave my word to Michael, and you can see for yourself how easy it is to write docs for NSClient++ – PR #4.

 

Lessons learned

Documentation is different from writing a book or a magazine article. Take the Icinga 2 book as an example: it provides many hints, hides unnecessary facts and guides you through a story about a company and the services it monitors - it focuses on the problem the reader will be solving. That does not mean pure documentation can’t be easy to read, but it requires more attention and a desire to try things out.

You can extend the knowledge gained from reading documentation by moving on to training sessions or workshops. It’s a good feeling when you discuss the things you’ve just learned, perhaps over a G&T in the evening. A special flow - which I cannot really describe - happens during OSMC workshops and hackathons, or at an Icinga Camp near you. I always have the feeling that I’ve solved so many questions in so little time. That’s more than I could ever write into a documentation chapter or into a thread over at monitoring-portal.org 🙂

Still, I love to write and enhance documentation. It is the initial kickoff for any howto, training or workshop that might follow. We make our lives so much easier when things are not asked five times but are visible and available as a URL somewhere. I’d like to encourage everyone out there to feel the same about documentation.

 

Conclusion

Ever thought “ah, that is missing” and then forgot to tell anyone? Practice a little and get used to GitHub documentation edits, Atom, vim, MkDocs. Adopt that into your workflow and give something back to your open source heroes. Marianne is already doing great with the Icinga 2 documentation! Once your patch gets merged, that’s pure energy, I tell you 🙂

I’m looking forward to meeting Michael at OSMC 2017 again, and we will have a G&T together for sure. Oh, and Michael, by the way - you still need to submit your Call for Papers entry. Could it be something about the API, with a link to the newly written docs in the slides, please? 🙂

PS: Ask my colleagues here at NETWAYS about the customer documentation I keep writing. It simply avoids explaining every little detail in mails, tickets and whatnot. Reduce the stress level and make everyone happy with awesome documentation. That’s my spirit 🙂

Michael Friedrich


Monthly Snap May < GitLab CE, Icinga 2, NWS, OSDC, Ubuntu, Graphite, Puppet, SuiteCRM

In May, Achim started with GitLab CE, while Florian presented Vanilla.
Then Lennart wrote the fifth part of his Icinga Best Practices, and Isabel told us about the MultiTech MTD-H5-2.0.
Martin K. continued with external monitoring of websites and services with Icinga 2, while Julia gave a triple hurray to our OSDC sponsors. Stefan G. gave an insight into his “Mission Loadbalancer Upgrade” and Blerim tested Ruby applications with RSpec and WebMock.
Lennart published parts 3 and 4 of “Icinga 2 – Automated Monitoring with Puppet”, Martin K. showed the benefits of our SuiteCRM hosting, and Marius G. told us about the upcoming death of Ubuntu Unity.
May went on with the OSDC in Berlin and the reports from Isabel, Julia, Michi and Dirk.
Then Thomas Widhalm continued with the Carbon relay for Graphite, and Christian wrote about the Icinga 2 satellite in NWS.
Towards the end of May, Christian announced some new webinars and Gabriel gave an insight into his work at NETWAYS!

Julia Hackbarth

Author: Julia Hackbarth

Julia has been with NETWAYS since 2015. In September she started her apprenticeship as a management assistant for office management. She greatly enjoys organising things and looks forward to varied challenges. In her free time, handball plays a big role: Julia is active on the field herself, but also happily takes on the role of referee.

Flexible and easy log management with Graylog

This week we had the pleasure of welcoming Jan Doberstein from Graylog. On Monday our consulting team and I attended a Graylog workshop held by Jan. Since many of us are already familiar with log management (e.g. the Elastic Stack), we skipped the basics and got a deep dive into the Graylog stack.

You’ll need Elasticsearch, MongoDB and Graylog Server running on your instance, and then you are good to go. MongoDB is mainly used for caching and sessions, but also as user storage, e.g. for dashboards and more. Graylog Server provides a REST API and a web interface.

 

Configuration and Inputs

Once you have everything up and running, open your browser and log into Graylog. The default entry page greets you with additional tips and tricks. Graylog is all about usability - you are guided to create inputs to receive data from remote sources. Everything can be configured via the web interface or the REST API. Jan also told us that some more advanced settings are only available via the REST API.

 

If you need more input plugins, you can search the marketplace and install the one you need, or create your own. By default Graylog supports GELF, Beats, Syslog, Kafka, AMQP and HTTP.

One thing I also learned during our workshop: Graylog supports Elastic Beats as input too. This opens up even more possibilities to integrate existing setups with Icingabeat, Filebeat, Winlogbeat and more.

 

Authentication

Graylog supports “internal auth” (manual user creation), sessions/tokens and also LDAP/AD. You can configure and test this via the web interface. One thing to note: the LDAP library doesn’t support nested groups for now. You can create and assign specific roles with restrictions. Even multiple providers and their order can be specified.

 

Streams and Alerts

Incoming messages can be routed into so-called “streams”. You can inspect an existing message and create a rule set based on its details. That way you can, for example, route your Icinga 2 notification events into Graylog and correlate events in defined streams.

Alerts can be defined on top of existing streams. The idea is to check for a specific message count and apply threshold rules. Alerts can also be reset after a defined grace period. If you dig deeper, you’ll also discover alert notifications, which can be email or HTTP. We also discussed alert handling that connects to the Icinga 2 API, similar to the Logstash Icinga output. Keep your fingers crossed.

 

Dashboards

You can add stream message counters, histograms and more to your own dashboards. Refresh settings and a fullscreen mode are available too, and you can export and share these dashboards. If you are looking for automated deployments, dashboards can also be imported via the REST API.

 

Roadmap

Graylog 2.3 is currently in alpha and will be released in summer 2017. We also learned that it will use Elasticsearch 5 as its backend, which enables Graylog to use the HTTP API instead of “simulating” a cluster node as it does today. The upcoming release also adds support for lookup tables.

 

Try it

I’ve been fixing a bug in the Icinga 2 GelfWriter feature lately and was looking for a quick test environment. It turns out that the Graylog project offers Docker Compose scripts to bring up a fully running instance. I’ve slightly modified the docker-compose.yml to expose the default GELF TCP input port 12201 on localhost.

vim docker-compose.yml

version: '2'
services:
  mongo:
    image: "mongo:3"
  elasticsearch:
    image: "elasticsearch:2"
    command: "elasticsearch -Des.cluster.name='graylog'"
  graylog:
    image: graylog2/server:2.2.1-1
    environment:
      GRAYLOG_PASSWORD_SECRET: somepasswordpepper
      GRAYLOG_ROOT_PASSWORD_SHA2: 8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
      GRAYLOG_WEB_ENDPOINT_URI: http://127.0.0.1:9000/api
    depends_on:
      - mongo
      - elasticsearch
    ports:
      - "9000:9000"
      - "12201:12201"


docker-compose up

 

Navigate to http://localhost:9000/system/inputs (admin/admin) and add additional inputs, like “GELF TCP”.
I then enabled the “gelf” feature in Icinga 2 and pointed it at port 12201. As you can see, data is flowing in. All the screenshots above were taken from this demo too 😉
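For a quick smoke test without a full Icinga 2 setup, a GELF TCP message can also be handcrafted. A minimal Python sketch (GELF TCP framing is a JSON document terminated by a null byte; the helper name and the extra `_check_result` field are my own, host and port match the compose setup above):

```python
import json
import socket

def gelf_payload(short_message, host="demo", **fields):
    """Build a GELF 1.1 message; GELF TCP requires a trailing null byte.
    Additional fields must be prefixed with an underscore per the spec."""
    msg = {"version": "1.1", "host": host, "short_message": short_message}
    msg.update(fields)
    return json.dumps(msg).encode("utf-8") + b"\x00"

payload = gelf_payload("hello graylog", _check_result="OK")

# Uncomment to actually send it to the Docker setup above:
# with socket.create_connection(("127.0.0.1", 12201)) as s:
#     s.sendall(payload)
```

The message should then show up in the “GELF TCP” input’s search results.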

 

More?

Jan continued the week with our official two-day Graylog training. Personally, I am really happy to welcome Graylog to our technology stack. I talked about Graylog with Bernd Ahlers at OSMC 2014 and am now even more excited about the newest additions in v2.x. Hopefully Jan will join us for OSMC 2017 - the Call for Papers is already open 🙂

My colleagues are already building Graylog clusters and more integrations. Get in touch if you need help with integrating Graylog into your infrastructure stack 🙂

Michael Friedrich


Open Source Backup Conference – Why you should submit a talk!

The call for papers for the Open Source Backup Conference, taking place from September 25 to 26, is still open until June 6.

Interested speakers are invited to submit their presentation proposals via the call for papers form on osbconf.org.

What kind of talks are we looking for?
Case studies, best practices and experience reports, as well as the latest developments in open source backup solutions.

What influences our decision on a talk?
It should bring a real benefit for the attendees; it should not be a promotional lecture.

What awaits you in Cologne?
Two days full of open source backup solutions, meeting the open source community and sharing your know-how with the other attendees!

 

We would be pleased to receive your proposals by June 6!

You’d rather join the OSBConf as an attendee? Get your tickets now!

Julia Hackbarth
