Contributing as a Non-Developer

Quite often I get asked how someone can get involved with an open source project with no, or at least minimal, development skills. I have some experience in this area, as I am quite involved with some projects, especially Icinga and Foreman, and mostly not because of my development skills, which do exist but lack practice. So here comes my non-exhaustive list of roles a non-developer can take on in a project.

Ambassador

A good point to start is talking about the project. I know this sounds very simple, and it is. Spread the word about what you like about the project and how you use it; by doing so you help to increase the user base, and with an increasing user base there will also be an increase in developers. But where to start? The internet gives you many opportunities with forums, blogs and other platforms, or if you prefer offline communication you can go to local computer clubs, join a user group meeting, help at a conference booth or even submit a paper for a conference. And let the project know about it so they can promote your talk.

Tester

Another option for getting involved quite easily is bug reporting. If you start using some software more and more, the time will come when you find a bug, so simply report it. Try to find a way to reproduce it, add relevant details, configs and logs, and explain why you think it is a bug or a missing feature, so the developers can hopefully fix it. If you like the result and want to get more involved, start testing optional workflows, plugins or similar edge cases which you expect to be less well tested because they have a smaller user base. Another option is testing release candidates or even nightly builds, so bugs can perhaps be fixed before they hit the masses. If the project offers test days for new features or versions, take part and test as many scenarios as you can, or go a step further and help to organize a test day. With minimal development know-how you can even dig into the code and help with fixing bugs, so this is also a good path if you want to improve that knowledge.

Community Support

If the user base of a project grows, the number of questions asked every day grows with it. This can easily reach a point where the developers either have to spend more time on answering questions than on coding, or have to ignore questions; both can cause a project to fail. Experienced users sharing their knowledge with newbies can be a big help here. So find out how the project wants such questions to be handled and answer them. The channels vary from mailing lists, IRC and community forums or panels to flagging issues as questions; if no separate platform is provided, the community may meet on Server Fault, Stack Overflow or another site shared by many projects. Also very important is helping to route communication into the right channel: help new users to file a good bug report, and tell people not to spam the issue tracker with questions better handled on the community platform.

Documentarist

Projects lacking good documentation are very common, and even if the documentation is good it can still be improved by adding examples and howtos. Pull requests improving documentation are likely to be accepted, and if the project uses a wiki, access is usually granted. If you feel more comfortable creating your own source of documentation, write howtos on your private blog or even create video tutorials on YouTube. Many projects will link to them, at least in community channels, if you let them know.

Translator

Growing the user base by making the software available to more people in additional languages is always a good thing. But probably everyone knows at least one project where selecting a language other than English feels like an automatic translation or a mix of languages, or where you would simply have chosen different words. With some basic knowledge of the software and a feeling for your native language you can already help to improve the translation. In most cases you do not need special knowledge of tools or translation frameworks, as projects try to keep the barriers for translators low.

Infrastructure Operator

When projects grow, they need more and more infrastructure for hosting their website, documentation, community platform, CI/CD and build pipelines, repositories and so on. Helping to manage this infrastructure is a really good way for an ops person to get involved. Perhaps it is also possible to donate computing resources (dedicated hardware or a hosted virtual machine), which projects often honor by adding you to their list of sponsors, giving you some good publicity.

Specialist

Nowadays a project is not only about the software itself. Making the installation easier by providing packages and support for configuration management, or making it more secure by providing security analyses, hardening guides or policies, is something system administrators are often more capable of than developers. From my experience, spec files for RPM packaging, Puppet modules or Ansible roles to support automation, or an SELinux policy for securing the installation are happily accepted as contributions.

As I said, this list is probably not complete, and you can also mix roles very well: you could start by talking about the project and filing bugs of your own, and perhaps end up as a community supporter who is well known for a blog providing in-depth guides. But it should at least give you some ideas of how you can get involved in open source projects without adding developer to your job description. And if you are using the project in your company's environment, ask your manager whether you can get some time assigned for supporting the project; in many cases you will.


Open Source Camp Issue #1 – Foreman & Graylog

Right after OSDC we helped to organize the Open Source Camp, a brand new series of events which gives open source projects a platform for presenting to the community. The event started with a short introduction of the projects covered in the first issue, Foreman and Graylog. For the Foreman part it was Sebastian Gräßl, a long-term developer, who gave a short overview of Foreman and its community, so that people attending for Graylog would also know what the other talks were about. Lennart Koopmann, who founded Graylog, did the same for the other half, including the upcoming version 3 and all its new features.

Tanya Tereshchenko, one of the Pulp developers, started the sessions with “Manage Your Packages & Create Reproducible Environments using Pulp”, giving an update on Pulp 3. To illustrate the workflows covered by Pulp she used the Ansible plugin, which allows you to mirror Ansible Galaxy locally and stage the content. Of course Pulp also allows you to add your own content to your local version of the Galaxy and serve it to your systems. The other plugins for which a beta version is already available for Pulp 3 are python, for mirroring PyPI, and file, for content of any kind, but more are in different stages of development.

“An Introduction to Graylog for Security Use Cases” by Lennart Koopmann was about bringing the idea of threat hunting to Graylog with a plugin providing lookup tables and processing pipelines. In his demo he showed all of this based on event logs collected by their honeypot domain controller, and I can really recommend the insights you can get with it. I still remember how much work it was to get such things up and running ten years ago at my former employer with tools like Rsyslog, and I am very happy to have tools like Graylog nowadays which provide this out of the box.

From Sweden came Alexander Olofsson and Magnus Svensson to talk about “Orchestrating Windows deployment with Foreman and WDS”. Being Linux administrators, they wanted to give their Windows colleagues a similar experience on a shared infrastructure, and they shared their journey to reach this goal. They have created a small Foreman plugin for WDS integration into the provisioning process, which has been released in its first version. While it was a rather short presentation, it started a very interesting discussion, as the audience also consisted mostly of Linux administrators, yet nearly everyone had to deal with Windows in one way or another, too.

My colleague Daniel Neuberger introduced Graylog with “Catch your information right! Three ways of filling your Graylog with life.” His talk covered Graylog’s architecture, the types of logs that exist, and how you can get at least the common ones into Graylog. Some very helpful tips from practical experience spiced up the talk, like never ever running Graylog as root just to be able to receive syslog traffic on port 514: if the client cannot change the port, your iptables rules can do so. Another one showed a fallback configuration for Rsyslog using the execOnlyWhenPreviousIsSuspended action parameter. And like me, Daniel prefers not only to talk about things but also to show them live in a demo, something I recommend to everyone giving a talk, as the audience will always honor it, but keep in mind to always have a fallback.
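
To illustrate the two tips: first a minimal sketch of the iptables redirect, assuming Graylog’s syslog input listens on the unprivileged port 1514 (the port number is my assumption, not from the talk):

# Redirect syslog traffic arriving on the privileged port 514
# to Graylog's unprivileged syslog input on port 1514
iptables -t nat -A PREROUTING -p udp --dport 514 -j REDIRECT --to-ports 1514
iptables -t nat -A PREROUTING -p tcp --dport 514 -j REDIRECT --to-ports 1514

And a sketch of the Rsyslog fallback, with placeholder hostnames:

# Forward everything to the primary Graylog node ...
*.* action(type="omfwd" target="graylog1.example.com" port="514" protocol="tcp")
# ... and only to the standby while the previous action is suspended
*.* action(type="omfwd" target="graylog2.example.com" port="514" protocol="tcp"
           action.execOnlyWhenPreviousIsSuspended="on")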

Timo Goebel started the afternoon sessions with “Foreman: Unboxing”, and like in a traditional unboxing he showed all the plugins Filiadata has added to their highly customized Foreman installation. This covered the integration of Omaha (the update management of CoreOS), a rescue mode for systems, VMware status checking, distributed lock management to help with automatic updates in cluster setups, the Spacewalk integration they use for SUSE Manager managed systems, host expiration, which helps to keep your environment tidy, monitoring integration, and the one he is currently working on, which provides cloud-init templates when cloning virtual machines from templates in VMware.

Jan Doberstein did exactly what you would expect from a talk called “Graylog Processing Pipelines Deep Dive”. Having been a support engineer at Graylog for several years now, his advice comes from experience in many different customer environments, and while statements like “keep it simple and stupid” are made often, they stay true but also go unheard by many. Those pipelines are really powerful, especially when done well, and even more so now that they can be included and shared via content packs with version 3.

Matthias Dellweg, one of those guys from ATIX who brought Debian support to Pulp and Katello, talked about errata support for it in his talk “Errare Humanum Est”. He started by explaining the state of errata in the RPM world and the differences in the DEB world. Afterwards he showed the state of their proof of concept, which looks like a big improvement, bringing DEB support in Katello to the same level as RPM.

“How to manage Windows Eventlogs” was brought to the audience by Rico Spiesberger, with support from Daniel. The diversity of their environment brought some challenges, which they wanted to solve by monitoring the logs for events that history had proven to be problematic. Collecting the events from over 120 Active Directory servers in over 40 countries now generates over 46 billion documents a day in Graylog, and a good idea of what is going on. No such big numbers, but even more detailed dashboards were created for the certificate authority. Expect all their work to be available as a content pack once Graylog 3 makes it possible to export them.

Last but not least, Ewoud Kohl van Wijngaarden told us the story of software going “From git repo to package” in the Foreman project. Seeing all the work required to cover the different operating systems and software versions for Foreman and its large number of plugins, or even more so for Katello and all its dependencies, is impressive and explains why some things take longer, but always arrive at high quality.

I think it was a really great event, and judging from the feedback I got, I was not the only one who enjoyed it. What I really like about the format is that the talks dive deeper into the projects than at most other events, and I am looking forward to the next issue. Thanks to all the speakers and attendees, and safe travels home to everyone.


The Future of Open Source Data Center Solutions – OSDC 2018 – Day 2

The evening event was a great success. While some enjoyed the great view from the Puro Skybar and others liked the food and drinks even more, I for one preferred the networking. I joined some very interesting discussions about very specific IT tools, work-life balance, differences between countries and cultures and so on. So thanks to everyone, from our events team to the attendees, for a great evening.

But even a great evening and a short night did not keep me and many others from joining Walter Gildersleeve for the first talk, “Puppet and the Road to Pervasive Automation”. He introduced the new tools from Puppet that improve the configuration management experience, like Puppet Discovery, Pipelines and Tasks. What I liked about his Tasks demos was that he showed not only what the Enterprise version can do, but also what the open source version is capable of. Pipelines is Puppet’s CI/CD solution, which can be used as SaaS or on premises, and I have to admit it looks very nice and informative. If you want to give it a try, you can sign up for a free account and test it with a limited number of nodes.
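
Tasks can also be run from the command line with Bolt, Puppet’s open source task runner. A hedged sketch, not from the talk (the package module ships bundled with Bolt as far as I know, the host name is made up, and older Bolt versions use --nodes instead of --targets):

# Check the state of the bash package on a remote host via SSH
bolt task run package action=status name=bash --targets web01.example.com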

The second one today was Matt Jarvis with his talk “From batch to pipelines – why Apache Mesos and DC/OS are a solution for emerging patterns in data processing”. Like several others he started with the history from mainframes via hardware partitioning and virtualization to microservices running in containers. After this introduction he dug deeper into container orchestration and the changes in modern application design that add the complexity Mesos is meant to solve. Matt then gave a really good overview of the different aspects of the Mesos ecosystem and DC/OS. As this is quite a complex topic, listing everything he covered would be exhausting, but to mention some examples, he covered service discovery and load balancing.

Michael Ströder, whom I know as a great specialist for secure authentication from working with him at a customer in the past, introduced “Æ-DIR – Authorized Entities Directory” to the crowd. You could see his experience when he talked about the goals and paradigms applied during development, which resulted in the two-tier architecture of Æ-DIR, consisting of a writable provider and readable consumers, with access separated based on roles. Installation is quite easy with the provided Ansible role and results in a very secure setup, which I really like for a central service like authentication. The customer scenarios shown, using features like SSH proxy authz and two-factor authentication with a YubiKey, make Æ-DIR sound like a truly production-ready solution. If you want to have a look at it without installing it, a demo is provided on the project’s website.

The first talk after the lunch break was “Git Things Done With GitLab!” by my colleagues Gabriel Hartmann and Nicole Lang, about GitLab and why NETWAYS chose it for inclusion in our Web Services. Nicole gave a very good explanation of the basic functions, which Gabriel showed live in a demo, followed by a cherry-pick of nice features provided by GitLab. These features, like the issue tracker and CI/CD, were also shown live. I was really excited by the beta of Auto DevOps, which allows you to get CI/CD up and running very easily.

Thomas Fricke’s talk “Three Years Running Containers with Kubernetes in Production” was a very good talk about the things you should know before moving containers and container orchestration into production. But interesting as it was, I had to prepare for my own talk, as I was giving the last one of the day, “Katello: Adding content management to Foreman”, which consisted primarily of demos showing all the basic parts.

It was a great conference again this year; I really want to thank all the speakers, attendees and sponsors who made it possible. I am looking forward to more interesting and even more technical talks at the Open Source Camp tomorrow, but wish safe travels to all those leaving today and hope to see you next year on May 14-15.


The Future of Open Source Data Center Solutions – OSDC 2018 – Day 1

For the fourth time OSDC started in Berlin, with a warm welcome from Bernd and a fully packed room of approximately 140 attendees. This year we made a small change to the schedule by doing away with the workshop day and adding a smaller conference afterwards: the Open Source Camp, which will be about Foreman and Graylog, but more on this on Thursday.

The first talk was Mitchell Hashimoto with “Extending Terraform for Anything as Code”, who started by showing how automation has evolved in information technology and explained why it is so important, before diving into Terraform. Terraform provides a declarative language to automate everything that offers an API, with a plan command to preview the required changes before you then apply them. While this is quite easy to understand for something like infrastructure, Mitchell showed how the number of possibilities grew with Software-as-a-Service and now with everything having an API. One example was how HashiCorp manages employees and their permissions with Terraform. After the examples of what you can do with existing providers, he gave an introduction to extending Terraform with custom providers.
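
For readers who have not touched Terraform yet, the workflow he referred to boils down to this minimal sketch:

terraform init                      # download the providers the configuration needs
terraform plan -out=changes.tfplan  # preview which resources would be created or changed
terraform apply changes.tfplan      # apply exactly the plan that was reviewed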

The second was “Hardware-level data-center monitoring with Prometheus”, presented by Conrad Hoffmann, who gave us a look inside the data center of SoundCloud and the monitoring infrastructure they had before Prometheus, which looked like a zoo. Afterwards he highlighted the key features that made them move to Prometheus, with Grafana for displaying the collected data. In his section about exporters he went into detail on which exporter replaced which tools from the former zoo and gave some tips from practical experience. And last but not least he summarized the migration and why it was worth doing, as it gave them a more consistent monitoring solution.

Martin Schurz and Sebastian Gumprich teamed up to talk about “Spicing up VMWare with Ansible and InSpec”. They started by looking back at the old days, when they had only special servers and later on manually managed virtual machines, how this slowly improved with management tools from VMware, and how it looks now with their current mantra “manual work is a bug!”. They showed example playbooks for provisioning the complete stack from virtual switch to virtual machine, hardening it according to their requirements, and managing the components afterwards. Last but not least for the Ansible part, they described how they implemented the Python code for an Ansible module that moves virtual machines between datastores and hosts. For testing all this automation they use InSpec, and the management requiring some tracking of the environment was solved using ansible-cmdb.
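
To give an idea of what such a playbook can look like, here is a minimal sketch using the vmware_guest module; the vCenter endpoint, credentials and object names are made-up examples, not from the talk:

- name: Provision a virtual machine from a template
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Clone the VM and power it on
      vmware_guest:
        hostname: vcenter.example.com          # assumed vCenter endpoint
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        validate_certs: false
        datacenter: DC1
        cluster: Cluster01
        name: app01
        template: rhel7-template
        state: poweredon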

After the lunch break I visited the talk about “OPNsense: the ‘open’ firewall for your datacenter”, given by Thomas Niedermeier. OPNsense is a HardenedBSD-based open source firewall including a nice configuration web interface, Spamhaus blocklists, an intrusion prevention system and many more features. I think with all these features OPNsense does not have to shy away from comparison with commercial firewalls, and if enterprise-grade support is required, partners like Thomas-Krenn are available, too.

Martin Alfke asked the question “Ops hates containers. Why?”, which he had encountered in a customer meeting. Based on this experience he set out to demystify containers in a very entertaining and memorable way. He focused on giving ops some tips and ideas about what you should learn before even thinking about having containers in production or implementing your own container management platform. As we record the talks, I really recommend having a look at the video when the recordings are up in a few days.

Anton Babenko, in his talk “Lifecycle of a resource. Codifying infrastructure with Terraform for the future”, started where Mitchell’s talk ended and dived really deep into module design and development for Terraform. Not being very familiar with Terraform myself, I was at least convinced that it is possible to write well-designed code for it and that experimenting with and improving your own modules can be fun. Furthermore, he gave tips on handling the next Terraform release and on testing code during refactoring, which are probably very useful for module authors.

“The Computer Science behind a modern distributed data store” by Max Neunhöffer did a very good job of explaining the theory used in cluster election and consensus. The second topic covered was the sorting of data and how modern hardware has changed the way we have to look at sorting algorithms. Log-structured merge trees, the third topic of the talk, are a great way to improve write performance, and, with some additional tricks applied, also read performance, which is why they are used by many database solutions. The fourth section was about hybrid logical clocks, which solve the problem of differing system clocks. Last but not least, Max talked about distributed ACID (Atomic, Consistent, Isolated, Durable) transactions, which are important for keeping data consistent but are much harder to achieve in distributed systems. It was really a great talk: while covering only theoretical computer science, Max made at least the basics very easy to understand and presented them in a way that gets people interested in these topics.

After this first day full of great talks, we will have the evening event in a sky bar with a good view of Berlin, more food, drinks and conversations. This networking is perhaps one of the most interesting parts of conferences. I will be back with a short review of the evening event and day 2 tomorrow evening. If you want more details and a more live experience, follow #osdc on Twitter.


Systemd Unit Files and Multi-Instance Setups

A while ago I wrote a short explanation of systemd unit files; this time I want to look at systemd’s multi-instance feature, since it also seems not to be known to everyone.

As an example I will use Graphite, or more precisely its Python implementation carbon-cache. It does not scale automatically but requires you to start additional instances of the service on other ports. The configuration on the carbon-cache side is quite simple: you just add a new section with the values to override to an INI file, and the section name defines the name of the instance. For instance b this looks, for example, like this:

[cache:b]
LINE_RECEIVER_PORT = 2013
UDP_RECEIVER_PORT = 2013
PICKLE_RECEIVER_PORT = 2014
CACHE_QUERY_PORT = 7102

With SysV init you would have had to copy and adapt the start script for each instance. Since the shipped unit file unfortunately does not use the multi-instance feature, I have to do this once as well, but at least only once. It makes sense to change the name so it does not conflict with the existing unit, but if you want, you can also “override” the existing one by using the same name. For the multi-instance feature, all that is needed is an @ at the end of the unit name. If you then put the placeholder %i in the appropriate places, the multi-instance setup is already done and you can start a service as name@instance.service. In my example this is carbon-cache-instance@b.service with the following unit file under /etc/systemd/system/carbon-cache-instance@.service:

[Unit]
Description=Graphite Carbon Cache Instance %i
After=network.target
 
[Service]
Type=forking
StandardOutput=syslog
StandardError=syslog
# %i is replaced by the instance name, e.g. "b" for carbon-cache-instance@b.service
ExecStart=/usr/bin/carbon-cache --config=/etc/carbon/carbon.conf --pidfile=/var/run/carbon-cache-%i.pid --logdir=/var/log/carbon/ --instance=%i start
ExecReload=/bin/kill -USR1 $MAINPID
PIDFile=/var/run/carbon-cache-%i.pid
 
[Install]
WantedBy=multi-user.target
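
Enabling and starting an instance then works like with any other unit. A short usage sketch (enable --now requires a reasonably recent systemd; on older versions run enable and start separately):

systemctl daemon-reload
systemctl enable --now carbon-cache-instance@b.service
systemctl status carbon-cache-instance@b.service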

I hope this short explanation helps one or the other of you, and I would be glad to see more services that are designed for instantiation shipped with an appropriate unit file in the future.

Author: Dirk Götz

Dirk is a Red Hat specialist and works at NETWAYS as a consultant for Icinga, Puppet, Ansible, Foreman and other systems management solutions. He previously worked as a senior administrator for a statutory pension insurance institution, where he was also responsible for training the apprentices, as he is now at NETWAYS.