Puppet stored configurations and Icinga

This was the first time I worked with Puppet’s stored configurations. The reason was that we wanted an automatic Icinga configuration for all hosts managed by Puppet. For those who don’t know: Icinga is a fork of Nagios. You can find more information about Icinga at http://docs.icinga.org/.

There is already plenty of good information out there covering stored configurations together with Nagios/Icinga. For a general introduction to stored configurations, the documentation at Puppetlabs is a good place to start. Mike’s Place also gives a good example of how to configure Puppet to generate a Nagios configuration, and some googling will turn up further resources.

However, there are some small differences since I’m using Icinga and not Nagios, so I thought it might be a good idea to share my experiences in this post. Some of the information will be redundant, some not, but I hope it will be useful overall.

Make the Puppet master store its configuration

By default Puppet has no capability for storing configurations, so one has to make the Puppet master talk to a database. I simply reused an already existing MySQL server on that machine and created a new database along with a new user for it:

[root@bob ~]# mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
...
mysql> create database puppet;
Query OK, 1 row affected (0.00 sec)

mysql> grant all privileges on puppet.* to puppet@localhost identified by 'xxx';
Query OK, 0 rows affected (0.00 sec)

mysql>

No more work needs to be done on the MySQL server side; the Puppet master creates the initial database structure after a restart. Before that, however, I had to tell Puppet to use stored configurations from now on, so I added the appropriate lines to Puppet’s configuration file puppet.conf:

[puppetmasterd]
      user = puppet
      group = puppet
      reports = puppet_dashboard
      storeconfigs = true
      dbadapter = mysql
      dbuser = puppet
      dbpassword = xxx
      dbserver = localhost
      dbsocket = /var/run/mysqld/mysqld.sock

The last six lines, from storeconfigs = true down to dbsocket, are the new settings that make Puppet store its configuration and tell it where to store it.

In my case it turned out that I was missing Ruby’s ActiveRecord library, which led to an error when restarting the Puppet master. So I had to install it first:

[root@bob puppet]# rpm -i /home/badam/rubygem-activerecord-2.3.8-4.puias6.noarch.rpm
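
If there is no suitable RPM for your distribution, installing the gem directly via RubyGems should work as well; I haven’t tested that path myself, but pinning the version to the 2.3.x series (matching the RPM above) is probably a good idea with Puppet releases of this vintage:

[root@bob puppet]# gem install activerecord -v 2.3.8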

After restarting the Puppet master I ran into this error:

puppetd[2279]: Could not retrieve catalog from remote server: Error 400 on SERVER: undefined method `fact_merge' for nil:NilClass

It seems there is at least one bug entry at Puppetlabs covering this issue. Since some of the people involved mentioned that simply restarting the Puppet master made the error disappear, I tried that as well, and it worked.
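
For reference, “restarting the Puppet master” on a RHEL-style system like this one is just the usual init script; the service name may differ on other distributions:

[root@bob ~]# service puppetmaster restart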

Make the clients export their Nagios configuration

This code example makes any node that includes the class icinga::observed say something like: “I’d like to have Icinga monitoring for my host, using a service check ‘ping’.”

class icinga::observed {

  include icinga::nrpe

  $prefix="(PG): "

  # export a host definition for this node
  @@nagios_host { $fqdn:
       ensure  => present,
       alias   => $hostname,
       address => $ipaddress,
       use     => "linux-server",
  }

  # export a ping service check for this node
  @@nagios_service { "check_ping_${hostname}":
       check_command       => "check_ping!100.0,20%!500.0,60%",
       use                 => "generic-service",
       host_name           => $fqdn,
       service_description => "${prefix}PING: ${hostname}",
  }
...
}

The prefix “(PG): ” simply makes it easy to see in Icinga’s GUI whether a check was created by Puppet or manually. Since the service and host names are built from Facter variables, they will always be unique.
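
Just to illustrate what these exported resources eventually turn into: once collected on the Puppet master (see the next class), each nagios_host and nagios_service resource is written out as an ordinary Icinga object definition, roughly like this (the values below are made up; in reality they come from Facter on the respective client):

define host {
        host_name                      some.example.host.de
        alias                          some
        address                        192.0.2.10
        use                            linux-server
}

define service {
        host_name                      some.example.host.de
        service_description            (PG): PING: some
        check_command                  check_ping!100.0,20%!500.0,60%
        use                            generic-service
}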

Now we need another class that collects the stored configuration from the clients. This class must be applied on the Puppet master, since it actually creates the configuration files for Icinga.

class icinga::icinga {

  $baseconfigdir   = "/etc/icinga/build-infra"
  $conf_file_hosts = "hosts_pupp_gen.cfg"
  $conf_file_srvs  = "srvs_pupp_gen.cfg"

  # collect all exported nagios_host resources into one file
  Nagios_host <<||>> {
    target => "${baseconfigdir}/${conf_file_hosts}",
  }

  # collect all exported nagios_service resources into another file
  Nagios_service <<||>> {
    target => "${baseconfigdir}/${conf_file_srvs}",
  }

...

Since the default path for storing the generated configuration is /etc/nagios/nagios_host.cfg, which is not what I wanted, I set the target attribute, which contains the absolute file name:

target => "${baseconfigdir}/${conf_file_hosts}",

Now I was ready to run the Puppet client with its class icinga::observed and the Puppet master with icinga::icinga. Let’s see what happened.
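
For completeness, the node assignments on the Puppet master looked roughly like this (the node names below are made up; your site.pp will of course differ):

# a client that should be monitored
node 'some.example.host.de' {
  include icinga::observed
}

# the host running Icinga (at that time also the Puppet master)
node 'bob.example.host.de' {
  include icinga::observed
  include icinga::icinga
}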

Verifying the first export of a client’s data

Once the first Puppet client using the above configuration (icinga::observed) has run, one can verify whether it really exported its configuration by having a look at the database:

mysql> select restype,host_id,exported from resources where exported = true;
+----------------+---------+----------+
| restype        | host_id | exported |
+----------------+---------+----------+
| Nagios_host    |       2 |        1 |
| Nagios_service |       2 |        1 |
+----------------+---------+----------+
2 rows in set (0.00 sec)

mysql>

If the exported column is set to true, this indicates that the resource is now available for the Puppet master to collect. This is a stored configuration! So everything looked fine.

Verifying if the right host has reported

In order to see whether it was the right host that reported the resource, I had a look at the hosts table:

mysql> select name from hosts where id=2;
+-------------------------+
| name                    |
+-------------------------+
|some.example.host.de     |
+-------------------------+
1 row in set (0.00 sec)

mysql>
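
Both checks can also be combined into a single query that joins the two tables (just a sketch, using the same column names as in the queries above):

mysql> select h.name, r.restype from resources r
    ->   join hosts h on r.host_id = h.id
    ->   where r.exported = true;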

Verifying the generated Icinga configuration

The first part is done: we have a stored configuration in the database. The concrete Icinga configuration is created as soon as Puppet runs on the Puppet master (meaning the Puppet client daemon that runs on the Puppet master’s host).
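
To see the result right away instead of waiting for the next scheduled run, a one-off agent run can be triggered on that host. This setup still uses the old 2.x command names (hence puppetd); on newer versions the equivalent is puppet agent --test:

[root@bob ~]# puppetd --test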

Having a look at the files inside Icinga’s configuration directory showed:

[icinga@bob icinga]$ l build-infra/
total 40K
drwxrwxr-x. 3 icinga icinga 4.0K Sep 23 10:44 .
drwxrwxr-x. 7 icinga icinga 4.0K Sep 20 17:47 ..
-rw-rw-r--. 1 icinga icinga 7.1K Sep 20 13:38 bob.cfg
-rw-rw-r--. 1 icinga icinga 9.8K Sep 21 15:43 buildservers.cfg
-rw-------. 1 root   root    378 Sep 23 10:02 hosts_pupp_gen.cfg
-rw-------. 1 root   root    547 Sep 23 09:52 srvs_pupp_gen.cfg
drwxrwxr-x. 6 icinga icinga 4.0K Sep 21 15:43 .svn
[icinga@bob icinga]$

Great! The files were created. However, the access rights were such that Icinga (which runs as the ‘icinga’ user) was not able to read them. Therefore I had to change the owner, group and permissions by adding additional file resources to Puppet’s configuration:

class icinga::icinga {

  $baseconfigdir   = "/etc/icinga/build-infra"
  $conf_file_hosts = "hosts_pupp_gen.cfg"
  $conf_file_srvs  = "srvs_pupp_gen.cfg"

  Nagios_host <<||>> {
    target => "${baseconfigdir}/${conf_file_hosts}",
  }

  # make the generated hosts file readable for the icinga user
  file { "${baseconfigdir}/${conf_file_hosts}":
    ensure => "file",
    owner  => "icinga",
    group  => "icinga",
    mode   => "0644",
  }

  Nagios_service <<||>> {
    target => "${baseconfigdir}/${conf_file_srvs}",
  }

  # make the generated services file readable for the icinga user
  file { "${baseconfigdir}/${conf_file_srvs}":
    ensure => "file",
    owner  => "icinga",
    group  => "icinga",
    mode   => "0644",
  }

...

After the next Puppet run on the master’s host everything was fine:

[icinga@bob icinga]$ l build-infra/
total 40K
drwxrwxr-x. 3 icinga icinga 4.0K Sep 23 10:44 .
drwxrwxr-x. 7 icinga icinga 4.0K Sep 20 17:47 ..
-rw-rw-r--. 1 icinga icinga 7.1K Sep 20 13:38 bob.cfg
-rw-rw-r--. 1 icinga icinga 9.8K Sep 21 15:43 buildservers.cfg
-rw-rw-r--. 1 icinga icinga  378 Sep 23 10:02 hosts_pupp_gen.cfg
-rw-rw-r--. 1 icinga icinga  547 Sep 23 09:52 srvs_pupp_gen.cfg
drwxrwxr-x. 6 icinga icinga 4.0K Sep 21 15:43 .svn
[icinga@bob icinga]$

In order to notify Icinga of the new configuration I had to restart it. However, it is always a good idea to check first whether the new configuration is valid, using Icinga’s ‘-v’ argument. So I did:

[icinga@bob icinga]$ icinga -v icinga.cfg

Icinga 1.3.0
...
Reading configuration data...
Read main config file okay...
Processing object config file '/etc/icinga/objects/commands.cfg'...
Processing object config file '/etc/icinga/objects/contacts.cfg'...
...

Total Warnings: 0
Total Errors: 0

Things look okay - No serious problems were detected during the pre-flight check
[icinga@bob icinga]$

If there are no warnings or errors, one may safely restart Icinga with

service icinga restart

or an alternative command.
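
If you want to avoid the manual restart altogether, one option is to let Puppet handle it: manage the Icinga service inside icinga::icinga and have the generated files notify it, so that every change triggers a restart automatically. A rough sketch (assuming the service is simply called ‘icinga’ on this host; I haven’t used this on my setup):

service { "icinga":
  ensure => running,
  enable => true,
}

file { "${baseconfigdir}/${conf_file_hosts}":
  ensure => "file",
  owner  => "icinga",
  group  => "icinga",
  mode   => "0644",
  # restart Icinga whenever Puppet (re)writes the generated hosts file
  notify => Service["icinga"],
}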

Conclusion

For me, stored and exported configurations are one of the most exciting features of Puppet, since they cover an almost classical scenario: have Puppet set up a service on a client and have Icinga/Nagios monitor it. With stored configurations you no longer have to maintain Icinga’s configuration separately from Puppet’s; everything lives in one place, which makes things much easier. Another benefit is that the Puppet-generated Icinga configuration can exist in parallel with the manually created one; they don’t interfere.

However, there is always room for new problems. While writing this post we were restructuring our servers, and Icinga will no longer be on the same machine as the Puppet master. So what to do? I already got some hints from Puppet’s mailing list, and once the solution is in place it will probably be worth another blog post.
