
Revision 843fca5c

Added by Alexis Mousset about 7 years ago

Fixes #10350: Change page split level and reorganize sections

View differences:

00_introduction/20_architecture.txt
=== Rudder components
==== Rudder components
The Rudder infrastructure uses three types of machines:
10_installation/11_install_agent/00_install_agent.txt
For a machine to become a managed Node, you have to install the Rudder Agent on it.
The Node will afterwards register itself on the server. Finally, the Node must
be acknowledged in the Rudder Server interface to become a managed Node. For a more detailed
description of the workflow, please refer to the <<advancedusage,Advanced Usage>> part
description of the workflow, please refer to the TODO part
of this documentation.
[NOTE]
12_upgrade/00_upgrade.txt
upgrade procedures manually, but you will note that several data migrations
occur during the upgrade process.
==== Upgrade from Rudder 3.1, 3.2 or 4.0
Migrations from 3.1, 3.2 or 4.0 are supported, so you can upgrade directly to 4.1.
[WARNING]
====
In Rudder 4.0, we changed the default communication protocol between agent and server,
while staying compatible with the old protocol. Hence, you can safely keep using
pre-4.0 agents with a 4.0 or 4.1 server.
However, some networking issues may appear when using 4.0 or 4.1 agents with older servers
if the reverse DNS lookup option is disabled in the settings (*Security* ->
*Use reverse DNS lookups on nodes to reinforce authentication to policy server*).
Therefore, you need to upgrade your server to 4.1 *before* upgrading the nodes, so that the
configuration distributed to the nodes includes the use of the new protocol.
====
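In practice, the upgrade itself is done with your distribution's package manager. As a
minimal sketch, assuming a Debian-like system using the official Rudder repositories
(the package name is an assumption and may differ on your installation):
----
# Refresh the package lists, then upgrade the Rudder server packages
apt-get update
apt-get install rudder-server-root
----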
==== Upgrade from Rudder 3.0 or older
Direct upgrades from 3.0.x and older are no longer supported on 4.1.
If you are still running one of those, either on servers or nodes,
please first upgrade to one of the supported versions above, and then upgrade to 4.1.
12_upgrade/05_caution.txt
=== Caution cases
=== Upgrade notes
==== Upgrade from Rudder 3.0 or older
Direct upgrades from 3.0.x and older are no longer supported on 4.1.
If you are still running one of those, either on servers or nodes,
please first upgrade to one of the supported versions above, and then upgrade to 4.1.
==== Upgrade from Rudder 3.1, 3.2 or 4.0
Migrations from 3.1, 3.2 or 4.0 are supported, so you can upgrade directly to 4.1.
[WARNING]
====
In Rudder 4.0, we changed the default communication protocol between agent and server,
while staying compatible with the old protocol. Hence, you can safely keep using
pre-4.0 agents with a 4.0 or 4.1 server.
However, some networking issues may appear when using 4.0 or 4.1 agents with older servers
if the reverse DNS lookup option is disabled in the settings (*Security* ->
*Use reverse DNS lookups on nodes to reinforce authentication to policy server*).
Therefore, you need to upgrade your server to 4.1 *before* upgrading the nodes, so that the
configuration distributed to the nodes includes the use of the new protocol.
====
==== Compatibility between Rudder agent 4.1 and older server versions
......
reporting on the server.
Read <<_rsyslog, performance notes>> about rsyslog for detailed information.
==== Upgrade manually installed relays (installed before 3.0)
With Rudder 2.11, there was no relay package, and the configuration had to be done by hand.
To migrate a manually installed relay to 3.1 using the package, run the following instructions:
* Delete the previous Apache configuration file:
** `/etc/httpd/conf.d/rudder-default.conf` file on RHEL-like
** `/etc/apache2/sites-enabled/rudder-default` file on Debian-like
** `/etc/apache2/vhosts.d/rudder-default.conf` file on SuSE
* Install the relay package named *rudder-server-relay*.
This is enough to replace the relay configuration, and no change is needed on the root server.
==== Known issues
* After upgrade, if the web interface has display problems, empty your navigator cache and/or logout/login.
12_upgrade/70_migrating_relays.txt
=== Upgrade manually installed relays
With Rudder 2.11, there was no relay package, and the configuration had to be done by hand.
To migrate a manually installed relay to 3.1 using the package, run the following instructions
(a shell sketch follows the list):
* Delete the previous Apache configuration file:
** `/etc/httpd/conf.d/rudder-default.conf` file on RHEL-like
** `/etc/apache2/sites-enabled/rudder-default` file on Debian-like
** `/etc/apache2/vhosts.d/rudder-default.conf` file on SuSE
* Install the relay package named *rudder-server-relay*.
This is enough to replace the relay configuration, and no change is needed on the root server.
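As an illustration, the whole procedure on a Debian-like relay could look like the
following sketch (paths taken from the list above; the final service restart is an
extra precaution, not a documented requirement):
----
# Remove the hand-made Apache configuration (Debian-like path)
rm /etc/apache2/sites-enabled/rudder-default
# Install the packaged relay, which ships its own Apache configuration
apt-get install rudder-server-relay
# Make sure Apache picks up the packaged configuration
service apache2 restart
----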
20_usage/00_usage_intro.txt
== Rudder Web Interface
== Web interface usage
This chapter is a general presentation of the Rudder Web Interface. You will
find how to authenticate in the application, a description of the design of the
21_node_management/20_node_management.txt
== Node Management
== Node management
[[inventory, Node Inventory]]
=== Node Inventory
=== Node inventory
Rudder integrates a node inventory tool which harvests useful information
about the nodes. This information is used by Rudder to handle the nodes, and
21_node_management/22_search_nodes.txt
[[quick-search, Quick Search]]
==== Quick Search
==== Quick search
You might have noticed the small text area at the top of the Rudder interface:
it is the Quick Search bar. Its purpose is to enable a user to easily search for
22_configuration_management/30_configuration_management.txt
== Configuration Management
== Configuration concepts
22_configuration_management/31_techniques.txt
====
==== Available Techniques
22_configuration_management/32_list_of_techniques.txt
// FIXME: this list should be generated from PT source code, see
// https://redmine.normation.com/issues/1621 when it is done, uncomment following
// line and delete unneeded paragraphs: include::../temp/available_pt.txt
===== Application management
Apache 2 HTTP server:: This Policy Template will configure the Apache HTTP
server and ensure it is running. It will ensure the "apache2" package is
installed (via the appropriate packaging tool for each OS), ensure the service
is running (starting it if needed), and ensure the service is configured to run on
initial system startup. Configuration will create a rudder vhost file.
APT package manager configuration:: Configure the apt-get and aptitude tools on
GNU/Linux Debian and Ubuntu, especially the source repositories.
OpenVPN client:: This Policy Template will configure the OpenVPN client service
and ensure it is running. It will ensure the "openvpn" package is installed (via
the appropriate packaging tool for each OS), ensure the service is running
(starting it if needed), and ensure the service is configured to run on initial system
startup. Configuration will create a rudder.conf file. As of this version, only
the PSK peer identification method is supported; please use the "Download File"
Policy Template to distribute the secret key.
Package management for Debian / Ubuntu / APT based systems:: Install, update or
delete packages, automatically and consistently on GNU/Linux Debian and Ubuntu.
Package management for RHEL / CentOS / RPM based systems:: Install, update or
delete packages, automatically and consistently on GNU/Linux CentOS and RHEL.
===== Distributing files
Copy a file:: Copy a file on the machine
Distribute ssh keys:: Distribute ssh keys on servers
Download a file:: Download a file from a standard URL (HTTP/FTP), and set
permissions on the downloaded file.
===== File state configuration
Set the permissions of files:: Set the permissions of files
===== System settings: Miscellaneous
Time settings:: Set up the time zone, the NTP server, and the frequency of time
synchronisation to the hardware clock. Also ensures that the NTP service is
installed and started.
===== System settings: Networking
Hosts settings:: Configure the contents of the hosts file on any operating
system (Linux and Windows).
IPv4 routing management:: Control IPv4 routing on any system (Linux and
Windows), with four possible actions: add, delete (changes will be made), check
presence or check absence (a warning may be returned, but no changes will be
made) for a given route.
Name resolution:: Set up the IP address of the DNS server, and the default
search domain.
NFS Server:: Configure an NFS server
===== System settings: Process
Process Management:: Enforce defined parameters on system processes
===== System settings: Remote access
OpenSSH server:: Install and set up the SSH service on Linux nodes. Many
parameters are available.
===== System settings: User management
Group management:: This Policy Template manages groups on the target host(s). It
will ensure that the defined groups are present on the system.
Sudo utility configuration:: This Policy Template configures the sudo utility.
It will ensure that the rights for the given users and groups are correctly
set.
User management:: Control users on any system (Linux and Windows), including
passwords, with four possible actions: add, delete (changes will be made), check
presence or check absence (a warning may be returned, but no changes will be
made) for a given user.
22_configuration_management/37_validation_workflow.txt
image::./images/workflows/States.png[]
===== Change request management page
==== Change request management page
All Change requests can be seen on the /secure/utilities/changeRequests page.
There is a table containing all requests; you can access each of them by clicking on its id.
23_manage_your_it/00_manage_your_it.txt
[[_manage_your_it]]
== Configuration Policies
== Configuration policies
23_manage_your_it/5_usecases/0_usecases_intro.txt
=== Usecases
This chapter gives a few examples for using Rudder. We have no doubt that you'll
have your own ideas, and we're impatient to hear about them...
==== Dynamic groups by operating system
Create dynamic groups for each operating system you administer, so that you can
apply specific policies to each type of OS. When new nodes are added to Rudder,
these policies will automatically be enforced upon them.
==== Library of preventive policies
Why not create policies for emergency situations in advance? You can then put
your IT infrastructure in "panic" mode in just a few clicks.
For example, using the provided Techniques, you could create a Name
resolution Directive to use your own internal DNS servers for normal situations,
and a second, alternative Directive, to use Google's public DNS servers, in case
your internal DNS servers are no longer available.
==== Standardizing configurations
You certainly have your own best practices (let's call them good habits) for
setting up your SSH servers.
But is that configuration the same on all your servers? Enforce the settings
you really want using an OpenSSH server policy and apply it to all your Linux
servers. SSH servers can then be stopped or reconfigured manually many times;
Rudder will always restore your preferred settings and restart the SSH server in
less than 5 minutes.
==== Using Rudder as an Audit tool
Using Rudder as an Audit tool is useful if you do not want to make any changes on the system,
temporarily (freeze period, etc.) or permanently.
To use Rudder as an Audit tool without modifying any configuration on your systems,
set the Policy Mode to *Audit* in the Settings, and do not allow overriding.
==== Using Audit mode to validate a policy before applying it
Before applying a configuration policy to some systems (a new policy or a new system),
you can switch the policy mode of the directive defining this policy, or of the nodes
it is applied to, to *Audit*.
This is particularly useful when adding rules to enforce policies that are supposed to be already applied:
you can measure the gap between expected and actual state, and check what changes would be made before applying them.
25_administration/00_administration_intro.txt
== Administration
This chapter covers basic administration tasks of Rudder services, like
configuring some parameters of the Rudder policy server, reading the service
logs, and starting, stopping or restarting Rudder services.
25_administration/10_archives.txt
[[archives, Archives]]
=== Archives
==== Archive usecases
The archive feature of Rudder allows you to:
* Exchange configuration between multiple Rudder instances, in particular when
having distinct environments;
* Keep a history of major changes.
===== Changes testing
Export the current configuration of Rudder before you begin to make any change
you have to test: if anything goes wrong, you can return to this archived state.
===== Changes qualification
Assuming you have multiple Rudder instances, each one dedicated to a
development, qualification or production environment, you can prepare the
changes on the development instance, export an archive, deploy this archive on
the qualification environment, then on the production environment.
.Versions of the Rudder servers
[WARNING]
===========
If you want to export and import configurations between environments, the version
of the source and target Rudder server must be exactly the same. If the versions
don't match (even if only the minor versions are different), there is a risk that
the import will break the configuration on the target Rudder server.
===========
==== Concepts
In the 'Administration > Archives' section of the Rudder Server web interface, you
can export and import the configuration of Rudder Groups, Directives and Rules.
You can either archive the complete configuration, or only the subset dedicated
to Groups, Directives or Rules.
When archiving configuration, a 'git tag' is created in +/var/rudder/configuration-repository+.
This tag is then referenced in the Rudder web interface, and available for download
as a zip file. Please note that each change in the Rudder web interface is also
committed in the repository.
The content of this repository can be imported into any Rudder server (with the same version).
==== Archiving
To archive Rudder Rules, Groups, Directives, or make a global archive, you need to go to
the 'Administration > Archives' section of the Rudder Server web interface.
To perform a global archive, the steps are:
. Click on 'Archive everything' - it will update the drop down list 'Choose an archive' with
the latest data
. In the drop down list 'Choose an archive', select the newly created archive (archives are sorted
by date), for example 2015-01-08 16:39
. Click on 'Download as zip' to download an archive that will contain all elements.
==== Importing configuration
On the target server, importing the configuration will "merge" it with the existing configuration:
every group, rule, directive or technique with the same identifier will be replaced by the import,
and all others will remain untouched.
To import the archive on the target Rudder server, follow these steps (a consolidated
shell sketch follows the list):
. Uncompress the zip archive in /var/rudder/configuration-repository
. If necessary, correct the file permissions: +chown -R root:rudder directives groups parameters ruleCategories rules techniques+
. Add all files in the git repository: +git add . && git commit -am "Importing configuration"+
. Finally, in the Web interface, go to the 'Administration > Archives' section, and select
'Latest Git commit' in the drop down list in the Global archive section, and click on 'Restore
everything' to restore the configuration.
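Put together, the command-line part of this procedure could look like the sketch below
(the archive file name is hypothetical; the final restore step is done in the web interface):
----
cd /var/rudder/configuration-repository
# 1. Uncompress the downloaded archive (hypothetical file name)
unzip /tmp/rudder-archive-2015-01-08.zip
# 2. If necessary, correct the file permissions
chown -R root:rudder directives groups parameters ruleCategories rules techniques
# 3. Commit the imported files in the git repository
git add . && git commit -am "Importing configuration"
----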
[TIP]
====
You can also perform the synchronisation from one environment to another by
using git, through a single git repository referenced on both environments.
For instance, using one single git repository you can follow this workflow:
. On Rudder test:
.. Use Rudder web interface to prepare your policy;
.. Create an archive;
.. +git push+ to the central repository;
. On Rudder production:
.. +git pull+ from the central repository;
.. Use Rudder web interface to import the qualified archive.
====
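With a shared remote repository (called 'origin' here, a hypothetical name), the
synchronisation part of this workflow boils down to:
----
# On the test server, after creating the archive in the web interface
cd /var/rudder/configuration-repository
git push origin master

# On the production server, before importing from the web interface
cd /var/rudder/configuration-repository
git pull origin master
----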
==== Deploy a preconfigured instance
You can use the procedures of Archiving and Restoring configuration to deploy
preconfigured instances. First prepare the configuration for Groups, Directives
and Rules in your lab, create an Archive, and import the Archive on the
new Rudder server installation.
25_administration/10_event_logs.txt
=== Event Logs
Every action happening in the Rudder web interface is logged in the
PostgreSQL database. The last 1000 event log entries are displayed in the
*Administration > View Event Logs* section of the Rudder web application. Each
log item is described by its 'ID', 'Date', 'Actor', 'Event Type',
'Category' and 'Description'. For the most complex events, like changes in
nodes, groups, techniques, directives, deployments, more details can be
displayed by clicking on the event log line.
Event Categories::
* User Authentication
* Application
* Configuration Rules
* Policy
* Technique
* Policy Deployment
* Node Group
* Nodes
* Rudder Agents
* Policy Node
* Archives
25_administration/20_policy_server.txt
=== Policy Server
The *Administration > Policy Server Management* section sums up information about
the Rudder policy server and its parameters.
==== Configure allowed networks
Here you can configure the networks from which nodes are allowed to connect to
the Rudder policy server to get their updated rules.
You can add as many networks as you want; the expected format is:
+networkip/mask+, for example +42.42.0.0/16+.
==== Clear caches
Clear cached data, like node configuration. That will trigger a full
redeployment, with regeneration of all promise files.
==== Reload dynamic groups
Reload dynamic groups, so that new nodes and their inventories are taken into
account. Normally, dynamic groups are automatically reloaded, unless this feature
is explicitly disabled in the Rudder configuration file.
25_administration/30_plugins.txt
=== Plugins
Rudder is an extensible software. The *Administration > Plugin Management*
section sums up information about loaded plugins, their versions and their
configuration.
A plugin is a JAR archive. The web application must be restarted after
installation of a plugin.
==== Install a plugin
To install a plugin, copy the JAR file and the configuration file into the
corresponding directories.
+/opt/rudder/share/rudder-plugins/+::
This directory contains the JAR files of the plugins.
+/opt/rudder/etc/plugins/+::
This directory contains the configuration files of the plugins.
Then, register the plugin, using its name without the ".jar" extension and
restart Rudder:
----
# register plugin
/opt/rudder/bin/rudder-plugin register plugin-name-without-jar-extension
# restart Rudder
/etc/init.d/rudder-jetty restart
----
25_administration/50_services_administration.txt
=== Basic administration of Rudder services
==== Restart the agent of the node
To restart the Rudder Agent, use the following command on a node:
----
service rudder-agent restart
----
[TIP]
====
This command can take more than one minute to restart the CFEngine daemon.
This is not a bug, but an internal protection system of CFEngine.
====
==== Restart the root rudder service
===== Restart everything
You can restart all components of the Rudder Root Server at once:
----
service rudder restart
----
===== Restart only one component
Here is the list of the components of the root server with a brief description
of their role, and the command to restart them:
include::../glossary/cfengine-server.txt[]
----
service rudder-agent restart
----
include::../glossary/web-server-application.txt[]
----
service rudder-jetty restart
----
include::../glossary/web-server-front-end.txt[]
----
service apache2 restart
----
include::../glossary/ldap-server.txt[]
----
service rudder-slapd restart
----
include::../glossary/sql-server.txt[]
----
service postgresql* restart
----
25_administration/70_system_password_management.txt
=== Password upgrade
This version of Rudder uses a central file to manage the passwords that will
be used by the application: /opt/rudder/etc/rudder-passwords.conf
When first installing Rudder, this file is initialized with default values,
and when you run rudder-init, it will be updated with randomly generated
passwords.
In the majority of cases, this is fine; however, you might want to adjust the
passwords manually. This is possible, but be cautious when editing the file:
if you corrupt it, Rudder will no longer be able to operate correctly and
will log numerous errors.
As of now, this file follows a simple syntax: ELEMENT:password
You are able to configure three passwords in it: the OpenLDAP one, the
PostgreSQL one and the authenticated WebDAV one.
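As an illustration, the file could look like the sketch below. Only
RUDDER_WEBDAV_PASSWORD appears in the command later in this section; the other
key names are assumptions, so check your own file for the exact spelling:
----
# /opt/rudder/etc/rudder-passwords.conf (illustrative content only)
RUDDER_OPENLDAP_BIND_PASSWORD:somesecret
RUDDER_PSQL_PASSWORD:anothersecret
RUDDER_WEBDAV_PASSWORD:yetanothersecret
----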
If you edit this file, Rudder will take care of applying the new passwords
everywhere it is needed, however it will restart the application automatically
when finished, so take care of notifying users of potential downtime before
editing passwords.
Here is a sample command to regenerate the WebDAV password with a random
password; it is portable across all supported systems. Just change
"RUDDER_WEBDAV_PASSWORD" to the statement corresponding to
the password you want to change.
----
sed -i s/RUDDER_WEBDAV_PASSWORD.*/RUDDER_WEBDAV_PASSWORD:$(dd if=/dev/urandom count=128 bs=1 2>&1 | md5sum | cut -b-12)/ /opt/rudder/etc/rudder-passwords.conf
----
25_administration/80_user_management.txt
[[user-management]]
=== User management
Change the users authorized to connect to the application.
You can define an authorization level for each user.
==== Configuration of the users using a XML file
===== Generality
The credentials of a user are defined in the XML file
+/opt/rudder/etc/rudder-users.xml+. This file expects the following format:
----
<authentication hash="sha512">
<user name="alice" password="xxxxxxx" role="administrator"/>
<user name="bob" password="xxxxxxx" role="administration_only, node_read"/>
<user name="custom" password="xxxxxxx" role="node_read,node_write,configuration_read,rule_read,rule_edit,directive_read,technique_read"/>
</authentication>
----
The name and password attributes are mandatory (non-empty) for the user tags.
The role attribute can be omitted, but then the user will have no permissions;
only valid attributes are recognized.
Every modification of this file should be followed by a restart of the Rudder
web application to be taken into account:
----
service rudder-jetty restart
----
===== Passwords
The authentication tag should have a "hash" attribute; the "password" attribute
of every user is then expected to contain a hashed password. Not specifying a hash
attribute falls back to plain-text passwords, which is strongly discouraged for security reasons.
The algorithm used to create the hash (and verify it during authentication)
depends on the value of the hash attribute. The possible values, the
corresponding algorithms and the Linux shell commands needed to obtain the hash of
the password "secret" are listed here:
.Hashed password algorithms list
[options="header"]
|====
|Value | Algorithm | Linux command to hash the password
|"md5" | MD5 | +read mypass; echo -n $mypass \| md5sum+
|"sha" or "sha1" | SHA1 | +read mypass; echo -n $mypass \| shasum+
|"sha256" or "sha-256" | SHA256 | +read mypass; echo -n $mypass \| sha256sum+
|"sha512" or "sha-512" | SHA512 | +read mypass; echo -n $mypass \| sha512sum+
|====
When using the suggested commands to hash a password, you must enter the
command, then type your password, and hit return. The hash will then be
displayed in your terminal. This avoids storing the password in your shell
history.
Here is an example of authentication file with hashed password:
----
<authentication hash="sha256">
<!-- In this example, the hashed password is: "secret", hashed as a sha256 value -->
<user name="carol" password="2bb80d537b1da3e38bd30361aa855686bde0eacd7162fef6a25fe97bf527a25b" role="administrator"/>
</authentication>
----
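You can verify the example above yourself by hashing the word "secret" with the
matching command from the table:
----
read mypass; echo -n $mypass | sha256sum
# type "secret" and hit return; the output is:
# 2bb80d537b1da3e38bd30361aa855686bde0eacd7162fef6a25fe97bf527a25b  -
----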
[[ldap-auth-provider, LDAP authentication provider for Rudder]]
==== Configuring an LDAP authentication provider for Rudder
If you are operating on a corporate network or want to have your users in a
centralized database, you can enable LDAP authentication for Rudder users.
===== LDAP is only for authentication
Take care of the following limitation of the current process: only *authentication*
is delegated to LDAP, NOT *authorizations*. So you still have to
declare users' authorizations in the Rudder user file (rudder-users.xml).
A user whose authentication is accepted by LDAP but not declared in the
rudder-users.xml file is considered to have no rights at all (and so will
only see a reduced version of the Rudder homepage, with no actions or tabs available).
The credentials of a user are defined in the XML file
+/opt/rudder/etc/rudder-users.xml+. It expects the same format as regular file-based
user login, but in this case "name" will be the login used to connect to LDAP and the
'password' field will be ignored and should be set to "LDAP" to make it clear that
this Rudder installation uses LDAP to log users in.
Every modification of this file should be followed by a restart of the Rudder
web application to be taken into account:
----
service rudder-jetty restart
----
===== Enable LDAP authentication
LDAP authentication is enabled by setting the property +rudder.auth.ldap.enable+ to +true+
in the file +/opt/rudder/etc/rudder-web.properties+.
The LDAP authentication process is a bind/search/rebind, in which an application
connection (bind) is used to search (search) for a user entry given some base and
filter parameters, and then a bind (rebind) is attempted on that entry with the
credentials provided by the user.
Next, you have to set up the connection parameters of the LDAP directory to use.
There are five properties to configure:
- rudder.auth.ldap.connection.url
- rudder.auth.ldap.connection.bind.dn
- rudder.auth.ldap.connection.bind.password
- rudder.auth.ldap.searchbase
- rudder.auth.ldap.filter
The search base and filter are used to find the user. The search base may be left empty, and
in the filter, {0} will be replaced by the value provided as the user login.
Here are some usage examples.
On a standard LDAP directory:
----
rudder.auth.ldap.searchbase=ou=People
rudder.auth.ldap.filter=(&(uid={0})(objectclass=person))
----
On Active Directory:
----
rudder.auth.ldap.searchbase=
rudder.auth.ldap.filter=(&(sAMAccountName={0})(objectclass=user))
----
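Putting it together, the LDAP section of +rudder-web.properties+ could look like
the sketch below, where every value is a placeholder to adapt to your own directory:
----
rudder.auth.ldap.enable=true
# Placeholder values below, adapt to your directory
rudder.auth.ldap.connection.url=ldap://ldap.example.com:389
rudder.auth.ldap.connection.bind.dn=cn=rudder,ou=applications,dc=example,dc=com
rudder.auth.ldap.connection.bind.password=secret
rudder.auth.ldap.searchbase=ou=People
rudder.auth.ldap.filter=(&(uid={0})(objectclass=person))
----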
==== Authorization management
For every user you can define an access level, allowing them to access different
pages or to perform different actions depending on their level.
You can also build custom roles with whatever permission you want, using a type
and a level as specified below.
In the XML file, the role attribute is a comma-separated list of permissions/roles.
Each one adds permissions to the user. If one is wrong or not correctly
spelled, the user is set to the lowest rights (NoRights), giving access only to the
dashboard and nothing else.
===== Pre-defined roles
|====
|Name | Access level
|administrator | All authorizations granted, can access and modify everything
|administration_only | Only access to the administration part of Rudder, can do everything within it.
|user | Can access and modify everything but the administration part
|configuration | Can only access and act on configuration section
|read_only | Can access every read-only part, can perform no action
|inventory | Access to information about nodes, can see their inventory, but can't act on them
|rule_only | Access to information about rules, but can't modify them
|====
For each user you can define more than one role, each role adding its authorizations to the user.
For example, "rule_only,administration_only" will give access only to the "Administration" tab as well as the
Rules.
===== Custom roles
You can set a custom set of permissions instead of a pre-defined role.
A permission is composed of a type and a level:
* Type: Indicates what kind of data will be displayed and/or can be set/updated by the user
** "configuration", "rule", "directive", "technique", "node", "group", "administration", "deployment".
* Level: Access level to be granted on the related type
** "read", "write", "edit", "all" (Can read, write, and edit)
Depending on the values you give, the user will have access to different pages and actions in Rudder.
Usage example:
* configuration_read -> Will give read access to the configuration (Rule management, Directives and Parameters)
* rule_write, node_read -> Will give read and write access to the Rules and read access to the Nodes
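In +rudder-users.xml+, such a custom permission set goes directly into the role
attribute; for example (the user name is hypothetical):
----
<authentication hash="sha512">
  <!-- Hypothetical read-only user limited to configuration and nodes -->
  <user name="auditor" password="xxxxxxx" role="configuration_read,node_read"/>
</authentication>
----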
==== Going further
Rudder aims at integrating with your IT system transparently, so it can't force
its own authentication system.
To meet this need, Rudder relies on the modular authentication system Spring
Security, which makes it easy to integrate with databases or an
enterprise SSO like CAS, OpenID or SPNEGO. The documentation for this
integration is not yet available, but don't hesitate to reach out to us on this topic.
25_administration/991_monitoring.txt
=== Monitoring
This section will give recommendations for:
* Monitoring Rudder itself (besides standard monitoring)
* Monitoring the state of your configuration management
==== Monitoring Rudder itself
===== Monitoring a Node
Monitoring a node mainly consists of checking that the Node can communicate with
its policy server, and that the agent runs regularly.
You can use the 'rudder agent health' command to check for communication errors.
It will check the agent configuration and look for connection errors in the last
run logs. By default it will output detailed results, but you can start it with
the '-n' option to enable "nrpe" mode (like Nagios plugins, but it can be
used with other monitoring tools as well). In this mode, it will
display a single line result and exit with:
* 0 for a success
* 1 for a warning
* 2 for an error
If you are using nrpe, you can put this line in your 'nrpe.cfg' file:
----
command[check_rudder]=/opt/rudder/bin/rudder agent health -n
----
To get the last run time, you can look up the modification date of
'/var/rudder/cfengine-community/last_successful_inputs_update'.
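For example, this sketch (assuming GNU coreutils' stat) prints the age of the last
run in minutes, which you can compare against your run interval:
----
# Minutes elapsed since the last successful policy update
file=/var/rudder/cfengine-community/last_successful_inputs_update
echo $(( ( $(date +%s) - $(stat -c %Y "$file") ) / 60 ))
----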
===== Monitoring a Server
You can use regular API calls to check that the server is running and has access to its data.
For example, you can issue the following command to get the list of currently defined rules:
----
curl -X GET -H "X-API-Token: yourToken" http://your.rudder.server/rudder/api/latest/rules
----
You can then check the status code (which should be 200). See the <<rest-api, API documentation>> for more information.
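For instance, the following sketch keeps only the HTTP status code, which makes the
check easy to script:
----
# Prints the HTTP status code only (expected: 200)
curl -s -o /dev/null -w "%{http_code}\n" -H "X-API-Token: yourToken" http://your.rudder.server/rudder/api/latest/rules
----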
You can also check the webapp logs (in '/var/log/rudder/webapp/year_month_day.stderrout.log')
for error messages.
==== Monitoring your configuration management
There are two interesting types of information:
* *Events*: all the changes made by the agents on your Nodes
* *Compliance*: the current state of your Nodes compared with the expected configuration
===== Monitor compliance
You can use the Rudder API to get the current compliance state of your infrastructure.
It can be used to simply check for configuration errors, or be integrated in
other tools.
Here is a very simple example of an API call to check for errors (it exits with 1 when there is an error):
----
curl -s -H "X-API-Token: yourToken" -X GET 'https://your.rudder.server/rudder/api/latest/compliance/rules' | grep -qv '"status": "error"'
----
See the <<rest-api, API documentation>> for more information about general API usage, and the
http://www.rudder-project.org/rudder-api-doc/#api-compliance[compliance API documentation]
for a list of available calls.
===== Monitor events
The Web interface gives access to this, but here we will see how to process events
automatically. They are available on the root server, in '/var/log/rudder/compliance/non-compliant-reports.log'.
This file contains two types of reports about all the nodes managed by this server:
* All the modifications made by the agent
* All the errors that prevented the application of a policy
The lines have the following format:
----
[%DATE%] N: %NODE_UUID% [%NODE_NAME%] S: [%RESULT%] R: %RULE_UUID% [%RULE_NAME%] D: %DIRECTIVE_UUID% [%DIRECTIVE_NAME%] T: %TECHNIQUE_NAME%/%TECHNIQUE_VERSION% C: [%COMPONENT_NAME%] V: [%KEY%] %MESSAGE%
----
In particular, the 'RESULT' field contains the type of event (change or error, respectively 'result_repaired' and 'result_error').
Below is a basic https://www.elastic.co/products/logstash[Logstash] configuration file for parsing Rudder events.
You can then use https://www.elastic.co/products/kibana[Kibana] to explore the data, and create graphs and
dashboards to visualize the changes in your infrastructure.
----
input {
file {
path => "/var/log/rudder/compliance/non-compliant-reports.log"
}
}
filter {
grok {
match => { "message" => "^\[%{DATA:date}\] N: %{DATA:node_uuid} \[%{DATA:node}\] S: \[%{DATA:result}\] R: %{DATA:rule_uuid} \[%{DATA:rule}\] D: %{DATA:directive_uuid} \[%{DATA:directive}\] T: %{DATA:technique}/%{DATA:technique_version} C: \[%{DATA:component}\] V: \[%{DATA:key}\] %{DATA:message}$" }
}
# Replace the space in the date by a "T" to make it parseable by Logstash
mutate {
gsub => [ "date", " ", "T" ]
}
# Parse the event date
date {
match => [ "date" , "ISO8601" ]
}
# Remove the date field
mutate { remove => "date" }
# Remove the key field if it has the "None" value
if [key] == "None" {
mutate { remove => "key" }
}
}
output {
stdout { codec => rubydebug }
}
----
25_administration/992_inventory.txt
=== Use Rudder inventory in other tools
Rudder centralizes the information about your managed systems, and
you can use this information in other tools, mainly through the API.
We will give a few examples here.
==== Export to a spreadsheet
You can export the list of your nodes to a spreadsheet file (xls format) by using a
https://github.com/normation/rudder-tools/tree/master/contrib/rudder_nodes_list[tool] available in the rudder-tools repository.
Simply follow the installation instructions, and run it against your Rudder server.
You will get a file containing:
image::./images/spreadsheet-list-nodes.png[]
You can easily modify the script to add other information.
==== Use the inventory in Rundeck
http://rundeck.org[Rundeck] is a tool that helps automate infrastructures, by
defining jobs that can be run manually or automatically. There is a
http://rundeck.org/plugins/2015/12/02/rudder-nodes.html[plugin] for Rundeck
that allows using Rudder inventory data in Rundeck.
==== Use the inventory in Ansible
There is an https://github.com/ansible/ansible/blob/devel/contrib/inventory/rudder.py[inventory plugin]
for Ansible that makes it possible to use the Rudder inventory (including groups, nodes,
group ids, node ids, and node properties) as an inventory for Ansible, for example
for orchestration tasks on your platform.
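Once the plugin script is set up for your server, you can point Ansible at it like
any dynamic inventory script; a minimal sketch:
----
# List all hosts known to Rudder, then ping them
ansible -i rudder.py all --list-hosts
ansible -i rudder.py all -m ping
----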
26_advanced_node_management/00_intro.txt
== Advanced Node management
26_advanced_node_management/10_node_management.txt
=== Node management
==== Reinitialize policies for a Node
To reinitialize the policies for a Node, delete the local copy of the Applied
Policies fetched from the Rudder Server, and create a new local copy of the
initial promises.
----
rudder agent reset
----
At the next run of the Rudder Agent (it runs every five minutes), the initial promises will be used.
[CAUTION]
====
Use this procedure with caution: the Applied Policies of a Node should never get
broken, unless some major change has occurred on the Rudder infrastructure, like
a full reinstallation of the Rudder Server.
====
==== Completely reinitialize a Node
You may want to completely reinitialize a Node to make it seen as a new node
on the server, for example after cloning a VM.
[WARNING]
====
This command will permanently delete your node uuid and keys, and no configuration will
be applied before re-accepting and configuring the node on the server.
====
The command to reinitialize a Node is:
----
rudder agent reinit
----
This command will delete all local agent data, including its uuid and keys, and
also reset the agent internal state. The only configuration kept is the server
hostname or IP address configured in +policy_server.dat+. It will also send an inventory
to the server, which will treat it as a new node inventory.
==== Change the agent run schedule
By default, the agent runs on all nodes every 5 minutes. You can modify this value in
*Administration* -> *Settings* -> *Agent Run Schedule*, as well as the "splay time"
across nodes (a random delay that alters scheduled run time, intended to spread
load across nodes).
[WARNING]
====
When notably reducing the run interval, reporting can stay in the 'No report' state
until the next run of the agent, which can take up to the previous (longer) interval.
====
26_advanced_node_management/11_node_install.txt
==== Installation of the Rudder Agent
===== Static files
At installation of the Rudder Agent, files and directories are created in the
following places:
+/etc+:: Scripts to integrate Rudder Agent in the system (init, cron).
+/opt/rudder/share/initial-promises+:: Initialization promises for the Rudder
Agent. These promises are used until the Node has been validated in Rudder. They
are kept available at this place afterwards.
+/opt/rudder/lib/perl5+:: The FusionInventory Inventory tool and its Perl
dependencies.
+/opt/rudder/bin/run-inventory+:: Wrapper script to launch the inventory.
+/opt/rudder/sbin+:: Binaries for CFEngine Community.
+/var/rudder/cfengine-community+:: This is the working directory for CFEngine
Community.
===== Generated files
At the end of installation, the CFEngine Community working directory is
populated for first use, and unique identifiers for the Node are generated.
+/var/rudder/cfengine-community/bin/+:: CFEngine Community binaries are copied
there.
+/var/rudder/cfengine-community/inputs+:: Contains the actual working CFEngine
Community promises. Initial promises are copied here at installation. After
validation of the Node, Applied Policies, which are the CFEngine promises
generated by Rudder for this particular Node, will be stored here.
+/var/rudder/cfengine-community/ppkeys+:: A unique SSL key generated for the
Node at installation time.
+/opt/rudder/etc/uuid.hive+:: A unique identifier for the Node is generated
into this file.
===== Services
After all of these files are in place, the CFEngine Community daemons are
launched:
include::../glossary/cf-execd.txt[]
include::../glossary/cf-serverd.txt[]
===== Configuration
At this point, you should configure the Rudder Agent to actually enable the
contact with the server. Type in the IP address of the Rudder Root Server in the
following file:
----
echo *root_server_IP_address* > /var/rudder/cfengine-community/policy_server.dat
----
==== Rudder Agent interactive
You can force the Rudder Agent to run from the console and observe what happens.
----
rudder agent run
----
[CAUTION]
.Error: the name of the Rudder Root Server can't be resolved
====
If the Rudder Root Server name is not resolvable, the Rudder Agent will issue
this error:
----
rudder agent run
Unable to lookup hostname (rudder-root) or cfengine service: Name or service not known
----
To fix it, either set up the agent to use the IP address of the Rudder Root
Server instead of its domain name, or accurately set up the name
resolution of your Rudder Root Server, in your DNS server or in the hosts file.
The Rudder Root Server name is defined in this file:
----
echo *IP_of_root_server* > /var/rudder/cfengine-community/policy_server.dat
----
====
[CAUTION]
.Error: the CFEngine service is not responding on the Rudder Root Server
====
If CFEngine is stopped on the Rudder Root Server, you will get this error:
----
# rudder agent run
!! Error connecting to server (timeout)
!!! System error for connect: "Operation now in progress"
!! No server is responding on this port
Unable to establish connection with rudder-root
----
Restart the CFEngine service:
----
service rudder-agent restart
----
====
==== Processing new inventories on the server
===== Verify the inventory has been received by the Rudder Root Server
There is some delay between the time when the first inventory of the Node is
sent, and the time when the Node appears in the New Nodes section of the web interface.
For the brave and impatient, you can check if the inventory was sent by listing
incoming Nodes on the server:
----
ls /var/rudder/inventories/incoming/
----
===== Process incoming inventories
On the next run of the CFEngine agent on the Rudder Root Server, the new inventory
will be detected and sent to the Inventory Endpoint. The inventory will then be
moved into the directory of received inventories. The Inventory Endpoint does
its job and the new Node appears in the interface.
You can force the execution of the CFEngine agent from the console:
----
rudder agent run
----
===== Validate new Nodes
User interaction is required to validate new Nodes.
===== Prepare policies for the Node
Policies are not shared between the Nodes, for obvious security and
confidentiality reasons. Each Node has its own set of policies. Policies are
generated for a Node in the following cases:
. Node is new;
. Inventory has changed;
. Technique has changed;
. Directive has changed;
. Group of Node has changed;
. Rule has changed;
. Regeneration was forced by the user.
["graphviz", "generate_policy_workflow.png"]
.Generate policy workflow
-------
include::../graphviz/generate_policy_workflow.dot[]
-------
26_advanced_node_management/15_node_execution_frequency.txt
==== Agent execution frequency on nodes
===== Checking configuration (CFEngine)
Rudder is configured to check and repair configurations using the CFEngine
agent every 5 minutes, at 5 minutes past the hour, 10 minutes past the hour,
etc.
The exact run time on each machine will be delayed by a random interval, in
order to "smooth" the load across your infrastructure (also known as "splay
time"). This reduces simultaneous connections on relay and root servers (both
for the CFEngine server and for sending reports).
Up to and including Rudder 2.10.x, this random interval is between 0 and 1
minute. In versions after 2.10.x, this random interval is between 0 and
5 minutes.
===== Inventory (FusionInventory)
The FusionInventory agent collects data about the node it's running on, such as
machine type, OS details, hardware, software, networks, running virtual
machines, running processes, environment variables...
This inventory is scheduled once every 24 hours, and will happen between
0:00 and 5:00 AM. The exact time is randomized across nodes to "smooth" the
load across your infrastructure.
28_advanced_configuration_management/00_intro.txt
== Advanced configuration
28_advanced_configuration_management/30_server_policy_generation.txt
=== Policy generation
Each time a change that has an impact on the CFEngine promises needed by a node
occurs in the Rudder interface, it is necessary to regenerate the modified
promises for every impacted node. By default, this process is launched after each
change.
==== +Regenerate now+ button
The button +Regenerate now+ on the top right of the screen lets you force
the regeneration of the promises. As changes in the inventory of the nodes are
not automatically taken into account by Rudder, this feature can be useful
after changes impacting the inventory information.
28_advanced_configuration_management/30_technique_creation.txt
=== Technique creation
Rudder provides a set of pre-defined Techniques that cover some basic
configuration and system administration needs. You can also create your own
Techniques, to implement new functionalities or configure new services. This
paragraph will walk you through this process.
There are two ways to create new Techniques: either with the web
Technique Editor in Rudder, or by coding them by hand.
The use of the Technique Editor (code name: http://www.ncf.io/pages/ncf-builder.html[ncf-builder])
is the easiest way to create new Techniques and is fully integrated with Rudder. On the other hand,
it does not allow the same level of complexity and expressiveness as coding a Technique by hand.
Of course, coding new Techniques by hand is a more involved process, which requires learning how the
Technique description language and Technique reporting work.
We advise always starting with the Technique Editor, and switching to
hand-coding only if you discover specific needs not addressed that way.
==== Recommended solution: Technique Editor
The easiest way to create your own Techniques is to use the Technique editor,
a web interface to create and manage Techniques based on the ncf framework.
Creating a technique in the Technique Editor will generate a Technique for Rudder automatically.
You can then use that Technique to create a Directive that will be applied on your Nodes thanks
to a Rule.
... This diff was truncated because it exceeds the maximum size that can be displayed.
