Wednesday, June 5, 2013

OpenShift on AWS EC2, Part 5 - Preparing Configuration Management (with Puppet)

I'm 5 posts into this and still haven't gotten to any OpenShift yet, except for doling out the instances and defining the ports and securitygroups for network communication.  I did say "from the ground up" though, so if you've been here from the beginning, you knew what you were getting into.

In this post I'm going to build and run the tasks needed to turn an EC2 base instance with almost nothing installed into a Puppet master or a Puppet client.  There are a number of little details that need managing to get Puppet communicating and to make updates as easy as possible to manage.

First a short recap for people just joining and so I can get my bearings.

Previously, our heroes...


In the first post I introduced a set of tools I'd worked up for myself to help me understand and then automate the interactions with AWS.

In the second one I registered a DNS domain and delegated it to the AWS Route53 DNS service.

In the third I figured out what hosts (or classes of hosts) I'd need to run for an OpenShift service.  Then I defined a set of network filter rules (using the AWS EC2 securitygroup feature) to make sure that my hosts and my customers could interact.

Finally, in the previous post I selected an AMI to use as the base for my hosts, allocated a static IP address, added a DNS A record, and started an instance for each of the puppet master and broker hosts.  The remaining three (data1, message1, and node1) were left as an exercise for the reader.

So now I have five AWS EC2 instances running.  I can reach them via SSH. The default account ec2-user has sudo ALL permissions. The instances are completely unconfigured.

The next few sections are a bunch of exposition and theory.  They explain some of what I'm doing and why, but don't contain a lot of doing.  If you get bored, scan ahead to the real stuff closer to the bottom.

The End of EC2


With the completion of the 4th post, we're done with EC2.  All of the interactions from here on occur over SSH.  The only remaining interactions with Amazon will be with Route53.  The broker will be configured to update the app.example.org zone when applications are added or removed.

You could reach this point with any other host provisioning platform: AWS CloudFormation, libvirt, VirtualBox, Hyper-V, VMware, or bare metal; it doesn't matter.  Each of those has its own provisioning details, but if you can get to networked hosts with stable public domain names you can pick up here and go on, ignoring everything but the first post.

The first post is still needed for the process I'm defining because the origin-setup tools written with Thor aren't just used for EC2 manipulation.  If that's all they were for I would have used one of the existing EC2 CLI packages.

Configuration Management: An Operations Religion


I mean this with every coloring and shade of meaning it can have, complete with schisms and dogma and redemption and truth.

Some small shop system administrators think that configuration management isn't for them, it isn't needed.  I differ with that opinion.  Configuration management systems have two complementary goals.  Only one of them is managing large numbers of systems.  The important goal is managing even one repeatably.  This is the Primary Dogma of System Administration.  If you can't do it 1000 times, you can't do it at all.

The service I'm outlining only requires four hosts (the puppet master makes five). I could do it on one; that's how most demos until now have done it. I could describe how to manually install and tweak each of the components in an OpenShift system, but it's very unlikely that anyone would ever be able to reproduce what I described exactly. (I speak from direct experience here; following that kind of description in natural language is hard, and writing one is harder.)  Using a CMS makes it possible to expose what needs to be configured specially and what can be defaulted, and allows (if it's done well) for flexibility and customization.

The religion comes in when you try to decide which one.

I'm going to go with sheep and expedience and choose Puppet.  Other than that I'm not going to explain why.

Brief Principles of Puppet


Puppet is one of the currently popular configuration management systems.  It is widely available and has a large knowledgeable user base. (that's why).

The Master/Agent deployment


The standard installation of puppet contains a puppet master and one or more puppet clients running the puppet agent service. The configuration information is stored on the puppet master host.  The agent processes periodically poll the master for updates to their configuration. When an agent detects a change in the configuration spec the change is applied to the host.

The puppet master scheme has some known scaling issues, but for this scenario it will suit just fine. If the OpenShift service grows beyond what the master/agent model can handle, then there are other ways of managing and distributing the configuration, but they are beyond the scope of this demonstration.

The Site Model Paradigm


That's the only time you'll see me use that word. I promise.

The puppet configuration is really a description of every component, facet and variable you care about in the configuration of your hosts.  It is a model in the sense that it represents the components and their relationships. The model can be compared to reality to find differences. Procedures can be defined to resolve the differences and bring the model and reality into agreement.

There are some things to be aware of. The model is, at any moment, static. It represents the current ideal configuration. The agents are responsible for polling for changes to the model and for generating the comparisons as well as applying any changes to the host. It is certain that when a change is made to the model, there will be a window of time when the site does not match. Usually it doesn't matter, but sometimes changes have to be coordinated. Later I may add MCollective to the configuration to address this. MCollective is Puppet's messaging/remote procedure call service and it allows for more timing control than the standard Puppet agent pull model.

Also, the model is only aware of what you tell it to be aware of.  Anything that you don't specify is.... undetermined. Now specifying everything will bury you and your servers under the weight of just trying to stay in sync. It's important to determine what you really care about and what you don't. It's also important to look carefully at what you're leaving out to be sure that it's safe.

Preparing the Puppet Master and Clients


As usual, there's something you have to do before you can do the thing you really want to do. While Puppet can manage pretty much anything about a system after it is set up, it can't set itself up from nothing.

  • The puppet master must have a well known public hostname (DNS). Check.
  • Each participating client must have a well known public hostname (DNS). Check.
  • The master and each client must know their own hostname (the clients use it to identify themselves to the master). Err.
  • The master and clients must have time sync. Ummm
  • The master and clients must have the puppet (master/client) software installed. Not Check.
  • The master must have any additional required modules installed.
  • The master must have a private certificate authority (CA) so that it can sign client credentials. Not yet
  • The clients must generate and submit a client certificate for the master to sign. Nope.
  • The master must have a copy of the site configuration files to generate the configuration model. No.

The first four points are generic host setup, and the first two are complete. Installing the puppet software should be simple, but I may need to check and/or tweak the package repositories to get the version I want. The last four are pure puppet configuration and the last one is the goal line.

Hostname


Puppet uses the hostname value set on each host to identify the host. Each host should have its hostname set to the FQDN of the IP address on which it expects incoming connections.
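On a Fedora 18 instance the hostname task boils down to something like the following minimal sketch (the actual Thor implementation may differ; use the classic hostname file on RHEL6):

sudo hostnamectl set-hostname broker.infra.example.org
hostname --fqdn    # verify the value the puppet agent will report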

Time Sync on Virtual Machines


Time sync needs a little space here.  On an ordinary bare-metal host I'd say "install an ntpd on every host".  NTP daemons are lightweight and more reliable and stable than something like a cron job to re-sync.  Virtual machines are special though.

On a properly configured virtual machine, the system time comes from the VM host. As the guest, you must assume that the host is doing the right thing. The guest VM has a simulated real-time clock (RTC) which is a pass-through either of the host clock or the underlying hardware RTC. In either case, the guest is not allowed to adjust the underlying clock.

Typically a service like ntpd gets time information from outside and not only slews the system (OS) clock but also compares it to the RTC and tries to compensate for drift between the RTC and the "real" time. In the default case it will even adjust the RTC to keep it in line with the system clock and "real" time.

As a guest, it's impolite to go around adjusting your host's clocks.

So a virtual machine system like an IaaS is one of the few places I'd advise against installing a time server.  If your VMs aren't in sync, call your provider and ask them why their hardware clocks are off. If they can't give you a good answer, find a new IaaS provider.

Time Zones and The Cloud


I'm going to throw one more timey-wimey thing in here. I set the system timezone on every server host to UTC.  If I ever have to compare logs on servers from different regions of the world (this is the cloud, remember?) I don't have to convert time zones.  User accounts can always set their timezone to localtime using the TZ environment variable. The tasks offer an option so that you can override the timezone.
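A sketch of what the timezone task amounts to (the symlink approach works on both Fedora 18 and RHEL6; the local zone below is just an illustration):

sudo ln -sf /usr/share/zoneinfo/UTC /etc/localtime    # system-wide default for services and logs
TZ=America/New_York date                              # any user can still view times in a local zone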

Host Preparation vs. Software Configuration


It would be fairly easy to write a single task that completes all of the bullet points listed above, but something bothers me about that idea. The first four are generic host tasks. The last four are distinctly puppet configuration related. Installing the software packages sits on the edge of both. The system tasks are required on every host. Only the puppet master will get the puppet master service software and configuration. The puppet clients will get different software and a different configuration process.

I'm going to take advantage of the design of Thor to create three separate tasks to accomplish the job:

  • origin:prepare - do the common hosty tasks
  • origin:puppetmaster - prepare and then install and configure a master
  • origin:puppetclient - prepare, and then install and register a client

So the origin:prepare task needs to set the hostname on the box to match the FQDN. I prefer also to enable the local firewall service and open a port for SSH to minimize the risk of unexpected exposure. This is also where I'd put a task to add a software repository for the puppet packages if needed.
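If the distribution's own Puppet packages were too old, the repository step would be a simple drop-in file. This is purely illustrative; the baseurl below is a placeholder, not a real repository:

sudo tee /etc/yum.repos.d/puppet.repo <<'EOF'
[puppet]
name=Puppet packages (placeholder)
baseurl=https://yum.example.com/puppet/fedora/18/x86_64/
enabled=1
gpgcheck=0
EOF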

Each of the origin:puppetmaster and origin:puppetclient tasks will invoke the origin:prepare task first.

File Revision Control


Since Configuration Management is all about control and repeatability it also makes sense to place the configuration files themselves under revision control. For this example I'm going to place the site configuration in a GitHub repository. Changes can be made in a remote workspace and pushed to the repository. Then they can be pulled down to the puppet master and the service notified to re-read the configurations. They can also be reverted as needed.

When the Puppet site configuration is created on the puppet master, it will be cloned from the git repo on github.

Initialize the Puppet Master

The puppet master process runs as a service on the puppet server. It listens for polling queries from puppet agents on remote machines. The puppet master service must read the site configurations to build the models that will define each host. The puppet service runs as a non-root user and group, each named "puppet". The default location for puppet configuration files is in /etc/puppet.  This area is only writable by the root user. Other service files reside in /var/lib/puppet. This area is writable by the puppet user and group. Further, SELinux limits access by the puppet user to files outside these spaces.

On RHEL6, the EC2 login user is still root. The user and group settings aren't really needed there, but they are still consistent.

The way I choose to manage this is:
  1. Add the ec2-user to the puppet group
  2. Place the site configuration in /var/lib/puppet/site
  3. Update the puppet configuration file (/etc/puppet/puppet.conf) to reflect the change
  4. Clone the configuration repo into the local configuration directory
  5. Symlink the configuration repo root into the ec2-user home directory.
This way the ec2-user has permission and access to update the site configuration.

Puppet uses x509 server and client certificates. The puppet master needs a server certificate and needs to self-sign it before it can sign client certificates or accept connections from clients.

Once the server certificate is generated and signed, I also need to enable and start the puppet master service.  Finally, I need to add a firewall rule allowing inbound connections on the puppet master port, 8140/TCP.

So the process of initializing the puppet master is this:

  • install the puppet master software
  • modify the puppet config file to reflect the new site configuration file location
  • install additional puppet modules
  • generate server certificate and sign it
  • add ec2-user to puppet group (or root user on RHEL6)
  • create site configuration directory and set owner, group, permissions
  • clone the git repository into the configuration directory
  • start and enable the puppet master service

Installing Packages


Since I'm using Thor, the package installation process is a Thor task.  Each sub-task will only run once within the invocation of its parent. The origin:puppetmaster task calls the origin:prepare task and provides a set of packages needed for a puppet master in addition to any installed as part of the standard preparation (firewall management and augeas). For the puppet master, these additional packages are puppet-server and git. Dependencies are resolved by YUM.
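Run by hand on the master, that installation step is just this (matching the package list in the task output further down):

sudo yum install -y puppet-server git system-config-firewall-base augeas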

Adding user to Puppet group


The puppet service is controlled by the root user, but runs as a role user and group both called puppet. I would like the login user to be able to manage the puppet site configuration files, but not to log in either as the root or puppet user. I'll add the ec2-user user to the puppet group, and set the group write permissions so that this user can manage the site configuration.
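The join_group step is presumably no more than the following (a sketch of the assumed implementation):

sudo usermod -a -G puppet ec2-user
id ec2-user    # confirm the puppet group appears (takes effect on next login)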

Creating the Site Configuration Space


As noted above, the ec2-user account will be used to manage the puppet site configuration files. The files must be writable by the ec2-user (through the puppet group) but they must also be readable by the puppet user and service. In addition, since these are service configurations rather than (local) host configuration files, I'd prefer that they not reside in /etc.

SELinux policy restricts the location of files which the puppet service processes can read. One of those locations is in /var/lib/puppet. Rather than update the policy, it seems easier to place the site configuration data within /var/lib/puppet.

I create a new directory /var/lib/puppet/site and set the owner, group and permissions so that the puppet user and group can read and write the files. I also set the permissions so that new files will inherit the group and group permissions. This way the ec2-user will have the needed access, and SELinux will not prevent the puppet master service from reading the files. In a later step I'll use git to clone the site configuration files into place.
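A sketch of those steps, assuming the setgid bit is what carries the group onto new files (the Thor task may do this through different calls):

sudo mkdir -p /var/lib/puppet/site
sudo chown puppet:puppet /var/lib/puppet/site
sudo chmod 2775 /var/lib/puppet/site          # group-writable, setgid so new files inherit the group
sudo restorecon -R /var/lib/puppet/site       # keep the SELinux labels consistent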

Install Service Configuration File (setting variables)


Moving the location of the site configuration files from the default (/etc/puppet/manifests) and adding a location for user defined modules requires updating the default configuration file. Currently I make three alterations to the default file:


  • set the puppet master hostname as needed
  • set the location of the site configuration (manifests)
  • add a location to the modulepath
I use a template file, push a copy to the master and use sed to replace the values before copying the updated file into place.
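A sketch of that template-and-sed step (the template filename is made up for the example, and the exact puppet.conf keys should be checked against your Puppet version):

sed -e 's|^ *server *=.*|server = puppet.infra.example.org|' \
    -e 's|^ *manifest *=.*|manifest = /var/lib/puppet/site/manifests/site.pp|' \
    -e 's|^ *modulepath *=.*|modulepath = /var/lib/puppet/site/modules:/etc/puppet/modules|' \
    puppet.conf.template > puppet.conf
sudo cp puppet.conf /etc/puppet/puppet.conf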

Installing Standard Modules


Puppet provides a set of standard modules for managing common aspects of clients.  These are installed from the PuppetLabs module site with the puppet module install command, before the master process is started.
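The one module used here (see the task output below) installs with a single command:

sudo puppet module install puppetlabs-ntp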

Unpacking Site Configuration (From git)


I already have a task for cloning a git repository on a remote host.  It unpacks the site configuration into the directory prepared previously.  The git repo must have two directories at the top: manifests and modules. These will contain the site configuration and any custom modules needed for OpenShift. These locations were configured into the puppet master configuration above.
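By hand, and assuming the directory prepared above is still empty, the clone step looks roughly like this (repo URL taken from the origin:puppetmaster invocation below):

cd /var/lib/puppet/site
git clone https://github.com/markllama/origin-puppet .
ls manifests modules    # both top-level directories must exist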


Adding Firewall Rules


The puppet master service listens on port 8140/TCP. I need to add an allow rule so that inbound connections to the puppet master will succeed.

Just to be safe I also add an explicit rule to allow SSH (22/TCP) before restarting the firewall service.

These match the securitygroup rule definitions from the third post. Some people would question the need for running a host-based firewall when EC2 already provides network filtering; I would refer anyone who asks to read up on Defense in Depth.
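A sketch of those two rules using lokkit, which comes with the system-config-firewall-base package installed during preparation (verify the flags on your release):

sudo lokkit --enabled --service=ssh --port=8140:tcp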

Filtering the Puppet logs into a separate file


It is much easier to observe the operation of the service if the logs are in a separate file. I add an entry to the /etc/rsyslog.d/ directory and restart the rsyslog daemon to place puppet master logs in /var/log/puppet-master.log.
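A sketch of that drop-in; the "puppet-master" program name is an assumption about what the daemon logs under, so adjust it to match your logs:

sudo tee /etc/rsyslog.d/puppet-master.conf <<'EOF'
:programname, isequal, "puppet-master"    /var/log/puppet-master.log
EOF
sudo systemctl restart rsyslog.service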


Enabling and Starting the Puppet Master Service


Finally, when all of the puppet master host customization is complete, I can enable and start the puppet master service.
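On Fedora 18 that is just the two systemd calls shown in the task output below:

sudo systemctl enable puppetmaster.service
sudo systemctl start puppetmaster.service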

What all that looks like


That's a long list, and I created a whole set of Thor tasks to manage the steps.  Then I created an uber-task to execute it all.  It starts with the result of origin:baseinstance (run with the default and puppetmaster securitygroups). It results in a running puppet master waiting for clients to connect.

thor origin:puppetmaster puppet.infra.example.org --siterepo https://github.com/markllama/origin-puppet
origin:puppetmaster puppet.infra.example.org
task: remote:available puppet.infra.example.org
task: origin:prepare puppet.infra.example.org
task: remote:distribution puppet.infra.example.org
fedora 18
task: remote:arch puppet.infra.example.org
x86_64
task: remote:timezone puppet.infra.example.org UTC
task: remote:hostname puppet.infra.example.org
task: remote:yum:install puppet.infra.example.org puppet-server git system-config-firewall-base augeas
task: puppet:master:join_group puppet.infra.example.org
task: remote:git:clone puppet.infra.example.org https://github.com/markllama/origin-puppet
task: puppet:master:configure puppet.infra.example.org
task: puppet:master:enable_logging puppet.infra.example.org
task: puppet:module:install puppet.infra.example.org puppetlabs-ntp
task: remote:firewall:stop puppet.infra.example.org
task: remote:firewall:service puppet.infra.example.org ssh
task: remote:firewall:port puppet.infra.example.org 8140
task: remote:firewall:start puppet.infra.example.org
task: remote:service:start puppet.infra.example.org puppetmaster
task: remote:service:enable puppet.infra.example.org puppetmaster

You can check that the puppet master has created and signed its own CA certificate by listing the puppet certificates like this:

thor puppet:cert:list puppet.infra.example.org --all
task puppet:cert:list puppet.infra.example.org
+ puppet.infra.example.org BD:27:A5:3B:AE:F5:1D:05:7E:8F:E7:E9:CA:BA:32:4B

This indicates that there is now a single certificate associated with the puppet master.  This certificate will be used to sign the client certificates as they are submitted.

Initializing a Puppet Client


The first part of creating a puppet client host is the same as for the master (almost).  It involves installing some basic puppet packages (puppet, facter, augeas), setting the hostname and time zone and the rest of the hosty stuff.  Then we get to the puppet client registration.

The puppet agent runs on the controlled client hosts. It polls the puppet master periodically checking for updates to the configuration model for the host.

When the puppet agent starts the first time it generates an x509 client certificate and sends a signing request to the puppet master.

When the puppet master receives an unsigned certificate from an agent for the first time it places it in a list of certificates waiting to be signed. The user can then sign and accept each new client certificate and the initial identification process is complete. From then on the puppet agent polls using its client certificate for identification and the signature provides authentication.
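Underneath the Thor tasks, the exchange looks roughly like this (a sketch using the stock puppet agent and puppet cert subcommands):

# on the client: the first run generates a key and submits the signing request
sudo puppet agent --test --server puppet.infra.example.org
# on the master: list the pending request, then sign it
sudo puppet cert list
sudo puppet cert sign broker.infra.example.org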

The process then for installing and initializing the puppet client is this:

On the client:
  • install the puppet agent package
  • configure the puppet master hostname into the configuration file
  • enable the puppet agent service
  • start the puppet agent service
Then on the puppet master:
  • wait for the client unsigned certificate to arrive
  • sign the new client certificate
This is what it looks like for the broker host:


thor origin:puppetclient broker.infra.example.org puppet.infra.example.org
origin:puppetclient broker.infra.example.org, puppet.infra.example.org
task: remote:available broker.infra.example.org
task: origin:prepare broker.infra.example.org
task: remote:distribution broker.infra.example.org
fedora 18
task: remote:arch broker.infra.example.org
x86_64
task: remote:timezone broker.infra.example.org UTC
task: remote:hostname broker.infra.example.org
task: remote:yum:install broker.infra.example.org puppet facter system-config-firewall-base augeas
task: puppet:agent:set_server broker.infra.example.org puppet.infra.example.org
task: puppet:agent:enable_logging broker.infra.example.org
task: remote:service:enable broker.infra.example.org puppet
task: remote:service:start broker.infra.example.org puppet
task: puppet:cert:sign puppet.infra.example.org broker.infra.example.org

At this point the client can request its own configuration model and the master will confirm the identity of the client and return the requested information.

thor puppet:cert:list puppet.infra.example.org --all
task puppet:cert:list puppet.infra.example.org
+ broker.infra.example.org 09:97:22:B9:A9:16:AE:B1:32:93:EC:3A:6D:7A:CF:67
+ puppet.infra.example.org 70:B8:E0:C0:F8:5B:48:67:4E:92:91:D2:0D:E4:2B:F4

Repeat the origin:puppetclient step for the data1, message1 and node1 instances you created last time. You did create them, right?  Check the certs as each one registers.

The next step is to actually build a model for the client to request by creating a site manifest and a set of node descriptions.

That means: we finally get to do some OpenShift.

4 comments:

  1. Are you continuing the series? It's just getting interesting.

  2. I mean to, but I got stuck. I'm learning puppet and some of it's not as easy as I thought (to do well). Working on getting a good set of puppet configs (which happen to be different from some others that have been presented, because I want to keep the back-end service *installation* separate from the *configuration*).

  3. Ok. I'm trying to continue on my own, but the going is rough.
    Is there a document that describes how to use Route 53 from Origin? I'm sort of stuck there.

    Regards, Frank

      1) You need to have a domain which you own: see any domain registrar.
      2) You need to create a matching domain in AWS:
      http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/R53Example.html

      You'll end up with your AWS credentials and a zone ID

      3) you need to install the rubygem-openshift-origin-dns-route53 on your broker host

      4) On your new broker host, copy /etc/openshift/plugins.d/openshift-origin-dns-route53.conf.example to /etc/openshift/plugins.d/openshift-origin-dns-route53.conf

      5) edit the new .conf file and set the AWS_ACCESS_KEY_ID, AWS_SECRET_KEY and AWS_HOSTED_ZONE_ID

      Also make sure to set the cloud_domain in the /etc/openshift/broker.conf and node.conf

      - Mark


