Friday, December 13, 2013

OpenShift Service Development: Building a Build Box

I found this week that I needed a build box so that I could repeatedly run the dev/build/install/test cycle. I've messed around with one on and off since I started working on OpenShift, but I looked back and realized that I've never written a procedure for creating the build box. So here it is.

The build process takes source code from a git repository and transforms it into packages. Finally it places the packages into an install repository so that they will be available to the target hosts via yum. The yum repository is published by a small web server. It doesn't need to be fancy as it's just flat files.

The instructions here are for Fedora 18 or 19. There are some special considerations for RHEL6 or CentOS6. These are detailed in a section at the bottom of this post. There are notes inline for when the process is different for RHEL.

There are also some considerations for creating a build box in AWS EC2 (or any managed hosting service).

Install the build/publish software


On a minimal install of Fedora, install the base packages needed for the build service: git to retrieve the source code, tito to build the RPMs, and thttpd to serve the YUM repository to the install targets.

sudo yum install git tito thttpd firewalld

(On RHEL6, enable EPEL repository and skip firewalld)

Create the YUM repository root directory


Next, create a location for the YUM repository. Place it where thttpd will find it and make it writable by the build user (assumed to be the current user).


sudo mkdir /var/www/thttpd/tito
sudo chown $(id --name --user):$(id --name --group) /var/www/thttpd/tito

Enable Web Services


I have to publish the packages to the install hosts once they're built. I need a web server and on Fedora, I need the firewall daemon running and configured to allow HTTP communications.

sudo systemctl enable thttpd
sudo systemctl start thttpd

sudo systemctl enable firewalld
sudo systemctl start firewalld

# Open the port for now
sudo firewall-cmd --zone=public --add-service=http

# Make the change persistent across reboots
sudo firewall-cmd --zone=public --add-service=http --permanent

Configure Tito Output Location


Tito places the build results and RPMs in /tmp/tito by default. I can set the target location using the titorc file.


echo "RPMBUILD_BASEDIR=/var/www/thttpd/tito/" > $HOME/.titorc

Retrieve the Source Code Repository


Now that the publication and build services are prepared, it's time to actually get the software source code.

git clone https://github.com/openshift/origin-server.git

If you are doing development, substitute your own fork and branch. If you are doing your editing on the build box (not really recommended, but a slight time saver) you can also clone over SSH (git@github.com:...) and add your GitHub SSH key so that you can both pull and push changes.
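
For example, a clone over SSH from a fork might look like this (the fork and branch names below are hypothetical):

# hypothetical fork and branch names
git clone git@github.com:yourname/origin-server.git
cd origin-server
git checkout my-work-branch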

Install Package Build Requirements


Before you can build packages, you must also install any build requirements for the packages. The Bourne shell snippet below will walk the source code tree, find each package root, and install all of the build requirements it finds using yum-builddep.

The loop keys off the presence of a .spec file in the root of a package tree. It's critical as a package developer to list all build requirements in the .spec file.


for SPECPATH in $(find origin-server -name \*.spec)
do
    PKGDIR=$(dirname $SPECPATH )
    SPECFILE=$(basename $SPECPATH)
    (cd $PKGDIR ; sudo yum-builddep -y $SPECFILE )
done

Build All Packages

Now that all the build requirements are installed, it's time to build the software.

The Bourne shell snippet below will walk the entire source tree, locate the root of each package tree, and run tito to build the package. If you are building test packages, uncomment the TEST assignment line.

# TEST=--test
# SCL=--scl=ruby193 # for RHEL6
for SPECPATH in $(find origin-server -name \*.spec)
do
  PKGDIR=$(dirname $SPECPATH )
  SPECFILE=$(basename $SPECPATH)
  (cd $PKGDIR ; tito build --rpm $TEST $SCL)
done
createrepo /var/www/thttpd/tito

This snippet walks the source tree and runs tito to build each package.  The last line rebuilds the YUM repository metadata from the packages present.

You can build a single package by moving to the root of the package tree and running tito manually. You'll also have to re-run createrepo each time you update a package. If you've rebuilt a package but yum claims there's no update available, check that you've re-run createrepo.
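
For example, rebuilding just one cartridge might look like this (the package directory below is illustrative; use whichever package root you're actually working in):

cd origin-server/cartridges/openshift-origin-cartridge-python   # illustrative path
tito build --rpm --test
createrepo /var/www/thttpd/tito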

Which reminds me: if you're using yum for frequent updates (more than once a day), you'll also have to clear the yum metadata cache on the client machine so that it sees the updated packages.
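
On the install host that usually means expiring the cached repository metadata, for example:

sudo yum clean expire-cache
# or, more aggressively:
sudo yum clean metadata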


Building Test Packages


Tito does not build from the working directory; it builds from committed content. That is, it ignores files in the workspace which have not been committed. It also requires at least one initial tito tag to operate.

Tito builds test packages by creating a temporary commit and tag. This allows it to create a package with a unique name for each test build. Each time you make a change and rebuild, a serial number is auto-incremented so that yum will see the new package as an 'update' and accept it in preference to any currently installed version.

Configuring a yum repo on the install host


You can supersede the stock Fedora or RHEL OpenShift package repositories by placing a new repo file in /etc/yum.repos.d

/etc/yum.repos.d/openshift_buildtest.repo
[openshift_buildtest]
name=OpenShift Build/Test repository
baseurl=http://build.example.com/tito/
enabled=1
gpgcheck=0
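
A quick sanity check on the install host is to expire the cache and list what the new repository offers (a hedged example using the repo id from the file above):

sudo yum clean expire-cache
sudo yum --disablerepo='*' --enablerepo=openshift_buildtest list available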

Considerations for RHEL6



There are two significant differences between Fedora and RHEL6 when creating a build box.

Firewall and Services on RHEL6


On RHEL6 systemd and firewalld are not available. Use iptables and lokkit instead of firewalld and firewall-cmd to open the TCP port for HTTP. Use service and chkconfig instead of systemctl to control services.


# Open Firewall for HTTP
sudo lokkit --service=http

# start and enable thttpd
sudo chkconfig thttpd on
sudo service thttpd start

RHEL6, OpenShift, Ruby/Rails versions and Software Collections


RHEL6 is ... special. OpenShift is written in Ruby 1.9.3 and Rails 3. These didn't exist or weren't stable when RHEL6 was created. Different Ruby versions don't play nicely on a single system (there have been at least three attempts I can find to get Ruby 1.8 and 1.9 to co-exist the way Python 2 and 3 do. All have thrown up their hands in frustration). Given that, heroic measures were required to get them to run on RHEL6. Those heroic measures are called Software Collections, better known as "SCL".

What SCL does is provide a means to repackage software and run it in a special environment that isolates it from the rest of the system. The SCL team has re-packaged over 500 packages to run in the ruby193 environment on RHEL6. These are all needed to run OpenShift on RHEL6. They're also needed to build OpenShift for RHEL6.

Fortunately, the SCL and OpenShift teams have kindly provided a YUM repository for them. When you add the OpenShift dependencies repository to your YUM repo configurations all of the build dependencies will resolve. They've also added a switch to tito so that it will run your builds inside the SCL environment.
/etc/yum.repos.d/openshift-dependencies.repo
[openshift-dependencies]
name=OpenShift Dependencies
baseurl=http://mirror.openshift.com/pub/openshift-origin/nightly/rhel-6/dependencies/$basearch/
enabled=1
gpgcheck=0

There are other things in those dependencies repositories. There are a large number of update packages which OpenShift needs but which have not yet appeared upstream. On Fedora you'll still need the dependencies YUM repository to create the runtime hosts, but you don't need it to build the OpenShift packages.

Considerations for EC2 hosting


If you're placing your build box in AWS EC2 there are a couple of additional things to consider:

  1. EC2 security group must allow HTTP (port 80/TCP)
    Your build instance must be created with a security group which allows port 80/TCP from your install hosts.
  2. Internal Hostname
    EC2 hosts have an internal and an external hostname. Both names are dynamic (unless you assign an Elastic IP). If your install hosts are also on EC2 you can use the internal hostname and IP address in the security group rules.
  3. External Hostname
    If your install boxes are not hosted in EC2 then you must allow all hosts on port 80/TCP and note the EC2 public hostname so that the install hosts can reach the build host web server.

OpenShift Source Code Repositories


  • origin-server -
    http://github.com/openshift/origin-server
  • rhc -
    http://github.com/openshift/rhc
  • origin-dependencies (SRPM repository)
    http://mirror.openshift.com/pub/openshift-origin/nightly/fedora-latest/dependencies/SRPMS/

OpenShift Dependencies RPM Repositories


  • Fedora -
    http://mirror.openshift.com/pub/openshift-origin/nightly/fedora-latest/dependencies/$basearch
  • RHEL6 -
    http://mirror.openshift.com/pub/openshift-origin/nightly/rhel-6/dependencies/$basearch

References

  • git - http://git-scm.com/
  • github - https://github.com
  • thttpd - http://www.acme.com/software/thttpd/
  • firewalld - https://fedoraproject.org/wiki/FirewallD
  • tito - http://linux.die.net/man/8/tito
  • Software Collections (SCL) - https://fedorahosted.org/SoftwareCollections/

Friday, October 11, 2013

Diversion: Kerberos (FreeIPA) in AWS EC2

One of the things many people are asking for in OpenShift is alternate ways of authenticating SSH and git interactions with the application gears. Since I'm doing my development work in EC2, I thought that was surely the right place to try it out. Well, as usual, it didn't work out quite as simply as I'd planned.

This post isn't about OpenShift directly.  It addresses what I found when I tried to implement FreeIPA in EC2 so that I could develop code to allow Kerberos authentication in OpenShift.

Kerberos in way too few words


Kerberos is an authentication protocol and service defined originally at MIT as part of Project Athena (along with things like the X Window System and Zephyr, a predecessor to modern IM services). It is meant to provide authentication on unencrypted and even untrusted networks. Perfect, right? Well, Kerberos has some quirks.

First, different people can run their own Kerberos services. To avoid conflicts, each service is given an identifier string known as a realm. By convention the realm string is the same as the enterprise DNS domain name. That is, if a company has DNS domain example.com then the Kerberos realm would be EXAMPLE.COM. Unlike DNS domain names, Kerberos realms are case sensitive.

Each participating host must be registered with the Kerberos server, and each user must be added to the user list on the server as well. Hosts and users (and any other manageable entity) are identified with a principal. This is basically a name which is unique for each resource, err user, umm host.... thing. The important thing is that the host is identified by a string which is derived from its hostname.

Now wars have been fought over whether a hostname should be the Fully Qualified Domain Name (FQDN) or just the host portion. For Kerberos there is only one answer: FQDN.

The host principal for a given host is composed of the hostname, and the realm. When a client tries to log in, it needs to know the correct principal to request from the server. This is why the FQDN must be the hostname. When the user attempts to log in he must provide both his own principal and the host principal for the destination. The only way to know the destination's host principal is if it is related to the hostname as viewed from the client host.
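
Concretely, for the example.com / EXAMPLE.COM pairing above (alice and node1 are illustrative names):

a user principal:  alice@EXAMPLE.COM
a host principal:  host/node1.example.com@EXAMPLE.COM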

This is where life gets interesting in AWS EC2.

You see, AWS uses RFC 1918 and something like Network Address Translation to create a private network for the virtual machines which make up the EC2 service. AWS also uses an internal DNS service to identify each virtual machine. This means that from the view of a host inside the private network, the destination host has a different IP address and a different hostname than when viewed from outside the private network. The upshot is that, to use Kerberos with EC2 I need some way to make sure that the user can determine a valid host principal to request regardless of where the user is located.

A Word about IPA and AWS


IPA (and FreeIPA) is not a single service.  It's a collection of services configured so that they work in concert to provide secure user and host access over untrusted networks.  Kerberos is only one of the services, though it is probably the core one.  LDAP, NTP and DNS are all support services which make the operation of Kerberos work.  IPA wraps these services in such a way so that mere mortals don't necessarily need to know how the bindings work merely to get the service running.

In this post I'm dealing almost entirely with the Kerberos service within IPA and I'll refer to that component by name.  Where I mention FreeIPA it will be in reference to the specific tools that FreeIPA provides to set up and manage the conglomerate service.

AWS (Amazon Web Services) is also a suite of services.  The core of that is the EC2 virtual host service. Again, all of the AWS services generally work together, but I'm only dealing with EC2 instances in this post so I'll refer to EC2 specifically unless I'm referring to the full suite.

UPDATE: 2013-11-07 - AWS TOS do not permit open DNS recursion.

One other thing to be aware of when running IPA in AWS.  Amazon terms of service do not allow users to create open recursive DNS services within AWS on the grounds that they can be abused.

When setting up your AWS security policies and the named service on your IPA hosts, be sure to disable recursion and/or limit access to appropriate IP ranges for your DNS clients or you'll get a polite nastygram from Amazon.
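
As a hedged example, the relevant fragment of the named configuration on the IPA host might look like this (the address ranges are illustrative; use whatever ranges your DNS clients actually occupy):

// fragment for the options { } block of named.conf
allow-recursion { 127.0.0.1; 10.0.0.0/8; };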

Kerberos, Linux and SSH


I want to use Kerberos with SSH so that I can avoid using SSH authorized_keys when pushing git updates to my applications on OpenShift (mostly ignoring EC2 for now). To do that I need several things set up:

  • A Kerberos (FreeIPA) server - IPA installed, configured
  • A set of users configured into the FreeIPA service
  • A target host (OpenShift node)

For SSH the most important things are that the Kerberos and LDAP configurations are set up properly. This includes configuring sssd and the /etc/nsswitch.conf settings. Luckily the FreeIPA ipa-client-install script (with the right inputs) will do all of that for me. I think there are ways to get it to tell me precisely what changes it's making, but I haven't learned how yet. I do know that I can find the results in /var/log/ipaclient-install.log.

The other thing I need to do is to make sure that the SSH client and server will both at least try to use GSSAPI for authentication. On the server this means making sure that GSSAPIAuthentication is enabled in the sshd configuration.
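
On the server side that's a one-line check (GSSAPIAuthentication is a standard OpenSSH option):

# /etc/ssh/sshd_config
GSSAPIAuthentication yes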

On the client side, I may need to specify that I want to use the gssapi-with-mic authentication method. I may also need to specify the host principal to use to access the destination (as distinct from the hostname from the client's vantage point). More on these later.

EC2 , cloud-init and resisting dynamic naming


The network interface numbering and naming in EC2 are dynamic by design, both on the internal and external interfaces. EC2 does offer "elastic IP" which is really "static IP" for an instance and since I own a DNS zone I can assign a name to the address. Unfortunately this only offers control of the external IP address assigned to an instance. I have to find ways to manage the internal naming myself.



When a host registers with a Kerberos service it generally uses its own hostname as the identifier for the host principal. If this is the same as the DNS name associated with one or more of its IP addresses, that is just by convention. That is, Kerberos doesn't maintain the mapping. So if a host changes its hostname but no changes are made to the Kerberos database, the host can no longer identify itself by its principal. Also, if the name by which it is known from the outside changes (because the IP address and/or DNS name changed) then clients will no longer know what principal to use to request an access ticket.

There are two factors here: Making sure the host knows its own name, and making sure that users coming from remote hosts can determine the (a?) valid principal (based on the hostname) to request a ticket for.

Maintaining Host Identity


For Kerberos, the hostname is the anchor for a host principal. If the hostname changes on a registered host, it will no longer be able to properly communicate with the Kerberos server and clients. Luckily, the Fedora and RHEL images in EC2 use cloud-init to initialize potentially dynamic information on startup.

Cloud-init is software which, when installed on a host, can take input from the cloud environment and customize the host to integrate it into the environment. It can do things like... oh, say, set the IP address of network interfaces and hostnames, install SSH host keys, set device mount points, and the like. It will also allow me to tell it not to update the hostname on each reboot.

The main configuration for cloud-init is /etc/cloud/cloud.cfg. I just need to add a line containing 'preserve_hostname: 1' and set the hostname I want in /etc/hostname. From then on, restarts or reboots will keep the hostname I set. Given that value I have my anchor for registering the host with the Kerberos server and maintaining the host/principal mapping.
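
A minimal sketch of those two edits, assuming host1.example.com is the name I want to keep:

# keep the hostname across reboots
sudo sh -c 'echo "preserve_hostname: 1" >> /etc/cloud/cloud.cfg'
# set the name I actually want
sudo sh -c 'echo host1.example.com > /etc/hostname'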

The host now always knows its own name: part one solved.

The view from Inside/Outside


You do learn something every day. In talking with some of the FreeIPA developer folks I learned something I hadn't known about how the Kerberos protocol works. Here's the important bit.

When a client wants to gain access to some resource, it sends a message to the Kerberos server saying "I am this principal and I want access to that one over there, OK?" The Kerberos server sends back a signed/encrypted ticket with both names (principals) wrapped inside it. The client then sends the ticket in an authentication request to the destination host, which verifies "yep, that's me, and I can see that that's you; let me check: are you allowed?" and if the answer is "yes" the client request is granted.

What this means is that the client must know the name (principal) of the destination resource before attempting to connect to the resource. It must know a name that both the kerberos server and the resource host itself will recognize. When everyone uses DNS FQDNs to identify hosts and they have the same view of DNS, this works nicely. Accessing private network resources from a public network creates some issues.

Most tools, SSH included, assume that they can compose a host principal from the hostname given by the user. So if a client was using realm EXAMPLE.COM and tried to reach a remote host with FQDN 'destination.example.com' the principal would be host/destination.example.com@EXAMPLE.COM. But since the EC2 hosts have (not one but two) random hostnames assigned when they boot, it's impossible to know from the hostname alone what the principal of the destination is.

If I happen to know the mapping (i.e., which principal is associated with the destination host) then SSH allows me to specify it with -oGSSAPIServerIdentity=<principal> on the CLI or in a Host entry in my .ssh/config file. For example, to properly authenticate with the host registered as host1.example.com but currently reachable as random2.external, I could do this:

ssh -oPreferredAuthentications=gssapi-with-mic -oGSSAPIServerIdentity=host1.example.com random2.external

(this also assumes that my local hostname and remote one are the same and that I've got a ticket-granting-ticket for the EXAMPLE.COM realm using kinit.)

What this says is: log into a host whose name (from this view) is random2.external, and whose principal is host1.example.com. With that, the local client can send a query to the Kerberos server and get the right ticket back to hand to the destination host. It can say "yep, that's me, and yep, you're you, and yep, you're allowed."
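
The same thing can live in ~/.ssh/config so I don't have to remember the flags. This assumes a Fedora/RHEL OpenSSH build that accepts the GSSAPIServerIdentity option used on the command line above:

Host random2.external
    PreferredAuthentications gssapi-with-mic
    GSSAPIAuthentication yes
    GSSAPIServerIdentity host1.example.com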

The Many Faces of Kerberos


It's total coincidence that Cerberus is the three-headed dog that guards the landing in Hades on the river Styx and I'm going to add two "faces" to my Kerberos clients. Totally.

I think that in the discussion above I've been careful to make it clear that a Kerberos principal is an identifier. That is, it is a handle which is used to refer to an object in the Kerberos database which corresponds to an object in reality. I have nicknames. Hosts in Kerberos can have them too, and this is going to solve my identity problem with random dynamic names and IP addresses.

I've managed to give each host a fixed hostname, thanks to cloud-init. Once I know the dynamic names both public and internal I should be able to inform the Kerberos server of both of the aliases.

If this works, here's what will happen when I try to log in from a host either inside or outside the private network: my SSH client will form a principal from the (DNS) name I offer. My client will send that to the Kerberos server and request an access ticket to the remote host using the alias principal, and the Kerberos server will know which host that means. It will create an access ticket which will grant me access to the destination host, which will examine it and, finding everything in order, will allow my SSH connection.

It turns out that FreeIPA doesn't yet have a nice Web or CLI user interface to add principals to a registered host record, but the Kerberos database is stored in an LDAP server on the Kerberos master host. For now I (or a friend, actually) can craft an LDAP update which will add the principals I need to the host record. This is assumed to be run on the Kerberos (FreeIPA) master host:

kerberos# ldapmodify -h localhost -x -D "cn=Directory Manager" -W <<EOF
dn: fqdn=host1.example.com,cn=computers,cn=accounts,dc=example,dc=com
changetype: modify
add: krbprincipalname
krbprincipalname: host/random2.external@EXAMPLE.COM
krbprincipalname: host/random2.internal@EXAMPLE.COM
EOF

The invocation above will prompt for the Directory Manager password of the FreeIPA LDAP service. I'm sure there's a way to do it with Kerberos/GSSAPI, but I haven't got it yet.

What that change does is add two Kerberos principal names to the host entry for host1.example.com. The principal names match what an SSH client would construct using the DNS name (internal or external) to reach the target host. Now when the Kerberos server gets a ticket request from clients either inside or outside the private network, the principal in the ticket request will be associated with a known host.

The Devil's in the Dynamics


This is all fine so long as host1.example.com doesn't reboot. When it does, AWS will assign it a new internal and external IP address and new DNS names. It would be really nice if the host, when it boots could inform the Kerberos service what its new internal and external principal names are.

I don't currently know how to do this, but I suspect that I could add a module to cloud-init to do the job. The client is already configured to use the LDAP service on the Kerberos (FreeIPA) server. Once the server knows that all three principals refer to the same host life should be good.

Now to learn some cloud-init finagling and enough Kerberos so that I can have the host update itself on reboot.
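
A rough sketch of what such a boot-time hook might do, assuming the standard EC2 metadata endpoints and the LDAP layout from the ldapmodify example above (this is an untested illustration, not a finished cloud-init module; the IPA hostname is hypothetical):

#!/bin/sh
# Hypothetical boot-time hook: register this host's current EC2 names
# as Kerberos principal aliases on the IPA server.
REALM=EXAMPLE.COM
BASEDN="cn=computers,cn=accounts,dc=example,dc=com"
FQDN=$(cat /etc/hostname)
META=http://169.254.169.254/latest/meta-data
INTERNAL=$(curl -s $META/local-hostname)
EXTERNAL=$(curl -s $META/public-hostname)

# -W prompts for a password; a real hook would use a password file (-y)
# or GSSAPI instead of an interactive prompt.
ldapmodify -h ipa.example.com -x -D "cn=Directory Manager" -W <<EOF
dn: fqdn=$FQDN,$BASEDN
changetype: modify
add: krbprincipalname
krbprincipalname: host/$INTERNAL@$REALM
krbprincipalname: host/$EXTERNAL@$REALM
EOF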


What does this mean for OpenShift?


If you want to run an OpenShift service in AWS and you want to offer Kerberos authentication for SSH/git to the application gears, you'll have to do a little LDAP tweaking of the Kerberos principals associated with each host so that the Kerberos service will know which host you mean regardless of your view of the destination host.

The first round of Kerberos integration code is going into OpenShift Origin as I write this (the pull request is submitted and getting commentary).  By the next release it should be possible to manage developer access to gears with Kerberos and FreeIPA.  Additional use cases will be added over time.

Summary


  • Cloud services like AWS and corporate networks often rely on private network spaces and Network Address Translation to manage dynamic hosts.
  • Cloud Init usually updates the hostname on each boot but this can be suppressed.
  • For a client trying to reach a host for SSH this poses a problem because the view of the destination from the client differs based on where the client sits in relation to the network boundary.
  • Kerberos can assign multiple principals to a single host, which allows authentication to work.

References

  • FreeIPA - A component based single-sign-on service 
  • Kerberos - The authentication component of FreeIPA and MIT Project Athena
  • GSSAPI - A standardized generic authentication and access control protocol
  • Project Athena - 1980s MIT/DEC/IBM project to design network services and protocols
  • RFC 1918 - Private non-routable IP address space reservations
  • Network Address Translation - Private network boundary system
  • AWS Elastic IP - AWS static IP addresses for dynamic hosts
  • Cloud Init - A service for customizing host configuration on reboot

Friday, September 27, 2013

Broker-Node interaction and visibility - Debugging "missing" cartridges on a node.

In the previous post I set up the end-point messaging for OpenShift. (Broker -> Messaging -> Node). I showed a simple use of the MCollective mco command and where the MCollective log files are. The last step was to send an echo message to the OpenShift agent on an OpenShift node and get the response back.

Now I have my OpenShift broker and node set up (I think) but something's not right and I have to figure out what.

DISCLAIMER: this post isn't a "how to" it's a mostly-stream-of-consciousness log of my attempt to answer a question and understand what's going on underneath.  It's messy.  It may cast light on some of the moving parts.  It may also lead me to a confrontation with The Old Man From Scene 24 and we all know how that ends. You have been warned.

In the paragraphs below I include a number of CLI invocations and their responses.  I include a prompt at the beginning of each one to indicate where (on which host) the CLI command is running.

  • broker$ - the command is running on my OpenShift broker host
  • node$ - the command is running on my OpenShift node host
  • dev$ (also shown as client$) - the command is running on my laptop

I've also got a copy of the origin-server source code checked out from the repository on Github.

I've got my rhc client already configured for my test user (cleverly named 'testuser') and my broker (using the libra_server variable). See ~/.openshift/express.conf if needed.
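
For reference, a minimal express.conf looks something like this (illustrative values; libra_server is the only setting this post depends on):

# ~/.openshift/express.conf
libra_server=broker.example.com
default_rhlogin=testuser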

What's going on here?

I started trying to access the Broker with the rhc CLI command to create a user, register a namespace and then create an application. I'd like to create a python app and I've installed the openshift-origin-cartridge-python package to provide that app framework. But when I try to create my app I'm told that Python is not available:


client$ rhc create-app testapp1 python
Short Name Full name
========== =========

There are no cartridges that match 'python'.

So I figure I'll ask what cartridges ARE available:

client$ rhc cartridges

Note: Web cartridges can only be added to new applications.

Now, when I look for cartridge packages on the node I get a different answer:

node$ rpm -qa | grep cartridge
openshift-origin-cartridge-abstract-1.5.9-1.fc19.noarch
openshift-origin-cartridge-cron-1.15.2-1.git.0.aa68436.fc19.noarch
openshift-origin-cartridge-php-1.15.2-1.git.0.090a445.fc19.noarch
openshift-origin-cartridge-python-1.15.1-1.git.0.0eb3e95.fc19.noarch

Somehow, when the broker is asking the node to list its cartridges, the node isn't answering correctly. Why?

I'm going to see if I can observe the broker making the query to list the nodes and then see if I can determine where the node is (or isn't) getting its answer.

Refresher: MCollective RPC and OpenShift


MCollective is really an RPC (Remote Procedure Call) mechanism. It defines the interface for a set of functions to be called on the remote machine. The client submits a function call which is sent to the server. The server executes the function on behalf of the client and then returns the result.

The OpenShift client adds one more level of indirection and I want to get that out of the way. I can look at the logs on the broker and node to see what activity was caused when the rhc command issued the cartridge list query.

The broker writes its logs into several files in /var/log/openshift/broker. You can see the REST queries arrive and resolve in the Rails log file /var/log/openshift/broker/production.log.

broker$ sudo tail /var/log/openshift/broker/production.log
...
2013-09-26 17:54:06.445 [INFO ] Started GET "/broker/rest/api" for 127.0.0.1 at 2013-09-26 17:54:06 +0000 (pid:16730)
2013-09-26 17:54:06.447 [INFO ] Processing by ApiController#show as JSON (pid:16730)
2013-09-26 17:54:06.453 [INFO ] Completed 200 OK in 6ms (Views: 3.6ms) (pid:16730)
2013-09-26 17:54:06.469 [INFO ] Started GET "/broker/rest/api" for 127.0.0.1 at 2013-09-26 17:54:06 +0000 (pid:16730)
2013-09-26 17:54:06.470 [INFO ] Processing by ApiController#show as JSON (pid:16730)
2013-09-26 17:54:06.476 [INFO ] Completed 200 OK in 6ms (Views: 3.8ms) (pid:16730)
2013-09-26 17:54:06.504 [INFO ] Started GET "/broker/rest/cartridges" for 127.0.0.1 at 2013-09-26 17:54:06 +0000 (pid:16730)
2013-09-26 17:54:06.507 [INFO ] Processing by CartridgesController#index as JSON (pid:16730)
2013-09-26 17:54:06.509 [INFO ] Completed 200 OK in 1ms (Views: 0.4ms) (pid:16730)

From that I can see that my rhc calls are arriving and apparently the response is being returned OK.

The default settings for the MCollective client (on the OpenShift broker) don't go to a log file. I can check the OpenShift node though to see what's happened there and if it has received a query for the list of installed cartridges.

node$ sudo grep cartridge /var/log/mcollective.log | tail -3
I, [2013-09-26T17:10:23.696825 #9827]  INFO -- : openshift.rb:1217:in `cartridge_repository_action' action: cartridge_repository_action, agent=openshift, data={:action=>"list", :process_results=>true}
I, [2013-09-26T17:29:10.768487 #9827]  INFO -- : openshift.rb:1217:in `cartridge_repository_action' action: cartridge_repository_action, agent=openshift, data={:action=>"list", :process_results=>true}
I, [2013-09-26T17:29:24.957806 #9827]  INFO -- : openshift.rb:1217:in `cartridge_repository_action' action: cartridge_repository_action, agent=openshift, data={:action=>"list", :process_results=>true}

This too looks like the message has been received and processed properly and returned.

Hand Crafting An mco Message


Here's where that MCollective RPC interface definition comes in. I can look at that to see how to generate the cartridge list query using mco so that I can observe both ends and track down what's happening.

There are really two things to look for here:

  1. What message is sent (and how do I duplicate it)?
  2. What action does the agent take when it receives the message?

For part one, MCollective defines the RPC interfaces in a file with a .ddl extension.  Looking for one of those in the origin-server GitHub repository finds me this: origin-server/msg-common/agent/openshift.ddl 

Of particular interest are lines 390-397. These define the cartridge_repository action and the set of operations it can perform: install, list, erase


Taking that, I can craft an mco rpc message to duplicate what the broker is doing when it queries the nodes:

mco rpc openshift cartridge_repository action=list
Discovering hosts using the mc method for 2 second(s) .... 1

 * [ ==========================================================> ] 1 / 1


ec2-54-211-74-85.compute-1.amazonaws.com 
   output:


Finished processing 1 / 1 hosts in 32.15 ms

Yep, it still says "none".  When I go back and look at the logs, it shows the same query I was looking at, so I think I got that right.

But What Does It DO?


Now that I can send the query message, I need to find out what happens on the other end to generate the response.  My search begins in the node messaging plugin for MCollective, particularly in the agent module code (in plugins/msg-node/mcollective/src/openshift.rb).  This defines a function cartridge_repository_action which... doesn't actually do the work, but points me to the next piece of code which actually does implement the function.

It appears that the OpenShift node implements a class ::OpenShift::Runtime::CartridgeRepository which is a factory for an object that actually produces the answer. A quick look in the source repository shows me the file that defines the CartridgeRepository class.

dev$ cd ~/origin-server
dev$ find . -name \*.rb | xargs grep 'class CartridgeRepository' 
./node/test/functional/cartridge_repository_func_test.rb:  class CartridgeRepositoryFunctionalTest < NodeTestCase
./node/test/functional/cartridge_repository_web_func_test.rb:class CartridgeRepositoryWebFunctionalTest < OpenShift::NodeTestCase
./node/test/unit/cartridge_repository_test.rb:class CartridgeRepositoryTest < OpenShift::NodeTestCase
./node/lib/openshift-origin-node/model/cartridge_repository.rb:    class CartridgeRepository


So, on the node, when a query is received for the list of cartridges that is present, the MCollective agent for OpenShift creates one of the CartridgeRepository objects and then asks it for the list.

A quick look at the cartridge_repository.rb file on Github is enlightening. First, the file has 60 lines of excellent commentary before the code starts. Line 86 indicates that the CartridgeRepository object will look for cartridges in /var/lib/openshift/.cartridge_repository (while noting that this location should be configurable in the /etc/openshift/node.conf someday). And lines 170-189 define the install method which seems to populate the cartridge_repository from some directory which is provided as an argument.

But when does CartridgeRepository.install get invoked? Well, since CartridgeRepository is a factory and a Singleton (which provides the instance() method for initialization) I can look for where it's instantiated:

dev$ find . -type f | xargs grep -l OpenShift::Runtime::CartridgeRepository.instance | grep -v /test/
./plugins/msg-node/mcollective/src/openshift.rb
./node-util/bin/oo-admin-cartridge
./node/lib/openshift-origin-node/model/upgrade.rb

Note that I remove all of the files in the test directories using grep -v /test/. What remains are the working files which actually instantiate a CartridgeRepository object.  If I also check for a call to the install() method, the list is reduced to one file:

find . -type f | xargs grep  OpenShift::Runtime::CartridgeRepository.instance | grep -v /test/  | grep install
./plugins/msg-node/mcollective/src/openshift.rb:              ::OpenShift::Runtime::CartridgeRepository.instance.install(path)

So, it looks like the node messaging module is what populates the OpenShift cartridge repository.  When I looked earlier though, it didn't seem to have done that.  Messaging is running and I've installed cartridge RPMs and I can successfully query for (what turns out to be) an empty database of cartridge information.

Finally! When I look at plugins/msg-node/mcollective/src/openshift.rb lines 26-45 I find what I'm looking for. CartridgeRepository.install is called when the MCollective openshift agent is loaded. That is: when the MCollective service starts.

It turns out that I'd started and began testing the MCollective service before installing any of the OpenShift cartridge packages. Restarting MCollective populates the .cartridge_repository directory and now my mco rpc queries indicate the cartridges I've installed.

Verifying the Change

So, I think, based on the code I've found, that when I restart the mcollective daemon on my OpenShift node, it will look in /usr/libexec/openshift/cartridges and it will use the contents to populate /var/lib/openshift/.cartridge_repository (not sure why that's hidden, but..).
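
The restart itself is just the usual service command on the Fedora node:

node$ sudo systemctl restart mcollective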

node$ ls /var/lib/openshift/.cartridge_repository
redhat-cron  redhat-php  redhat-python

DING!

Now when I query with mco, I should see those.  And I do:

broker$ mco rpc openshift cartridge_repository action=list
Discovering hosts using the mc method for 2 second(s) .... 1

 * [ ==========================================================> ] 1 / 1


ec2-54-211-74-85.compute-1.amazonaws.com 
   output: (redhat, php, 5.5, 0.0.5)
           (redhat, python, 2.7, 0.0.5)
           (redhat, python, 3.3, 0.0.5)
           (redhat, cron, 1.4, 0.0.6)



Finished processing 1 / 1 hosts in 29.41 ms

I suspect that the OpenShift broker also caches these values, so I might have to restart the openshift-broker service on the broker host as well.  Then I can use rhc in my development environment to see what cartridges I can use to create an application.
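
If so, that restart would look something like this (assuming the broker's service unit is named openshift-broker, as the Origin broker package names it):

broker$ sudo systemctl restart openshift-broker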

dev$ rhc cartridges
php-5.5    PHP 5.5    web
python-2.7 Python 2.7 web
python-3.3 Python 3.3 web
cron-1.4   Cron 1.4   addon

Note: Web cartridges can only be added to new applications.

And when I try to create a new application:

$ rhc app-create testapp1 python-2.7

Application Options
-------------------
  Namespace:  testns1
  Cartridges: python-2.7
  Gear Size:  default
  Scaling:    no

Creating application 'testapp1' ... done


Waiting for your DNS name to be available ...


Well, that's better than before!

What I learned:

  • rhc is configured using ~/.openshift/express.conf
  • look at the logs
    • OpenShift broker: /var/log/openshift/broker/production.log
    • mcollective server: /var/log/mcollective.log
  • the mcollective client mco can be used to simulate broker activity
    • mco plugin doc - list all plugins available
    • mco plugin doc openshift - list OpenShift RPC actions and parameters
    • mco rpc openshift cartridge_repository action=list
      query all nodes for their cartridge repository contents
  • source code is useful
    • OpenShift source repository: https://github.com/openshift/origin-server
    • judicious use of find and grep can narrow problem searches
  • cartridge RPMs are installed in /usr/libexec/openshift/cartridges
  • cartridges are "installed" in /var/lib/openshift/.cartridge_repository
  • adding cartridges to a node requires a restart for the mcollective service

Monday, September 23, 2013

OpenShift Support Services: Messaging Part 2 (MCollective)

About a year ago I did a series of posts on verifying the plugin operations for OpenShift Origin support services. I showed how to check the datastore (mongodb) and DNS updates and how to set up an ActiveMQ message broker, but when I got to actually sending and receiving messages I got stuck.

The Datastore and DNS services use a single point-to-point connection between the broker and the update server. The messaging services use an intermediate message broker (ActiveMQ, not to be confused with the OpenShift broker). This means that I need to configure and check not just one connection, but three:

  • Mcollective client to (message) broker (on OpenShift broker)
  • Mcollective server to (message) broker (on OpenShift node)
  • End to End

I'm using the ActiveMQ message broker to carry MCollective RPC messages. The message broker is interchangeable. MCollective can be carried over any one of several messaging protocols. I'm using the Stomp protocol for now, though MCollective is deprecating Stomp in favor of a native ActiveMQ (AMQP?) messaging protocol.

OpenShift Messaging Components


In a previous post I set up an ActiveMQ message broker to be used for communication between the OpenShift broker and nodes. In this one I'm going to connect the OpenShift components to the messaging service, verify both connections and then verify that I can send messages end-to-end.

Hold on for the ride, it's a long one (even for me)

Mea Culpa: I'm referring to what MCollective does as "messaging" but that's not strictly true. ActiveMQ, RabbitMQ, QPID are message broker services. MCollective uses those, but actually, MCollective is an RPC (Remote Procedure Call) system. Proper messaging is capable of much more than MCollective requires, but to avoid a lot of verbal knitting I'm being lazy and calling MCollective "messaging".

The Plan

Since this is a longer process than any of my previous posts, I'm going to give a little road-map up front so you know you're not getting lost on the way. Here are the landmarks between here and a working OpenShift messaging system:
  1. Ingredients: Gather configuration information for messaging setup.
  2. MCollective Client -
    Establish communications between the MCollective client and the ActiveMQ server
    (OpenShift broker host to message broker host)
  3. MCollective Server -
    Establish communications between the MCollective server and the ActiveMQ server
    (OpenShift node host to message broker host)
  4. MCollective End-To-End -
    Verify MCollective communication from client to server
  5. OpenShift Messaging and Agent -
    Install OpenShift messaging interface definition and agent packages on both OpenShift broker and node

Ingredients

Variable            Value
ActiveMQ Server     msg1.infra.example.com
Message Bus:
  topic username    mcollective
  topic password    marionette
  admin password    msgadminsecret
Message End-point:
  password          mcsecret

  • A running ActiveMQ service
  • A host to be the MCollective client (and after that an OpenShift broker)
  • A host to run the MCollective service (and after that an OpenShift node)

On the MCollective client host, install these RPMs:
  • mcollective-client
  • rubygem-openshift-origin-msg-broker-mcollective

On the MCollective server (OpenShift node) host, install these RPMs:
  • mcollective
  • openshift-origin-msg-node-mcollective
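
The corresponding yum invocations for the two host lists above would be something like:

# On the MCollective client (OpenShift broker) host:
sudo yum install mcollective-client rubygem-openshift-origin-msg-broker-mcollective

# On the MCollective server (OpenShift node) host:
sudo yum install mcollective openshift-origin-msg-node-mcollective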

Secrets and more Secrets


As with all secure network services, messaging requires authentication. Messaging has a twist, though: you need two sets of authentication information because, underneath, you're actually using two services. When you send a message to an end-point, the end-point has to be assured that you are someone who is allowed to send messages. It's like putting a secret code or signature on a letter so the recipient can be sure the letter isn't forged.

Now imagine a special private mail system. Before the mail carrier will accept a letter, you have to give them the secret handshake so that they know you're allowed to send letters. On the delivery end, the mail carrier requires not just a signature but a password before handing over the letter.

That's how authentication works for messaging systems.

When I set up the ActiveMQ service I didn't create a separate user for writing to the queue (sending a letter) and for reading (receiving), but I probably should have. As it is, getting a message from the OpenShift broker to an OpenShift node through MCollective and ActiveMQ requires two passwords and one username:

  • mcollective endpoint secret
  • ActiveMQ username
  • ActiveMQ password

The ActiveMQ values will have to match those I set on the ActiveMQ message broker in the previous post. The MCollective end point secret is only placed in the MCollective configuration files. You'll see those soon.

MCollective Client (OpenShift Broker)


The OpenShift broker service sends messages to the OpenShift nodes. All of the messages (currently) originate at the broker. This means that the nodes need to have a process running which connects to the message broker and registers to receive MCollective messages.

Client configuration: client.cfg


The MCollective client is (predictably) configured using the /etc/mcollective/client.cfg file. For the purpose of connecting to the message broker, only the connector plugin values are interesting, and for end-to-end communications I need the securityprovider plugin as well. The values related to logging are useful for debugging too.


# Basic stuff
topicprefix     = /topic/
main_collective = mcollective
collectives     = mcollective
libdir          = /usr/libexec/mcollective
loglevel        = log   # just for testing, normally 'info'

# Plugins
securityprovider = psk
plugin.psk       = mcsecret

# Middleware
connector         = stomp
plugin.stomp.host = msg1.infra.example.com
plugin.stomp.port = 61613
plugin.stomp.user = mcollective
plugin.stomp.password = marionette

NOTE: if you're running on RHEL6 or CentOS 6 instead of Fedora, you're going to be using the SCL version of Ruby and hence of MCollective. The file is then at the SCL location:

/opt/rh/ruby193/root/etc/mcollective/client.cfg

Now I can test connections to the ActiveMQ message broker, though without any servers connected, it won't be very exciting (I hope).

Testing client connections


MCollective provides a command line tool for sending messages: mco. mco is capable of several other 'meta' operations as well. The one I'm interested in first is 'mco ping'. With mco ping I can verify the connection to the ActiveMQ service (via the Stomp protocol).

The default configuration file is owned by root and is not readable by ordinary users. This is because it contains plain-text passwords (There are ways to avoid this, but that's for another time). This means I have to either run mco commands as root, or create a config file that is readable. I'm going to use sudo to run my commands as root.

The mco ping command connects to the messaging service and asks all available MCollective servers to respond. Since I haven't connected any yet, I won't get any answers, but I can at least see that I'm able to connect to the message broker and send queries. If all goes well I should get a nice message saying "no one answered".


sudo mco ping


---- ping statistics ----
No responses received

If that's what you got, feel free to skip down to the MCollective Server section.

Debugging client-side configuration errors


There are a couple of obvious possible errors:
  1. Incorrect broker host
  2. broker service not answering
  3. Incorrect messaging username/password
The first two will appear the same to the MCollective client. Check the simple stuff first. If I'm sure that the host is correct then I'll have to diagnose the problem on the other end (and write another blog post). Here's how that looks:

sudo mco ping
connect to localhost failed: Connection refused - connect(2) will retry(#0) in 5
connect to localhost failed: Connection refused - connect(2) will retry(#1) in 5
connect to localhost failed: Connection refused - connect(2) will retry(#2) in 5
^C
The ping application failed to run, use -v for full error details: Could not connect to Stomp Server: 

Note the message Could not connect to the Stomp Server.

If you get this message, check these on the OpenShift broker host:

  1. The plugin.stomp.host value is correct
  2. The plugin.stomp.port value is correct
  3. The host value resolves to an IP address in DNS
  4. The ActiveMQ host can be reached from the OpenShift Broker host (by ping or SSH)
  5. You can connect to the Stomp port on the ActiveMQ broker host
    telnet msg1.infra.example.com 61613 (yes, telnet is a useful tool) 

If all of these are correct, then look on the ActiveMQ message broker host:

  1. The ActiveMQ service is running
  2. The Stomp transport TCP ports match the plugin.stomp.port value
  3. The host firewall is allowing inbound connections on the Stomp port
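
A quick way to spot-check the first two items on that list from the ActiveMQ host itself (a hedged example; adjust the port if you changed the Stomp transport):

sudo ss -tlnp | grep 61613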

The third possibility indicates an information or configuration mismatch between the MCollective client configuration and the ActiveMQ server.  That will look like this:

sudo mco ping
transmit to msg1.infra.example.com failed: Broken pipe
connection.receive returning EOF as nil - resetting connection.
connect to localhost failed: Broken pipe will retry(#0) in 5

The ping application failed to run, use -v for full error details: Stomp::Error::NoCurrentConnection

You can get even more gory details by changing the client.cfg to set the log level to debug and send the log output to the console:

...
loglevel = debug # instead of 'log' or 'info'
logger_type = console # instead of 'file', or 'syslog' or unset (no logging)
...

I'll spare you what that looks like here.

MCollective Server (OpenShift Node)


The mcollective server is a process that connects to a message broker, subscribes to (registers to receive messages from) one or more topics and then listens for incoming messages. When it accepts a message, the mcollective server passes it to a plugin module for execution and then returns any response.  All OpenShift node hosts run an MCollective server which connects to one or more of the ActiveMQ message brokers.

Configure the MCollective service daemon: server.cfg 


I bet you have already guessed that the MCollective server configuration file is /etc/mcollective/server.cfg

# Basic stuff
topicprefix     = /topic/
main_collective = mcollective
collectives     = mcollective
libdir          = /usr/libexec/mcollective
logfile         = /var/log/mcollective.log
loglevel        = debug # just for setup, normally 'info'
daemonize       = 1
classesfile     = /var/lib/puppet/state/classes.txt

# Plugins
securityprovider = psk
plugin.psk       = mcsecret

# Registration
registerinterval = 300
registration     = Meta

# Middleware
connector         = stomp
plugin.stomp.host = msg1.infra.example.com
plugin.stomp.port = 61613
plugin.stomp.user = mcollective
plugin.stomp.password = marionette


# NRPE
plugin.nrpe.conf_dir  = /etc/nrpe.d

# Facts
factsource = yaml
plugin.yaml = /etc/mcollective/facts.yaml

NOTE: again the mcollective config files will be in /opt/rh/ruby193/root/etc/mcollective/ if you are running on RHEL or CentOS.

The server configuration looks pretty similar to the client.cfg. The securityprovider plugin must have the same values, because that's how the server knows that it can accept a message from the clients. The plugin.stomp.* values are the same as well, allowing the MCollective server to connect to the ActiveMQ service on the message broker host. It's really a good idea for the logfile value to be set so that you can observe the incoming messages and their responses. The loglevel is set to debug to start so that I can see all the details of the connection process. Finally the daemonize value is set to 1 so that the mcollectived will run as a service.

The mcollectived will complain if the YAML file does not exist or if the Meta registration plugin is not installed and selected. Comment those out for now. They're out of scope for this post.
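
If you do hit those complaints, commenting the relevant lines out of server.cfg looks like this:

# Registration
#registerinterval = 300
#registration     = Meta

# Facts
#factsource = yaml
#plugin.yaml = /etc/mcollective/facts.yaml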

Running the MCollective service


When you're satisfied with the configuration, start the mcollective service and verify that it is running:


sudo service mcollective start
Redirecting to /bin/systemctl start  mcollective.service
ps -ef | grep mcollective
root     13897     1  5 19:37 ?        00:00:00 /usr/bin/ruby-mri /usr/sbin/mcollectived --config=/etc/mcollective/server.cfg --pidfile=/var/run/mcollective.pid

You should be able to confirm the connection to the ActiveMQ server in the log.

sudo tail /var/log/mcollective.log 
I, [2013-09-19T19:53:21.317197 #16544]  INFO -- : mcollectived:31:in `' The Marionette Collective 2.2.3 started logging at info level
I, [2013-09-19T19:53:21.349798 #16551]  INFO -- : stomp.rb:124:in `initialize' MCollective 2.2.x will be the last to fully support the 'stomp' connector, please migrate to the 'activemq' or 'rabbitmq' connector
I, [2013-09-19T19:53:21.357215 #16551]  INFO -- : stomp.rb:82:in `on_connecting' Connection attempt 0 to stomp://mcollective@msg1.infra.example.com:61613
I, [2013-09-19T19:53:21.418225 #16551]  INFO -- : stomp.rb:87:in `on_connected' Conncted to stomp://mcollective@msg1.infra.example.com:61613
...

If you see that, you can skip down again to the next section, MCollective End-to-End

Debugging MCollective Server Connection Errors


Again the two most likely problems are that the host or the stomp plugin are mis-configured.


sudo tail /var/log/mcollective.log
I, [2013-09-19T20:05:50.943144 #18600]  INFO -- : stomp.rb:82:in `on_connecting' Connection attempt 1 to stomp://mcollective@msg1.infra.example.com:61613
I, [2013-09-19T20:05:50.944172 #18600]  INFO -- : stomp.rb:97:in `on_connectfail' Connection to stomp://mcollective@msg1.infra.example.com:61613 failed on attempt 1
I, [2013-09-19T20:05:51.264456 #18600]  INFO -- : stomp.rb:82:in `on_connecting' Connection attempt 2 to stomp://mcollective@msg1.infra.example.com:61613
...

If I see this, I need to check the same things I would have for the client connection. On the MCollective server host:

  • plugin.stomp.host is correct
  • plugin.stomp.port matches Stomp transport TCP port on the ActiveMQ service
  • Hostname resolves to an IP address
  • ActiveMQ host can be reached from the MCollective client host (ping or SSH)

On the ActiveMQ message broker:

  • ActiveMQ service is running
  • Any firewall rules allow inbound connections to the Stomp TCP port

The other likely error is a username/password mismatch. If you see this in your mcollective logs, check the ActiveMQ user configuration and compare it to your mcollective server plugin.stomp.user and plugin.stomp.password values.

...
I, [2013-09-19T20:15:13.655366 #20240]  INFO -- : stomp.rb:82:in `on_connecting' Connection attempt 0 to stomp://mcollective@msg1.infra.example.com:61613
I, [2013-09-19T20:15:13.700844 #20240]  INFO -- : stomp.rb:87:in `on_connected' Conncted to stomp://mcollective@msg1.infra.example.com:61613
E, [2013-09-19T20:15:13.729497 #20240] ERROR -- : stomp.rb:102:in `on_miscerr' Unexpected error on connection stomp://mcollective@msg1.infra.example.com:61613: es_trans: transmit to msg1.infra.example.com failed: Broken pipe
...

MCollective End-to-End

Now that I have both the MCollective client and server configured to connect to the ActiveMQ message broker I can confirm the connection end to end. Remember that 'mco ping' command I used earlier? When there are connected servers, they should answer the ping request.

 sudo mco ping
node1.infra.example.com time=138.60 ms


---- ping statistics ----
1 replies max: 138.60 min: 138.60 avg: 138.60 

OpenShift Node 'plugin' agent

Now I'm sure that both MCollective and ActiveMQ are working end-to-end between the OpenShift broker and node. But there's no "OpenShift" in there yet.  I'm going to add that now.

There are three packages that specifically deal with MCollective and interaction with OpenShift:

  • openshift-origin-msg-common.noarch (misnamed, specifically mcollective)
  • rubygem-openshift-origin-msg-broker-mcollective
  • openshift-origin-msg-node-mcollective.noarch

The first package defines the messaging protocol for OpenShift.  It includes interface specifications for all of the messages, their arguments and expected outputs.  This is used on both the MCollective client and server side to produce and validate the OpenShift messages. The broker package defines the interface that the OpenShift broker (a Rails application) uses to generate messages to the nodes and process the returns. The node package defines how the node will respond when it receives each message.

The OpenShift node also requires several plugins that, while not required for messaging per se, will cause the OpenShift agent to fail if they are not present:
  • rubygem-openshift-origin-frontend-nodejs-websocket
  • rubygem-openshift-origin-frontend-apache-mod-rewrite
  • rubygem-openshift-origin-container-selinux

When these packages are installed on the OpenShift broker and node, mco will have a new set of messages available. MCollective calls added sets of messages... (OVERLOAD!) 'plugins'.  So, to see the available message plugins, use mco plugin doc.  To see the messages in the openshift plugin, use mco plugin doc openshift.

Mcollective client: mco


I've used mco previously just to send a ping message from a client to the servers.  This just collects a list of the MCollective servers listening. The mco command can also send complete messages to remote agents.  Now I need to learn how to determine what agents and messages are available and how to send them a message.  Specifically, the OpenShift agent has an echo message which simply returns a string which was sent in the message.  Now that all of the required OpenShift messaging components are installed, I should be able to tickle the OpenShift agent on the node from the broker.  This is what it looks like when it works properly:

sudo mco rpc openshift echo msg=foo
Discovering hosts using the mc method for 2 second(s) .... 1

 * [ ========================================================> ] 1 / 1


node1.infra.example.com 
   Message: foo
      Time: nil



Finished processing 1 / 1 hosts in 25.49 ms

As you might expect, this has more than its fair share of interesting failure modes.  The most likely thing you'll see from the mco command is this:

sudo mco rpc openshift echo msg=foo
Discovering hosts using the mc method for 2 second(s) .... 0

No request sent, we did not discover any nodes.


This isn't very informative, but it does at least indicate that the message was sent and nothing answered. Now I have to look at the MCollective server logs to see what happened. After setting the loglevel to 'debug' in /etc/mcollective/server.cfg, restarting the mcollective service and re-trying the mco rpc command, I can find this in the log file:


sudo grep openshift /var/log/mcollective.log 
D, [2013-09-20T14:18:05.864489 #31618] DEBUG -- : agents.rb:104:in `block in findagentfile' Found openshift at /usr/libexec/mcollective/mcollective/agent/openshift.rb
D, [2013-09-20T14:18:05.864637 #31618] DEBUG -- : pluginmanager.rb:167:in `loadclass' Loading MCollective::Agent::Openshift from mcollective/agent/openshift.rb
E, [2013-09-20T14:18:06.360415 #31618] ERROR -- : pluginmanager.rb:171:in `rescue in loadclass' Failed to load MCollective::Agent::Openshift: error loading openshift-origin-container-selinux: cannot load such file -- openshift-origin-container-selinux
E, [2013-09-20T14:18:06.360633 #31618] ERROR -- : agents.rb:71:in `rescue in loadagent' Loading agent openshift failed: error loading openshift-origin-container-selinux: cannot load such file -- openshift-origin-container-selinux
D, [2013-09-20T14:18:13.741055 #31618] DEBUG -- : base.rb:120:in `block (2 levels) in validate_filter?' Failing based on agent openshift
D, [2013-09-20T14:18:13.741175 #31618] DEBUG -- : base.rb:120:in `block (2 levels) in validate_filter?' Failing based on agent openshift

It turns out that the reason those three additional packages are required is that they provide facts to MCollective. Facter is a tool which gathers a raft of information about a system and makes it quickly available to MCollective. The rubygem-openshift-origin-node package adds some Facter code, but those facts will fail to load if the additional packages aren't present. If you do the "install everything" approach these resolve automatically, but if you install and test things piecemeal as I am, they show up as missing requirements.

After I add those packages I can send an echo message and get a successful reply.  If you can discover the MCollective servers from the client with mco ping, but can't get a response to an mco rpc openshift echo message, then the most likely problem is that the OpenShift node packages are missing or misconfigured. Check the logs and address what you find.
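
In practice that amounts to something like this on the node (a sketch; the package names are the three listed above, and on RHEL6 you'd use service instead of systemctl):

# Install the plugins the OpenShift agent needs, then restart MCollective
sudo yum install -y rubygem-openshift-origin-frontend-nodejs-websocket \
    rubygem-openshift-origin-frontend-apache-mod-rewrite \
    rubygem-openshift-origin-container-selinux
sudo systemctl restart mcollective

# Back on the broker, the echo should now get an answer
sudo mco rpc openshift echo msg=foo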

Finally! (sort of)

At this point, I'm confident that the Stomp and MCollective services are working and that the OpenShift agent is installed on the node and will at least respond to the echo message.  I was going to also include testing through the Rails console, but this has gone on long enough.  That's next.

Thursday, July 25, 2013

Installing OpenShift using Puppet, Part 1: Divide and Conquer

It's been quite a while since I posted last.  I got stuck on three things:

  1. I didn't (don't?) know Puppet
  2. The layers of service and configuration were (are?) muddy.
  3. There are several significant, competing installation use cases to consider.
It would be very Agile to just leap in and start coding things until I got a set of boxes that worked.  But it would also likely lead to something that was difficult to adapt to new uses because it didn't respect the working boundaries between the different layers and compartments which make up the OpenShift service.

So I learned Puppet, and started coding some top-down samples and some bottom-up samples, while at the same time writing philosophical tracts trying to justify the direction(s) I was going.

I'm not nearly done (having thrown out several attempts and restarted each time) but I think I've reached a point where I can express clearly *how* I want to go about developing a CMS reference implementation for OpenShift installation and configuration.

OK, you're not going to get away without some philosophy.  Rather a lot actually this time.

Where do Configuration Management Services (CMS) fit...

Up until now I've concentrated on reaching a point where I can start installing OpenShift.  And I'm finally there.  No.  Wait.  I'm at the point where I can start installing the parts that make up OpenShift.  After that I have to configure each of the parts to run in their own way and then I have to configure the settings that OpenShift cares about.

See what happened there? It's layers.

Host and Service Configuration Management Layers


See where the CMS fits in? Between the running OS and all those configured hosts/services.  That's where I am now.

Look at the top layer.  Those vertical slices are individual hosts or services that have to be created. Only the ones in the middle are OpenShift. The others are operations support (for a running service) or development and testing stuff which isn't really OpenShift but is needed to create OpenShift.

... and what do they need to do?


I need to show you another complicated looking picture:

Draft OpenShift CMS Module Layout

As you can see, I need to learn Inkscape more, because Dia graphics just don't look as cool.

I'm a fan of big complicated looking graphics to help describe big complicated concepts. This is a very rough incomplete draft of a module breakdown for installing OpenShift using a CM system (Puppet, by name, though this should be applicable to any modular CM system). The three columns in the diagram represent different class uses.

The first column contains classes that are just used to hold information that will be used to instantiate other classes on the target hosts.  None of these classes will be instantiated directly on any host.  The second column shows an OpenShift Broker and an OpenShift Node.  Each includes a class which describes the function of that host within the OpenShift service.  Each also includes any support services which run on the same host.  The third column contains the definitions of the hosts which run support services.  They include a module for the support service itself, and then one which applies the OpenShift customizations to the service.

OpenShift uses plugin modules for several support services.  In the diagram, the plugins for each support service are grouped together. Only one would be instantiated for a given OpenShift installation.  Which one is used is selected by a parameter of the master configuration class ::openshift.

There is one lonely class at the bottom of the middle column:  ::openshift::host.  This is currently a catch-all class which provides a single point of control for configuring common host settings such as SSH firewall rules, the presence (or absence) of special YUM repositories and the like.  It will be instantiated on every host which participates in the OpenShift service (for now) but can be customized using class parameters. This class could be broken up or other features added depending on how (in)coherent it becomes.

I showed you that diagram to show you this one.


Now if you look back to the top diagram, in the top row there are a bunch of vertical items that are peers of a sort.  Each blob represents a component service of OpenShift or a supporting service or task.  In a fully distributed service configuration each one would represent an individual host.

Keep that in mind as you look at the middle and right side of the second diagram.  Those (UML/Puppet) nodes there map to the blobs at the top of the first diagram.  They show the internal structure of those blobs when installing OpenShift and support components.  Each one contains at least one module which installs a support service or component and which doesn't have the word openshift in it.  Each one also contains (at least) one OpenShift customization class.  This latter uses the information classes from the first column to customize the software on the node and integrate it with the OpenShift service.

This is the key point:

There are layers here too.

The configuration management tools should be designed so that you can plug them together in a way that gets you the service you want to have, building up from the base to the completed service.  But: you should also be able to understand how the service is put together by looking at the configuration files.

By creating each (Puppet) node from the (Puppet) parts that define what a host does, you can see what the host does by looking at the Puppet node definition.  Knowledge is maintained both ways.
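
As a purely illustrative sketch (the only class names taken from the layout above are ::openshift and ::openshift::host; ::openshift::broker, ::mongodb and ::openshift::datastore are placeholders I made up, as are the hostnames), a couple of node definitions in the site manifest might read like this:

# Hypothetical fragment of manifests/site.pp, written from the shell for illustration
cat >> manifests/site.pp <<'EOF'
node 'broker.infra.example.org' {
  include ::openshift::host      # common host settings (firewall, repos, ...)
  include ::openshift::broker    # the broker role (placeholder name)
}

node 'data1.infra.example.org' {
  include ::openshift::host
  include ::mongodb              # the support service itself (placeholder)
  include ::openshift::datastore # OpenShift customization of it (placeholder)
}
EOF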

Outside-In Development


Since I'm still learning specific CMS implementations (Puppet now, and Ansible soon) and trying to understand how best to express a configuration for OpenShift using them, I'm working from the top a lot. At the same time, I'm trying to actually implement (or steal implementations of) modules to do things like set up the YUM repositories and install the packages.  I like this kind of Outside-In development model because (if I'm careful not to thrash too much) it helps me keep both perspectives in mind and hopefully meet in the middle.

In the next installment I'll try putting some meat on the bones of this skeleton: Actually creating the empty class definitions in their hierarchical structure and then creating a set of node definitions that import and use the classes to at least pretend to install an OpenShift service.  Hopefully it won't take me another couple of months.


Wednesday, June 5, 2013

OpenShift on AWS EC2, Part 5 - Preparing Configuration Management (with Puppet)

I'm 5 posts into this and still haven't gotten to any OpenShift yet, except for doling out the instances and defining the ports and securitygroups for network communication.  I did say "from the ground up" though, so if you've been here from the beginning, you knew what you were getting into.

In this post I'm going to build and run the tasks needed to turn an EC2 base instance with almost nothing installed into a Puppet master, or a Puppet client.  There are a number of little details that need managing to get puppet to communicate and to make it as easy as possible to manage updates.

First a short recap for people just joining and so I can get my bearings.

Previously, our heroes...


In the first post I introduced a set of tools I'd worked up for myself to help me understand and then automate the interactions with AWS.

In the second one I registered a DNS domain and delegated it to the AWS Route53 DNS service.

In the third I figured out what hosts (or classes of hosts) I'd need to run for an OpenShift service.  Then I defined a set of network filter rules (using the AWS EC2 securitygroup feature) to make sure that my hosts and my customers could interact.

Finally in the previous post I selected an AMI to use as the base for my hosts, allocated a static IP address, added a DNS A record, and started an instance for the puppet master and broker hosts.  The remaining three (data1, message1, and node1) were left as an exercise for the reader.

So now I have five AWS EC2 instances running.  I can reach them via SSH. The default account ec2-user has sudo ALL permissions. The instances are completely unconfigured.

The next few sections are a bunch of exposition and theory.  It explains some about what I'm doing and why, but doesn't contain a lot of doing. Scan ahead if you get bored to the real stuff closer to the bottom.

The End of EC2


With the completion of the 4th post, we're done with EC2.  All of the interactions from here on occur over SSH.  The only remaining interactions with Amazon will be with Route53.  The broker will be configured to update the app.example.org zone when applications are added or removed.

You could reach this point with any other host provisioning platform: AWS CloudFormation, libvirt, VirtualBox, Hyper-V, VMware, or bare metal; it doesn't matter. Each of those will have its own provisioning details, but if you can get to networked hosts with stable public domain names you can pick up here and go on, ignoring everything but the first post.

The first post is still needed for the process I'm defining because the origin-setup tools written with Thor aren't just used for EC2 manipulation.  If that's all they were for I would have used one of the existing EC2 CLI packages.

Configuration Management: An Operations Religion


I mean this with every coloring and shade of meaning it can have, complete with schisms and dogma and redemption and truth.

Some small shop system administrators think that configuration management isn't for them, it isn't needed.  I differ with that opinion.  Configuration management systems have two complementary goals.  Only one of them is managing large numbers of systems.  The important goal is managing even one repeatably.  This is the Primary Dogma of System Administration.  If you can't do it 1000 times, you can't do it at all.

The service I'm outlining only requires four hosts (the puppet master will be 5). I could do it on one. That's how most demos until now have done it. I could describe to you how to manually install and tweak each of the components in an OpenShift system, but it's very unlikely that anyone would ever be able to reproduce what I described exactly. (I speak from direct experience here; following that kind of description in natural language is hard, and writing it is harder.)  Using a CMS it is possible to expose what needs to be configured specially and what can be defaulted, and to allow (if it's done well) for flexibility and customization.

The religion comes in when you try to decide which one.

I'm going to go with sheep and expedience and choose Puppet.  Other than that I'm not going to explain why.

Brief Principles of Puppet


Puppet is one of the currently popular configuration management systems.  It is widely available and has a large knowledgeable user base. (that's why).

The Master/Agent deployment


The standard installation of puppet contains a puppet master and one or more puppet clients running the puppet agent service. The configuration information is stored on the puppet master host.  The agent processes periodically poll the master for updates to their configuration. When an agent detects a change in the configuration spec the change is applied to the host.

The puppet master scheme has some known scaling issues, but for this scenario it will suit just fine. If the OpenShift service grows beyond what the master/agent model can handle, then there are other ways of managing and distributing the configuration, but they are beyond the scope of this demonstration.

The Site Model Paradigm


That's the only time you'll see me use that word. I promise.

The puppet configuration is really a description of every component, facet and variable you care about in the configuration of your hosts.  It is a model in the sense that it represents the components and their relationships. The model can be compared to reality to find differences. Procedures can be defined to resolve the differences and bring the model and reality into agreement.

There are some things to be aware of. The model is, at any moment, static. It represents the current ideal configuration. The agents are responsible for polling for changes to the model and for generating the comparisons as well as applying any changes to the host. It is certain that when a change is made to the model, there will be a window of time when the site does not match. Usually it doesn't matter, but sometimes changes have to be coordinated. Later I may add MCollective to the configuration to address this. MCollective is Puppet's messaging/remote procedure call service and it allows for more timing control than the standard Puppet agent pull model.

Also, the model is only aware of what you tell it to be aware of.  Anything that you don't specify is.... undetermined. Now specifying everything will bury you and your servers under the weight of just trying to stay in sync. It's important to determine what you really care about and what you don't. It's also important to look carefully at what you're leaving out to be sure that it's safe.

Preparing the Puppet Master and Clients


As usual, there's something you have to do before you can do the thing you really want to do. While puppet can manage pretty much anything about a system after it is set up, it can't set itself up from nothing.

  • The puppet master must have a well known public hostname (DNS). Check.
  • Each participating client must have a well known public hostname (DNS): Check
  • The master and clients must each know their own hostname (for identification to the master). Err.
  • The master and clients must have time sync. Ummm
  • The master and clients must have the puppet (master/client) software installed. Not Check.
  • The master must have any additional required modules installed.
  • The master must have a private certificate authority (CA) so that it can sign client credentials. Not yet
  • The clients must generate and submit a client certificate for the master to sign. Nope.
  • The master must have a copy of the site configuration files to generate the configuration model. No.

The first four points are generic host setup, and the first two are complete. Installing the puppet software should be simple, but I may need to check and/or tweak the package repositories to get the version I want. The last four are pure puppet configuration and the last one is the goal line.

Hostname


Puppet uses the hostname value set on each host to identify the host. Each host should have its hostname set to the FQDN of the IP address on which it expects incoming connections.
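
On these Fedora instances that's a one-liner (hostnamectl assumed available; on RHEL6 you'd edit /etc/sysconfig/network instead):

# Set the hostname to the public FQDN the other hosts will use
sudo hostnamectl set-hostname broker.infra.example.org
hostname    # verify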

Time Sync on Virtual Machines


Time sync needs a little space here.  On an ordinary bare-metal host I'd say "install an ntpd on every host". NTP daemons are lightweight and more reliable and stable than something like a cron job to re-sync.  Virtual machines are special though.

On a properly configured virtual machine, the system time comes from the VM host. As the guest, you must assume that the host is doing the right thing. The guest VM has a simulated real-time clock (RTC) which is a pass-through either of the host clock or the underlying hardware RTC. In either case, the guest is not allowed to adjust the underlying clock.

Typically a service like ntpd gets time information from outside and not only slews the system (OS) clock but it compares that to the RTC and tries to compensate for drift between the RTC and the "real" time. In the default case it will even adjust the RTC to keep it in line with the system clock and "real" time.

As a guest, it's impolite to go around adjusting your host's clocks.

So a virtual machine system like an IaaS is one of the few places I'd advise against installing a time server.  If your VMs aren't in sync, call your provider and ask them why their hardware clocks are off. If they can't give you a good answer, find a new IaaS provider.

Time Zones and The Cloud


I'm going to throw one more timey-wimey thing in here. I set the system timezone on every server host to UTC.  If I ever have to compare logs on servers from different regions of the world (this is the cloud, remember?) I don't have to convert time zones.  User accounts can always set their timezone to localtime using the TZ environment variable. The tasks offer an option so that you can override the timezone.
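
A minimal sketch of both halves of that (timedatectl assumed present on these Fedora instances; the example TZ value is arbitrary):

# System-wide: every server logs in UTC
sudo timedatectl set-timezone UTC

# Per-session: an individual user can still see local time
TZ=America/New_York date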

Host Preparation vs. Software Configuration


It would be fairly easy to write a single task that completes all of the bullet points listed above, but something bothers me about that idea. The first 4 are generic host tasks. The last four are distinctly puppet configuration related. Installing the software packages sits on the edge of both. The system tasks are required on every host. Only the puppet master will get the puppet master service software and configuration. The puppet clients will get different software and a different configuration process.

I'm going to take advantage of the design of Thor to create three separate tasks to accomplish the job:

  • origin:prepare - do the common hosty tasks
  • origin:puppetmaster - prepare and then install and configure a master
  • origin:puppetclient - prepare, and then install and register a client

So the origin:prepare task needs to set the hostname on the box to match the FQDN. I prefer also to enable the local firewall service and open a port for SSH to minimize the risk of unexpected exposure. This is also where I'd put a task to add a software repository for the puppet packages if needed.

Each of the origin:puppetmaster and origin:puppetclient tasks will invoke the origin:prepare task first.
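
Done by hand, the prepare step boils down to something like this (a sketch; firewalld syntax shown here, while the Thor tasks drive the host firewall over SSH and may use a different front end):

# Hostname to match the public FQDN
sudo hostnamectl set-hostname node1.infra.example.org

# Local firewall on, with SSH explicitly allowed
sudo systemctl enable firewalld
sudo systemctl start firewalld
sudo firewall-cmd --zone public --add-service ssh
sudo firewall-cmd --zone public --add-service ssh --permanent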

File Revision Control


Since Configuration Management is all about control and repeatability it also makes sense to place the configuration files themselves under revision control. For this example I'm going to place the site configuration in a GitHub repository. Changes can be made in a remote work space and pushed to the repository. Then they can be pulled down to the puppet master and the service notified to re-read the configurations. They can also be reverted as needed.

When the Puppet site configuration is created on the puppet master, it will be cloned from the git repo on github.

Initialize the Puppet Master

The puppet master process runs as a service on the puppet server. It listens for polling queries from puppet agents on remote machines. The puppet master service must read the site configurations to build the models that will define each host. The puppet service runs as a non-root user and group, each named "puppet". The default location for puppet configuration files is in /etc/puppet.  This area is only writable by the root user. Other service files reside in /var/lib/puppet. This area is writable by the puppet user and group. Further, SELinux limits access by the puppet user to files outside these spaces.

On RHEL6, the EC2 login user is still root. The user and group settings aren't really needed there, but they are still consistent.

The way I choose to manage this is:
  1. Add the ec2-user to the puppet group
  2. Place the site configuration in /var/lib/puppet/site
  3. Update the puppet configuration file (/etc/puppet/puppet.conf) to reflect the change
  4. Clone the configuration repo into the local configuration directory
  5. Symlink the configuration repo root into the ec2-user home directory.
This way the ec2-user has permission and access to update the site configuration.

Puppet uses x509 server and client certificates. The puppet master needs a server certificate and needs to self-sign it before it can sign client certificates or accept connections from clients.

Once the server certificate is generated and signed, I also need to enable and start the puppet master service.  Finally, I need to add a firewall rule allowing inbound connections on the puppet master port, 8140/TCP.

So the process of initializing the puppet master is this:

  • install the puppet master software
  • modify the puppet config file to reflect the new site configuration file location
  • install additional puppet modules
  • generate server certificate and sign it
  • add ec2-user to puppet group (or root user on RHEL6)
  • create site configuration directory and set owner, group, permissions
  • clone the git repository into the configuration directory
  • start and enable the puppet master service

Installing Packages


Since I'm using Thor, the package installation process is a Thor task. Each sub-task will only run once within the invocation of its parent. The origin:puppetmaster task calls the origin:prepare task and provides a set of packages needed for a puppet master in addition to any installed as part of the standard preparation (firewall management and augeas). For the puppet master, these additional packages are the puppet-server and git packages. Dependencies are resolved by YUM.
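
The manual equivalent is a single yum transaction (package set taken from the thor run shown later in this post):

sudo yum install -y puppet-server git system-config-firewall-base augeas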

Adding user to Puppet group


The puppet service is controlled by the root user, but runs as a role user and group both called puppet. I would like the login user to be able to manage the puppet site configuration files, but not to log in either as the root or puppet user. I'll add the ec2-user user to the puppet group, and set the group write permissions so that this user can manage the site configuration.
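
That's one command, plus a fresh login so the new group membership takes effect:

sudo usermod -a -G puppet ec2-user
# log out and back in before relying on the puppet group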

Creating the Site Configuration Space


As noted above, the ec2-user account will be used to manage the puppet site configuration files. The files must be writable by the ec2-user (through the puppet group) but they must also be readable by the puppet user and service. In addition, since these are service configurations rather than (local) host configuration files, I'd prefer that they not reside in /etc.

SELinux policy restricts the location of files which the puppet service processes can read. One of those locations is in /var/lib/puppet. Rather than update the policy, it seems easier to place the site configuration data within /var/lib/puppet.

I create a new directory /var/lib/puppet/site and set the owner, group and permissions so that the puppet user and group can read and write the files. I also set the permissions so that new files will inherit the group and its permissions. This way the ec2-user will have the needed access, and SELinux will not prevent the puppet master service from reading the files. In a later step I'll use git to clone the site configuration files into place.
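
A sketch of that preparation (the 2775 mode is my reading of "group writable with setgid" and is an assumption, not lifted from the task code):

sudo mkdir /var/lib/puppet/site
sudo chown puppet:puppet /var/lib/puppet/site
# rwx for owner and group, setgid so new files keep the puppet group
sudo chmod 2775 /var/lib/puppet/site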

Install Service Configuration File (setting variables)


Moving the location of the site configuration files from the default (/etc/puppet/manifests) and adding a location for user defined modules requires updating the default configuration file. Currently I make three alterations to the default file:


  • set the puppet master hostname as needed
  • set the location of the site configuration (manifests)
  • add a location to the modulepath
I use a template file, push a copy to the master and use sed to replace the values before copying the updated file into place.
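
Something like this, where the @TOKEN@ placeholders and template name are purely hypothetical (the real template travels with the Thor tasks):

# Fill in the three values, then push the result into place
sed -e "s|@MASTER@|puppet.infra.example.org|" \
    -e "s|@MANIFESTDIR@|/var/lib/puppet/site/manifests|" \
    -e "s|@MODULEPATH@|/var/lib/puppet/site/modules:/etc/puppet/modules|" \
    puppet.conf.template > puppet.conf
sudo cp puppet.conf /etc/puppet/puppet.conf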

Installing Standard Modules


Puppet provides a set of standard modules for managing common aspects of clients.  These are installed from the PuppetLabs module site with the puppet module install command, before the master process is started.
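
For example, the thor run shown later in this post installs the NTP module this way:

sudo puppet module install puppetlabs-ntp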

Unpacking Site Configuration (From git)


I already have a task for cloning a git repository on a remote host.  Unpack the site configurations into the directory prepared previously.  The git repo must have two directories at the top: manifests and modules. These will contain the site configuration and any custom modules needed for OpenShift. These locations are configured into the puppet master configuration above.
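
Roughly (repo URL taken from the thor example below; the clone target is the directory prepared earlier):

git clone https://github.com/markllama/origin-puppet /var/lib/puppet/site
ls /var/lib/puppet/site
# manifests  modules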


Adding Firewall Rules


The puppet master service listens on port 8140/TCP. I need to add an allow rule so that inbound connections to the puppet master will succeed.

Just to be safe I also add an explicit rule to allow SSH (22/TCP) before restarting the firewall service.

These match the securitygroup rule definitions defined in the third post. Some people would question the need for running a host-based firewall when EC2 provides network filtering. I would refer anyone who asks to read up on Defense in Depth.
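
As a sketch (firewalld syntax shown as one option; the thor run below stops the firewall, adds the rules, and starts it again, and may use a different front end):

sudo firewall-cmd --zone public --add-service ssh --permanent
sudo firewall-cmd --zone public --add-port 8140/tcp --permanent
sudo firewall-cmd --reload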

Filtering the Puppet logs into a separate file


It is much easier to observe the operation of the service if the logs are in a separate file. I add an entry to the /etc/rsyslog.d/ directory and restart the rsyslog daemon to place puppet master logs in /var/log/puppet-master.log.
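
A minimal sketch of such a filter (the program name and drop-in file name here are assumptions, not the task's actual values):

cat <<'EOF' | sudo tee /etc/rsyslog.d/puppetmaster.conf
:programname, isequal, "puppet-master"   /var/log/puppet-master.log
EOF
sudo systemctl restart rsyslog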


Enabling and Starting the Puppet Master Service


Finally, when all of the puppet master host customization is complete, I can enable and start the puppet master service.

What all that looks like


That's a whole long list and I created a whole set of Thor tasks to manage the steps.  Then I created an uber-task to execute it all.  It starts with the result of origin:baseinstance (run with the securitygroups default and puppetmaster). It results in a running puppet master waiting for clients to connect.

thor origin:puppetmaster puppet.infra.example.org --siterepo https://github.com/markllama/origin-puppet
origin:puppetmaster puppet.infra.example.org
task: remote:available puppet.infra.example.org
task: origin:prepare puppet.infra.example.org
task: remote:distribution puppet.infra.example.org
fedora 18
task: remote:arch puppet.infra.example.org
x86_64
task: remote:timezone puppet.infra.example.org UTC
task: remote:hostname puppet.infra.example.org
task: remote:yum:install puppet.infra.example.org puppet-server git system-config-firewall-base augeas
task: puppet:master:join_group puppet.infra.example.org
task: remote:git:clone puppet.infra.example.org https://github.com/markllama/origin-puppet
task: puppet:master:configure puppet.infra.example.org
task: puppet:master:enable_logging puppet.infra.example.org
task: puppet:module:install puppet.infra.example.org puppetlabs-ntp
task: remote:firewall:stop puppet.infra.example.org
task: remote:firewall:service puppet.infra.example.org ssh
task: remote:firewall:port puppet.infra.example.org 8140
task: remote:firewall:start puppet.infra.example.org
task: remote:service:start puppet.infra.example.org puppetmaster
task: remote:service:enable puppet.infra.example.org puppetmaster

You can check that the puppet master has created and signed its own CA certificate by listing the puppet certificates like this:

thor puppet:cert list puppet.infra.example.org --all
task puppet:cert:list puppet.infra.example.org
+ puppet.infra.example.org BD:27:A5:3B:AE:F5:1D:05:7E:8F:E7:E9:CA:BA:32:4B

This indicates that there is now a single certificate associated with the puppet master.  This certificate will be used to sign the client certificates as they are submitted.
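
(As far as the output suggests, the thor task is just wrapping the stock Puppet CA listing; on the master itself the equivalent is:)

sudo puppet cert list --all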

Initializing a Puppet Client


The first part of creating a puppet client host is the same as for the master (almost).  It involves installing some basic puppet packages (puppet, facter, augeas), setting the hostname and time zone and the rest of the hosty stuff.  Then we get to the puppet client registration.

The puppet agent runs on the controlled client hosts. It polls the puppet master periodically checking for updates to the configuration model for the host.

When the puppet agent starts the first time it generates an x509 client certificate and sends a signing request to the puppet master.

When the puppet master receives an unsigned certificate from an agent for the first time it places it in a list of certificates waiting to be signed. The user can then sign and accept each new client certificate and the initial identification process is complete. From then on the puppet agent polls using its client certificate for identification and the signature provides authentication.
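
Done by hand, that exchange looks roughly like this (a sketch; the config edit assumes the stock /etc/puppet/puppet.conf layout with a [main] section):

# On the client: point the agent at the master, then turn it on
sudo sed -i '/^\[main\]/a server = puppet.infra.example.org' /etc/puppet/puppet.conf
sudo systemctl enable puppet
sudo systemctl start puppet

# On the master: see the waiting request, then sign it
sudo puppet cert list
sudo puppet cert sign broker.infra.example.org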

The process then for installing and initializing the puppet client is this:

On the client:
  • install the puppet agent package
  • configure the puppet master hostname into the configuration file
  • enable the puppet agent service
  • start the puppet agent service
Then on the puppet master:
  • wait for the client unsigned certificate to arrive
  • sign the new client certificate
This is what it looks like for the broker host:


thor origin:puppetclient broker.infra.example.org puppet.infra.example.org
origin:puppetclient broker.infra.example.org, puppet.infra.example.org
task: remote:available broker.infra.example.org
task: origin:prepare broker.infra.example.org
task: remote:distribution broker.infra.example.org
fedora 18
task: remote:arch broker.infra.example.org
x86_64
task: remote:timezone broker.infra.example.org UTC
task: remote:hostname broker.infra.example.org
task: remote:yum:install broker.infra.example.org puppet facter system-config-firewall-base augeas
task: puppet:agent:set_server broker.infra.example.org puppet.infra.example.org
task: puppet:agent:enable_logging broker.infra.example.org
task: remote:service:enable broker.infra.example.org puppet
task: remote:service:start broker.infra.example.org puppet
task: puppet:cert:sign puppet.infra.example.org broker.infra.example.org

At this point the client can request its own configuration model and the master will confirm the identity of the client and return the requested information.

thor puppet:cert:list puppet.infra.example.org --all
task puppet:cert:list puppet.infra.example.org
+ broker.infra.example.org 09:97:22:B9:A9:16:AE:B1:32:93:EC:3A:6D:7A:CF:67
+ puppet.infra.example.org 70:B8:E0:C0:F8:5B:48:67:4E:92:91:D2:0D:E4:2B:F4

Repeat the origin:puppetclient step for the data1, message1 and node1 instances you created last time. You did create them, right?  Check the certs as each one registers.
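
Assuming the same naming scheme as the rest of this series, that's just three more invocations:

thor origin:puppetclient data1.infra.example.org puppet.infra.example.org
thor origin:puppetclient message1.infra.example.org puppet.infra.example.org
thor origin:puppetclient node1.infra.example.org puppet.infra.example.org
# after each registration:
thor puppet:cert:list puppet.infra.example.org --all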

The next step is to actually build a model for the client to request by creating a site manifest and a set of node descriptions.

That means: we finally get to do some OpenShift.