Monday, November 19, 2012

OpenShift Back End Services: Messaging (ActiveMQ)

In the previous post I detailed creating a back end MongoDB service which is ready for the OpenShift Origin broker to use.

The second back end service I'm going to set up is the messaging service.  This service connects the broker to the nodes.  It carries commands from the broker to the nodes and provides a means for the broker to query the node status.

Ingredients

As with the previous posts, there are several settings or variables for which we'll need values.

Message Server Setup Information
Variable                                     Value
Message Server Hostname                      msg1.example.com
Message Server IP address                    192.168.5.8
ActiveMQ console admin password              adminsecret
ActiveMQ messaging admin account password    msgadminsecret
OpenShift messaging account name             mcollective
OpenShift messaging account password         marionette


Messaging 101

If you're already familiar with HPC (High Performance Computing) or CMS (Configuration Management System) messaging, you can skip this bit.

Most people today are familiar with a form of messaging. Whether it's SMS (cell phone text messages), commercial Instant Messaging (IM), Twitter or Internet Relay Chat (IRC), we all get the concept of writing a message, attaching an address to it and sending the message.  We expect that the message will arrive at the destination, will be read, and if necessary, the receiver will compose and send a reply back.

Most SMS or IM messages are addressed to a single destination, but most people are also familiar with chat rooms, a form of broadcast message.  Each of the participants connects (subscribes) to the "room" or "channel".  Any message sent to the room is forwarded to all of the participants (subscribers).  Twitter demonstrates another model. Users "follow" a topic or other user represented by a "hash tag" or "at tag". Any message containing a tag in a user's follow list, regardless of who sent it, is delivered to the user.  These "everyone gets it" systems use what is called a flooding model.

The other common uses for messaging are less visible to the public.  Computers use messaging to create supercomputers (HPC, or High Performance Computing).  A large computation is broken down into smaller parts which are distributed across hundreds or thousands of computers.  Each of the participating computers runs an "agent" which listens for messages and can run local processes in response. The individual computers receive messages from a controller which instructs them how to process their one little part.  Then they send the result back to the controller in another message.  These distributed systems create animated films, weather reports, airline reservations and computational results in most of the sciences.

These computer messaging services have been adapted for yet another task.  They are now commonly used in computer Configuration Management Systems, such as cfengine, Puppet, bcfg2, Chef and others.  In large enterprise computer systems, the agent running on the participating computers is designed to update the computer's configuration on command.

All of these messaging systems (public human services and computer communications services) have a set of common elements.  All have "publishers" and "subscribers" (senders and receivers).  They also have one or more "message brokers" configured into a mesh or "grid". The message brokers are responsible for listening for new messages and distributing them to the subscribed listeners.

Computer messaging systems add a significant alternative to the "flooding" model, in which every message goes to every subscriber.  Message channels which use the flooding behavior are called "topics"; the alternative is described next.

Computer messaging also uses a model called a "message queue".  In a message queue, the subscribers indicate the ability to handle a certain kind of message.  The message sender submits a message (or request) to the queue. Each message is delivered to exactly one subscriber who processes the message and responds when the processing is done.  The message provider doesn't know which subscriber will pick up the message and doesn't care so long as each message gets processed.  This is only significant to us because it means we actually have to define two things (a topic and a queue) and not just one in the configuration.
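To make the distinction concrete with the protocol that appears later in this post: in STOMP, a subscriber sees the difference only in the destination name. Here is a sketch of the two subscription frames (the destination names are invented for illustration, and frame terminators are omitted):

# Topic: every subscriber receives a copy of each message
SUBSCRIBE
destination:/topic/announcements

# Queue: each message is delivered to exactly one subscriber
SUBSCRIBE
destination:/queue/work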

Messaging in OpenShift

In the OpenShift service, the OpenShift broker and node are the publisher and subscriber.  The message broker(s) sit between them.

OpenShift messaging is two layers deep.  The MCollective service is an abstracted RPC mechanism.  It runs on both ends of the messaging system and relies on an underlying message passing service to do the message routing and delivery.  I'll be careful always to distinguish between the OpenShift broker (which runs the OpenShift service) and the message broker (which carries communications between the OpenShift broker and the OpenShift nodes).

OpenShift only interacts directly with the MCollective service.  It is unaware of what the underlying communications mechanism is.  MCollective can use one of several different message brokers. Since OpenShift doesn't care, you can choose whichever one suits your needs best.  The most common message broker implementations are RabbitMQ and ActiveMQ, which use the STOMP protocol.  You can also use a message broker which implements the AMQP protocol, such as QPID.  I'm going to use the ActiveMQ message broker service.
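As a preview of how the pieces fit together: when MCollective is configured in a later post, each host's MCollective configuration will point at this message broker using the account from the table above. A minimal sketch, assuming the MCollective STOMP connector (the exact key names vary between MCollective versions):

connector = stomp
plugin.stomp.host = msg1.example.com
plugin.stomp.port = 61613
plugin.stomp.user = mcollective
plugin.stomp.password = marionette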

This diagram highlights where the messaging service sits in the OpenShift Origin service and indicates the limits of what I'm working on here.


The ActiveMQ messaging service

ActiveMQ is a Java-based service.  You can find it on GitHub, and it will be properly packaged for Fedora 18 and RHEL 6.4. I'm going to assume you can install it with yum.
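Assuming the package is simply named activemq, installation is one command:

yum install -y activemq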

Like most Java services, ActiveMQ configuration is formatted as XML along with a set of property files. The configuration files reside in /etc/activemq. The primary configuration file is /etc/activemq/activemq.xml.
I'm also going to configure a management interface which uses something called Jetty. I'll need to modify the jetty.xml and jetty-realm.properties configuration files.

There are four things that need configuration in the activemq.xml file:
  • Set the (message) broker name that this service will report when someone connects.
  • Create user accounts for access control
  • Create message queues and topics. Assign access permissions to user accounts.
  • Enable protocol listeners
Unlike common public messaging services, you can't add topics, queues or users on the fly.  This is largely for security reasons. In a case like OpenShift Origin it doesn't matter, as we know a priori the topics we want.

ActiveMQ actually provides several baseline configurations for different protocols. Specifically, they provide one for the STOMP protocol, which is preferred by MCollective. The baseline configuration file for STOMP is called activemq-stomp.xml. I'm going to start configuring ActiveMQ by saving a copy of the default configuration file and replacing it with the STOMP baseline file.

cp /etc/activemq/activemq.xml /etc/activemq/activemq.xml.orig
cp /etc/activemq/activemq-stomp.xml /etc/activemq/activemq.xml

The first change to make to activemq.xml is to set the (message) brokerName. The default value is "localhost".  We want it to be the fully qualified domain name of the message broker host: "msg1.example.com". This is a sed one-liner.

sed -i -e '/<broker/s/brokerName=".*"/brokerName="msg1.example.com"/' /etc/activemq/activemq.xml
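A quick sanity check confirms the substitution took effect:

grep brokerName /etc/activemq/activemq.xml

The output should show brokerName="msg1.example.com" on the <broker> element.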

The XML schema for ActiveMQ is a bit strange: it requires that the section tags appear in alphabetical order.
In yet another case of word overloading, the authentication and authorization (topic/queue definition) sections are called "plugins". Fortunately, by using the activemq-stomp.xml file as the base, all of the changes we need to insert are confined to a single section delimited by the <plugins> tags.

<!-- add users for mcollective -->

<plugins>
  <statisticsBrokerPlugin/>
  <simpleAuthenticationPlugin>
    <users>
      <!-- change the username and password -->
      <authenticationUser username="mcollective" password="marionette" groups="mcollective,everyone"/>
      <authenticationUser username="admin" password="msgadminsecret" groups="mcollective,admins,everyone"/>
    </users>
  </simpleAuthenticationPlugin>

  <authorizationPlugin>
    <map>
      <authorizationMap>
        <authorizationEntries>
          <authorizationEntry queue=">" write="admins" read="admins" admin="admins" />
          <authorizationEntry topic=">" write="admins" read="admins" admin="admins" />
          <authorizationEntry topic="ActiveMQ.Advisory.>" read="everyone" write="everyone" admin="everyone"/>

          <!-- these maybe should be "openshift" but.... -->
          <authorizationEntry topic="mcollective.>" write="mcollective" read="mcollective" admin="mcollective" />
          <authorizationEntry queue="mcollective.>" write="mcollective" read="mcollective" admin="mcollective" />
        </authorizationEntries>
      </authorizationMap>
    </map>
  </authorizationPlugin>
</plugins>

This section must be added after the </persistenceAdapter> close tag and before the <transportConnectors> open tag.
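An optional sanity check: the following grep should print the three tags with their line numbers in increasing order (persistenceAdapter close, plugins open, transportConnectors open):

grep -n -e '</persistenceAdapter>' -e '<plugins>' -e '<transportConnectors' /etc/activemq/activemq.xml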

The <authenticationUser /> tags each define a username, password and group memberships for a messaging user. I'm defining two users: admin and mcollective. Note that the group names here must match the group names used in the <authorizationEntry /> permissions below ("admins", "mcollective" and "everyone").

The <authorizationEntry /> tags define message queues and topics, including the permissions granted to each group. The first three entries are catch-all and advisory entries for administration. The last two define the topic and queue used by the OpenShift Origin service.

Monitoring and Statistics Interface

The ActiveMQ service offers an HTML and REST interface for monitoring the messaging service. I'm going to enable that so I can use it to check on the status of the messaging service after I have it started. The monitoring service is configured with the jetty.xml and jetty-realm.properties files in the /etc/activemq directory. In addition to enabling the monitoring service, I want to make sure that it is secure. I'm going to restrict network access to the localhost interface and reset the admin password.

In the jetty.xml file I can make the changes with two sed one-liners:

sed -i -e '/"authenticate"/s/value=".*"/value="true"/' jetty.xml
sed -i -e '/name="port"/a\ <property name="host" value="127.0.0.1" />' jetty.xml

The final change is to reset the admin password for the monitoring service interface.  This is in the jetty-realm.properties file. Each line of the file contains a single user/password entry.  Again, a sed one-liner will do the trick:

sed -i -e '/^admin:/s/: .*,/: adminsecret,/' /etc/activemq/jetty-realm.properties

Where "adminsecret" is the new password. You pick your own value.

That should be enough to get the ActiveMQ service running and ready for OpenShift Origin. Now I have to turn it on and verify it.

Starting and Verifying the ActiveMQ service

The final steps are to try the service and verify that it is working as needed.

Starting and Enabling the ActiveMQ service

The ActiveMQ service is enabled and controlled just like any other standard service on RHEL or Fedora:

service activemq status
ActiveMQ Broker is not running.

service activemq start
Starting ActiveMQ Broker...

service activemq status
ActiveMQ Broker is running (18149).

chkconfig activemq on

chkconfig --list activemq
activemq        0:off 1:off 2:on 3:on 4:on 5:on 6:off

Checking the Administrative Service Interface

First, check that the admin user can fetch data from the admin interface.  The curl command below means "request the header from the root page from localhost TCP port 8161 with username 'admin' and password 'adminsecret'".  If the service is running and answering, and if the port, username and password are correct, I expect to get a response like this:

curl --head -u admin:adminsecret http://localhost:8161
HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 6482
Last-Modified: Wed, 02 May 2012 12:07:14 GMT
Server: Jetty(7.6.1.v20120215)

The next thing to do is to check that the admin service is reporting the available queues and topics.  Until each queue or topic has been used, the list will be empty, but at least the service will tell you that.

I'll modify the curl command slightly. Instead of asking for the HTTP header, I'll ask for a specific page:

curl -u admin:adminsecret http://localhost:8161/admin/xml/topics.jsp
...
<topics>
...
</topics>

I can do the same thing for the queues and subscribers lists by replacing the "topics" in the command above with "queues" or "subscribers".
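For completeness, those two commands are:

curl -u admin:adminsecret http://localhost:8161/admin/xml/queues.jsp
curl -u admin:adminsecret http://localhost:8161/admin/xml/subscribers.jsp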

Viewing the Administrative Service in a Web Browser

I want to be able to view the ActiveMQ status console in a web browser.  Since I limited the Jetty service to the localhost interface (127.0.0.1) in the jetty.xml file, I need to forward the admin interface port (8161) back to my workstation.

ssh -L8161:localhost:8161 msg1.example.com

When I've logged into the message host like this, I can then browse to http://localhost:8161 and I will see the ActiveMQ Administrative Console.

Verifying the Message Service Listeners

When the ActiveMQ service is running, there should be a listener bound to port 61613.  I'll check that with the ss command (a replacement for netstat):

ss --listening --tcp | grep 61613
0      128                         :::61613                        :::*



The last step is to verify that the broker host and nodes will be able to connect to the service. I use the telnet client to test it.  Telnet is often not installed by default, but it is an extremely useful tool for testing TCP connections.   It is NOT a recommended tool for logging into servers anymore, as the contents are sent in clear text.

From the broker, I telnet to the message host on the STOMP port (61613)

telnet msg1.example.com 61613
Trying 192.168.5.8...
Connected to msg1.example.com.
Escape character is '^]'.
^]
telnet> quit

Note that the escape character '^]' means "hold down the Ctrl key and press the right bracket key". At the telnet> prompt, enter quit and press return; telnet will disconnect and exit.
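While connected, you can go one step further and attempt a STOMP login by hand. A minimal sketch of the exchange (a STOMP frame ends with a NUL byte, which telnet sends as Ctrl-@; details vary by STOMP version):

CONNECT
login:mcollective
passcode:marionette

^@

If the credentials are accepted, the message broker answers with a CONNECTED frame containing a session header.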

Summary

At this point I have a running ActiveMQ service which offers the mcollective topic and queue.  Publishers and subscribers should be able to connect and communicate if they provide the correct username and password.

A note on Security

Encryption in messaging systems can be quite complex.  Depending on the needs of the service, the messages may be encrypted end-to-end, on the connections between the end points and the first message server, between the message servers, or all of these.

The complexity warrants a post of its own.  The service configured here is not encrypted, and sensitive data should not be sent through it over untrusted networks.

A later post will cover encrypting the messaging communications.


5 comments:

  1. Thanks! Finally a comment? I hope it was useful. Was there anything that could be clearer or which needs more (or less) coverage?

    - Mark

  2. Doesn't look like ActiveMQ made it into Fedora 18. OpenShift Enterprise bundles it just for use in the product; not sure when it might get into RHEL proper.

    I did not know the XML elements had to be in alphabetical order. That's... interesting.

    The verification steps here are particularly useful!

  3. Thank you.
    Good Article.
    Which ActiveMQ version did you test? Any sample code?

    Replies
    1. I don't think the version particularly matters in this case. There isn't any particular code related either as, for OpenShift, ActiveMQ is just the carrier for the MCollective RPC messages. The MCollective code is in the GitHub repository at http://github.com/openshift/origin-server/plugins/messaging/

    2. Thanks for sharing the quite useful info; however, I did not understand whether the OpenShift BSNs are synchronous or asynchronous messaging. First, when a user wants to create an application, I know the broker requests the app creation using the MCollective agent, and through the BSNs the requests are sent to the MCollective server on the node to create an app on one of the nodes (whichever responds first). Second, when an existing application has to scale up, the head HAProxy, when it receives 16 concurrent requests, sends a request to the broker asking if it has capacity to spin up another instance (this request is routed via MCollective and ActiveMQ). If this uses topics, what topic does it use when the request is initiated by a component on the node (subscriber) sending a request to the broker (publisher)?
