Showing posts with label broker.

Friday, March 22, 2013

Installing (but not configuring) the broker service by hand

I'm working through a totally(?) manual installation of the OpenShift Origin service on Fedora 18. The last post on this topic was about building the RPMs on your own Yum repository. This time I'm going to install the broker service and make a few tweaks that are still required.

One seriously major thing to note is that I don't recommend actually doing this. I'm doing it to shed some light on some of the things still going on in the development process and to highlight the ways in which you can get some visibility into the installation and monitoring of the service.

If you're interested in building and running your own development environment or service for real, I suggest starting by reading through Krishna Raman's article on creating a development environment using Vagrant and Puppet and the puppet script sources themselves to see what's involved.  Finally there's a comprehensive document that describes the procedure with fewer warts.


Ingredients

As usual, I start with a clean minimal install of Fedora 18.  In addition this time I also have a yum repository filled with a bleeding-edge build from source as I described previously.  Finally I have a prepared MongoDB server waiting for a connection.

I'm replacing my real URLs and access information with dummies for demonstration purposes.


  • Yum repo URL
    http://myrepo.example.com/origin-server
  • MONGO_HOST_PORT="mydbhost.example.com:27017"
  • MONGO_USER="openshift"
  • MONGO_PASSWORD="dontuseme"
  • MONGO_DB="openshift"

Preparation

Since I'm building my own packages from source and placing them in a Yum repository, I need to add that repo to the standard set. I'll add a new file in /etc/yum.repos.d referring to my yum server.

Even if you're building from your own sources, there are still some packages you need to get that aren't in either the stock Fedora repositories or in the OpenShift sources. These are generally packages with patches that are in the process of moving upstream or are in the acceptance process for Fedora. Right now a set is maintained by the OpenShift build engineers. I need to add the repo file for that too:

[origin-server]
name=OpenShift Origin Server
baseurl=http://myrepo.example.com/origin-server
enabled=1
gpgcheck=0
[origin-extras]
name=Custom packages for OpenShift Origin Server
baseurl=https://mirror.openshift.com/pub/openshift-origin/fedora-18/x86_64/
enabled=1
gpgcheck=0
At this point you can install the openshift-origin-broker package.
yum install openshift-origin-broker
...
  urw-fonts.noarch 0:2.4-14.fc18                                                
  v8.x86_64 1:3.13.7.5-1.fc18                                                   
  xorg-x11-font-utils.x86_64 1:7.5-10.fc18                                      

Complete!


There is a set of Rubygems that are not yet packaged as RPMs. I need to install these as gems for now.

gem install mongoid
Fetching: i18n-0.6.1.gem (100%)
Fetching: moped-1.4.4.gem (100%)
Fetching: origin-1.0.11.gem (100%)
Fetching: mongoid-3.1.2.gem (100%)
Successfully installed i18n-0.6.1
Successfully installed moped-1.4.4
Successfully installed origin-1.0.11
Successfully installed mongoid-3.1.2
4 gems installed
Installing ri documentation for moped-1.4.4...
Building YARD (yri) index for moped-1.4.4...
Installing ri documentation for origin-1.0.11...
Building YARD (yri) index for origin-1.0.11...
Installing ri documentation for mongoid-3.1.2...
Building YARD (yri) index for mongoid-3.1.2...
Installing RDoc documentation for moped-1.4.4...
Installing RDoc documentation for origin-1.0.11...
Installing RDoc documentation for mongoid-3.1.2...

There are a number of gem version restrictions in the broker Gemfile which are not met by the current rubygem RPMs. I have to remove those restrictions so that the broker application will use what is available. This risks breaking things due to interface changes, but will at least allow the broker application to start.

sed -i -f - <<EOF /var/www/openshift/broker/Gemfile
/parseconfig/s/,.*//
/minitest/s/,.*//
/rest-client/s/,.*//
/mocha/s/,.*//
/rake/s/,.*//
EOF
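The sed script matches each named gem line and deletes everything from the first comma onward (the version constraints). In ruby terms the same transformation looks like this; the Gemfile fragment below is made up for illustration, not the broker's actual Gemfile:

```ruby
# Emulate sed's /pattern/s/,.*// edits: on lines naming one of the pinned
# gems, drop everything from the first comma on. Sample input only.
gemfile = <<~GEMFILE
  gem 'parseconfig', '~> 0.5.2'
  gem 'rest-client', '>= 1.6.1', '< 1.7.0'
  gem 'mongoid'
GEMFILE

loosened = gemfile.lines.map do |line|
  # sub leaves the trailing newline intact because . does not match \n
  line =~ /parseconfig|minitest|rest-client|mocha|rake/ ? line.sub(/,.*/, '') : line
end.join
puts loosened
```

The result keeps each gem declaration but lets bundler resolve against whatever version the RPMs provide.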


For some reason, even with the --without clause for :test and :development, bundle still wants the mocha rubygem.  This should not be required for production, but right now you need to install it so that the Rails application will start.

yum install rubygem-mocha
...
Installed:
 rubygem-mocha.noarch 0:0.12.1-1.fc18

Dependency Installed:
  rubygem-metaclass.noarch 0:0.0.1-6.fc18


Verifying the Dependencies

Now that all of the software dependencies have been installed (mostly by RPM requirements through Yum, and finally through gem requirements and some version tweaking of the Gemfile) I can check that they all resolve when I start the application. Rails will call bundler when the application starts, so I'll call it explicitly beforehand. I'm only interested in the production environment, so I'll explicitly exclude development and test.

cd /var/www/openshift/broker
bundle install --local --without development test
Using rake (0.9.6) 
Using bigdecimal (1.1.0)
....
Using systemu (2.5.2)
Using xml-simple (1.1.2)
Your bundle is complete! Use `bundle show [gemname]` to see where a bundled gem is installed.

If I try to start the rails console now, though, I'll be sad. It won't connect to the database.

Configure MongoDB access/authentication

The OpenShift broker is (right now) tightly coupled to MongoDB. Recently it switched to using the rubygem-mongoid ODM module (which is a definite plus if you have to work on the code).

The last thing I need to do before I can fire up the Rails console with the broker application is to set the database connectivity parameters. One side effect of using an ODM is that it establishes a connection to the database the moment the application starts.

NOTE: when this is done I will not have a complete working broker server. I still need to configure the other external services: auth, dns and messaging.

Set the values listed in the Ingredients into /etc/openshift/broker.conf.

/etc/openshift/broker.conf
...
# Eg: MONGO_HOST_PORT="<host1:port1>,<host2:port2>..."
MONGO_HOST_PORT="mydbhost.example.com:27017"
MONGO_USER="openshift"
MONGO_PASSWORD="dontuseme"
MONGO_DB="openshift"
MONGO_SSL="false"
...
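The broker reads these shell-style KEY="value" settings and hands them to mongoid. Roughly, they combine into a standard MongoDB connection URI; this is a sketch of that mapping, not the broker's actual configuration code:

```ruby
# Sketch: parse broker.conf-style settings and assemble a MongoDB URI.
# The values mirror the dummy credentials used throughout this post.
conf_text = <<~CONF
  MONGO_HOST_PORT="mydbhost.example.com:27017"
  MONGO_USER="openshift"
  MONGO_PASSWORD="dontuseme"
  MONGO_DB="openshift"
  MONGO_SSL="false"
CONF

settings = conf_text.lines.each_with_object({}) do |line, acc|
  # accept KEY="value" and bare KEY=value lines
  acc[$1] = $2 if line.strip =~ /\A(\w+)="?([^"]*)"?\z/
end

uri = format('mongodb://%s:%s@%s/%s',
             settings['MONGO_USER'], settings['MONGO_PASSWORD'],
             settings['MONGO_HOST_PORT'], settings['MONGO_DB'])
puts uri
```

If the assembled URI works with the mongo shell, the same values should work for the broker.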

Now I can try starting the rails console. It should connect to the mongodb and offer an irb prompt.


To verify the database connectivity, take a look at this recent blog post.

Next up is configuring each plugin, one by one.

Gist Scripts

I'm trying something new.  Rather than including code snippets inline, I'm going to post them as Github Gist entries.


Thursday, March 21, 2013

Verifying the MongoDB DataStore with the Rails Console: Mongoid Edition

A few months ago I did several posts about how to verify the operation of the back end services of an OpenShift Origin broker service.   Today I discovered that this one (mongod) is obsolete.

The data store behind the broker is MongoDB.  That one back end service isn't pluggable.  It's actually been made more tightly coupled to Mongo, but in this case that's a good thing.  What changed is that all of the Rails application model objects have been converted to use the Mongoid ODM rubygem.  All of the object persistence is now managed in the background and all of the logic can just deal with the objects as... well... objects.

There are a couple of implications for broker service verification.

  1. The broker connects to the database on startup
    This means that if the database access/auth information is wrong, the rails app will fail to start.
  2. The only simple way to test the connection is to create an object and observe the database.
    This is both simpler to do, and potentially more difficult to diagnose on failure.

I think the second point won't be as much of a downside as I would fear at first.  I suspect that if connectivity is good, the rest will be.  If it's not, it will be fairly clear why.

Configuring the Broker Data Store


Configuring the datastore access information hasn't changed.  The configuration information is still stored in /etc/openshift/broker.conf. The settings all have the MONGO_ prefix:

MONGO_HOST_PORT="data1.example.com:27017"
MONGO_USER="openshift"
MONGO_PASSWORD="dontuseme"
MONGO_DB="openshift"
MONGO_SSL="false"

Adjust these for your mongodb implementation. Remember to open the firewall for the broker on your database host.  Configure the database to listen and test the connectivity locally.

Verifying Simple Connectivity


You also want to check the connectivity from your broker host before trying to fire up the broker itself.

broker> echo "show collections" | mongo --username openshift --password dontuseme data1.example.com:27017/openshift
MongoDB shell version: 2.2.3
connecting to: data1.example.com:27017/openshift
system.indexes
system.users
bye

You can do this repeatedly and observe the mongodb log on the database host.

Observing the Mongo Database Logs

On the database host, take a look at the mongodb logs. You should see a new entry (successful or failed) each time a client connects.

data1> tail /var/log/mongodb/mongodb.log
Thu Mar 21 20:24:26 [conn15] authenticate db: openshift { authenticate: 1, nonce: "20d6f85f33f03dee", user: "openshift", key: "60639c7ce56851a25be56bcebd98c3ed" }

Starting the Rails Console


Now that you're sure that the database is running and accessible from your broker host you can try firing up the Rails console. This assumes that you've resolved all of the gem requirements. If not, the Rails console will complain about them and exit.

broker> cd /var/www/openshift/broker
broker> rails console
Loading production environment (Rails 3.2.8)
irb(main):001:0>

If you got this far you should have seen one more authentication log record on the mongodb server (see above).

Create a Database Object


Now we can create a CloudUser object and watch it appear in the database:

irb(main):001:0> user = CloudUser.create(login: "testuser")
=> #<CloudUser _id: 514b6f6cf3da7fa491000001, created_at: 2013-03-21 20:37:00 UTC, updated_at: 2013-03-21 20:37:00 UTC, login: "testuser", capabilities: {"subaccounts"=>false, "gear_sizes"=>["small"], "max_gears"=>100}, parent_user_id: nil, plan_id: nil, pending_plan_id: nil, pending_plan_uptime: nil, usage_account_id: nil, consumed_gears: 0>

You can see that this is more than your typical Ruby object. The _id, created_at and updated_at fields are artifacts of the ODM persistence.  You won't see another log message because the ODM keeps a persistent connection to the database.  You will find that there's now a document in the openshift cloud_users collection.

data> echo "db.cloud_users.find()" | mongo --username openshift --password dontuseme localhost/openshift
MongoDB shell version: 2.2.3
connecting to: localhost/openshift
{ "_id" : ObjectId("514b6f6cf3da7fa491000001"), "consumed_gears" : 0, "login" : "testuser", "capabilities" : { "subaccounts" : false, "gear_sizes" : [ "small" ], "max_gears" : 100 }, "updated_at" : ISODate("2013-03-21T20:37:00.546Z"), "created_at" : ISODate("2013-03-21T20:37:00.546Z") }
bye


Removing the Test Object


Cleaning up is just as easy:

irb(main):002:0> user.delete
=> true

And to verify that it's been removed:

echo "db.cloud_users.find()" | mongo --username openshift --password dontuseme localhost/openshift
MongoDB shell version: 2.2.3
connecting to: localhost/openshift
bye

At this point you know both that your database is running and that the broker application can connect and read and write it.

Much simpler with an ODM.



Wednesday, December 5, 2012

Verifying the DNS Plugin using Rails Console

Each of the OpenShift broker plugins provides an interface implementation class for the plugin's abstract behavior.   In practical terms this means that I can fire up the rails console, create an instance of the plugin class and then use it to manipulate the service behind the plugin.

Since the DNS plugin has the simplest interface and Bind has the cleanest service logs, I'm going to demonstrate with that.  The technique is applicable to the other back-end plugin services.

Preparing Logging


To make life easy I'm going to configure logging on the DNS server host so that the logs from the named service are written to their own file.

A one line file in /etc/rsyslog.d will do the trick:

if $programname == 'named' then /var/log/named.log

Write that into /etc/rsyslog.d/00_named.conf and restart the rsyslog service.  Then restart the named service and check that the logs are appearing in the right place.

If I didn't filter out the named logs, I could still use grep on /var/log/messages to extract them.

Configuring the Bind DNS Plugin


As indicated in previous posts, the OpenShift DNS plugin is enabled by placing a file in /etc/openshift/plugins.d with the configuration information for the plugin.  The name of the file must be the name of the rubygem which implements the plugin with the suffix .conf. The Bind plugin is configured like this:

/etc/openshift/plugins.d/openshift-origin-dns-bind.conf

BIND_SERVER="192.168.5.11"
BIND_PORT=53
BIND_KEYNAME="app.example.com"
BIND_KEYVALUE="put-your-hmac-md5-key-here"
BIND_ZONE="app.example.com"

When the Rails application starts, it will import a plugin module for each .conf file and will set the config file values.
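That convention (one FILE.conf per plugin, where the filename is the rubygem to load) can be sketched in a few lines. This is an illustration of the mechanism, not the broker's actual loader code:

```ruby
require 'tmpdir'

# Sketch of the plugin-loading convention: each FILE.conf under
# /etc/openshift/plugins.d names the rubygem to require, and its
# contents are the plugin's settings.
def discover_plugins(dir)
  Dir.glob(File.join(dir, '*.conf')).map do |path|
    gem_name = File.basename(path, '.conf')   # e.g. openshift-origin-dns-bind
    settings = File.readlines(path).each_with_object({}) do |line, acc|
      acc[$1] = $2 if line.strip =~ /\A(\w+)="?([^"]*)"?\z/
    end
    # A real loader would now `require gem_name` to register the plugin.
    { gem: gem_name, settings: settings }
  end
end

plugins = Dir.mktmpdir do |dir|
  File.write(File.join(dir, 'openshift-origin-dns-bind.conf'),
             %(BIND_SERVER="192.168.5.11"\nBIND_PORT=53\n))
  discover_plugins(dir)
end
p plugins.first[:gem]   # "openshift-origin-dns-bind"
```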

The Rails Console


Ruby on Rails has an interactive testing environment.  It is started by invoking rails console from the root directory of the application.  If I start the rails console at the top of the broker application I should be able to instantiate and work with the plugin objects.

The rails console command runs irb to offer a means of manual testing.  In addition to the ordinary ruby script environment it imports the Rails application environment which resides in the current working directory. Among other things, it processes the Gemfile which, in the case of the OpenShift broker, will load any plugin gems and initialize them. I'm going to use the Rails console to directly poke at the back end service objects.

I'm going to go to the broker application directory.  Then I'll check that bundler confirms the presence of all of the required gems.  Then I'll start the Rails console and check the plugin objects manually.

cd /var/www/openshift/broker
bundle --local
....
Your bundle is complete! Use `bundle show [gemname]` to see where a bundled gem is installed.
rails console
Loading production environment (Rails 3.0.13)
irb(main):001:0> 

The last line above is the Rails console prompt.

Creating a DnsService Object


The OpenShift::DnsService class is a factory class for the DNS plugin modules. It also contains an interface definition for the plugin, though Ruby and Rails don't seem to be much into formal interface specification and implementation.  The plugin interface definitions reside in the openshift-origin-controller rubygem:

https://github.com/openshift/origin-server/tree/master/controller/lib/openshift

The factory classes provide two methods by convention: provider=() sets the class which implements the required interface, and instance() is the factory method, returning an instance of the implementing class.  They also have a private instance variable which holds a reference to the implementing class; when a plugin is loaded, it sets that reference into the factory class.
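The convention is small enough to sketch. The class names below mirror the description but are illustrative, not the real openshift-origin-controller source:

```ruby
# Sketch of the factory-class convention: provider=() records the
# implementing class, instance() hands back an instance of it.
module Sketch
  class DnsService
    def self.provider=(klass)
      @provider = klass
    end

    def self.instance
      @provider.new
    end

    # "Interface" methods a real plugin is expected to override:
    def register_application(app_name, namespace, public_hostname); end
    def deregister_application(app_name, namespace); end
  end

  class BindPlugin < DnsService
    # A plugin gem would set itself as the provider when it is loaded:
    DnsService.provider = self
  end
end

d = Sketch::DnsService.instance
puts d.class   # Sketch::BindPlugin
```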

Once the broker application is loaded using the rails console I should be able to create and work with instances of the DnsService implementation.

The first step is to check that the factory class is indeed loaded and has the right provider set. Since I can just type at the irb prompt it's easy to see what's there.

irb(main):001:0> OpenShift::DnsService
=> OpenShift::DnsService
irb(main):002:0> d = OpenShift::DnsService.instance
=> #<OpenShift::BindPlugin:0x7f540dfb9ee8 @zone="app.example.com",
 @src_port=0, @server="192.168.5.11",
 @keyvalue="put-your-hmac-md5-key-here",
 @port=53, @keyname="app.example.com",
 @domain_suffix="app.example.com">

Note that the class is OpenShift::BindPlugin and the instance variables match the values I set in the plugin configuration file. I now have a variable d which refers to an instance of the DNS plugin class.

The DnsService Interface


The DNS plugin interface is the simplest of the plugins.  It contains just four methods:
  • register_application(app_name, namespace, public_hostname)
  • deregister_application(app_name, namespace)
  • modify_application(app_name, namespace, public_hostname)
  • publish()
All but the last will have a side-effect which I can check by observing the named service logs and by querying the DNS service itself.

Note that the publish() method is not included in the list of methods with side-effects.  publish() is always called at the end of a set of change calls.  It is there to accommodate batch update processing: third party DNS services which use web interfaces may require batch processing.  The OpenShift::BindPlugin submits changes immediately.
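A plugin that needed batching could queue changes and only flush them in publish(). A toy sketch of that idea (an invented class; the real BindPlugin applies each change as it is made):

```ruby
# Toy sketch of why publish() exists: queue DNS changes, send them
# all in one batch when publish() is called. Invented class.
class BatchingDnsPlugin
  attr_reader :sent

  def initialize
    @pending = []
    @sent = []
  end

  def register_application(app_name, namespace, public_hostname)
    @pending << [:add, "#{app_name}-#{namespace}", public_hostname]
  end

  def deregister_application(app_name, namespace)
    @pending << [:delete, "#{app_name}-#{namespace}", nil]
  end

  def publish
    # One round trip to the (imaginary) DNS service for the whole batch.
    @sent.concat(@pending)
    @pending.clear
  end
end

dns = BatchingDnsPlugin.new
dns.register_application("testapp1", "testns1", "node1.example.com")
puts dns.sent.size   # 0 -- nothing sent until publish()
dns.publish
puts dns.sent.size   # 1
```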

Change and Check


The process of testing now will consist of three repeated steps:

  1. Make a change
  2. Check the DNS server logs
  3. Check the DNS server response

I will repeat the steps once for each method (though I'll only show a couple of samples here).

The logs are time-stamped.  To make it easier to find the right log entry, I'll check the time sync of the broker and DNS server hosts, and then check the time just before issuing each update command.

First I check the date and add an application record.  An application record is a DNS CNAME record which is an alias for the node which contains the application. Here goes:

irb(main):003:0> `date`
=> "Wed Dec  5 15:25:59 GMT 2012\n"
irb(main):004:0> d.register_application "testapp1", "testns1", "node1.example.com"
=> ;; Answer received from 192.168.5.11 (129 bytes)
;;
;; Security Level : UNCHECKED
;; HEADER SECTION
;; id = 25286
;; qr = true    opcode = Update    rcode = NOERROR
;; zocount = 1  prcount = 0  upcount = 0  adcount = 1

OPT pseudo-record : payloadsize 4096, xrcode 0, version 0, flags 32768

;; ZONE SECTION (1  record)
;; app.example.com. IN SOA

The register_application() method returns the Dnsruby::Message returned from the DNS server.  A little digging should indicate that the update was successful.

Next I'll examine the named service log on the DNS server host.

tail /var/log/named.log
...
Dec  5 15:26:41 ns1 named[11178]: client 10.16.137.216#54040/key app.example.com: signer "app.example.com" approved
Dec  5 15:26:41 ns1 named[11178]: client 10.16.137.216#54040/key app.example.com: updating zone 'app.example.com/IN': adding an RR at 'testapp1-testns1.app.example.com' CNAME

Finally, I'll check that the server is answering queries for that name:

dig @ns1.example.com testapp1-testns1.app.example.com CNAME

; <<>> DiG 9.9.2-rl.028.23-P1-RedHat-9.9.2-8.P1.fc18 <<>> @ns1.example.com testapp1-testns1.app.example.com CNAME
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 10884
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;testapp1-testns1.app.example.com. IN CNAME

;; ANSWER SECTION:
testapp1-testns1.app.example.com. 60 IN CNAME node1.example.com.

;; Query time: 3 msec
;; SERVER: 192.168.5.11#53(192.168.5.11)
;; WHEN: Wed Dec  5 15:28:41 2012
;; MSG SIZE  rcvd: 108

That's sufficient to confirm that the DNS Bind plugin configuration is correct and that updates are working. In a real case I'd go on and check each of the operations. For now I'll just delete the test record and move on.


d.deregister_application "testapp1", "testns1"
=> ;; Answer received from 192.168.5.11 (129 bytes)
;;
;; Security Level : UNCHECKED
;; HEADER SECTION
;; id = 26362
;; qr = true    opcode = Update    rcode = NOERROR
;; zocount = 1  prcount = 0  upcount = 0  adcount = 1

OPT pseudo-record : payloadsize 4096, xrcode 0, version 0, flags 32768

;; ZONE SECTION (1  record)
;; app.example.com. IN SOA



dig @ns1.example.com testapp1-testns1.app.example.com CNAME
; <<>> DiG 9.9.2-rl.028.23-P1-RedHat-9.9.2-8.P1.fc18 <<>> @ns1.example.com testapp1-testns1.app.example.com CNAME
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 50598
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;testapp1-testns1.app.example.com. IN CNAME

;; AUTHORITY SECTION:
app.example.com.  10 IN SOA ns1.example.com. hostmaster.example.com. 2011112904 60 15 1800 10

;; Query time: 3 msec
;; SERVER: 192.168.5.11#53(192.168.5.11)
;; WHEN: Wed Dec  5 15:31:20 2012
;; MSG SIZE  rcvd: 108

This is what a negative response looks like. There's a question section but no answer section.
Things are back where I started and I can move on to the next test.
