Thursday, December 6, 2012

Verifying the MongoDB DataStore with the Rails Console

UPDATE: The broker data model has switched to using the Mongoid ODM rubygem.  This significantly improves the consistency of the broker data model and simplifies coding broker objects.  It also obsoletes this post.

See the new one on Verifying the MongoDB DataStore with the Rails Console: Mongoid Edition


In the last post I showed how I'd verify the configuration of the OpenShift Bind DNS plugin using the Rails console.  In this one I'll do the same thing for the DataStore back end service (not strictly a plugin, but hey...).

DataStore Configuration


Right now the DataStore back end service is not pluggable.  The only back end service available is MongoDB. I've posted previously on how to prepare a MongoDB service for OpenShift.  Now I'm going to work it from the other side and demonstrate that the communications are working.

Since the DataStore isn't pluggable, it isn't configured from the /etc/openshift/plugins.d directory.  Rather, it has its own section in /etc/openshift/broker.conf (or broker-dev.conf).

This is just the relevant fragment from broker.conf:

...
#Broker datastore configuration
MONGO_REPLICA_SETS=false
# Replica set example: "<host-1>:<port-1> <host-2>:<port-2> ..."
MONGO_HOST_PORT="data1.example.com:27017"
MONGO_USER="openshift"
MONGO_PASSWORD="dbsecret"
MONGO_DB="openshift"
...

These are the values that will be used when the broker application creates an OpenShift::DataStore (and OpenShift::MongoDataStore) object.

The DataStore: Abstract and Implementation

The OpenShift::DataStore was originally intended to be pluggable.  At some point the abstract interface concept was dropped and the tightly bound MongoDB implementation was allowed to grow organically, but the remains of the original pluggable interface are still there.  Both source files now live in the openshift-origin-controller rubygem package.


The OpenShift::DataStore class still follows the plugin conventions.  It implements the provider=() and instance() methods.  The first takes a reference to a class that "implements the datastore interface" and the second provides an instance of the implementation class, pre-configured from the configuration file.
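
In rough outline the convention looks like this.  This is only a sketch, not the actual openshift-origin-controller source:

module OpenShift
  class DataStore
    # A back end implementation registers its class with the factory...
    def self.provider=(impl_class)
      @provider = impl_class
    end

    # ...and callers ask the factory for an instance of whatever was registered.
    # (The real instance() also passes in the values from broker.conf.)
    def self.instance
      @provider.new
    end
  end
end

# Somewhere during startup the MongoDB implementation registers itself:
#   OpenShift::DataStore.provider = OpenShift::MongoDataStore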

Observing MongoDB


Unlike named, MongoDB writes to its own log by default.  The logs reside in /var/log/mongodb/mongodb.log (as controlled by the logpath setting in /etc/mongodb.conf).  Verbose logging is controlled in mongodb.conf as well.  For this demonstration I'm going to enable it by uncommenting the verbose logging line in /etc/mongodb.conf and restarting the mongod service.

MongoDB also has a command line tool, mongo, that can be used to interact with the database.  I can invoke it like this:

mongo --username openshift --password dbsecret data1.example.com/openshift
MongoDB shell version: 2.0.2
connecting to: data1.example.com/openshift
> show collections;
system.indexes
system.users

This shows an initialized database, but no OpenShift data has been stored yet. The two existing collections are the system collections.  OpenShift will add collections as needed to store data.

With these two mechanisms I can observe and verify access and updates from the broker to the database through the OpenShift::DataStore object.

Creating an OpenShift::DataStore Object


I'm going to create the OpenShift::DataStore object in the same way I did the OpenShift::DnsService object: by calling the instance() method on the OpenShift::DataStore class.

cd /var/www/openshift/broker
rails console
Loading production environment (Rails 3.0.13)
irb(main):001:0> store = OpenShift::DataStore.instance
=> #<OpenShift::MongoDataStore:0x7f9e42ed4918
 @host_port=["data1.example.com", 27017], @db="openshift", @user="openshift",
 @replica_set=false, @password="dbsecret",
 @collections={:application_template=>"template", :user=>"user", :district=>"district"}>
irb(main):002:0>

Now I have a variable named store which contains a reference to an OpenShift::MongoDataStore object. I can see from the instance variables that it is configured with the right host, port, database, user, etc.

Checking Communications: Read


Now that I have something to work with, it's time to see if it will talk to the database.

The DataStore interface is much more complex than the DnsService interface, but since I'm only checking connectivity that's not a problem.  Once connectivity is confirmed I can craft more checks of the DataStore methods themselves later.

The DataStore has a couple of methods that expose the Mongo::DB class that's underneath.  With that I can force a query for the list of collections currently available in the database. If the broker service has not yet been run and users and applications created then only the system collections will exist.  In the example below there are only two collections.


rails console
Loading production environment (Rails 3.0.13)
irb(main):001:0> store = OpenShift::DataStore.instance
=> #<OpenShift::MongoDataStore:0x7f1ac50ac698
 @host_port=["data1.example.com", 27017], @db="openshift",
 @user="openshift", @replica_set=false, @password="dbsecret",
 @collections={:application_template=>"template", :user=>"user", :district=>"district"}>
irb(main):002:0> collections = store.db.collections
=> [#<Mongo::Collection:0x7f1ac5096ac8 @cache_time=300, 
...
 @pk_factory=BSON::ObjectId>]
irb(main):003:0> collections.size
=> 2
irb(main):004:0> collections[0].name
=> "system.users"
irb(main):005:0> collections[1].name
=> "system.indexes"
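
With the 1.x ruby driver, Mongo::DB also has a collection_names() convenience method that returns the same information as a one-liner (assuming that driver version):

store.db.collection_names
# should list just the two system collections on a fresh database,
# e.g. ["system.users", "system.indexes"]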

On the MongoDB host I can confirm that there are indeed two collections.

mongo --username openshift --password dbsecret data1.example.com/openshift
MongoDB shell version: 2.0.2
connecting to: data1.example.com/openshift
> show collections
system.indexes
system.users

Finally I can check in the MongoDB log that the broker app really did issue that query and get a response:


...
Thu Dec  6 14:06:29 [conn2] Accessing: openshift for the first time
Thu Dec  6 14:06:29 [conn2]  authenticate: { authenticate: 1, user: "openshift",
 nonce: "d2083e4185cb7d22", key: "c7c3628fe64eb1aedaaf4c87a4d5e723" }
Thu Dec  6 14:06:29 [conn2] command openshift.$cmd command: { authenticate: 1, u
ser: "openshift", nonce: "d2083e4185cb7d22", key: "c7c3628fe64eb1aedaaf4c87a4d5e
723" } ntoreturn:1 reslen:37 5ms
Thu Dec  6 14:06:29 [conn2] query openshift.system.namespaces nreturned:3 reslen
:142 0ms
...

Checking Communications: Write


Now that I'm convinced that I'm connecting to the right database and I'm able to make queries, the next check is to be sure I can write to it when needed.

Since the database has not yet been used, it's empty.  I want to be careful regardless not to mess with any real OpenShift collections. I'll create a test collection, write a record to it, read it back and drop the collection again.  If I do this in a consistent way I can use this test at any time to check connectivity without danger to the service data.
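
The sections below walk through each step individually, but the whole round trip can be sketched as a single snippet pasted into the rails console.  This is only a sketch, assuming the store object created earlier and the 1.x ruby mongo driver API:

coll = store.db.create_collection("testcollection")            # write: new collection
id   = coll.insert({'testdoc' => {'testkey' => 'testvalue'}})  # write: one document
doc  = coll.find('_id' => id).to_a.first                       # read it back
raise "read-back failed" unless doc['testdoc']['testkey'] == 'testvalue'
coll.drop                                                      # clean up: drop the collection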

 I'm using the ruby Mongo classes underneath the OpenShift::MongoDataStore class, so I'll have to look there for the syntax. The Mongo::DB class has a create_collection() method which will do the trick.  I'll issue the command in the rails console, then check the MongoDB logs and view the list of collections using the mongo CLI tool.

Create a Collection


First, the create query (entered into an existing rails console session):

irb(main):005:0> store.db.create_collection "testcollection"
=> #<Mongo::Collection:0x7f76567e9ae0 @cache_time=300,...
...
 @name="testcollection", @logger=nil, @pk_factory=BSON::ObjectId>
irb(main):006:0>

Next I'll check logs:

...
Thu Dec  6 14:37:13 [conn4] run command openshift.$cmd { authenticate: 1, user: 
"openshift", nonce: "665cedb4baf82b0d", key: "eec7b08761151c858c14058c2629dee6" 
}
Thu Dec  6 14:37:13 [conn4]  authenticate: { authenticate: 1, user: "openshift",
 nonce: "665cedb4baf82b0d", key: "eec7b08761151c858c14058c2629dee6" }
Thu Dec  6 14:37:13 [conn4] command openshift.$cmd command: { authenticate: 1, u
ser: "openshift", nonce: "665cedb4baf82b0d", key: "eec7b08761151c858c14058c2629d
ee6" } ntoreturn:1 reslen:37 0ms
Thu Dec  6 14:37:13 [conn4] query openshift.system.namespaces nreturned:3 reslen
:142 0ms
Thu Dec  6 14:37:13 [conn4] run command openshift.$cmd { create: "testcollection
" }
Thu Dec  6 14:37:13 [conn4] create collection openshift.testcollection { create:
 "testcollection" }
Thu Dec  6 14:37:13 [conn4] New namespace: openshift.testcollection
Thu Dec  6 14:37:13 [conn4] adding _id index for collection openshift.testcollec
tion
Thu Dec  6 14:37:13 [conn4] build index openshift.testcollection { _id: 1 }
Thu Dec  6 14:37:13 [conn4] external sort root: /var/lib/mongodb/_tmp/esort.1354
804633.1660751058/
Thu Dec  6 14:37:13 [conn4]   external sort used : 0 files  in 0 secs
Thu Dec  6 14:37:13 [conn4] New namespace: openshift.testcollection.$_id_
Thu Dec  6 14:37:13 [conn4]   done building bottom layer, going to commit
Thu Dec  6 14:37:13 [conn4]   fastBuildIndex dupsToDrop:0
Thu Dec  6 14:37:13 [conn4] build index done 0 records 0.001 secs
Thu Dec  6 14:37:13 [conn4] command openshift.$cmd command: { create: "testcolle
ction" } ntoreturn:1 reslen:37 1ms
...

Finally I'll connect and query the database locally to check for the presence of the new collection.

mongo --username openshift --password dbsecret data1.example.com/openshift
MongoDB shell version: 2.0.2
connecting to: data1.example.com/openshift
> show collections
system.indexes
system.users
testcollection

This is really enough to demonstrate that the MongoDataStore object is properly configured and has the ability to read and write the database.  Just for completeness I'll go one step further and create a document.

Add a Document to the testcollection


Since the testcollection is the most recently added, it should be the last one in the collections list in the Rails console Mongo::DB object.  I can check by looking at the name attribute of that collection:

irb(main):007:0> store.db.collections[2].name
=> "testcollection"
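
Relying on the index position works here, but it's fragile.  If I remember correctly, the 1.x ruby driver also lets me fetch a collection by name with Mongo::DB#collection:

testcoll = store.db.collection("testcollection")   # fetch by name instead of position
testcoll.name                                      # => "testcollection"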

Now that I know I have the right one, I can add a document to it using the Mongo::Collection insert() method:

irb(main):008:0> store.db.collections[2].insert({'testdoc' => {'testkey' => 'testvalue'}})
=> BSON::ObjectId('50c0c2016892df2d56000001')

The logs show the insert like this:

Thu Dec  6 16:04:36 [conn11] run command openshift.$cmd { getnonce: 1 }
Thu Dec  6 16:04:36 [conn11] command openshift.$cmd command: { getnonce: 1 } nto
return:1 reslen:65 0ms
Thu Dec  6 16:04:36 [conn11] run command openshift.$cmd { authenticate: 1, user:
 "openshift", nonce: "19ca25a92ca483ee", key: "f7ac36d2e36a3a00a91d234a59a559e3"
 }
Thu Dec  6 16:04:36 [conn11]  authenticate: { authenticate: 1, user: "openshift"
, nonce: "19ca25a92ca483ee", key: "f7ac36d2e36a3a00a91d234a59a559e3" }
Thu Dec  6 16:04:36 [conn11] command openshift.$cmd command: { authenticate: 1, 
user: "openshift", nonce: "19ca25a92ca483ee", key: "f7ac36d2e36a3a00a91d234a59a5
59e3" } ntoreturn:1 reslen:37 0ms
Thu Dec  6 16:04:36 [conn11] query openshift.system.namespaces nreturned:5 resle
n:269 0ms
Thu Dec  6 16:04:36 [conn11] insert openshift.testcollection 0ms

And a quick CLI query to confirm that the document has been created:

mongo --username openshift --password dbsecret data1.example.com/openshift
MongoDB shell version: 2.0.2
connecting to: data1.example.com/openshift
> db.testcollection.find()
{ "_id" : ObjectId("50c0c2016892df2d56000001"), "testdoc" : { "testkey" : "testvalue" } }

Read a Document from the testcollection


In traditional database style, when you make a query you don't get back the single thing you asked for.  You get a Mongo::Cursor object which collects all of the documents which match your query.  Cursors respond to a next() method which does what you would think, returning each match in turn and nil when all documents have been retrieved.  The Mongo::Cursor also has a to_a method which converts the entire response into an array. I'll use that to get just the one I want.

irb(main):035:0> store.db.collections[2].find.to_a[0]
=> #<BSON::OrderedHash:0x3fbb2b27fea4
 {"_id"=>BSON::ObjectId('50c0c2016892df2d56000001'),
 "testdoc"=>#<BSON::OrderedHash:0x3fbb2b27fd50 {"testkey"=>"testvalue"}>}>

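If the collection held more than one document I could instead walk the cursor with next(), something like this (same console session assumed):

cursor = store.db.collections[2].find
while doc = cursor.next          # next() returns nil once the cursor is exhausted
  puts doc.inspect
end
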
I won't take up space showing the log entries for this query.  I know how to find them now if there's a problem.

Cleanup: Remove the testcollection


The final step in a test like this is always to remove any traces.  I can drop the whole collection with a single command.  This one I will confirm with the local CLI query, but the logs I'll leave for an exercise unless something goes wrong.

irb(main):037:0> store.db.collections[2].drop
=> true

You may notice that this was WAY too easy. Be careful when you're working on production systems, and prepare and test backups, OK?

When I look now on the CLI and ask for the list of collections, I only see two:

mongo --username openshift --password dbsecret data1.example.com/openshift
MongoDB shell version: 2.0.2
connecting to: data1.example.com/openshift
> show collections
system.indexes
system.users

Summary

In this post I showed how to access the OpenShift broker application using the Rails console.  I created an OpenShift::MongoDataStore object (using the OpenShift::DataStore factory).  I showed how to access the database from the CLI and where to find the MongoDB log files.  With these I was able to confirm that the OpenShift broker DataStore configuration was correct and that the database was operational.


Wednesday, December 5, 2012

Verifying the DNS Plugin using Rails Console

Each of the OpenShift broker plugins provides a class that implements the plugin's abstract interface.  In practical terms this means that I can fire up the rails console, create an instance of the plugin class and then use it to manipulate the service behind the plugin.

Since the DNS plugin has the simplest interface and Bind has the cleanest service logs, I'm going to demonstrate with that.  The technique is applicable to the other back-end plugin services.

Preparing Logging


To make life easy I'm going to configure logging on the DNS server host so that the logs from the named service are written to their own file.

A one line file in /etc/rsyslog.d will do the trick:

if $programname == 'named' then /var/log/named.log

Write that into /etc/rsyslog.d/00_named.conf and restart the rsyslog service.  Then restart the named service and check that the logs are appearing in the right place.

If I didn't filter out the named logs, I could still use grep on /var/log/messages to extract them.

Configuring the Bind DNS Plugin


As indicated in previous posts, the OpenShift DNS plugin is enabled by placing a file in /etc/openshift/plugins.d with the configuration information for the plugin.  The name of the file must be the name of the rubygem which implements the plugin, with the suffix .conf. The Bind plugin is configured like this:

/etc/openshift/plugins.d/openshift-origin-dns-bind.conf

BIND_SERVER="192.168.5.11"
BIND_PORT=53
BIND_KEYNAME="app.example.com"
BIND_KEYVALUE="put-your-hmac-md5-key-here"
BIND_ZONE="app.example.com"

When the Rails application starts, it will import a plugin module for each .conf file and will set the config file values.

The Rails Console


Ruby on Rails has an interactive testing environment.  It is started by invoking rails console from the root directory of the application.  If I start the rails console at the top of the broker application I should be able to instantiate and work with the plugin objects.

The rails console command runs irb to offer a means of manual testing.  In addition to the ordinary ruby script environment it imports the Rails application environment which resides in the current working directory. Among other things, it processes the Gemfile which, in the case of the OpenShift broker, will load any plugin gems and initialize them. I'm going to use the Rails console to directly poke at the back end service objects.

I'm going to go to the broker application directory.  Then I'll check that bundler confirms the presence of all of the required gems.  Then I'll start the Rails console and check the plugin objects manually.

cd /var/www/openshift/broker
bundle --local
....
Your bundle is complete! Use `bundle show [gemname]` to see where a bundled gem is installed.
rails console
Loading production environment (Rails 3.0.13)
irb(main):001:0> 

The last line above is the Rails console prompt.

Creating a DnsService Object


The OpenShift::DnsService class is a factory class for the DNS plugin modules. It also contains an interface definition for the plugin, though Ruby and Rails don't seem to be much into formal interface specification and implementation.  The plugin interface definitions reside in the openshift-origin-controller rubygem:

https://github.com/openshift/origin-server/tree/master/controller/lib/openshift

The factory classes provide two methods by convention: provider=() sets the actual class which implements the required interface, and instance() is the factory method, returning an instance of the implementing class.  They also keep a private class variable which holds a reference to the implementing class; when a plugin is loaded, it sets that reference in the factory class.

Once the broker application is loaded using the rails console I should be able to create and work with instances of the DnsService implementation.

The first step is to check that the factory class is indeed loaded and has the right provider set. Since I can just type at the irb prompt it's easy to see what's there.

irb(main):001:0> OpenShift::DnsService
=> OpenShift::DnsService
irb(main):002:0> d = OpenShift::DnsService.instance
=> #<OpenShift::BindPlugin:0x7f540dfb9ee8 @zone="app.example.com",
 @src_port=0, @server="192.168.5.11",
 @keyvalue="GwhJNLZPghbpTya2M6N+lvcLmBQx6TYbuH7j6TPyetE=",
 @port=53, @keyname="app.example.com",
 @domain_suffix="app.example.com">

Note that the class is OpenShift::BindPlugin and the instance variables match the values I set in the plugin configuration file. I now have a variable d which refers to an instance of the DNS plugin class.

The DnsService Interface


The DNS plugin interface is the simplest of the plugins.  It contains just four methods:
  • register_application(app_name, namespace, public_hostname)
  • deregister_application(app_name, namespace)
  • modify_application(app_name, namespace, public_hostname)
  • publish()
All but the last will have a side-effect which I can check by observing the named service logs and by querying the DNS service itself.

Note that the publish() method is not included in the list of methods with side effects.  publish() is always called at the end of a set of change calls; it is there to accommodate batch update processing.  Third-party DNS services which use web interfaces may require batch processing. The OpenShift::BindPlugin submits changes immediately.
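
To make the shape of the interface concrete, here's a skeleton of an implementing class.  This is only a sketch built from the four methods listed above, not the actual BindPlugin source:

module OpenShift
  class ExampleDnsPlugin
    def register_application(app_name, namespace, public_hostname)
      # create a CNAME: <app_name>-<namespace>.<zone> -> public_hostname
    end

    def deregister_application(app_name, namespace)
      # remove the CNAME created above
    end

    def modify_application(app_name, namespace, public_hostname)
      # point the existing CNAME at a new node
    end

    def publish
      # a no-op for servers that apply changes immediately;
      # batch-oriented back ends would flush pending changes here
    end
  end
end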

Change and Check


The process of testing now will consist of three repeated steps:

  1. Make a change
  2. Check the DNS server logs
  3. Check the DNS server response

I will repeat the steps once for each method (though I'll only show a couple of samples here).

The logs are time-stamped.  To make it easier to find the right log entry, I'll check the time sync of the broker and DNS server hosts, and then check the time just before issuing each update command.

First I check the date and add an application record.  An application record is a DNS CNAME record which is an alias for the node which contains the application. Here goes:

irb(main):003:0> `date`
=> "Wed Dec  5 15:25:59 GMT 2012\n"
irb(main):002:0> d.register_application "testapp1", "testns1", "node1.example.com"
=> ;; Answer received from 192.168.5.11 (129 bytes)
;;
;; Security Level : UNCHECKED
;; HEADER SECTION
;; id = 25286
;; qr = true    opcode = Update    rcode = NOERROR
;; zocount = 1  prcount = 0  upcount = 0  adcount = 1

OPT pseudo-record : payloadsize 4096, xrcode 0, version 0, flags 32768

;; ZONE SECTION (1  record)
;; app.example.com. IN SOA

The register_application() method returns the Dnsruby::Message it received from the DNS server.  A little digging should indicate that the update was successful.
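
If I capture the return value in a variable I can do that digging directly; I'm assuming here that Dnsruby::Message exposes the response code as rcode(), which is worth verifying against the dnsruby documentation:

response = d.register_application "testapp1", "testns1", "node1.example.com"
puts response.rcode   # NOERROR means the dynamic update was accepted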

Next I'll examine the named service log on the DNS server host.

tail /var/log/named.log
...
Dec  5 15:26:41 ns1 named[11178]: client 10.16.137.216#54040/key app.example.com: signer "app.example.com" approved
Dec  5 15:26:41 ns1 named[11178]: client 10.16.137.216#54040/key app.example.com: updating zone 'app.example.com/IN': adding an RR at 'testapp1-testns1.app.example.com' CNAME

Finally, I'll check that the server is answering queries for that name:

dig @ns1.example.com testapp1-testns1.app.example.com CNAME

; <<>> DiG 9.9.2-rl.028.23-P1-RedHat-9.9.2-8.P1.fc18 <<>> @ns1.example.com testapp1-testns1.example.com CNAME
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 10884
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;testapp1-testns1.example.com. IN CNAME

;; AUTHORITY SECTION:
example.com.  10 IN SOA ns1.example.com. hostmaster.example.com. 2011112904 60 15 1800 10

;; Query time: 3 msec
;; SERVER: 192.168.5.11#53(192.168.5.11)
;; WHEN: Thu Dec  5 15:28:41
;; MSG SIZE  rcvd: 108

That's sufficient to confirm that the DNS Bind plugin configuration is correct and that updates are working. In a real case I'd go on and check each of the operations. For now I'll just delete the test record and go on.


d.deregister_application "testapp1", "testns1"
=> ;; Answer received from 192.168.5.11 (129 bytes)
;;
;; Security Level : UNCHECKED
;; HEADER SECTION
;; id = 26362
;; qr = true    opcode = Update    rcode = NOERROR
;; zocount = 1  prcount = 0  upcount = 0  adcount = 1

OPT pseudo-record : payloadsize 4096, xrcode 0, version 0, flags 32768

;; ZONE SECTION (1  record)
;; app.example.com. IN SOA



dig @ns1.example.com testapp1-testns1.app.example.com CNAME
; <<>> DiG 9.9.2-rl.028.23-P1-RedHat-9.9.2-8.P1.fc18 <<>> @ns1.example.com testapp1-testns1.example.com CNAME
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 50598
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;testapp1-testns1.example.com. IN CNAME

;; AUTHORITY SECTION:
example.com.  10 IN SOA ns1.example.com. hostmaster.example.com. 2011112904 60 15 1800 10

;; Query time: 3 msec
;; SERVER: 192.168.5.11#53(192.168.5.11)
;; WHEN: Thu Dec  5 15:31:20
;; MSG SIZE  rcvd: 108

This is what a negative response looks like. There's a question section but no answer section.
Things are back where I started and I can move on to the next test.


Monday, December 3, 2012

OpenShift Broker Configuration and Log Files

There are a lot of moving parts in an OpenShift Broker service. There are the four back-end services to start with. Then there's the front end HTTP daemon and the Rails broker application. There's SELinux security and the Passenger Rails accelerator service. Each of these needs some kind of configuration which may need some tweaking. Each of them also has either a specific log file or some other output somewhere that can be used for status checks and diagnostics.

In this post I'm going to run down a list of these configurations and logs and the service components they relate to.  Each of these gets some attention in the Build-Your-Own wiki instructions.

Configuration Directories


The OpenShift broker service (even if you set aside the back-end services) is an amalgam of components.  Each of these may have some customization for the final working environment.  Each is also an opportunity for something to get broken or tweaked.

Without some understanding of the interactions between the components the set of configurations might seem unfathomable.  Even with some understanding it can be complex, but it does not need to be overwhelming.

These are the places where configuration files are known to lurk.  

OpenShift Broker Configuration Directories
  • /etc/openshift (Master Location): Master configuration directory for all openshift related services.
  • /etc/openshift/plugins.d (Broker Plugin Configuration): This is where plugin configuration files are placed. These files select the plugins for each back end service. They also contain customization (service location, authentication information, etc.).
  • /var/www/openshift/broker (Rails Application Root): This directory contains the Rails application which is the OpenShift Broker service. At the top level are the Gemfile and Gemfile.lock which control the application rubygems.
  • /var/www/openshift/broker/config/environments (Rails Configuration): This directory contains the Rails application "environments". Each file here corresponds to a possible run mode for the OpenShift broker service. See also /etc/openshift/development.
  • /var/www/openshift/broker/config/httpd/conf.d (Broker HTTPD): This directory contains the broker httpd configuration files.
  • /etc/httpd/conf.d (Front End HTTPD): This directory is the standard configuration location for the front-end Apache2 daemon.

If you're poking around wondering what goes on behind the scenes and how it's controlled, these are the places to start.

Configuration Files


Each of the locations above can contain a number of different and only marginally related configuration files. The list below contains all of the files that appear to need special attention of some kind during service configuration.  I don't try to mention every possible setting or switch here.  I'm just trying to give you an idea of what you might find in each one.  See the Build-Your-Own wiki page and the official OpenShift Enterprise service documentation for details.

OpenShift Broker Configuration Files
  • /etc/openshift/broker.conf (Shell Key/Value): Defines a number of parameters for the service. This is the production configuration.
  • /etc/openshift/broker-dev.conf (Shell Key/Value): Defines a number of parameters for the service. This is the development configuration.
  • /etc/openshift/development (none): When this file exists the broker service will start in dev mode, using the broker-dev.conf and development.rb files.
  • /etc/openshift/server_priv.pem (PEM/RSA): This key file is used to authenticate optional services. Generated by openssl.
  • /etc/openshift/server_pub.pem (PEM/RSA): This key file is used to authenticate optional services. Generated by openssl.
  • /etc/openshift/rsync_id_rsa.* (SSH/RSA): This key file pair is used to authenticate when moving gears from one node to another. Generated by ssh-keygen.
  • /etc/openshift/plugins.d/*.conf (Shell Key/Value): These are magic files. The file name must match the name of a local rubygem and end with .conf. The gem is loaded and the configuration file is parsed and included by the plugin gem. These plugins are loaded as part of the Rails start up process, as specified in the Gemfile.
  • /var/www/openshift/broker/Gemfile (Rails/Bundler): Defines the rubygem package requirements for the broker application. It is used by the bundle command to generate the Gemfile.lock.
  • /var/www/openshift/broker/Gemfile.lock (Rails/Bundler): Defines the actual rubygem packages which fulfill the broker application requirements on this system. It is regenerated each time the openshift-broker service is restarted.
  • /var/www/openshift/broker/httpd/conf.d/*.conf (Apache): Pick one of the auth conf samples. This file controls the broker service user identification/authentication when the "remote user" plugin is selected. The "remote user" plugin delegates the authentication to the httpd service, which can then use any auth module. Currently there are example config files for Basic auth, LDAP and Kerberos.
  • /etc/openshift/htpasswd (Apache): If the broker httpd uses the Basic Auth module, this file contains the username/password pairs for the broker service.
  • /var/www/openshift/broker/config/environments/production.rb (Ruby/Rails): Defines the production configuration values for the OpenShift broker service. Debugging stack traces are suppressed.
  • /var/www/openshift/broker/config/environments/development.rb (Ruby/Rails): Defines the development configuration values for the OpenShift broker service. Debugging stack traces are returned in line.
  • /etc/httpd/conf.d/000000_openshift_origin_broker_proxy.conf (Apache2): Defines the proxy configurations for the OpenShift broker and console services. It also sets the ServerName for the system as a whole.
  • /etc/mcollective/client.cfg (YAML): Defines the MCollective client communications parameters. It connects to the underlying message service. It also indicates where the client activity is logged and controls the logging level.

Broker Plugin Configuration Files


The files in /etc/openshift/plugins.d are a bit magical.  They are loaded when the Gemfile is processed as the Rails application starts.  Each file in that directory that ends in .conf will be processed.  The file name (minus the .conf extension) must be the name of a locally installed rubygem.  The named gem is loaded and the config file is then processed by the gem.  You can't just create a new config file there and put config values in it.  Well, you can, but it will cause your broker to fail.
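
In other words, the loading mechanism amounts to something like this sketch (illustrative only; the real logic lives in the broker's Gemfile and in the plugin gems themselves):

Dir.glob("/etc/openshift/plugins.d/*.conf").each do |conf|
  plugin_gem = File.basename(conf, ".conf")  # file name minus .conf == rubygem name
  gem plugin_gem                             # pull that gem into the bundle
end
# Each plugin gem, as it initializes, parses its own .conf file and
# registers its implementation class with the corresponding factory.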


Log Files



If things aren't behaving as you think they should, or if you just want to get a sense of how things should look, these are places you can check.

OpenShift Log Files
  • /var/log/messages (syslog): System-wide log file.
  • /var/log/mcollective-client.log (MCollective client): MCollective log file. Location and log level are defined in client.cfg.
  • /var/log/httpd/access_log (httpd): Front end proxy httpd.
  • /var/log/httpd/error_log (httpd): Front end proxy httpd.
  • /var/log/httpd/ssl_access_log (httpd): Front end proxy httpd.
  • /var/log/httpd/ssl_error_log (httpd): Front end proxy httpd.
  • /var/log/secure (syslog): System access.
  • /var/log/audit/audit.log (auditd): SELinux activity.
  • /var/www/openshift/broker/log/development.log (Rails): Logs from development mode.
  • /var/www/openshift/broker/log/production.log (Rails): Logs from production mode.
  • /var/www/openshift/broker/httpd/logs/access_log (Apache 2): Broker access.
  • /var/www/openshift/broker/httpd/logs/error_log (Apache 2): Broker errors.