Monday, August 18, 2014

Hey! I'm Back (and the Cloud is Bigger than Ever)

After a few months spent trying to do Businessy things that I don't think I'm very good at, it looks like I'll be back in the software dev/sysadmin space again for a while.

You'll notice a name change on the blog: it's no longer just about OpenShift.  Red Hat is getting into a number of related, innovative and promising projects and trying to make them work together.  I'm assisting on several of these projects wherever an extra hand is needed, and I get to learn all kinds of cool stuff in the process.

The projects all revolve around one form of "virtualization" or another, and all of the effort goes into taking these tools and using them to create enterprise-class services.

OpenStack
OpenStack is essentially Amazon Web Services® for on-premises use.  To put it another way, it's an attempt to mechanize all of the functions of all of the groups in a typical enterprise IT department: networking, data center host management, OS and application provisioning, storage management, database services, user management and policies, and more.

Merely replacing all of the people in an organization who do these things would be boring (and counterproductive). What OpenStack really offers is the ability to push control of the resources closer to the real user, offering self-service access to things which used to require coordination between experts and representatives from a number of different groups, with the expected long lead times.  The ops people can focus on making sure there are sufficient resources available, and the users, the developers and the application admins can just take what they need (subject to policy) to do their work.

Now that's nice for the end user.  They get a snazzy dashboard and near-instant response to requests.  But the life of the sysadmin hasn't really changed much; only the parts she runs have.  The sysadmin still has to create, monitor and support multiple complex services on real hardware, and she can't easily delegate the parts to the old traditional silos.  The sysadmin can't be concerned only with hardware and OS and NIC configuration.  The whole network fabric (storage too) has to be understood by everyone on the design, deployment and operations team(s).  Message to sysadmins: don't worry one bit about job security, so long as you keep learning like crazy.

Docker
Docker (and more generally "containerization") is the current hot growth topic.

Many people are now familiar with Virtual Machines.  A virtual machine is a process running on a host machine which simulates another (possibly totally different) computer.  The virtual machine software simulates a whole computer right down to mimicking hardware responses.  From inside the virtual machine it looks like you have a complete real computer at your disposal.

The downside is that VMs require the installation and management of a complete operating system within the virtual machine.  VMs allow isolation but have a lot of heft to them.  The host machine has to be powerful enough to contain whole other computers (sometimes many of them) while still doing its own job.

Docker uses some newish ideas to offer a middle ground between traditional multi-tenant computing, where a number of unrelated (and possibly conflicting) services run as peers on a single computer, and the total isolation (and duplication) that VMs require.

The enabling technologies are known as control groups (cgroups) and kernel namespaces.  The names are unimportant, really.  What namespaces do is allow the host operating system to provide each process with a distinct, carefully tuned view of just the parts of the host that the process needs to do its job. The view is called a container, and any processes which run in the container can interact with each other as normal.  However, they are entirely unaware of any other processes running on the host.  In a sense containers act as blinders, protecting processes running on the same host from each other by preventing them from even seeing each other.
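You can actually peek at namespaces from any Linux shell, no Docker required.  Here's a small sketch of my own in Go (since we'll get to Go below) that reads the kernel's own bookkeeping under /proc; the set of namespace types listed is just a sample, not exhaustive:

```go
package main

import (
	"fmt"
	"os"
)

// namespaces reads the symlinks under /proc/self/ns, which name the
// kernel namespaces this process belongs to.  Two processes in the
// same namespace see identical targets (e.g. "uts:[4026531838]");
// a process inside a container sees different ones.
func namespaces() map[string]string {
	ns := make(map[string]string)
	for _, name := range []string{"uts", "pid", "net", "mnt", "ipc"} {
		if target, err := os.Readlink("/proc/self/ns/" + name); err == nil {
			ns[name] = target
		}
	}
	return ns
}

func main() {
	for name, target := range namespaces() {
		fmt.Printf("%-4s -> %s\n", name, target)
	}
}
```

Run it twice on the same host and the targets match; run it inside a container and they don't.  That difference is the whole trick.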

Docker is a container service which standardizes and manages the development, creation and deployment of the containers and their contents in a clear and unified way.  It provides a means to create a single-purpose container for, say, a database service, and then to ship that container and run it anywhere Docker is installed.
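The whole recipe for such a single-purpose image fits in a Dockerfile of a few declarative lines.  This is a sketch, not a production recipe; the base image and package names are just illustrative:

```dockerfile
# A single-purpose image: one service, one concern.
FROM fedora:20
RUN yum -y install redis && yum clean all
EXPOSE 6379
CMD ["redis-server"]
```

Something like `docker build -t cache .` followed by `docker run cache` would build the image and start the one process it exists to run.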

Kubernetes
While Docker itself is cool, it really focuses on the environment of a single host and on individual images and containers.  Kubernetes is a project initiated at Google and since adopted by a number of other software and service vendors.  Kubernetes aims to provide a way for application developers to define and then deploy complex applications composed of a number of Docker containers and potentially spread over a number of container hosts.
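The definitions are declarative: you describe what the application looks like and Kubernetes figures out where to run it.  The manifest format is still evolving, but a pod, the basic unit of containers that get co-scheduled on one host, might be sketched like this (the names and images here are hypothetical):

```yaml
# A hypothetical two-container pod: a web frontend plus a cache,
# placed together on one host and sharing a network identity.
apiVersion: v1
kind: Pod
metadata:
  name: guestbook
spec:
  containers:
  - name: frontend
    image: example/frontend   # hypothetical image name
    ports:
    - containerPort: 80
  - name: cache
    image: redis
```

The point isn't the syntax; it's that the whole shape of the application lives in one versionable document instead of in a runbook.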

I think Kubernetes (or something like it) is going to have a really strong influence on the acceptance and use of containerized applications.  It's likely to be the face most application operations teams see on the apps they deploy.  It's going to matter to both the Dev and the Ops elements, because it will be critical to the design and deployment of complex applications.

As a sysadmin this is where my strongest interest is.  Docker and Atomic are parts, Kubernetes is the glue.

Project Atomic

And where do you put all those fancy complex applications you've created using Docker and defined using Kubernetes?  Project Atomic is a Red Hat project to create a hosting environment specifically for containerized applications. 

Rather than running (I mean: installing, configuring and maintaining) a general-purpose computer running the Docker daemon, a Kubernetes agent and all of the other attendant internals, Project Atomic will provide a host definition tuned for use as a container host.   A general-purpose OS installation often has a number of service components which aren't necessary and may even pose a hazard to the container services.  Project Atomic is building an OS image designed to do one thing: run containers.

Atomic is itself a stripped-down general-purpose OS.  It can run on bare metal, on OpenStack, or even on public cloud services like AWS, Rackspace or Google Cloud.

Go
It's been a long time since I worked in a system-level language.  Go (or golang, to distinguish it from the venerable Chinese strategy board game) is a new language created at Google by Robert Griesemer and two luminaries of early Unix, Rob Pike and Ken Thompson.  It aims to address some of the shortcomings of C in the age of distributed and concurrent programming, neither of which really existed when C was created.
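You can see the concurrency pitch in miniature.  Goroutines and channels are language primitives, not a bolted-on thread library; this little fan-out/fan-in sketch of my own (not from any of the projects above) squares some numbers in parallel and collects the results:

```go
package main

import (
	"fmt"
	"sync"
)

// sumOfSquares computes the squares of 1..n concurrently, one
// goroutine per value, and fans the results into a single channel.
func sumOfSquares(n int) int {
	results := make(chan int)
	var wg sync.WaitGroup
	for i := 1; i <= n; i++ {
		wg.Add(1)
		go func(v int) {
			defer wg.Done()
			results <- v * v
		}(i)
	}
	// Close the channel once every worker has sent its result,
	// so the range loop below knows when to stop.
	go func() {
		wg.Wait()
		close(results)
	}()
	sum := 0
	for r := range results {
		sum += r
	}
	return sum
}

func main() {
	fmt.Println(sumOfSquares(4)) // 1 + 4 + 9 + 16
}
```

Doing the same thing in C means threads, mutexes and condition variables; here the synchronization is the channel itself.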

Docker and several other significant new applications are written in Go, and it's catching on with system-level developers.  I quickly bumped up against my scripting-language habits when I started getting into Go, and I was reminded of why system languages are still important.  It's refreshing to know I can still think at that level.

I think Go is going to spread quickly in the next few years and I'm going to learn to work with it along with the common scripting environments.

Look Up: There's more than one kind of cloud.

In the past I've been focused on one product and one aspect of Cloud Computing.  Make no mistake, Cloud Computing is still in its infancy and we're still learning what kind of thing it wants to grow up into.  The range of enterprise deployment models is getting bigger.  Applications can be delivered as traditional software, as VM images for personal or enterprise use (from VirtualBox and Vagrant to OpenStack to AWS), and now as containers, which sit somewhere in between.  Each has its own best uses and we're still exploring the boundaries.

So now I'm going to branch out too, looking at each of these and at how they fit together.  My focus is still going to be what's going on inside, the place where you can stick your hand in and lose fingers.  Lots of other people are talking about the glossy paint job and the snazzy electronic dashboard.  I'll leave that to them.

Tut Tut... it looks like rain....(but I like the rain)


