Using Debian packages (.deb files) to package and deploy software across a large number of machines is a really powerful technique. It can be a bit esoteric (what isn't in unix land?), but once you get the general idea of how it works, you will be amazed at how cool it is. Obviously there are a bunch of systems out there for handling this packaging/install process, so why pick Debian over something else?
Pros:
- The online documentation is solid and easy to find.
- It enforces a set of best practices that are backed by well-defined documentation.
- An extremely thorough lint mechanism (lintian) checks that the .deb you have built is valid and follows those best practices.
- No DSL. Doesn't require learning anything more than bash/make/ant or hiring expensive consultants.
- The process of installing/updating .deb packages on machines is easily scripted with bash.
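As a sketch of what that scripting looks like (the package name and the `APT_GET` override are my own illustration, not from the original setup), a pull-based install wrapper needs only a few lines of bash:

```shell
#!/bin/bash
# Sketch of a pull-based install wrapper. The package name used below is
# hypothetical; APT_GET can be overridden (e.g. APT_GET=echo for a dry run).
set -e

install_latest() {
    local pkg="$1"
    # Refresh the package index from the central repository, then install
    # whatever the newest published version of the package is.
    ${APT_GET:-apt-get} update -qq
    ${APT_GET:-apt-get} install -y "$pkg"
}

# Usage (requires root on a real machine):
#   install_latest myapp-server
```

Loop that function over a list of packages and you have the whole deploy step in one small script.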
- It can be easily tested on a local VMware Ubuntu image.
- Security: md5sums of all files, gpg-signed .deb files and repository keys, and automatic validation.
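As a quick illustration of the md5sum side (the package name here is a placeholder; debsums is a real Debian tool that must be installed separately):

```
# Verify every installed file of a package against the md5sums
# recorded in the package itself:
debsums myapp-server
```

On the archive side, apt itself refuses a repository whose gpg-signed Release file fails verification against a trusted key, so validation happens automatically at install time.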
- The distribution model fits my iterative development style: add a config file to a server that points to the branch (trunk/an iteration) you want to install, and the system will automatically choose and install the latest version of the package.
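For illustration only (the repository URL and branch names here are invented, not the author's actual setup), such a per-server config file can be a one-line apt source that pins the machine to a branch:

```
# /etc/apt/sources.list.d/myapp.list  (hypothetical repository URL)
deb http://packages.example.com/apt trunk main
# Track an iteration branch instead by swapping the distribution name:
# deb http://packages.example.com/apt iteration-5 main
```

With that file in place, a plain apt-get update/install on the box pulls the newest package version published to that branch.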
- It is a pull-based model. You log into the server you want to install software on, run aptitude/apt-get, and it connects via http to our central distribution host and downloads the latest version of the requested software. No need for jump hosts, and the signed files provide an adequate level of security.
- There is a fairly sophisticated dependency mechanism and resolution system.
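The dependency information lives in the package's control file. A hypothetical fragment (the package names are invented for illustration) shows both versioned and unversioned dependencies, which apt resolves and installs automatically:

```
Package: myapp-server
Depends: default-jre, myapp-common (>= 1.2), adduser
Recommends: logrotate
Description: Example server package
 Demonstrates the Depends/Recommends fields that drive apt's
 dependency resolution.
```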
- It is easily integrated with our Ant/Hudson build system and doesn't require a mess of servers/services to be running either on the build server or the servers we deploy to.
- There are no limitations on what you can do on the system you are installing onto.
- All of the configuration is done through a set of clearly defined small text files that are checked into subversion with each project.
- It is easy to ask for user input and process that information as part of the installation process.
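Debian's standard mechanism for this is debconf. Here is a sketch of a maintainer config script (the template name myapp/listen-port is invented; this fragment only runs on a Debian system with debconf present, though db_input, db_go, db_get, and $RET are the real debconf shell API):

```sh
#!/bin/sh
set -e
# Load the debconf shell API.
. /usr/share/debconf/confmodule

# Ask the question (db_input returns nonzero if the question was
# skipped or already answered, hence the || true).
db_input high myapp/listen-port || true
db_go

# Read the answer back; debconf places it in $RET.
db_get myapp/listen-port
echo "configured port: $RET" >&2
```

Because answers can be preseeded, the same package installs interactively on a laptop and silently in an automated deploy.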
Cons:
- It really only works if you are using some flavor of Debian (like Ubuntu) on your servers. Since that is what I prefer anyway, it is less of an issue for me.
- Like anything new, it takes time to get up to speed on, and you end up analyzing existing Debian packages to understand how others chose to implement things. On the other hand, this is also a benefit... pick a package similar to what you want to install, see how someone else did it, and then replicate it yourself.
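Two real commands make that kind of archaeology easy (the package name nginx and the file some-package.deb below are placeholders for whatever you want to study):

```
# Grab the full source, including the debian/ directory, of a package
# whose packaging you want to imitate:
apt-get source nginx

# Or pull apart an existing binary .deb: -x extracts the file payload,
# -e extracts the control files and maintainer scripts.
dpkg-deb -x some-package.deb extracted/
dpkg-deb -e some-package.deb extracted/DEBIAN/
```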
Tools that I considered are:
- cfengine - The old complicated beast. Typically overkill. Has its own DSL. Many processes running.
- puppet - Less about distributing software and more about making sure all systems are configured correctly. Documentation is questionable. Requires learning a DSL. Not really necessary if you use kickstart for the initial bootstrap and make it easy to deploy code to servers: if you need a machine reconfigured, you fix kickstart, wipe the box, and reinstall. Requires a daemon on the box.
- chef - Documentation is questionable. Requires learning a DSL plus Ruby and writing recipes for everything. No lint process. The server is a mess of complicated, brittle projects (so much so that Opscode is pushing their 'platform' as the way to go). Requires a daemon on the box.
- fabric - A closer fit, as Python suits this better than Ruby, but I prefer the pull-based method of distribution over pushing via ssh keys.
- capistrano - Fairly ruby-centric focus. Simple DSL. Push based system.
In future posts, I'll go through and talk about how .deb files are developed and deployed.