Second day of my Ant template lead-up, second post. Hello, inertia. Nice to meet you. Check out the prelude if you haven’t already.
Although my script is templated, it’s templated for my environment – both corporate and personal. Because I’m fortunate enough to be able to guide our corporate infrastructure, it’s been molded in the image of my personal environment. The script has certain expectations that, within the context of that infrastructure, are entirely fair and valid. Outside of that infrastructure, those expectations may not be valid. As such, I think it might be helpful to know a little bit about my environment; it may assist in reading the scripts and in understanding where changes could or should be made.
As is the case in most shops I’ve ever seen or worked in, I have three clusters: development, staging and production. There are also infrastructure and configuration aspects that are shared. We’ll start with the “universal” components.
To dispense with the generics, the environment is a LAMP stack. Sure, there are some outlier projects that run other packages, but those can be ignored in this context.
The File System
In addition to a directory reserved for shared services that has been added to our include_path in php.ini, we have any number of project roots (one per project, as you might expect). Project roots are organized like so:
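Reconstructed from the descriptions that follow, the layout looks roughly like this; treat it as a sketch rather than a byte-for-byte copy of a real project root (the nesting of bin/ under html/ follows from the "sits in the web root" note below):

```
project/
├── _meta/      # versioned files not needed at runtime (docs, the build script)
├── classes/    # supporting classes; on the include_path, outside the web root
├── html/       # the project web root
│   └── bin/    # NAS-mounted user-contributed content
└── maint/      # maintenance screen resources
```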
The _meta/ directory contains any versioned files and code that are not required at runtime – for example, documentation and the build script itself. In fact, one of the things the build does is delete this directory once the deployment is complete. This directory does contain files that may be used during deployment, like SQL files to create a database or shell scripts that aggregate any number of actions.
The classes/ directory contains the project’s supporting class files. These aren’t part of the web root, but are added to the include_path for the project. This keeps business logic outside of the web root. Handy for all of that top secret, for-your-eyes-only work I do. Heh.
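As a sketch only, the per-project include_path wiring might look something like this in an Apache vhost or .htaccess; the paths are placeholders, with /usr/local/share/php-services standing in for the shared-services directory mentioned earlier:

```
php_value include_path ".:/usr/local/share/php-services:/var/www/project/classes"
```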
The html/ directory is the project web root and contains all of the runtime files that need to be available to the web server.
The maint/ directory contains resources to display a maintenance screen. Slightly oversimplified, if the file /maint/maintenance.htm exists, it will be displayed. Period. One of the first tasks the build script performs is to enable the maintenance screen so that users aren’t greeted with a big server error.
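A hedged sketch of that "if it exists, display it, period" behavior in Apache mod_rewrite terms; the Alias target and file paths here are assumptions, not lifted from my actual configuration:

```
# Expose the maint/ directory (a sibling of the web root) at /maint/
Alias /maint /var/www/project/maint

RewriteEngine On
# If the maintenance file exists, every request not already under
# /maint/ gets the maintenance screen instead.
RewriteCond /var/www/project/maint/maintenance.htm -f
RewriteCond %{REQUEST_URI} !^/maint/
RewriteRule ^ /maint/maintenance.htm [L]
```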
Most projects I’ve worked on involve some degree of what I call user-contributed content. What I mean is that users can upload files to the system. Sometimes that means a lot of files. I consolidate that content in a single directory, named bin/, that sits in the web root. To help manage and serve that content, the bin/ directory and its subdirectories are NAS mounts. Every project has one and it’s created when the project is created on the server. For developers, it just exists.
The build must adjust for the fact that bin/ can’t be deleted, nor can its permissions be altered. Hence its mention here.
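A minimal sketch of how a deploy step can clear out a project’s web root while leaving the NAS-mounted bin/ directory alone. All the paths and file names here are hypothetical stand-ins for the real project layout:

```shell
# Stand in for the project web root with a temp directory.
webroot=$(mktemp -d)
mkdir -p "$webroot/bin"                 # pretend this is the NAS mount
echo "user upload" > "$webroot/bin/photo.jpg"
echo "old" > "$webroot/index.php"       # file from the previous release
mkdir "$webroot/_meta"                  # stale dir from the previous release

# Clear the old release, but skip bin/: it can't be deleted and its
# permissions can't be changed, so the build works around it.
find "$webroot" -mindepth 1 -maxdepth 1 ! -name bin -exec rm -rf {} +

echo "new" > "$webroot/index.php"       # stand-in for unpacking the new build
```

The `! -name bin` test at depth 1 is what does the adjusting: everything else is removed, while the mount point and the user content beneath it survive the deploy untouched.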
Development

I do the bulk of my work locally as, I believe, should all developers. It just eliminates so many potential problems. It’s also a good learning experience. I think all web developers should be able to perform at least basic debugging and maintenance tasks on their web and database servers, etc. That said, I do have a development cluster that serves to meet needs that can’t be met on my laptop.
My development cluster includes a deployment server where developers can test (what else?) the build script they’ve tailored to their own application and its deployment needs. It’s also available for testing integration with third-party systems that may not be reachable from the desktop environment, for providing shared access where business owners and stakeholders can review progress, and so on.
In the development cluster, all machines have the following naming convention:
For example, buildsrv.dev.robwilkerson.org.
Staging & Production
The goal, of course, is to have staging mirror production as closely as possible. I think this is the goal in every shop and it’s no different here. We do a pretty decent job, I think, so I feel comfortable lumping these two environments for the purpose of discussion.
Machines in these clusters are predictably named according to the following convention:
I’m admittedly not as anal as I should be about the naming convention (which is surprising for me). “dev” is sometimes “devel”, “prd” is sometimes “prod”, etc. The script covers all of my typical variations, as you’ll see.
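As an illustration of what "covers all of my typical variations" might mean, here’s a small shell sketch that normalizes the environment segment of a hostname. The function name and the hostname are my own hypothetical examples, not lifted from the actual script:

```shell
# Map the loose environment spellings onto canonical short names.
normalize_env() {
  case "$1" in
    dev|devel|development) echo dev ;;
    stg|stage|staging)     echo stg ;;
    prd|prod|production)   echo prd ;;
    *)                     echo unknown ;;
  esac
}

# Pull the second dotted segment out of a hostname and normalize it.
host="websrv.prod.robwilkerson.org"
env_part=${host#*.}          # strip through the first dot
env_part=${env_part%%.*}     # keep up to the next dot -> "prod"
normalize_env "$env_part"    # -> prd
```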
In order to lock down these machines, we have a specialized build user with developer-level access to touch the areas of the file system required by builds. That user has specialized sudo privileges that allow it to do whatever it needs to do within the context of the build. Similarly, we have a debug user with those same privileges: once a build has been executed on one of these clusters, developers may need direct access to the machine in order to debug any problems. The debug user is locked by default and rarely unlocked, but it’s available if the need arises.
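For illustration only, the sudo grants for such a build user might look something like this; the commands are hypothetical stand-ins for whatever the deploy steps actually need, not my real policy:

```
# /etc/sudoers.d/build -- hypothetical entries, not the real grants
build   ALL=(root) NOPASSWD: /sbin/service httpd graceful
build   ALL=(root) NOPASSWD: /bin/rm -rf /var/www/*/_meta
```

The idea is least privilege: enumerate the exact commands the build runs under sudo rather than handing the account blanket root.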
So that’s the world my template lives in. Hopefully, understanding the differences between my world and yours will make it easier to make any necessary modifications. Next up: the deployment process that my script must facilitate. Stay tuned.