Enhancing SuperDuper Backups

I’ve been using SuperDuper for a few years now. It’s saved my tuchus a few times and provided tremendous convenience many more. In the last few months, though, I’ve noticed two things that annoy me after a backup completes:

  1. Growth
  2. Web Server Conflicts

Growth

I archive to a sparse bundle, and sparse bundles only grow. After a few backups, a bundle can get insanely huge, occupying more space than its contents actually need, but it can be compacted. I have two separate, bootable backups that I store on a single 320GB external hard drive, so this growth can quickly consume a disk that size.
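
Compacting is a one-liner with hdiutil. Here’s a minimal sketch, assuming a hypothetical bundle path; the image shouldn’t be mounted when you run it:

# Reclaim unused space from a sparse bundle (image must not be mounted).
hdiutil compact '/Volumes/SuperDuper Backups/example.sparsebundle'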

Web Server Conflicts

I use an Apache instance that I installed via MacPorts. To stop and start that instance, I added the location of its apachectl executable to my path so that I can control the web server the same way I’m used to doing on my Linux servers. To avoid any confusion, I disable execute permissions on OS X’s native apachectl script. The problem is that OS X expects its apachectl script to be executable, so its repair permissions operation restores the execute bit.
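
For context, a minimal sketch of that setup, assuming the default MacPorts apache2 location (adjust the paths for your install):

# Put the MacPorts apachectl ahead of the system one.
export PATH=/opt/local/apache2/bin:$PATH

# Disable the native apachectl so there's no ambiguity about which one runs.
sudo chmod a-x /usr/sbin/apachectl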

Since I ask SuperDuper to repair permissions before every backup, the built-in apachectl script becomes executable again. If and when I have to bounce my web server, sudo apachectl restart then starts the native Apache instance, which conflicts with mine (they share the same port) and causes all sorts of preventable mayhem that used to take me a while to track down until I learned to recognize the symptoms.
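
When those symptoms do appear, a quick way to see which httpd owns the port is lsof. A sketch, assuming the default port 80:

# Show the process listening on port 80...
sudo lsof -iTCP:80 -sTCP:LISTEN

# ...then check its full path (substitute the PID reported above).
ps -p <pid> -o command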

Remediation

One of the things I love about SuperDuper is that it provides hooks into its process. These hooks allow me to write a script and tell SuperDuper to execute it when the backup is complete. So I did.

#!/usr/bin/ruby
require 'logger'

log_file = '/Users/me/Desktop/post-backup.log'
File.delete( log_file ) if File.exist? log_file
log = Logger.new( log_file )

#
# Compact the MWF backup volume if it's not currently mounted
#
unless File.exist? '/Volumes/mbp17.daily.mwf'
  system "hdiutil compact '/Volumes/SuperDuper Backups/mbp17.daily.mwf.sparsebundle' >> #{log_file}"
  log.info 'Finished compacting mbp17.daily.mwf.sparsebundle'
end

#
# Compact the TTS backup volume if it's not currently mounted
#
unless File.exist? '/Volumes/mbp17.daily.tts'
  system "hdiutil compact '/Volumes/SuperDuper Backups/mbp17.daily.tts.sparsebundle' >> #{log_file}"
  log.info 'Finished compacting mbp17.daily.tts.sparsebundle'
end

#
# Ensure that the native OS X Apache install remains
# disabled after permissions are repaired.
#
log.info 'Disabling execute permissions for OS X Apache components'
system 'chmod a-x /usr/sbin/httpd'
system 'chmod a-x /usr/sbin/apachectl'
system 'chmod a-x /usr/sbin/apxs'
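
The script just needs to live somewhere stable and be marked executable before pointing SuperDuper at it. A sketch, with a hypothetical path:

# Make the post-backup script executable (adjust to wherever you keep it).
chmod 755 ~/bin/post-backup.rb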

As I mentioned above, I have two backups that run on a schedule. One runs Monday, Wednesday and Friday evenings and the other runs Tuesday, Thursday and Saturday evenings. The backup that is currently running cannot be compacted because its volume is mounted, which explains the test for whether a backup image is mounted before attempting to compact it.

Because this is a new process, I create a log on my desktop and write a few things to it just in case there’s a problem. Because it’s on my desktop, my (mild) OCD reminds me to look at it and delete it regularly (read: daily). I’ll remove the log writes once I feel comfortable that everything is doing what it’s supposed to be doing.

Finally, I re-disable the execute permission on the native apachectl script, as well as on a few other executables that belong to OS X’s built-in Apache instance.

Sync MySQL Databases

I run three different environments for robwilkerson.org. I have a local development environment where I update Chyrp (the blogging software that runs the site), tweak parts of my theme and introduce new modules to the configuration. I also have a staging environment where I ensure that changes made in dev look and work okay in an environment that closely resembles my final environment: production.

Something I’ve long wanted to do is keep the staging and production MySQL databases synced so that the two environments resemble each other even more closely and I get a better feel for the impact of the changes I make as they move up the stack. This morning I finally set about implementing it. The process itself is pretty straightforward and looks like this:

  1. Export the production database to a SQL script
  2. In the exported script, replace any references to the production database name with the staging database name
  3. Execute the script against the staging database

The script to execute that process looks like this:

mysqldump -umyusername \
          -pmypassword \
          --opt \
          --no-create-db \
          --complete-insert \
          --databases production-db-name | \
sed 's/production-db-name/staging-db-name/' | \
mysql -umyusername \
      -pmypassword \
      -D staging-db-name

Finally, I created a cron job to run this command every night at midnight. In my setup, the production and staging databases live on the same host and the command runs on that same machine, so neither the mysqldump command nor the mysql command needs the -h flag. If the databases lived on different hosts, I’d need to add -h to at least one of them.
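
For the record, the crontab entry looks something like this, assuming the pipeline above is saved to a hypothetical sync-staging.sh script:

# minute hour day-of-month month day-of-week command
0 0 * * * /Users/me/bin/sync-staging.sh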

I wrote this primarily to supplement my own memory, but I offer it to you at no cost. I’m not a MySQL DBA, nor do I care to become one in the near future; if there’s an easier or better way, I’d love to hear about it.