Add to Google Reader Extension for Safari 5

I’ve long envied the polished appearance of Safari from the safe distance of my Firefox usage. On several occasions I’ve tried to set Safari as my default browser, but it never stuck. There were just too many functional niceties in Firefox that I couldn’t manage to get along without. That changed last week with the introduction of Safari 5.

The web inspector has finally caught up with Firebug (at least enough to make it a very usable alternative) and, at long last, there are extensions. Extensions! The availability of real extension support (not that goofy, half-assed SIMBL hackery that was effective, but…ugh) opens the door to solving a lot of the personal workflow woes I’ve had in the past. Throw in a customized application hot key (Cmd+K to access the search box directly, the way I could in Firefox) and it’s almost just like using Firefox, except faster, prettier and, so far, without the memory suck.
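For anyone wanting to replicate that hot key, OS X supports per-application menu shortcuts via defaults. A sketch, assuming the search field is reachable through a menu item titled “Google Search…” (check Safari’s menus and match the title exactly, then restart Safari):

# "@" stands for the Command key in NSUserKeyEquivalents
$ defaults write com.apple.Safari NSUserKeyEquivalents -dict-add "Google Search…" "@k"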

In an effort to solve one of my major issues with Safari, I found an extension that hijacks the RSS button that appears in Safari’s address bar when a feed is detected and redirects to Google Reader. I loved that the functionality was completely unobtrusive. No new buttons, no badges. The extension took a UI element that already existed and was useful, but repurposed it to be even more useful to me. Perfect. The implementation stopped a bit short of what I was looking for, but the developer, Chupa, was good enough to make his source code available on Github. I forked his repository and made the changes I wanted.

If you’re looking for an extension that will add a feed directly to Google Reader and bypasses the iGoogle/Reader option page unless you specifically enable an option that lands you there, you might like my version of Chupa’s Add to Google Reader extension for Safari 5. You can download and install it directly or you can check out the source code in my Codaset project.

Install memcached on OSX via MacPorts

Evidently I’ve become too accustomed to MacPorts installs being just a little too easy to get working. Virtually every install I’ve done has been as simple as typing port install port-name (plus variants as desired). Today it wasn’t, and it took me far longer than it should have to track down the problem.

Having exactly zero experience with memcached, I set out to get it installed for use as a lightning quick quasi-message repository. I was a little surprised at how few instructions I found for installing the necessary components on OS X via MacPorts, but I managed to cobble together what I needed from various Linux instructions and configuration bits. I will now share the fruits of my labor with you:

# Install the executable
$ sudo port install memcached

# Install the bindings for PHP5
$ sudo port install php5-memcached

# Verify that the executable exists in your path
$ which memcached
/opt/local/bin/memcached

# Configure memcached to execute on startup, if desired
$ sudo launchctl load -w /Library/LaunchDaemons/org.macports.memcached.plist

# Start memcached for the current session
$ memcached -d -m 24 -p 11211
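Once it’s up, a quick sanity check is to hit the daemon with the stats command over a raw TCP connection (nc ships with OS X; the quit command closes the connection so nc exits cleanly):

$ printf 'stats\nquit\n' | nc localhost 11211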

That’s the “easy” part that I managed to get through pretty quickly. Unfortunately, it didn’t work. I wrote a simple, stupid PHP script to test and got a message that read Class ‘Memcached’ not found. It seems that MacPorts installs all of the necessary files and even creates an Ubuntu-like, separate ini file, appropriately named memcached.ini, that php.ini includes in order to load the extension:

extension=memcached.so

This is exactly the way I’d seen it written in each of the tutorials I’d found online, so I spent some time investigating other possibilities before I came back to it.

The problem, as it turns out, is that MacPorts doesn’t install the shared object file in its extensions/ directory. Instead, it stores it in a cryptically named subdirectory.
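If you want to see where the shared object actually landed, a quick look (assuming the default MacPorts prefix of /opt/local) turns it up:

$ find /opt/local/lib/php/extensions -name memcached.so
/opt/local/lib/php/extensions/no-debug-non-zts-20060613/memcached.so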

To get memcached fully operational I suppose you could move/copy the shared object to /opt/local/lib/php/extensions/, but I chose to edit the memcached.ini file to include the full path to the shared object:

extension=/opt/local/lib/php/extensions/no-debug-non-zts-20060613/memcached.so

I should add that I did try creating a symlink, but that didn’t work for me. Rather than spend any time figuring out whether it was a me problem, I decided to take the path of least resistance and just spell out the fully qualified path.

Restart Apache and your simple, stupid test script should work without error. For whatever it’s worth, here’s my simple, stupid script:

$memcached = new Memcached();
$memcached->addServer( 'localhost', 11211 );

echo "<h2>Memcached version:</h2>";
new PHPDump( $memcached->getVersion() ); // PHPDump is my own dump/debug helper class
exit;
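If you need the restart command itself, mine looks like this (the path assumes the default MacPorts Apache layout):

$ sudo /opt/local/apache2/bin/apachectl restart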

I’m using the php52 package (5.2.13), but I didn’t see anything to indicate that this wouldn’t work with 5.3 as well.

Share an Apache Config With Dropbox

Like many, I run local development environments. I have no love for a shared development environment. Also like many, I split time between two computers—one at home and another at the office. Finally, still clutching the “like many” mantra, my work-life balance kind of sucks. My vocation is also my avocation, so when I’m working on something interesting, it follows me around from location to location, computer to computer with no regard for this mythical concept of balance.

I’ve always had different systems for work and play, and it’s no secret that I’m a huge fan of Dropbox, so sharing what I need to (and can) has always been part of my setup. In the past, though, my systems were heterogeneous: a Mac at home, Windows or Linux at the office. Because the environments were different enough, sharing was often a rudimentary effort that involved multiple variations of a file, each optimized for its own environment. Still useful for access and versioning, but there’s no meaningful sharing going on there.

These days both of my machines are Macs (yay!) and both run Apache installed via MacPorts, so a few months back I decided it was time to share properly. My httpd.conf file was already in my Dropbox, as were a few projects that I like to have access to on either machine, so I expected this to be easy. And it was.

Except that it wasn’t. The sand in the gears is that my username on each machine is different. That makes my home directory different on each machine, and Macs, more so than either Windows or Linux, really encourage users to keep everything in their home directories. To some degree, that last statement is my own projection (I realize that it’s possible and even easy to install and store files anywhere), but I’ve really liked keeping everything in that tight a grouping and wanted to continue doing so. That left me with the problem that I couldn’t hard-code my paths.

Before we go on, I should outline my configuration at a very high level. I do everything web-related in virtual hosts. The reasons for doing so are beyond the scope of this post, but suffice it to say that it’s something I think every developer should be doing. The fact that I still see developers working in circa-1998 directories beneath a single web root makes me crazy.
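If you’ve not worked with them, a virtual host is just a per-site block in the Apache config. A minimal sketch (the host name and path here are placeholders, not my actual config):

<VirtualHost *:80>
    ServerName myproject.local
    DocumentRoot "/var/www/myproject.local"
</VirtualHost>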

Anyway, back to the sharing. My Dropbox is in the standard location at ~/Dropbox and my development environments are all in ~/Development/domains. Similarly, for convenience, I keep my virtual host configs in individual files that I can easily include or uninclude. Those that I want to be able to access on either machine are stored in my Dropbox (~/Dropbox/Application Support/apache/conf.d) and those that I don’t are stored in my Application Support directory (~/Library/Application Support/MacPorts/apache/conf.d).

That’s a lot of resources tucked neatly into a directory that’s different on each of the machines I want to share across. Fortunately, Apache understands environment variables, so I just tweaked my httpd.conf and shared virtual host config files, replacing each path that explicitly referenced my home directory with ${HOME}. For example:

Include "${HOME}/Dropbox/Application Support/apache/conf.d/*.conf"
Include "${HOME}/Library/Application Support/MacPorts/apache/conf.d/*.conf"

Once complete, I bounced Apache and everything worked.

Except that it didn’t. You didn’t really think it’d be that easy, did you?

Some time later, after a reboot, I noticed that Apache didn’t start automatically like it always had in the past. I don’t reboot often, so my Apache config changes were ancient history; I just chalked it up to a hiccup, started Apache manually and went on with my day. Eventually, in spite of the infrequency of reboots, I came to recognize a pattern. Something was wrong.

This post has become longer than I intended, so I’ll cut to the chase. The problem is that, at boot, Apache isn’t started as my user, so the ${HOME} I told it to use either isn’t set or doesn’t point at my home directory. Once booted, that’s not a problem, so manual starts worked just fine. I tried several solutions and asked questions related to this on StackOverflow here, here and here (in the order of their asking). Eventually I had to settle for a Linux-like file system config coupled with symlink usage to maintain my OS X consolidation.

I already had my Apache config file (/opt/local/apache2/conf/httpd.conf) symlinked to a physical file in my Dropbox, so that was fine. Next I needed to access my shared and local virtual host config files, so I created /opt/local/apache2/conf.d. Inside of that directory, I created two symlinks: local pointed to my non-shared virtual host files in ~/Library/Application Support/MacPorts/apache/conf.d and shared pointed to the shared config files in ~/Dropbox/Application Support/apache/conf.d.

Next I needed to be able to access my development environments from outside of my home directory. I chose the Linux-like path of /var/www. I created a symlink such that /var/www pointed to ~/Development/domains.
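Altogether, the new layout comes down to a handful of symlinks (paths exactly as described above; sudo because /opt/local and /var belong to root):

# Local (non-shared) and shared virtual host configs
$ sudo mkdir -p /opt/local/apache2/conf.d
$ sudo ln -s ~/Library/Application\ Support/MacPorts/apache/conf.d /opt/local/apache2/conf.d/local
$ sudo ln -s ~/Dropbox/Application\ Support/apache/conf.d /opt/local/apache2/conf.d/shared

# Development environments, reachable from outside my home directory
$ sudo ln -s ~/Development/domains /var/www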

Finally, I just updated my Apache config so that it loaded the virtual host config files using a relative path:

Include "conf.d/*.conf"
Include "conf.d/*.conf"

And I updated all of my virtual host configurations so that all resources are accessed through /var/www instead of my home directory.

Git Tip: Ignore Changes to Tracked Files

Every once in a while, I find myself working on a project that forces me to modify key files, often config files, in order to get it running locally. In those cases, the last thing I want to do, for a number of reasons, is commit those changes. That’s hard to avoid, though, since I regularly use git add . and/or git ci -a (an alias for git commit -a) to commit everything I’ve changed. Make enough changes in enough files that you don’t want to commit, and those shortcuts begin to cause as many problems as they solve.

As is so often the case, Git comes to the rescue, this time with its update-index command. Reading the documentation, it’s not really intended for this purpose, but its effectiveness as a “coarse file-level mechanism to ignore uncommitted changes in tracked files” is acknowledged. To see it in action, make a change to a committed file, say, database.yml, and execute git status. Git should report the modified file. Since we don’t want to commit the change, don’t want to see the file listed as modified until the end of time, and can’t ignore it (because it’s already being tracked), we need to tell Git to assume the file is not changed:

git update-index --assume-unchanged path/to/database.yml
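When I eventually do need to commit a change to one of these files, the flag clears the same way:

git update-index --no-assume-unchanged path/to/database.yml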

I’ve been using this command since I learned of it a few weeks ago and it works perfectly for this use case. Inevitably, though, a question will arise:

What files have I marked this way?

Since those files will no longer appear in the modified list and aren’t recorded anywhere as discoverable as a .gitignore file, there will eventually be a need to know which they are. Maybe you’re trying to get another instance running, or maybe you’re just the curious sort and you’ve forgotten. Like many things in Git-land, the functionality exists, but it’s far from obvious. I asked on StackOverflow and Andrew Aylett provided the answer I was looking for.

If you ever find yourself needing to know, this command will display the files that have been marked --assume-unchanged.

git ls-files -v | grep -e "^[hsmrck]"
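The lowercase letter at the start of each line of output is the status tag git ls-files -v assigns; h marks a file with the assume-unchanged bit set. With the example above, the output would look something like this:

h path/to/database.yml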

Enhancing SuperDuper Backups

I’ve been using SuperDuper for a few years now. It’s saved my tuchus a few times and provided tremendous convenience on many more. In the last few months, though, I’ve noticed a few things that annoy me after a backup completes:

  1. Growth
  2. Web Server Conflicts

Growth

I archive to sparse bundles, and those only grow. After a few backups, a bundle can get insanely huge, taking up more space than its contents actually need, though it can be compacted. I have two separate, bootable backups stored on a single 320GB external hard drive, so this growth can quickly consume a disk that size.
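Compacting is a one-liner with hdiutil, provided the image isn’t mounted (the bundle here is one of my own, from the script below):

$ hdiutil compact "/Volumes/SuperDuper Backups/mbp17.daily.mwf.sparsebundle"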

Web Server Conflicts

I use an Apache instance that I installed via MacPorts. I added the location of its apachectl executable to my path so that I can stop and start the web server the same way I’m used to doing on my Linux servers. To avoid confusion, I disable execute permissions on OS X’s native apachectl script. The problem is that OS X expects its apachectl script to be executable, and its repair permissions operation restores that permission.
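The path tweak is a single line in my shell profile (~/.profile here; the location assumes the default MacPorts Apache install):

# Find the MacPorts apachectl ahead of the native one
export PATH=/opt/local/apache2/bin:$PATH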

Since I ask SuperDuper to repair permissions before every backup, the built-in apachectl script becomes executable again. If and when I then have to bounce my web server, the command sudo apachectl restart actually starts the native Apache instance, which conflicts with mine (they share the same port) and causes all sorts of preventable mayhem that used to take me a while to track down until I learned to recognize the symptoms.

Remediation

One of the things I love about SuperDuper is that it provides hooks into its process. These hooks allow me to write a script and tell SuperDuper to execute it when the backup is complete. So I did.

#!/usr/bin/ruby
require 'logger'

log_file = '/Users/me/Desktop/post-backup.log'
File.delete( log_file ) if File.exists? log_file
log = Logger.new( log_file )

#
# Compact the MWF backup volume if it's not currently mounted
# (a mounted image can't be compacted)
#
if !File.exists? "/Volumes/mbp17.daily.mwf"
  # Append hdiutil's output so it doesn't clobber the log entries
  system "hdiutil compact '/Volumes/SuperDuper Backups/mbp17.daily.mwf.sparsebundle' >> #{log_file}"
  log.info 'Finished compacting mbp17.daily.mwf.sparsebundle'
end

#
# Compact the TTS backup volume if it's not currently mounted
#
if !File.exists? "/Volumes/mbp17.daily.tts"
  system "hdiutil compact '/Volumes/SuperDuper Backups/mbp17.daily.tts.sparsebundle' >> #{log_file}"
  log.info 'Finished compacting mbp17.daily.tts.sparsebundle'
end

#
# Ensure that the native OS X Apache install remains
# disabled after permissions are repaired.
#
log.info 'Disabling execute permissions for OS X Apache components'
system 'chmod a-x /usr/sbin/httpd'
system 'chmod a-x /usr/sbin/apachectl'
system 'chmod a-x /usr/sbin/apxs'

As I mentioned above, I have two backups that run on a schedule. One runs Monday, Wednesday and Friday evenings and the other runs Tuesday, Thursday and Saturday evenings. The backup that is currently running cannot be compacted because its volume is mounted, which explains the test for whether a backup image is mounted before attempting to compact it.

Because this is a new process, I create a log on my desktop and write a few things to it just in case there’s a problem. Because it’s on my desktop, my (mild) OCD reminds me to look at it and delete it regularly (read: daily). I’ll remove the log prints once I feel comfortable that everything is doing what it’s supposed to be doing.

Finally, I re-disable the execute permission on the native apachectl script, as well as on a few other executables that belong to OS X’s built-in Apache instance.
