Google Glass, the Mountain View company's entry into wearable computing, is polarizing. Some (me included) are excited that such a visible company is shining the light of its considerable resources on something that has been a glimmer in a geek's horn-rimmed spectacles for years. Others decry it as the death of all that's good. This is not a post debating the purity of the privacy unicorn, nor its possible bludgeoning posthaste.
Nor is this a post on augmented reality (AR, or - as the pedantic are quick to point out - what should be called mediated reality). There are plenty of folks aggressively building apps and services that overlay data onto the world being perceived. A peek at what's happening there can be found via the Twitter hashtag #throughGlass (although, as I write this, I see it's mostly just pictures snapped with the headset).
While being able to snap photos without unlocking my phone is incrementally better, I'm more interested in radical gains. In the past, one of those sources of revolutionary productivity increases has been investment in my multi-monitor setup. Going from one screen to two was transformative. Expanding it to three was profound. When developing, the less switching I have to do between various windows, the better. When surfing the Csikszentmihalyi flow, to have something on screen is to have it at the forefront of my mind.
Despite progressively adding an inch here and there to my current setup, however, I'm still limited to a fixed viewpoint directly in front of where I'm sitting. That made sense in the age of heavy CRT monitors. Those were the lead-lined behemoths that required three requisition forms and a small forklift to move to another cubicle. Even with today's svelte flat screens, our integrated development environments (or IDEs) still maintain the kind of navigation (scrolling, window tabs, minimizing, maximizing) made necessary by limited visible size. But scrolling is a hack. Our necks are capable of a wide degree of articulation. Could a wearable computing option increase the screen real estate devoted to our favorite IDE?
According to the Google Glass specs, the little heads-up screen is the equivalent of a 25-inch HD screen viewed from about 8 feet away. However, that assumes you're looking directly ahead at a fixed point; exactly what we don't want. As Google's own Easter egg shows, there's plenty of opportunity for large-scale displays with panoramic gestures.
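Out of curiosity, we can turn that spec into an angular size - how big a slice of your field of view the display actually occupies. A quick back-of-the-envelope sketch (assuming a 16:9 aspect ratio, which the spec doesn't state):

```python
import math

# Google's claim: the display looks like a 25-inch HD screen
# seen from 8 feet away. 16:9 aspect ratio is my assumption.
DIAGONAL_IN = 25.0
DISTANCE_IN = 8 * 12  # 8 feet, in inches
ASPECT_W, ASPECT_H = 16, 9

# Recover width and height from the diagonal via Pythagoras.
diag_units = math.hypot(ASPECT_W, ASPECT_H)
width_in = DIAGONAL_IN * ASPECT_W / diag_units
height_in = DIAGONAL_IN * ASPECT_H / diag_units

# Angular size as seen by the eye, in degrees.
h_fov = 2 * math.degrees(math.atan(width_in / 2 / DISTANCE_IN))
v_fov = 2 * math.degrees(math.atan(height_in / 2 / DISTANCE_IN))

print(f"physical: {width_in:.1f} x {height_in:.1f} inches")
print(f"angular:  {h_fov:.1f} x {v_fov:.1f} degrees")
```

That works out to roughly 13 by 7 degrees - a small window pinned in front of one eye, which is exactly why the head's range of motion matters so much.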
A normal human neck can flex downward (an action referred to as 'flexion') 40 to 60 degrees, enough to touch your chin to your chest. Extension is when you crane your head upward - another 45 to 70 degrees. Rotation is, you guessed it, turning from left to right. From looking straight ahead, you should be able to turn your head to either side 60 to 80 degrees. You would never scroll - you'd simply look up and to the left. You could have that library and the half dozen files where it is used open simultaneously.
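Taking the conservative ends of those ranges, and a display that occupies roughly 13 by 7 degrees of view (my own estimate from the 25-inch-at-8-feet spec), we can sketch how much virtual real estate a head-pinned desktop would offer. All numbers here are ballpark assumptions, not measurements:

```python
# Conservative ends of the neck range-of-motion figures.
ROTATION_DEG = 60 * 2   # 60 degrees to either side
FLEXION_DEG = 40        # chin toward chest
EXTENSION_DEG = 45      # craning upward

# Estimated angular size of one Glass-sized "screen"
# (assumed, derived from the 25-inch-at-8-feet spec).
DISPLAY_H_DEG = 13.0
DISPLAY_V_DEG = 7.3

# How many non-overlapping screens fit in the sweep?
cols = int(ROTATION_DEG // DISPLAY_H_DEG)
rows = int((FLEXION_DEG + EXTENSION_DEG) // DISPLAY_V_DEG)
print(f"{cols} columns x {rows} rows = {cols * rows} virtual screens")
```

Even with conservative numbers, that's on the order of a hundred Glass-sized panes arrayed around your head - which is why "just look up and to the left" beats scrolling.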
Some might argue that the recent Kickstarter darling, the Oculus Rift, might be better for immersion. However, I can't see it being viable in a workplace environment. Best case, you aren't aware of your coworker happening upon you as you curse the ramifications of their recent check-in. Worst case, you'd remove the headset from a long deep-dive only to discover your compatriots have scrawled "BALLS" across the front in silver Sharpie. At least that's what I'd consider (don't judge).