Getting to know Google Glass

This post was written by Mark Aberdour and first appeared on the Epic blog on 5th February 2014.

Epic is lucky enough to be part of the Glass Explorer programme, and this weekend I was pleased to be able to take a Google Glass home to learn and experiment with. We took the device up to the Learning Technologies 2014 show last week, where it was a big draw on the stand and a great way to introduce people to this new world of wearable technology.

People who are used to voice input for their computers, such as sat nav and Siri users, will probably feel quite at home with Glass. I’ve not got any voice input devices myself, and although my Android phone has Google voice search, it had never really interested me until now. So I had to get over the initial self-consciousness of using voice input for the first time. Extroverts and show-offs may enjoy that, but not me. A name has even been coined for these folks: Glassholes.

Anyway, voice control is a major part of the Glass experience, so I got stuck in regardless. While navigating the Glass menu system by voice is very easy, the speech recognition comes unstuck when you reach your contacts directory. My surname turns out to be phonetically similar to a colleague’s, so Glass ended up sending several photos that I had intended for Mrs Aberdour to Gavin Beddow, Epic’s Head of Content Technology!

Fortunately you can also control Glass by touch. A tap on the side opens the menu; swipe backwards or forwards along the side to scroll through menu options, and tap again to select one. Swipe down to go back one step. Push the button on top to take a photo, or push and hold it to take a video. That was all quite easy to pick up with the assistance of Google Glass Help, which I had to access from my tablet while I was getting familiar with things.
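For developers, those same touchpad gestures are exposed through the Glass Development Kit. Here is a minimal sketch, assuming the GDK Developer Preview’s GestureDetector API; the activity name and log tag are just illustrative:

```java
import android.app.Activity;
import android.os.Bundle;
import android.util.Log;
import android.view.MotionEvent;

import com.google.android.glass.touchpad.Gesture;
import com.google.android.glass.touchpad.GestureDetector;

// Hypothetical activity name; a minimal GDK Developer Preview sketch.
public class TouchpadDemoActivity extends Activity {

    private GestureDetector gestureDetector;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Wire up the GDK touchpad gesture detector.
        gestureDetector = new GestureDetector(this)
                .setBaseListener(new GestureDetector.BaseListener() {
                    @Override
                    public boolean onGesture(Gesture gesture) {
                        switch (gesture) {
                            case TAP:          // single tap: select the current item
                                Log.d("TouchpadDemo", "tap");
                                return true;
                            case SWIPE_RIGHT:  // swipe forward: next item
                            case SWIPE_LEFT:   // swipe backward: previous item
                                Log.d("TouchpadDemo", "swipe: " + gesture);
                                return true;
                            case SWIPE_DOWN:   // swipe down: go back one step
                                finish();
                                return true;
                            default:
                                return false;
                        }
                    }
                });
    }

    // Glass routes touchpad events through onGenericMotionEvent.
    @Override
    public boolean onGenericMotionEvent(MotionEvent event) {
        return gestureDetector.onMotionEvent(event);
    }
}
```

The tap/swipe-forward/swipe-back semantics an app receives here mirror the system UI, which is presumably why the gestures feel consistent everywhere on the device.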

There appears to be no way to open a URL directly which is what I immediately wanted to do, but you can do a Google voice search to get you online. However, you can’t search for a URL directly, at least the ones I tried. You need to launch a web page from the Google search results. The speech recognition in Google voice search is extremely hit and miss though, and gives some really bizarre interpretations of your voice commands. Given that the whole future of Glass depends on this feature, I think it puts things on pretty shaky ground. I can imagine there are a LOT of Google engineers focusing on speech recognition right now!
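As an aside for developers: since Glass runs Android under the hood, I’d expect a GDK app to be able to open a page directly with a standard VIEW intent, even though the stock Glass UI doesn’t expose this. A minimal sketch under that assumption, with an illustrative class name and URL:

```java
import android.app.Activity;
import android.content.Intent;
import android.net.Uri;

// Hedged sketch: Glass is Android-based, so a standard ACTION_VIEW intent
// should hand the URL to the on-device browser. Not something the stock
// Glass UI exposes, and untested by the author of this post.
public class UrlOpener {

    // Illustrative helper; call from any Activity context.
    public static void openUrl(Activity activity, String url) {
        Intent intent = new Intent(Intent.ACTION_VIEW, Uri.parse(url));
        activity.startActivity(intent);
    }
}
```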

Once you get a decent search result you can tap to open it in the device’s web browser. The resolution is 640px wide by 360px high, so mobile, responsive sites obviously work best. Sites can scroll, but on first sight there appears to be no way to click links. The most intuitive thing to do is simply to use a voice command and read out the link you want to follow, but that didn’t work. I had to Google how to navigate a website on Glass, yet again reverting to my tablet for online help. It turns out you can select links using a two-finger tap, which lets you turn your head to scroll left and right, up and down, position a target icon over a link, and then tap again to select it. It’s actually really cool, and Google seem to have nailed it.

From what I understand, though, the more interesting power of this device comes from its location-aware and context-aware services through integration with Google Now. I am keen to see more of that potential, as this is where Google Glass will tie into our work at Epic as a device for learning and performance support. It makes me somewhat uncomfortable to enter that world, though, because the more you become immersed in Google Now, the more you have to tie everything in your life to your Google account. In the desktop/tablet/smartphone world my approach is to disable many Google options so that the company can only track the bare minimum of my data. To get the most out of the context-aware, location-aware and communications services that Google Glass offers, I’d have to turn all that back on and become Google’s digital serf in the process.

It took me about an hour to become confident using Google Glass by both voice and touch control. There is an expectation these days that new gadgets should ‘just work’ without needing instructions, largely thanks to Apple’s incredible advances in user interface design. In the case of Glass, however, I’m quite happy with that investment of time, given the radical departure from the types of computing we’ve been used to in previous decades. As a bonus, it fitted OK on top of my ‘traditional’ glasses too, although not a good enough fit that I’d be confident venturing outside. If this were my own Google Glass, I think I’d probably do it. I guess that means I’m kind of hooked, or at least wowed by the technology. It just means that I’d have to pander to Google’s whims in the process, and that, frankly, fills me with dread. Maybe I’ll go and re-read Jaron Lanier’s You Are Not A Gadget and get some renewed perspective on that one before making a decision.

The original version of this post appeared on Mark’s blog, Open Thoughts.