Part I: Recent User Experience Changes

In this three-part series (see Part 2 & Part 3), we'll briefly explore relatively recent technological changes and their impact on our interaction with books, movies, and music. While there are numerous implications for producing content, I'll mostly focus on the impact on consuming content. It's pretty clear that the time is coming when books, movies, and even music will be produced in different forms than they are now, but that would be a different series. In this first part, we'll focus on the changes we're seeing; the second part will cover how those changes affect the way we consume content; and the final part will look at the disruptions already underway and reflect on near-term changes.

Surface Pro 3 touch screen

Some of the patterns emerging with this year's technology, beyond wearables and connecting items that aren't computing devices ("the internet of things"), are changes in the user experience, especially touch and smart, voice-enabled interfaces. In particular, we're seeing OS-like features, such as search, embedded within devices and applications. There is a clear focus on voice requests/commands and intelligent voice search taking on more tasks in more places. We've seen this with the intelligent speaker Amazon Echo (a.k.a. Alexa) and on our phones with Cortana, Google Now, and Siri. We also see less intelligent voice-activated search, like "OK Google," which transcribes your search request but returns the results in a traditional manner. Soon we'll see it via Cortana in Windows 10. We have gone from disconnected PDAs to connected phone PDAs to phone computers to embedded intelligent personal assistants. Interacting with these personal assistants has gone from a physical keyboard to a virtual keyboard (including Swype) to a voice interface for straightforward tasks such as setting up appointments. We're also seeing those assistants initiate actions, such as rerouting our drive based on traffic or telling us to leave earlier than normal for an appointment because of it.

So, we've been on this journey with touch as an interface for our computers and phones. Now we're augmenting it further with voice. I have been using touch on computers, specifically the Surface RT, the Surface Pro 3, and the Lenovo Yoga, and, of course, on my Lumia 920 Windows phone. Touch has been an integral part of my computing experience for roughly two years. While I don't use touch for everything, and the keyboard/mouse combination still does the yeoman's work for creating content, I do find myself irritated when touch isn't available. It is a natural response, when I'm showing somebody something, to touch the screen and expect it to respond. It has taken less than two years for touch to become an expected interface. I suspect I'll come to expect a voice-enabled interface in an even shorter period of time. Currently, I use an Amazon Echo at home; I also have an Amazon Fire Stick and use an Android phone as the remote control, with its voice interface, to search and perform actions on the TV. Finally, I use Dragon NaturallySpeaking for initial draft writing. I've become spoiled quickly. With the Echo, for example, there is a physical remote control that I rarely use, even if it takes me a few more tries to make myself clear to Alexa (the "wake-up" word for the Echo and the name by which our family knows her). I would rather train it than actually pick up a remote. Pathetic, I know.

How all of this verbal engagement with the objects around us will change the way we interact with them and with each other will be interesting to see. We've all seen the funny guy who appears to be talking to himself but is merely speaking into his Bluetooth headset. We've come to understand that, typically, when somebody is speaking to thin air, there is a Bluetooth headset involved somewhere. Now we're speaking to our phones, to our computers, and even interacting with our text messages via voice. It becomes a little odder to be around this activity. For example, I can be in a normal conversation in our car and receive a text while I'm driving. My Bluetooth headset verbally notifies me of the incoming text and asks if I want it read, so I say the words "read it" out loud with no context for anyone else. No one else heard the announcement, since the headset is in my ear. They don't know what I'm talking to them about; then it dawns on them: oh, he got a message. It's funny how quickly we get used to things, including speech with no connection to what's going on around us.


Hannu Rajaniemi, author of The Quantum Thief

We have biometric security. We have devices that tell us to get up and go for a walk, what our heart rate is, and how many calories we've burned. We have devices that tell us when to go to a meeting, when our next task is due, and when to leave early because of heavy traffic. We can touch, talk, listen, and, yes, even type with various devices to get them to do what we want. They are being embedded in our lives in ways they have never been before. They manage the temperatures in our houses, our refrigerated foods, our home security and lights, and our cars. We depend on them to augment our memory and help us easily retrieve information. We have yet to determine how they change us and what this dependence means. (For a cool novel that ponders these questions, especially our dependence on intelligent devices, see Alena Graedon's The Word Exchange.) Hannu Rajaniemi also explores the issue in The Quantum Thief (and continues that exploration in the rest of the Jean le Flambeur series) through the use of exomemory (memory storage) and gevulot (a public/private key handshake protocol), where politeness and societal standing dictate how much of yourself you share (how far you open your gevulot) and keys dictate access to everything, including your exomemory.


Alena Graedon, author of The Word Exchange

So here we are in this touch, voice, and learning-OS world where we have predictive searching, predictive commands, and predictive keyboards; a world in which search results are pre-fetched based on patterns of use. This makes things easier and more convenient. The challenge, of course, is the security, safety, and privacy of our interactions with these computing devices and all of that lovely data. When Cortana can tell I'm near a store and remind me to get the milk, it's a beautiful thing in terms of convenience and of avoiding spousal chiding for forgetting once again (not to say that's ever happened to me). However, it means Cortana is aware of my location, my intent (at least from a shopping perspective), and my history, especially how I respond to such suggestions. That's not to say we should never take risks in order to do things more efficiently. It's simply to say we need to go in with eyes wide open, be prepared to deal with the fallout from stolen or misused data, and push companies to mitigate the risk to a reasonable level.
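To make that "remind me near the store" convenience concrete, here is a minimal sketch of how a location-based reminder might be triggered. This is not any actual Cortana (or Echo) API; the store coordinates, radius, and reminder text are made-up placeholders, and a real assistant would of course also learn from your history and intent rather than rely on a single hard-coded geofence.

```python
import math

# Hypothetical geofenced reminder: fire when a location update falls
# within TRIGGER_RADIUS_METERS of the store. All values are placeholders.
STORE_LAT, STORE_LON = 47.6205, -122.3493
TRIGGER_RADIUS_METERS = 200
REMINDER = "Pick up the milk"

def distance_meters(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    r = 6_371_000  # Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def check_reminder(current_lat, current_lon):
    """Called on each location update; fires the reminder inside the geofence."""
    if distance_meters(current_lat, current_lon, STORE_LAT, STORE_LON) <= TRIGGER_RADIUS_METERS:
        print(f"Reminder: {REMINDER}")

# Example: a location update about 30 meters from the store triggers the reminder.
check_reminder(47.6207, -122.3490)
```

Even in this toy version, the trade-off in the paragraph above is visible: the reminder only works because something is continuously receiving your coordinates and knows what you intend to buy.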

Finally, we don't know how these changes in our interactions will affect us long term. We could, for example, study executives from the 1940s through, say, the 1970s who used dictation devices and/or secretaries, and extrapolate to our experience with voice interaction. While there is some difference in knowing that a human is on the other end of that dictation (even if it's a human listening to a recording), the effects of a verbal approach to wording on our thinking, and any other behavioral changes, could be similar. That's not to say all the changes would be bad; it's simply that we need to look at them. It may be, for example, that a touch interface allows us to mentally automate some actions, freeing us for deeper thinking, or that it involves more disparate parts of the brain throughout the process, building synaptic bridges and improving our thinking. In summary, we need to research, review, and be willing to adjust based on what we discover about the different types of interface.

Dragon NaturallySpeaking with a Sennheiser headset

One nice benefit I've noticed in using voice dictation for the initial draft is that I review the text more carefully. In other words, when I do an initial write-up by typing, I subconsciously presume that what I typed is what I intended (despite a plethora of evidence to the contrary). That makes it hard to catch typos, omissions, or poorly phrased paragraphs. My best method for catching these has been to read what I've written out loud. While that's not fool-proof (and no substitute for having a genuine editor review the work), it helps reduce errors considerably. Prior to using voice dictation, however, I would reserve this level of effort for high-profile communication (such as email sent to the company at large, writing for public blogs, or freelance writing) and not bother for less formal writing. If the draft is done through voice, though, I always review, and review more carefully. While Dragon NaturallySpeaking does a tremendous job (especially paired with my old Sennheiser headset) of converting my speech to text, it still makes some errors, mostly due to my enunciation challenges, and it also has some punctuation and capitalization issues (not because you can't insert them via voice, but because I don't always speak them aloud). So, review is always warranted. Essentially, I am able to use voice dictation for written communication in a way that ups my game in terms of polish while requiring no significant additional time or effort compared to typing. Clearly, we see a behavior change based on technology.

In this part, we looked at some trends in technology and raised some general questions about their implications. In the next part, we'll begin to explore those implications as applied to books, music, and movies.
