What Does The Emotiv Headset Mean for the Future of UX Design?

The Emotiv EPOC headset is a spidery-looking bit of headgear that translates electrical impulses from your brain, head movements, and facial expressions into digital input. And, as discussed in depth here, it has a .NET API.

For serious. They have an API for your thoughts now.

Setting aside the obvious implications for gamers and people with disabilities (and, I guess, gamers with disabilities), what could this sort of device mean for the future of business applications?

Last year, Siri delivered a serious upgrade in human-machine interaction to the masses (at least, those masses who could swing an iPhone 4S in this economy). Android is still flailing to match it. Even the latest version of Windows bothers with the mouse and keyboard only under the heading of backward compatibility.

Tomorrow’s more successful nerds will need to learn to see around the ways in which the traditional keyboard-video-mouse (KVM) interface to software has boxed in our thinking. What will be possible and probable when you can regularly expect to speak and think at your applications? What will the screen look like?

Consider that it’s the thrilling year 2012, and we are still making plenty of audio-only telephone calls. That’s not because the technology for video phones isn’t there. The ubiquitous videophone never happened because its use case was fiction: a real telephone conversation involves walking, driving, or just not being presentable or stationary in any way.

So, maybe we only bother with the screen under the heading of backward compatibility.

Maybe the new interface is that would-you-like-fries-with-that sort of headset, with the occasional display available when needed. Maybe the fact that I’m picturing a headset at all is paleofuture, and rooms will just come standard with transducers for your innermost twitches and ideas. Creepy.

Anyway. Back to mind controlling your software. Here’s where you can check out the Emotiv development experience for free, complete with a software emulator for the headset.
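If you want a feel for the shape of the thing before you commit to hardware, here’s a toy sketch: an application subscribing to mental commands the way it would subscribe to mouse clicks. To be clear, FakeHeadset and MentalCommandEventArgs are stand-ins I invented to mimic what a software emulator does; the real SDK’s types and names will differ.

```csharp
using System;
using System.Threading;

// Stand-in event data: what a "detected thought" might carry.
class MentalCommandEventArgs : EventArgs
{
    public string Action { get; set; } // e.g. "push", "pull", "lift"
    public double Power { get; set; }  // detection strength, 0.0 to 1.0
}

// A fake headset that emits a random command once a second,
// much like pointing the real SDK at its software emulator.
class FakeHeadset
{
    public event EventHandler<MentalCommandEventArgs> MentalCommand;

    public void Run()
    {
        var rng = new Random();
        var actions = new[] { "push", "pull", "lift" };
        while (true)
        {
            Thread.Sleep(1000);
            var handler = MentalCommand;
            if (handler != null)
            {
                handler(this, new MentalCommandEventArgs
                {
                    Action = actions[rng.Next(actions.Length)],
                    Power = rng.NextDouble()
                });
            }
        }
    }
}

class Program
{
    static void Main()
    {
        var headset = new FakeHeadset();

        // Your application binds to thoughts the way it binds to clicks.
        headset.MentalCommand += (sender, e) =>
        {
            Console.WriteLine("Detected '{0}' at power {1:P0}", e.Action, e.Power);
            if (e.Action == "push" && e.Power > 0.6)
                Console.WriteLine("  -> Submitting the form. No mouse required.");
        };

        headset.Run();
    }
}
```

The point of the event-driven shape is that "think at your applications" slots into the same programming model we already use for every other input device.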

It Makes One of These Out of U and I

While I was taking in a recent episode of Dot Net Rocks (the one covering the new LightSwitch rapid development platform), co-host Richard Campbell said something in passing that caught my ear. He described a hypothetical enterprise development scenario in which the “senior guys are building a set of WCF services that the, um, client-building guys can access”.
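To make that scenario concrete, the seam between those two groups might look something like the following. This is a minimal sketch; the contract and its members are made up for illustration.

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

// The "senior guys" publish a contract like this; the client-building
// guys generate a proxy and code their screens against it.
[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    Customer GetCustomer(int customerId);

    [OperationContract]
    void SaveCustomer(Customer customer);
}

[DataContract]
public class Customer
{
    [DataMember]
    public int Id { get; set; }

    [DataMember]
    public string Name { get; set; }
}
```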

I batted an eye. Why are we assuming that our senior developers should be focused on the server-side code?

This is no slight to Campbell, of course—he probably meant nothing by it at all, good-natured Canadian citizen that he is. But it called to mind tales I had heard of concerns being separated across a team in just this way. The more experienced members of the development staff get assigned the IMPORTANT BACK-END WORK, while the new kid gets to, y’know, bring up some screens.

Sure, the nether layers of your stack contain critical elements that must be tightly engineered for performance and accuracy. There is little value in a brilliantly laid-out web page that can’t perform, or in a screen that renders the wrong answers in a gorgeous typeface.

But is this really a working model for a team shipping successful software? Can we afford to regard an application’s User Experience as a foregone conclusion, or to pretend that the real art to be created is in the logic and plumbing of the application?

In the age of jQuery, WPF/Silverlight, and the iPad, our users are expecting a friendlier and more compelling experience from their software every day. Is the server really the place where a seasoned application developer can provide the most value? Would their skill and experience not be better put to use connecting meaningful information with the human being on the other side of the screen?