Miniature electronics and global supply chains have us on the cusp of a new era of human experience. Early forms of wearable computing focused on augmenting the human ability to compute freely: as wearable computing pioneer Steve Mann and calm technology pioneer Mark Weiser put it, the goal was "to free the human to not act as a machine." What does this mean for us as designers and developers, and how can we build interfaces for the next generation of devices?
Who was here before us, and how can we best learn from them? These are the machines that will be part of our lives only a few years from now, and the best way to learn about the future is to dig into the past. This talk covers the history and future of wearables, focusing on trends in wearable computing and VR as they developed from the 1960s to now, and then into the future.
We'll learn about Ivan Sutherland, human augmentation, infrastructure, machine vision, processing, distributed computing and wireless data transfer, a church dedicated to VR, computer backpacks, heads-up displays, reality editing, job simulators, and unexplored realms of experience that haven't yet come to life. We'll also learn about the road from virtual reality to augmented reality and what we need to build to get there. This talk is for anyone interested in how we can add a new layer of interactivity to our world and how we can take the next steps to get there.
THE HISTORY
AND FUTURE OF VR AND AR
Amber Case | @caseorganic
MIT Media Lab and Harvard Berkman Klein Center
case@caseorganic.com
1. Machines shouldn't act like humans.
2. Humans shouldn't act like machines.
3. Technology should amplify the best of technology and the best of humanity.
Editor's Notes
More about Steve Mann.
But not the cyborgs you think.
Our first tools were extensions of the physical self.
We've been cyborgs since the first tools.
But those tools extended our physical selves, not our mental selves.
Flickr: cybertoad (CC BY-NC-ND 2.0). But really, we've always been borg, from the first tools.
And technology extends the mental self.
But these new tools bring with them very curious things.
They cry, and we have to pick them up.
We have to replace them.
There are a number of issues to address with wearables:
Look and Feel
The look and feel of a device is extremely important: a poorly designed yet workable HUD will decrease a user's social status, thus preventing wide adoption.
Transparency and Redundancy
Steve Mann's successful HUD was a transparent display that projected input into one eye by laser. Currently there are wearables that obscure the display from both eyes. Not only is this dangerous, because the user no longer has a backup real-world sensor available at all times (the user's own calibrated eyes), but it also increases the chances of nausea, and the entire contraption suffers from lag if the graphics are not rendered in real time or if there is a network error.
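As a concrete illustration of the lag problem, here is a minimal sketch of a compositor that falls back to the unmodified real-world view when head-pose data is stale, rather than showing misaligned graphics. The function names and the 20 ms budget are assumptions for illustration, not from any real headset SDK.

```python
# Hypothetical sketch: drop stale AR overlays instead of showing imagery
# rendered against an old head pose. All names and the latency budget are
# illustrative assumptions, not a real SDK's API.
import time

MOTION_TO_PHOTON_BUDGET_S = 0.020  # ~20 ms, a commonly cited comfort target


def blend(base, overlay):
    # Placeholder compositor; a real system would alpha-blend on the GPU.
    return overlay if overlay is not None else base


def present_frame(pose_timestamp: float, overlay_pixels, passthrough_pixels):
    """Show the overlay only if it was rendered against a fresh head pose."""
    latency = time.monotonic() - pose_timestamp
    if latency > MOTION_TO_PHOTON_BUDGET_S:
        # Stale pose: the overlay would swim against the real world and
        # invite nausea, so fall back to the unmodified real-world view.
        return passthrough_pixels
    return blend(passthrough_pixels, overlay_pixels)
```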
Overdesign
Almost all AR is designed to "pop" or impress. Most of it is a one-trick pony that unnecessarily overstimulates the user's brain. The example I always give is the early web and the giant rush of companies and startups to make an index or navigable way to "surf" the web. Many tried visual views of the different "sections" of the web, and some even tried to render a 3D view that users could explore. However, users didn't want to "explore," especially over a 14.4K connection on a 233 MHz machine. E-mail was sufficient for receiving hyperlinks to interesting things on the web. What people needed was an architecture optimized for speed. Google's no-frills, speedy interface provided that solution.
AR currently suffers from a bout of coolness and has not yet reached the trough of disillusionment. It is my hope that the future of AR will see the design of minimalist interfaces that actually solve real-world problems. There is a long way to go to clear away the junk that has piled up around the industry. Perhaps when the field matures it will no longer be called AR.
How can this be helped?
User Research
I then pointed out that there is probably a very simple way to know which entry-level, real-life situations would be helped by HUDs: watch people using their phones, and pay close attention to the moments when someone is completely stuck in their phone, or can't complete a task because they are trying to look at the phone and the real world at the same time while moving. Those situations present problems that can be solved by moving those use cases into the heads-up display.
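A minimal sketch of the tallying step in that kind of field study, assuming observations have been logged by hand; the field names and example situations below are invented for illustration:

```python
# Hypothetical tally for a field study of "stuck in the phone" moments.
# Record each observed moment, then rank situations by frequency to pick
# the first HUD use cases. All field names and data are illustrative.
from collections import Counter
from dataclasses import dataclass


@dataclass
class Observation:
    situation: str    # e.g. "walking + turn-by-turn directions"
    hands_busy: bool  # was the user also carrying or manipulating something?


observations = [
    Observation("walking + turn-by-turn directions", hands_busy=False),
    Observation("checking a recipe while cooking", hands_busy=True),
    Observation("walking + turn-by-turn directions", hands_busy=True),
]

# The most frequent situations are the first candidates for the HUD.
for situation, n in Counter(o.situation for o in observations).most_common():
    print(f"{n:3d}  {situation}")
```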
Input other than gestures is not usually discussed
Knowing what the user needs
If we know exactly what is important to the user, we will know which problems to solve. We will not be able to solve all of the problems, but if we solve just one or two, that is enough.
Minimum Viable Product Features
The first iPhone was very simple. While it didn't have GPS or 3G, it made it easy to do a few things well. It was an incremental progression over previous methods of interacting with data.
A Gradual Experience
Every user needs to have an experience that grows over time. They can't just start out with all of the complexity that a system provides.
Users are trainable over time, to the point that they know exactly what they are going to do with a device the moment they pick it up or put it down. When you watch someone with a smartphone, they have an idea of what they want to do with it before they touch it; you can see that they know what to do the moment they open their phone.
If we aim low, we have a better chance of success. At the very least, we should design a HUD that doesn't cause nausea or deliver too much information. Knowing what the user should focus on at any given moment is key.
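One way to make the experience gradual is to stage features behind demonstrated use. The sketch below shows the idea with invented feature names and an arbitrary unlock threshold; it illustrates the staging pattern, not a prescription:

```python
# Hypothetical staged-complexity sketch: expose a small core feature set
# first and unlock more only after demonstrated use, so the user is never
# handed the full system on day one. Tiers and threshold are invented.
FEATURE_TIERS = [
    {"notifications", "clock"},                # day one
    {"navigation", "quick_reply"},             # once the basics feel automatic
    {"translation_overlay", "object_lookup"},  # expert tier
]


def available_features(successful_uses: int, uses_per_tier: int = 50) -> set:
    """Unlock one additional tier per `uses_per_tier` successful interactions."""
    unlocked = min(successful_uses // uses_per_tier, len(FEATURE_TIERS) - 1)
    features: set = set()
    for tier in FEATURE_TIERS[: unlocked + 1]:
        features |= tier
    return features


print(available_features(successful_uses=70))  # first two tiers unlocked
```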
What is the one thing you would want in glasses that would motivate you to use glasses instead of a mobile phone?
If my car breaks down, is it possible to become my own mechanic? That would be disruptive, taking mechanics out of the loop: order the parts you need from Amazon directly from the device, with expert systems overlaid on the eyes that highlight the relevant areas of the vehicle and teach the user how to fix minor problems.
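A minimal sketch of how such an expert system might represent a repair as data: a sequence of steps, each pointing at a named anchor for the HUD to highlight. It assumes some external machine vision tracker can resolve anchors to screen positions; the step text, anchors, and part names are invented:

```python
# Hypothetical repair-overlay data structure. Assumes an external tracker
# can resolve the named anchors to positions on the engine; all steps,
# anchors, and part names here are invented for illustration.
from dataclasses import dataclass, field


@dataclass
class RepairStep:
    instruction: str       # text shown or read to the user
    highlight_anchor: str  # named point the HUD should outline
    parts_needed: list = field(default_factory=list)


REPLACE_SERPENTINE_BELT = [
    RepairStep("Locate the belt tensioner.", "tensioner_pulley"),
    RepairStep("Rotate the tensioner to slacken the belt.", "tensioner_bolt"),
    RepairStep("Route the new belt as shown by the overlay.", "belt_path",
               parts_needed=["serpentine belt (see vehicle manual)"]),
]

for i, step in enumerate(REPLACE_SERPENTINE_BELT, start=1):
    print(f"Step {i}: {step.instruction} -> highlight '{step.highlight_anchor}'")
```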
Concept Models
The best way to get a product idea across is a concept model that someone has really thought through. For some odd reason, designers don't usually have to be able to build or wire up objects, although the best of them can. MIT's Media Lab teaches both design and development, treating them as inseparable.
And if one cannot 3D animate, carving an object or building it from paper and Photoshopping it can get the point across too. As long as the essence of the idea is communicated visually, what it takes to get it there doesn't matter one bit.
Fashionable, Feasible Prosthetics and Social Status
For wide adoption, wearables need to increase one's social viability rather than detract from it: they must not interfere with social norms or detract from one's sociability.
A Mercedes-Benz or a BMW adds to your social status; an old Geo Metro may detract from it, even though it is a far more robust, affordable, gas-efficient, and maneuverable vehicle.
To get to these hyperlinked memories, we must become increasingly skilled virtual paleontologists. The e-mail inbox is the best example of this. Every day our memories and data are covered by a new layer of dust, spam, and items to be responded to. If we need something from our past, we must dig through the newly accumulated items to reach it. But instead of a hammer and chisel, brush and field notebook, we use keywords and search results, tags and categories.
Simultaneous time also causes social punctuation, as technosocial connectivity seeps into every part of social relations.
As William Gibson said, the future is already here; it's just unevenly distributed. So I try to look at the past when I want to see the future. And one of these pieces of the future is this…
Steve Mann.
Experimental set-up to induce the 'body swap illusion'.
"The body-swap illusions worked well even though the mannequin or the other person looked different from the participant. In the first experiment there was no significant difference in rating scores between male and female subjects in the synchronous illusion condition, despite the fact that we only used a male mannequin (N = 32, p = .613, F(1,223)=.257, ANOVA). Similarly, in the second experiment, male and female subjects alike were able to accept the arm of the female experimenter as their own. Further, we compared the threat-evoked skin conductance responses between males and females after threatening the new artificial body. To obtain sufficient numbers of males and females to enable a statistical comparison of the SCR, we pooled the data from the synchronous and asynchronous conditions where the stimulation was applied on the abdomen in experiments three and four. We found no significant difference in the illusion related SCR between males and females (p = .952, F = .004, Two Way Repeated Measures ANOVA). These observations suggest that gender identity, and differences in the precise shape of the bodies, are not important factors for perceiving a body as one’s own."
www.ts-si.org/neuroscience/3636-identity-a-the-illusion-o...
Original from journal.pone.0003832.pdf (page 4 of 9)
Collaborative Reality
According to Steve Mann: A shared reality or collaborative mediated reality is "a negotiation between two parties allowing one to temporarily access the viewpoint of another".
In this instance, a man is walking around a store trying to purchase milk. His wife at home can see what he's seeing and can help him choose the right product.
Microvision: Wearable Displays Gallery
www.microvision.com/wearable_displays/wearable_applicatio...
Applications of a device which places data over your eyes in seemingly innocuous glasses.
We’re all growing up connected.
Getting used to your second self.
Testing Doug Engelbart's Cyborg Glove
With a student of Donna Haraway, testing Valerie Landau and Doug Engelbart's Cyborg Glove.
HandyKey - Twiddler2 - one handed chording USB keyboard
Wearables:"Twiddler was one of the first components I bought when designing my wearable computer. After six years of everyday use, I wouldn't think of using a wearable without one. The convenience and ergonomic benefits become apparent with long-term use. In fact, for the last two years, the Twiddler and my wearable computer have replaced my desktop (e.g. my PhD thesis was written with the Twiddler).
When starting the MIT Wearable Computing Project, I issued every member a Twiddler as their primary text input device. With starting another group at Georgia Tech focused on wearable computing, I've just placed an order for 10 more Twiddler 1's. We've seen typing speeds of 60 words per minute, and an undergraduate has reported speeds up to 30 words a minute with only a weekend of practice. More generally, new users can learn the alphabet in 5 minutes and can be touch typing in an hour. Though it takes time for the fingers to "loosen up" to accomodate the new motion (much like learning to play an instrument or learning how to type on a desktop), many new users are up to 10 words a minute with a weekend's worth of practice, and current non-touch typists remark that it is easier than learning the desktop QWERTY keyboard.' "Thad Starner, Professor at Georgia Tech and former MIT Media Lab.
"I'd like to say that I have been very happy with the Twiddler. I've been tinkering with wearable computers for some 15 years now, and never come across a better input device. I've designed and built a number of input devices from microswitches and the like -- before the Twiddler was being manufactured, but I really do like the Twiddler, despite its 1 or 2 shortcomings. It gives me the same sense of tactile feedback that I get from a high quality microswitch, enabling me to control various kinds of apparatus without my needing to pay full attention to the screen...If you need any ``testimonials'' from an experienced tinkerer, designer, builder, and user of wearable computing, I'd be happy to recommend Twiddler to wearable computer users, over and above voice (or certainly at least in addition to), eye movement trackers, and all of the other ways of controlling computers or external devices."Steve Mann, Professor, University of Toronto, Electrical Engineering Dept.
www.handykey.com/
Keymap for Chording on the Twiddler
The keymap for chording on the Twiddler. On the right, each grid of 3 × 4 rectangles represents the keypad from the user's perspective. The shaded rectangles are the buttons that must be depressed to type the character printed below each keypad. Also displayed is a four-digit textual representation of the chord.
----
Source: Lyons, Kent; Starner, Thad; Gane, Brian. "Experimental Evaluations of the Twiddler One-Handed Chording Mobile Keyboard." Human-Computer Interaction, Vol. 21, No. 4 (Dec. 2006), pp. 343-392. DOI: 10.1207/s15327051hci2104_1
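To make the chord-to-character mapping concrete, here is a toy decoder in the spirit of the notation described above. The row/column encoding and the tiny keymap are invented for illustration and do not reproduce the Twiddler's actual layout:

```python
# Toy chording decoder. A chord is the set of buttons held at once, written
# here as a 4-character string, one character per row: 'L', 'M', or 'R' for
# which column is pressed, '0' for no button in that row. This mirrors the
# paper's four-digit chord notation, but the keymap below is invented.
KEYMAP = {
    "L000": "a",
    "M000": "e",
    "R000": " ",
    "0L00": "s",
    "LL00": "th",  # chording can emit multi-letter output from one chord
}


def decode(chord: str) -> str:
    """Look up the output for a fully formed chord, or '?' if unmapped."""
    return KEYMAP.get(chord, "?")


print(decode("L000"), decode("LL00"))  # -> a th
```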
VirtuSphere Virtual-Reality Simulator for Mil/LE Tactical Training
A company called VirtuSphere, Inc. (Sammamish, WA) has a product called, appropriately enough, VirtuSphere, which can apparently provide a rather unique Mil/LE (military/law enforcement) tactical training and simulation experience. Due to its design, the VirtuSphere provides "infinite space" and claims to offer "the most immersive [virtual reality] experience for simulated training, exercise and gaming." The platform consists of a large hollow sphere that can rotate 360 degrees as the user walks, runs, somersaults, etc. inside it while wearing a wireless, head-mounted VR display. Co-invented by Nurakhmed "Ray" Latypov and Nurulla Latypov (both corporate officers at VirtuSphere, Inc.), the VirtuSphere has been developed with the assistance of a team of research scientists and developers at the HIT Lab (Human Interface Technology Lab) at the University of Washington, including Dr. Suzanne Weghorst, a senior research scientist and assistant director of research at the UW HIT Lab. The joint project between VirtuSphere and the HIT Lab was reportedly made possible through a Research and Technology Development (RTD) grant from the Washington Technology Center (WTC).
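The "infinite space" claim comes down to simple rolling geometry: the user's virtual displacement is the arc length that rolls under their feet. A back-of-envelope sketch, with an assumed interior radius (not a published VirtuSphere spec):

```python
# Back-of-envelope rolling geometry for a walking sphere: virtual distance
# equals arc length, s = r * theta. The radius is an assumption, not a
# published VirtuSphere specification.
import math

SPHERE_RADIUS_M = 1.3  # assumed interior radius


def virtual_distance_m(rotation_deg: float) -> float:
    """Arc length in meters walked for a given sphere rotation."""
    return SPHERE_RADIUS_M * math.radians(rotation_deg)


# One full revolution under the user's feet is ~8.2 m of simulated travel,
# with zero net displacement of the hardware.
print(f"{virtual_distance_m(360):.1f} m per revolution")
```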
VirtuSphere was selected by the Office of Naval Research (ONR) for their Virtual Technologies and Environments (VIRTE) program (Phone: 703-696-0360, Email: 342_VR@onr.navy.mil) in October 2005. Training & Simulation Journal (TSJ) reported on that event when it happened.
www.defensereview.com/virtusphere-virtual-reality-simulat...