1. Reality as a Knowledge Medium
Fridolin Wild (1,2)
1) Oxford Brookes University
2) Open University, UK
2. A note on methodical order
Conversation is the root of all information exchange.
Narratives convert tacit knowledge to explicit knowledge.
Just language? No, all embodied experience!
6. Affordances: which cultures of use to support?
Viewpoints: from what perspective?
Abstraction: what matters? what to pay attention to?
Editing: how to enrich or reduce?
Social Scope: for individuals, teams, or more?
Sensing: which senses and what sensors?
Grand Challenges?
(Fominykh, Wild, Smith, Alvarez, & Morozov, 2014)
Capture | Re-Enact
7. Research questions (p.14)
RQ01. How to enrich the capture of activities and experiences by means of wearable sensors?
RQ02. How to experience more of the captured activities and experiences via AR and wearable technology (WT) or a remote simulation?
RQ03. To what extent can the knowledge of an expert be captured as wearable experience?
RQ04. To what extent can a trainee experience the phenomenology of the expert through applying wearable experience?
13. Capturing (p.10, p.18)
Wearable ambient and biofeedback sensors
Tracking the person's position and orientation in space/environment
Gaze direction
Narration and ambient sounds
Video of the vision area
360° video
Audio
Gestures
along with affect data and physiological data
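The channels above can be thought of as time-stamped streams merged onto one shared clock. A minimal sketch in Python of such a recording; the channel names and classes are illustrative, not WEKIT's actual data format:

```python
from dataclasses import dataclass, field

@dataclass
class Sample:
    """One time-stamped reading from a single sensor channel."""
    t: float       # seconds since the start of the recording
    value: object  # e.g. a gaze vector, an audio frame, a heart rate

@dataclass
class ExperienceRecording:
    """All capture channels of one expert session on a shared clock."""
    channels: dict = field(default_factory=dict)  # name -> list[Sample]

    def add(self, channel: str, t: float, value: object) -> None:
        self.channels.setdefault(channel, []).append(Sample(t, value))

    def at(self, channel: str, t: float):
        """Latest sample at or before time t (linear scan, for clarity)."""
        hits = [s for s in self.channels[channel] if s.t <= t]
        return hits[-1] if hits else None

# Illustrative capture of a few of the channels named on the slide
rec = ExperienceRecording()
rec.add("gaze", 0.0, (0.1, -0.2, 0.97))
rec.add("heart_rate", 0.0, 72)
rec.add("gaze", 0.5, (0.0, -0.1, 0.99))
print(rec.at("gaze", 0.6).value)  # latest gaze sample up to t=0.6
```

Synchronization here is simply a shared timestamp axis; a real system would also have to align the differing sampling rates and clock drift of the individual sensors.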
14. Re-enactment
All gathered sensor data will be stored and synchronized as a single experience recording; this tangible artifact will be available to trainees. The re-enactment of the captured experience will then be achieved by augmenting the trainee's experience with contextualized expert data in real time.
For example, by 3D scanning the environment the system will know the position of both the expert and the trainee in space. The relative position of the expert can therefore be displayed to the trainee as an AR element. Data from other sensors will make the trainee aware of where the expert is looking, what the expert is seeing, what the expert is saying, how the expert is handling the tools, and more. In this manner, the trainee will be able to experience the presence of the expert while working on a task.
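One way to read that re-enactment step: at each trainee frame, look up the expert's state at the corresponding task time and render it as an overlay relative to the shared 3D-scanned space. A hedged sketch under that reading; the function names, channel names, and flat coordinate handling are assumptions, not the WEKIT implementation:

```python
import bisect

def latest_before(stream, t):
    """stream: list of (timestamp, value) pairs sorted by timestamp.
    Return the value recorded at or before task time t."""
    i = bisect.bisect_right([ts for ts, _ in stream], t)
    return stream[i - 1][1] if i else None

def reenact_frame(expert, t, trainee_pos):
    """Build the AR overlay for one trainee frame at task time t.
    `expert` maps channel name -> sorted (timestamp, value) stream."""
    ex, ey, ez = latest_before(expert["position"], t)
    tx, ty, tz = trainee_pos  # from the 3D scan of the shared environment
    return {
        # offset at which to draw the expert 'ghost' in the trainee's view
        "expert_offset": (ex - tx, ey - ty, ez - tz),
        "gaze": latest_before(expert["gaze"], t),        # where the expert looks
        "narration": latest_before(expert["audio"], t),  # what the expert says
    }

# Tiny illustrative recording of an expert session
expert = {
    "position": [(0.0, (1.0, 0.0, 2.0)), (1.0, (1.5, 0.0, 2.0))],
    "gaze": [(0.0, "workbench"), (0.8, "toolbox")],
    "audio": [(0.0, "pick up the torque wrench")],
}
frame = reenact_frame(expert, 0.9, (0.0, 0.0, 0.0))
print(frame["expert_offset"], frame["gaze"])
```

The lookup holds each channel's last value until the next sample arrives; a production system would instead interpolate poses and stream audio/video continuously.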
15. Hardware + SDKs (p.19f)
Type of product/service | Examples | Use in WEKIT
Eye tracking sensors | Head-mounted eye trackers from vendors such as SMI, Arrington Research, Tobii, ASL, MindMetriks | Measuring where the user is looking in a scene and at what depth, to correlate with other metrics
3D scanners | Occipital's structure.io, Microsoft Kinect, Intel RealSense | Quick object scanning; body posture/limb tracking in real time; depth sensing for awareness of the workplace
Hand/finger posture and gesture tracking wearable sensors | MYO, LEAP Motion, Vuzix | LEAP Motion will be used for hands-on teletutoring (aka 'GhostHands' instructions) or can be integrated with Vuzix smart glasses
Cameras | Machine vision cameras from vendors such as Basler, The Imaging Source, Point Grey, PixeLINK; CrowdEmotion | Capturing the face region (facial expressions) of the user
360° cameras (optional) | V.360, Theta360, PanoPro, GeoNaute, Bubble | Capturing a 360-degree view
EEG sensors | MyndPlay EEG headbands, InteraXon Muse, Shimmer EEG Raw, and BioExplorer | Worker well-being, stress, mental effort, complacency, agitation
Heart rate wearable sensors | e-Health Sensor Platform, ShimmerSense, HeartMath | Worker well-being, stress, to correlate with GSR and EEG
Skin conductivity wearable sensors | e-Health Sensor Platform, Shimmer Sense | Worker well-being (stress level)
Smart glasses | Google Glass, Epson Moverio BT-200, Vuzix M100, and Meta 1 (more examples can be found in the Smart Glasses Market Report) | Capturing the user's point of view, gestures, voice recording, and prompts
AR platforms | Unity, Vuforia, Metaio, Oculus, Samsung Gear, HTC Vive | Recreating scenarios and a simulated training environment
17. The WEKIT project is funded by the European Commission (EU disclaimer).
http://wekit.eu/
That's all.
fridolin.wild@gmail.com
skype: fridolin.wild
whatsapp: +447751239881
Editor's notes
“The kinds of experiences most people think of as entertainment—watching television, attending a concert—tend to be those in which customers participate more passively than actively; their connection with the event is more likely one of absorption than of immersion. Educational events—attending a class, taking a ski lesson—tend to involve more active participation, but students (customers, if you will) are still more outside the event than immersed in the action. Escapist experiences can teach just as well as educational events can, or amuse just as well as entertainment, but they involve greater customer immersion. Acting in a play, playing in an orchestra, or descending the Grand Canyon involve both active participation and immersion in the experience. If you minimize the customers’ active participation, however, an escapist event becomes an experience of the fourth kind—the esthetic. Here customers or participants are immersed in an activity or environment, but they themselves have little or no effect on it—like a tourist who merely views the Grand Canyon from its rim or like a visitor to an art gallery.” (Pine & Gilmore, 1998: https://hbr.org/1998/07/welcome-to-the-experience-economy)
With respect to Virtual and Augmented Reality applications, there are several dimensions along which systems and approaches differ significantly from each other. We identify six such key dimensions: learning affordances, perspectives & viewpoints, levels of abstraction, editing facilities, social scope, and sensors & senses.