2. What I want from my tech future
Widely available high-res scanning
Increasingly sophisticated virtual spaces to inhabit
Next-gen dimensional displays supporting this
Advancing tech like AI and more for avatars & friends
Addressing issues
What if…?
Back to the future
14. State-of-the-art virtual characters
AI programming can help create Virtual Humans that:
are very conversational and responsive
can see/perceive what the human they interact with is feeling
17. Avatars
An avatar is a three-dimensional construct that represents a human person.
• Currently used for game play
• Or for social VR
• Or for immersive video conferencing
Avatars allow us to essentially be in two or more places at once.
18. Avatar use is increasing
Avatar use continues to grow – especially among the upcoming generations.
1.5 billion kids with avatars!
Kids are used to presenting themselves in this way, and to experiencing others as avatars (free social time).
What do avatars do for them – for us? A huge topic.
We project some part of our self through our avatar(s).
20. High Fidelity expression tracking
How it does this:
No need for markers
More sophisticated computer vision looks at facial features instead
Eyes, nostrils, mouth lines, etc.
Or estimates facial movements from voice intonation
Body behaviors can be brought in via a video/depth-sensing device (Kinect)
These will be integrated into our devices soon
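The mapping described above can be sketched in a few lines: once a computer-vision detector has located facial features, simple geometry on the landmark points yields expression parameters that can drive an avatar. This is a minimal illustration, not any particular tracker's implementation; the landmark coordinates are hard-coded stand-ins for detector output.

```python
import math

def distance(a, b):
    """Euclidean distance between two (x, y) landmark points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def mouth_openness(upper_lip, lower_lip, left_corner, right_corner):
    """Ratio of lip gap to mouth width: ~0 when closed, larger when open."""
    gap = distance(upper_lip, lower_lip)
    width = distance(left_corner, right_corner)
    return gap / width if width else 0.0

# Example landmarks (pixel coordinates) for a slightly open mouth;
# in practice these would come from a face-tracking model each frame.
openness = mouth_openness(
    upper_lip=(100, 200), lower_lip=(100, 215),
    left_corner=(80, 208), right_corner=(120, 208),
)
print(f"mouth openness: {openness:.2f}")  # could drive the avatar's jaw blendshape
```

The same pattern (a distance ratio normalized by face scale) extends to eye openness, brow raise, and the other features listed above.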
21. What if we cannot actually pilot or inhabit our avatar?
30. Future displays
• What if the display could control actual molecules?
• A nano-molecular display?
• The molecules form solid objects, like chairs, good enough to physically sit in
• The realization of the Star Trek Holodeck!
• Ivan Sutherland's "Wonderland," 1965!
31. The Next Wave of AI
• AI research is on its second wave & is becoming increasingly useful
• Virtual human AI can be expanded to include:
• New models, like AI based on neural mechanism models
• New architectures that support better learning
• Extensions to create new behaviors interpolated from known behaviors
32. Can my avatar learn from me?
I can inhabit my avatar, but it doesn't know I am there.
In games, the other characters might have some form of AI, but they are NOT us – avatars need this too.
Why? I really want my avatar to learn FROM ME while I use it!
33. What will we want from our future AI assistants?
Films are starting to show the future we might expect – Her, Ex Machina, Simone, etc.
But what if AI can really understand us or know what is best for us? Who decides?
What if they really get emotions? What does that even mean for an AI?
Can AI get too real??
34. Can AI get too real??
https://www.youtube.com/watch?v=txSOaY-je-o
35. Issues
We can't get there without addressing a number of issues:
How real is too real? And whose reality?
Authenticity: who is really there?
Who owns the data? Who owns our digital lives?
What is the responsibility of the keeper?
36. Issues
Authenticity
How do we know who is who, who is inhabiting, or if there is anyone home besides the AI?
How will others know that construct is us, whether digital or robotic?
SSI (self-sovereign identity) will be an important part of our digital futures.
38. More issues…
Who owns the data? Who owns our digital lives?
Do you own any photos of your great-grandfather?
A diary from an ancestor?
A library of fully interactive scans from something like the Shoah Foundation?
What is the responsibility of the keeper of that data? Profit or preservation?
39. New generations will know very different realities from what we know because of what we are inventing today.
It will be their task to live them, fully, to THINK about not only what constitutes reality, but how all realities can, do, and will affect us as human beings.
The potential of fluid immersive realities
It's going to be a wild ride!
The Future
@skydeas1
jfmorie at gmail
Editor's notes
What are the technologies coming in the near future that could change the way we live, connect and interact on a global scale?
It is a distributed future, and we can be sure that it will include ways to meet face-to-face instantly with anyone around the globe via digital and even physical avatars.
To set up my thinking -- cover some recent trends
Not really trying to predict the future – so many factors and big disruptors happen that change the trajectories.
That being said, it is still fun to explore what might be possible, given what we know today.
Sophisticated ways to capture not only our 3D data in minute detail, but even the reflectance of our skin – the subsurface scattering that our unique layers of dermis and epidermis create to form the visual appearance others see when they look at us.
This happens a lot now for actors in films – so that the visual effects artists can use their digital double in scenes long after primary shooting is done.
And there are scanning booths in major cities where in less than 30 seconds your digital data can be captured. Most of these places will then make a physical 3D print of your scan.
The perfect Valentine’s day or Christmas gift for your loved ones.
This could be the first baby picture of your offspring not in a few years, but now.
How often will they get ”scanned” throughout their lives?
Well, consumer grade 3D “depthie” cameras are already available and have been for a few years.
Not as good quality as professional scanning systems, but great for the average person.
Cloud maps with snapshot truth textures
AND they can also do a depth map of the environments, AND you can then bring that into VR!
We have this tech in an enterprise form NOW but soon on consumer cameras!
This leads to a short digression into how amazingly well we can capture the environments around us. Beyond consumer grade ..
Gigapixel imagery, 360, and new techniques like those of Simon Che de Boer
Incredible photogrammetry but more -
Further work by Simon includes deep machine learning to extract ground truth images for relightable texture maps
But just around the corner – think of all the sensors being placed everywhere today. Smart homes and smart cities have a lot of knowledge about you – but how they use it is beyond our control.
Virtual spaces can be just as smart – we can expect to see the rise of not only smart immersive spaces, but some will eventually be considered sentient too, knowing so much about us that they can adjust to our needs and desires.
No two-sense VR –
Include as many senses as possible! Scent collar at
http://alltheseworldsllc.com/solutions/a-deep-inhale-scent-in-virtual-spaces/
Back to virtual characters. We have been able to capture much of our unique behavioral motions with full body motion capture suits
Many techniques – some require markers, with markerless techniques being perfected.
This data can be transferred to any similar character
Facial expressions for example, which embody so many of our human emotions, are extremely complex and require as many or more sensors than does the rest of the body!
VolCap or volumetric capture is now also widely available. Actors’ performances are being captured for replay in 360 video and other immersive media forms
Can do faithful movements WITHOUT a ton of sensors on our person.
VolCap studios are springing up globally. Muki showed a map of these in her keynote Thursday.
Describe Reggie’s performance for NASA study; more on NASA later
To David Bowie, who was actually created from 2D images and videos for the fabrication of his digital data
(and has no AI, but is performed by an actor…)
Virtual Humans, as they prefer to be called, are becoming more sophisticated
And they can be very convincing -
I'm going to show you a couple of brief examples of these types of Virtual Humans.
Or these two girl guides from the BMOS 10 years ago now. I was part of this project.
The kids visiting the computer center could talk to these virtual humans in natural language
These virtual humans can see/perceive what the human they interact with is feeling, but they still cannot learn; what an AI agent knows must be put in by people – a constrained range of responses that makes it seem to be intelligent.
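That "constrained range of responses" can be made concrete with a toy sketch: a rule-based virtual human whose apparent intelligence is entirely authored in advance by people. The rules and replies below are invented for illustration; no real system's dialogue data is shown.

```python
# Authored keyword -> response rules: everything the agent "knows"
# was written by a person ahead of time. It cannot learn new replies.
RULES = [
    ("hello", "Hi there! Welcome to the science center."),
    ("planet", "Mars is my favorite planet. Want to hear why?"),
]
DEFAULT = "That's interesting -- tell me more."

def respond(utterance: str) -> str:
    """Return the first authored response whose keyword appears in the input."""
    text = utterance.lower()
    for keyword, reply in RULES:
        if keyword in text:
            return reply
    return DEFAULT  # canned fallback keeps the conversation moving

print(respond("Hello!"))             # matches an authored rule
print(respond("What's for lunch?"))  # falls back to the canned default
```

However convincing the surface conversation, the range of behavior is bounded by what the authors wrote, which is exactly why learning is the missing piece.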
Avatars are a special breed of virtual humans, in that they are meant to be inhabited, driven, used by an actual human.
Therefore they are unlike the other virtual humans, virtual influencers, or AI-driven characters we have just seen.
Data from 2013. No, these kids are inhabiting virtual worlds instead…
… with an avatar! Why?
The only free time the kids have to test their social skills, to be with their friends without supervision, is within these socially connected virtual worlds.
We can do this now in many social VR applications with digital avatars. But what is missing?
Well, actual expressions maybe, but High Fidelity has a solution figured out.
Consider future astronauts going to MARS. The ANSIBLE project for NASA did.
Their social interactions are going to be asynchronous because of the communication lag.
We gave them worlds to ease social isolation & sensory monotony.
Social interaction is asynchronous (like email or correspondence chess), with some sort of seemingly real-time interactions…
Such avatars can embody the family members' movements, expressions, speech patterns and more – an embodied "recording."
Not real time, but it doesn't leave the social interaction totally hanging, waiting for that immediate response.
Example of use of recorded/asynchronous avatars
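The store-and-forward idea behind these recorded avatars can be sketched simply: one side records an embodied "clip," and the other side replays whatever has arrived when they next log in. All class and field names here are illustrative assumptions, not any real ANSIBLE/NASA API.

```python
from dataclasses import dataclass, field

@dataclass
class AvatarClip:
    sender: str
    sent_at: float   # mission time in seconds
    behavior: str    # stand-in for captured motion/expression/speech data

@dataclass
class AvatarMailbox:
    one_way_delay: float                 # e.g. Earth-Mars light delay
    clips: list = field(default_factory=list)

    def record(self, clip: AvatarClip) -> None:
        self.clips.append(clip)

    def replay(self, now: float):
        """Return clips that have 'arrived' by the given mission time."""
        arrived = [c for c in self.clips if c.sent_at + self.one_way_delay <= now]
        self.clips = [c for c in self.clips if c not in arrived]
        return arrived

# Mars-Earth one-way light delay can exceed 20 minutes (1200 s).
mailbox = AvatarMailbox(one_way_delay=1200.0)
mailbox.record(AvatarClip("Mom", sent_at=0.0, behavior="wave + 'good morning'"))
print(mailbox.replay(now=600.0))   # too early: nothing has arrived yet
print(mailbox.replay(now=1500.0))  # the clip is now available for playback
```

The point of the sketch: nothing is real time, but the recipient always finds an embodied performance waiting, so the social exchange never stalls completely.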
So – for this practical use the question is still – How can they become true representations of US? What will it take?
We need avatars that know how to learn from us WHILE WE USE them.
Then they can operate without us. More later
We can interact now in many social VR applications with digital avatars. But what is missing?
The physical component of human to human connection…
Many technologies must be invented; existing ones must be merged. From haptics to VR to AR to better UI….
Better robots – safer, more humanoid.
But when this happens – YOU CAN hug your grandma every night.
Robotic avatars – useful for many things, but also for closer human connections
Imagine when we have better haptics! When a handshake feels like a hand to the operator at some remote location.
Here I am getting set up with a high end haptics hand that will allow me to feel remotely what my robot hand is actually touching.
These are the kinds of Tech XPRIZE wants to encourage
This will happen, in many ways. I call these blended or fluid realities
It is happening now, with AR overlays, Pokémon popping up all over the neighborhood.
But this really means we need new and better display technologies.
Some next-gen displays are being developed… but most are not scalable or consumer-useful.
But we need displays that are even more radical to blend realities
Future displays are going to seem as radically different to us as raster displays were from vector-based ones, and as plasma and OLEDs are from rasters.
Light field displays will allow us to focus on different distances in one display. Imagine that!
MAYBE they will become part of us, our bionic, transhumanist future selves.
DARPA has been working on contact lens displays, aka bionic eyes.
Sony recently patented one of these, as has Google.
But what if we didn’t have to wear the display device?
The best user interface of the future will be tightly aligned with the one we live in now – reality
No learning curve
As we will see later, I am not the first person to propose this.
We have the IBM Watson AI making better medical decisions than highly trained doctors.
We have Baby X being developed in New Zealand by Soul Machines – a new kind of AI to support better VHs – but we need even more.
David Bowie will never be who I want until these advances happen
And they need to be able to learn. There are people working on this-- new architectures needed
If an avatar doesn't learn, it doesn't change, and if it doesn't change, it is significantly less useful.
It seems contradictory at first – I am the one who pilots, or inhabits my avatar – so I am always there to tell it what to say, how to behave.
I can control my avatar but ONLY when I am logged into it.
My friends in the virtual world cannot see me, or interact with me. I am NOT THERE for them.
But for now we don't have those avatars – we have more and more sophisticated virtual assistants – our Siris and Alexas.
Personalised? Like the Young Victorian Girl's Primer from ….
Who decides what we will learn, what social values we will have?
At some point we have to ask ourselves:
https://www.youtube.com/watch?v=txSOaY-je-o
We have not begun to discover all the issues this may take. Here are only a few.
We have seen the first one as the subject of a new comedy film, but there are serious ramifications.
Like Microsoft’s AI bot Tay, that had to be shut down because it devolved into being so racist.
Who makes the rules, the social expectations, the ethical decisions making algorithms?
How much control can we really have if AIs are programmed to evolve on their own?
David Bowie shown earlier. James Dean and more dead actors going digitally ”live”
What does SSI cover? Crucible is calling this a “digital soul” and links this to your digital reputation.
Several other companies are also working to make this happen.
Do you own any photos of your great grandfather?
A diary from an ancestor?
A library of fully interactive scans from something like the Shoah Foundation?
What is your responsibility to these artifacts of someone’s life?
Back to the future. We opened the treasure chest of a few possibilities the future could hold,
especially as it relates to how we might experience it as human beings.
Moving towards a seamless blending of the physical and the virtual,
the imaginary and the yet-to-be-discovered – maybe the spiritual, the metaphysical. These are the coming fluid realities.