Project Report On
AUGMENTED REALITY(AR)
By
SINGHAN GANGULY
Department of Electronics & Communication Engineering
Future Institute of Engineering & Management
Sonarpur Station Road, Kolkata-700150
2020
Future Institute of Engineering & Management
Affiliated to
Maulana Abul Kalam Azad University of Technology
(formerly West Bengal University of Technology)
Certificate of Approval
The foregoing seminar report is hereby approved as a creditable study of an
engineering subject carried out and presented in a manner satisfactory to warrant its
acceptance as a prerequisite to the degree for which it has been submitted. It is
understood that by this approval the undersigned do not necessarily endorse or approve
any statement made, opinion expressed or conclusion drawn therein, but approve the
report only for the purpose for which it is submitted.
Signature of the Examiners:
1…………………………………………….
2…………………………………………………………
3…………………………………………………………..
4…………………………………………………………..
5……………………………………………………….
Acknowledgement
It is a great pleasure to express my deepest gratitude and indebtedness to my respected
Professors of Department of Electronics and Communication Engineering, Future Institute
of Engineering & Management, Kolkata for their supervision, constant help and
encouragement throughout the entire period.
I am very much grateful to Prof. (Dr.) Dipankar Ghosh (Head of the Department),
Department of Electronics and Communication Engineering, Future Institute of Engineering
& Management, for necessary suggestions and helpful scientific discussions to improve the
quality of this thesis.
Singhan Ganguly
Department of Electronics & Communication Engineering,
Future Institute of Engineering & Management
INDEX
1. Introduction
2. Overview
3. Working Principle
4. Analysis
5. Modification
6. Conclusion
7. Future Scope
8. References
LIST OF FIGURES
FIGURE NO. FIGURE NAME
Fig 1 AR Based Workspace
Fig 2 Pokemon Go Game Interface
Fig 3 Houzz Application Interface
Fig 4 Youcam Makeup Application Interface
Fig 5 Projection Based AR Dialer
Fig 6 Optical see-through HMD conceptual diagram
Fig 7 A conceptual diagram of a video see-through HMD
Fig 8 Futuristic AR Interface
INTRODUCTION
Technology has advanced to the point where realism in virtual reality is very achievable.
However, in our obsession to reproduce the world and human experience in virtual space, we
overlook the most important aspects of what makes us who we are—our reality. Yet, it isn’t
enough just to trick the eye or fool the body and mind. One must capture the imagination in
order to create truly compelling experiences.
On the spectrum between virtual reality, which creates immersive, computer-generated
environments, and the real world, augmented reality is closer to the real world. Augmented
reality adds graphics, sounds, haptics and smell to the natural world as it exists. We can
expect video games to drive the development of augmented reality, but this technology will
have countless applications. Everyone from tourists to military troops will benefit from the
ability to place computer-generated graphics in their field of vision.
Augmented reality will truly change the way we view the world. Picture yourself walking or
driving down the street. With augmented-reality displays, which will eventually look much
like a normal pair of glasses, informative graphics will appear in your field of view and audio
will coincide with whatever you see. These enhancements will be refreshed continually to
reflect the movements of your head. In this report, we will take a look at this future
technology, its components and how it will be used.
The term Augmented Reality (AR) was coined in the early nineties, when it became possible
to apply virtual objects within physical reality. Early systems combined the techniques of
Sutherland and Sproull's first optical see-through Head Mounted Display (HMD) from the
1960s with complex, real-time computer-generated wiring diagrams and manuals. Both
were registered with each other, and the manuals were embedded within the actual aircraft for
intensely detailed procedures.
Augmented reality (AR) refers to computer displays that add virtual information to a user's
sensory perceptions. Most AR research focuses on see-through devices, usually worn on the
head that overlay graphics and text on the user's view of his or her surroundings. In general it
superimposes graphics over a real world environment in real time.
Getting the right information at the right time and the right place is the key in all these
applications. Personal digital assistants such as the Palm and the Pocket PC can provide
timely information using wireless networking and Global Positioning System (GPS) receivers
that constantly track the handheld devices. But what makes augmented reality different is
how the information is presented: not on a separate display but integrated with the user's
perceptions. This kind of interface minimizes the extra mental effort that a user has to expend
when switching his or her attention back and forth between real-world tasks and a computer
screen. In augmented reality, the user's view of the world and the computer interface literally
become one.
Augmented reality is far more advanced than any technology you've seen in television
broadcasts, although early versions of augmented reality are starting to appear in televised
races and football games. These systems display graphics for only one point of view. Next-
generation augmented-reality systems will display graphics for each viewer's perspective.
OVERVIEW
DEFINITION :
Augmented reality (AR) is a field of computer research which deals with the combination of
real world and computer generated data. Augmented reality (AR) refers to computer displays
that add virtual information to a user's sensory perceptions. It is a method for visual
improvement or enrichment of the surrounding environment by overlaying spatially aligned
computer-generated information onto a human's view (eyes).
Augmented Reality (AR) was introduced as the opposite of virtual reality: instead of
immersing the user into a synthesized, purely informational environment, the goal of AR is to
augment the real world with information handling capabilities.
AR research focuses on see-through devices, usually worn on the head that overlay graphics
and text on the user's view of his or her surroundings. In general it superimposes graphics
over a real world environment in real time.
An AR system adds virtual computer-generated objects, audio and other sense enhancements
to a real-world environment in real-time. These enhancements are added in a way that the
viewer cannot tell the difference between the real and augmented world.
PROPERTIES:
An AR system has the following properties:
1. Combines real and virtual objects in a real environment.
2. Runs interactively, and in real time.
3. Registers (aligns) real and virtual objects with each other.
We do not limit the definition of AR to particular display technologies, such as a head-mounted
display (HMD). Nor do we limit it to our sense of sight. AR can potentially apply to all senses,
including hearing, touch, and smell.
HISTORY :
The beginnings of AR, as we define it, date back to Sutherland’s work in the 1960s, which
used a see-through HMD to present 3D graphics. However, only over the past decade has
there been enough work to refer to AR as a research field. In 1997, Azuma published a
survey that defined the field, described many problems, and summarized the developments up
to that point. Since then, AR’s growth and progress have been remarkable.
In the late 1990s, several conferences on AR began, including the International Workshop
and Symposium on Augmented Reality, the International Symposium on Mixed Reality, and
the Designing Augmented Reality Environments workshop. Some well-funded organizations
formed that focused on AR, notably the Mixed Reality Systems Lab in Japan and the Arvika
consortium in Germany.
SOME EXAMPLES:
Augmented reality uses existing reality and physical objects to trigger computer-generated
enhancements over the top of reality, in real time. Essentially, AR is a technology that lays
computer-generated images over a user’s view of the real world. These images typically take
shape as 3D models, videos and information.
How this is overlaid depends on the nature of the experience - and the hardware you're
viewing the experience on. The simplest way is using your phone, where what you see
through the camera has digital elements added to it.
There are plenty of opportunities for designers and brands to explore the possibilities of
augmented reality (AR) and how it can enhance creative work, provide entertainment
or better society.
Tech giants such as Microsoft, Google and Apple – but also everything from children’s books
to 3D modelling for gaming – are experimenting with AR. The abundance of free content
creation apps is democratising AR, which means anyone (not just developers) can create
their own AR experiences.
Fig 1: AR Based Workspace [Ref 1]
1. AR Game (Pokemon Go):
Pokémon GO, considered the breakthrough AR app for gaming, uses a smartphone's
camera, gyroscope, clock and GPS to enable a location-based augmented reality
environment. A map of the current environment displays on the screen and a rustle of
grass indicates the presence of a Pokémon; a tap of the touchscreen brings up the
capture display. In AR mode, the screen displays Pokémon in the user’s real-world
environment.
The algorithm is as follows:
 Location Tracking: Much like Google Maps or Waze, Pokémon GO tracks your
phone's location using GPS, integrating this information with an in-game map.
 Deciding Where (And When) Pokémon Appear: The game uses local geographic
data to place the Pokémon in appropriate habitats. It also uses your phone's
clock to track the time, so if you're out hunting at night, you're more likely to
see fairy or nocturnal types.
 Visualizing The Pokémon: Pokémon GO uses your phone's camera to place an
image of a Pokémon within your surroundings, and the GPS, accelerometer and
compass give the game an idea of which direction your phone is pointing.
In this way a Pokémon is ultimately caught.
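The spawn logic described above can be sketched in a few lines. Everything here (the habitat table, the species pools, the night-time bias) is illustrative guesswork, not Niantic's actual data or algorithm:

```python
import random

# Hypothetical habitat table: terrain class (derived from map data around
# the GPS fix) -> species likely to spawn there. All names and weights
# below are illustrative, not the game's real data.
HABITATS = {
    "water": ["Magikarp", "Squirtle"],
    "grass": ["Bulbasaur", "Caterpie"],
    "urban": ["Rattata", "Pidgey"],
}
NOCTURNAL = ["Clefairy", "Gastly"]  # more likely to appear after dark

def pick_spawn(terrain, hour, rng=random):
    """Choose a spawn for the player's terrain and local hour (0-23)."""
    pool = list(HABITATS.get(terrain, HABITATS["urban"]))
    if hour >= 20 or hour < 5:       # phone clock says night:
        pool += NOCTURNAL * 2        # weight nocturnal types more heavily
    return rng.choice(pool)
```

The visualization step then draws the chosen sprite over the live camera frame, using the compass and accelerometer to decide where in the view it should appear.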
Fig 2: Pokemon Go Game Interface [Ref 2]
2. AR Based Furniture Application (Houzz App) :
A great platform for home goods and furniture selling, Houzz is one of the top apps
for planning interior layouts and design. Primarily a home improvement app, Houzz
has ecommerce functionality, allowing users to browse and buy products in-app.
The “View in My Room” feature uses the existing sensors and camera in the iPhone
to produce AR images placing the selected furniture in a desired real time place
within the user’s room. It even goes as far as showing what the product will look like
in different lighting.
The user can move the furniture around in 3D space, resize it to fit the room (which
can be a bit of an issue, given that the size of a furniture item is kind of important and
not something the user can change after the fact), and then take a snapshot of the scene
to share with others (including the home remodelling professionals also found on
Houzz).
This scene doesn’t stay anchored in the room, though. If the user moves the phone or
tablet around, everything else moves, too. So this is essentially just a basic 3D editor
with the camera image projected in the background.
Fig 3: Houzz Application Interface [Ref 3]
3. AR Based Makeup Application (YouCam App):
It is an AR-based application for applying makeup across the face to enhance the beauty
of the person in real time. In terms of technology implementation, YouCam Makeup and
other cosmetics apps are built on 3 core components:
 Face tracking algorithms to detect and track the face for a real-time experience
 Neural networks to extract the needed areas of the face, e.g. lips, hair, skin,
eyebrows, etc.
 A 3D renderer that enables realistic makeup representation in terms of colour,
lighting and texture (glossy, matte, etc.)
In proprietary makeover solutions, these 3 components are joined into one virtual
makeover pipeline.
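A minimal sketch of the third component: once the segmentation network has produced a mask for, say, the lips, the renderer blends the product colour into the masked pixels. The function name and the plain alpha blend are assumptions for illustration; a real renderer also models lighting and surface texture:

```python
# Sketch of the rendering step: blend a lipstick colour into the region
# the segmentation network marked. Pure Python on nested lists so it
# stays dependency-free; a real app does this per pixel on the GPU.

def apply_makeup(frame, mask, colour, opacity=0.6):
    """frame: HxW list of (r,g,b); mask: HxW list of 0/1 from the network."""
    out = []
    for row_px, row_m in zip(frame, mask):
        out_row = []
        for (r, g, b), m in zip(row_px, row_m):
            if m:  # inside the segmented region: alpha-blend product colour
                r = round(r * (1 - opacity) + colour[0] * opacity)
                g = round(g * (1 - opacity) + colour[1] * opacity)
                b = round(b * (1 - opacity) + colour[2] * opacity)
            out_row.append((r, g, b))
        out.append(out_row)
    return out
```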
Fig 4: Youcam Makeup Application Interface [Ref 4]
4. Projection Based AR:
Just like anything else, projection-based AR feels more attractive (at least as of now)
compared to an AR app you can install on your phone. As is obvious from its name,
projection-based AR functions by projecting onto objects. What makes it
interesting is the wide array of possibilities. One of the simplest is the projection of light
onto a surface. Speaking of lights, surfaces and AR, did we ever think those lines on
your fingers (which divide each finger into three parts) could create 12 buttons? A
look at the image quickly shows what we're talking about. The
picture depicts one of the simplest uses of projection-based AR, where light is fired
onto a surface and interaction is done by touching the projected surface with a hand.
The system detects where the user has touched the surface by differentiating
between an expected (or known) projection image and the projection altered by
the interference of the user's hand. Projection-based AR can build you a castle in the air, or
make a dialer on your hand.
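The touch-detection idea can be sketched as a straightforward image difference between the expected projection and what the camera actually observes; the grid values and threshold below are illustrative:

```python
# Sketch of projection-based touch detection: pixels where the camera's
# view differs strongly from the expected projection are assumed to be
# occluded by the user's hand.

def detect_touch(expected, observed, threshold=40):
    """Return (row, col) cells where the projection is blocked.

    expected/observed: 2D grids of grayscale values (0-255).
    """
    touched = []
    for r, (erow, orow) in enumerate(zip(expected, observed)):
        for c, (e, o) in enumerate(zip(erow, orow)):
            if abs(e - o) > threshold:   # projection altered at this cell
                touched.append((r, c))
    return touched
```

Mapping a touched cell back to one of the 12 projected "buttons" is then a simple lookup from cell coordinates to button labels.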
Fig 5: Projection Based AR Dialer [Ref 5]
DIFFERENT AR MECHANISMS:
An AR system tracks the position and orientation of the user's head so that the overlaid material
can be aligned with the user's view of the world. Through this process, known as registration,
graphics software can place a three-dimensional image of a teacup, for example, on top of a
real saucer and keep the virtual cup fixed in that position as the user moves about the room.
AR systems employ some of the same hardware technologies used in virtual reality research,
but there's a crucial difference: whereas virtual reality brashly aims to replace the real world,
augmented reality respectfully supplements it. Augmented reality is still in an early stage of
research and development at various universities and high-tech companies. Eventually,
possibly by the end of this decade, we will see the first mass-marketed augmented reality system,
which one researcher calls "the Walkman of the 21st century". What augmented reality
attempts to do is not only superimpose graphics over a real environment in real time, but also
change those graphics to accommodate a user's head and eye movements, so that the
graphics always fit the user's perspective. Here are the three components needed to make an
augmented-reality system work:
1. Head-mounted display 2. Tracking system 3. Mobile computing power
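Before looking at each component, the registration task they serve can be sketched as a pinhole projection: given the tracked head pose, a virtual object anchored at a fixed world point is re-projected every frame so it stays put as the user moves. This is a minimal sketch (yaw-only rotation, made-up focal length and image centre), not a full 6-DOF implementation:

```python
import math

# Registration sketch: keep a virtual teacup fixed at a world point while
# the user's head (camera) moves. Rotation is limited to yaw to keep the
# example short; a real system uses a full 6-DOF pose from the tracker.

def project(world_pt, cam_pos, cam_yaw, focal=500.0, cx=320.0, cy=240.0):
    """Project a 3D world point into pixel coordinates for the current pose."""
    # translate into the camera frame
    x = world_pt[0] - cam_pos[0]
    y = world_pt[1] - cam_pos[1]
    z = world_pt[2] - cam_pos[2]
    # rotate by -yaw about the vertical (y) axis
    c, s = math.cos(-cam_yaw), math.sin(-cam_yaw)
    xr = c * x + s * z
    zr = -s * x + c * z
    if zr <= 0:
        return None                  # behind the camera: not drawn
    return (cx + focal * xr / zr, cy - focal * y / zr)   # pinhole projection
```

Re-running `project` with each new tracked pose is what keeps the overlay aligned with the real saucer as the user walks around it.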
1. Head-Mounted Display: Just as monitors allow us to see text and graphics
generated by computers, head-mounted displays (HMDs) will enable us to view graphics
and text created by augmented-reality systems. There are two basic types of HMDs:
a. Optical see-through display: A simple approach to an optical see-through display
employs a mirror beam splitter, a half-silvered mirror that both reflects and transmits
light. If properly oriented in front of the user's eye, the beam splitter can reflect the
image of a computer display into the user's line of sight yet still allow light from the
surrounding world to pass through. Such beam splitters, which are called combiners,
have long been used in head-up displays for fighter-jet pilots (and, more recently, for
drivers of luxury cars). Lenses can be placed between the beam splitter and the
computer display to focus the image so that it appears at a comfortable viewing
distance. If a display and optics are provided for each eye, the view can be in stereo.
Sony makes a see-through display that some researchers use, called the "Glasstron".
Fig 6: Optical see-through HMD conceptual diagram [Ref 6]
b. Video see-through displays: A video see-through display uses video mixing
technology, originally developed for television special effects, to combine the image
from a head-worn camera with synthesized graphics. The merged image is typically
presented on an opaque head-worn display. With careful design the camera can be
positioned so that its optical path is close to that of the user's eye; the video image
thus approximates what the user would normally see. As with optical see-through
displays, a separate system can be provided for each eye to support stereo vision.
Video composition can be done in more than one way. A simple way is to use
chroma-keying, a technique used in many video special effects. The background of
the computer graphics images is set to a specific colour, say green, which none of the
virtual objects use. Then the combining step replaces all green areas with the
corresponding parts from the video of the real world. This has the effect of
superimposing the virtual objects over the real world. A more sophisticated
composition would use depth information at each pixel of the real-world images; it
could combine the real and virtual images by a pixel-by-pixel depth comparison. This
would allow real objects to cover virtual objects and vice versa.
Comparison of optical see-through and video see-through displays: Each approach
to see-through display design has its pluses and minuses. Optical see-through systems allow
the user to see the real world with full resolution and field of view. But the overlaid graphics in
current optical see-through systems are not opaque and therefore cannot completely
obscure the physical objects behind them. As a result, the superimposed text may be
hard to read against some backgrounds, and three-dimensional graphics may not
produce a convincing illusion. Furthermore, although the eye focuses physical objects
depending on their distance, virtual objects are all focused in the plane of the display.
This means that a virtual object that is intended to be at the same position as a
physical object may have a geometrically correct projection, yet the user may not be
able to view both objects in focus at the same time. In video see-through systems,
virtual objects can fully obscure physical ones and can be combined with them using a
rich variety of graphical effects. There is also no discrepancy between how the eye
focuses virtual and physical objects, because both are viewed on the same plane. The
limitations of current video technology, however, mean that the quality of the visual
experience of the real world is significantly decreased, essentially to the level of the
synthesized graphics, with everything focused at the same apparent distance. At
present, a video camera and display are no match for the human eye.
Fig 7: A conceptual diagram of a video see-through HMD [Ref 7]
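The chroma-keying step described above can be sketched directly: wherever the rendered graphics equal the reserved key colour, the combiner shows the camera pixel instead. Tiny nested lists stand in for image buffers here:

```python
# Chroma-key sketch: the graphics background is the reserved green that
# no virtual object uses; those pixels are replaced by the camera video,
# superimposing the virtual objects on the real world.

KEY = (0, 255, 0)  # reserved key colour

def chroma_key(graphics, camera):
    """Both inputs: HxW lists of (r,g,b) tuples; returns the merged frame."""
    return [
        [cam if gfx == KEY else gfx for gfx, cam in zip(grow, crow)]
        for grow, crow in zip(graphics, camera)
    ]
```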
An optical approach has the following advantages over a video
approach:
1. Simplicity:
Optical blending is simpler and cheaper than video blending. Optical approaches have
only one "stream" of video to worry about: the graphic images. The real world is seen
directly through the combiners, and that time delay is generally a few nanoseconds.
Video blending, on the other hand, must deal with separate video streams for the real
and virtual images. The two streams of real and virtual images must be properly
synchronized or temporal distortion results. Also, optical see-through HMDs with
narrow-field-of-view combiners offer views of the real world that have little distortion.
Video cameras almost always have some amount of distortion that must be
compensated for, along with any distortion from the optics in front of the display
devices. Since video requires cameras and combiners that optical approaches do not
need, video will probably be more expensive and complicated to build than optical-
based systems.
2. Resolution:
Video blending limits the resolution of what the user sees, both real and virtual, to the
resolution of the display devices. With current displays, this resolution is far less than
the resolving power of the fovea. Optical see-through also shows the graphic images
at the resolution of the display devices, but the user's view of the real world is not
degraded. Thus, video reduces the resolution of the real world, while optical see-
through does not.
3. Safety:
Video see-through HMDs are essentially modified closed-view HMDs. If the power
is cut off, the user is effectively blind. This is a safety concern in some applications.
In contrast, when power is removed from an optical see-through HMD, the user still
has a direct view of the real world. The HMD then becomes a pair of heavy
sunglasses, but the user can still see.
4. No eye offset:
With video see-through, the user's view of the real world is provided by the video
cameras. In essence, this puts his "eyes" where the video cameras are. The cameras
are not located exactly where the user's eyes are, creating an offset between the cameras
and the real eyes. The distance separating the cameras may also not be exactly the same as the
user's interpupillary distance (IPD). This difference between camera locations and eye
locations introduces displacements from what the user sees compared to what he
expects to see. For example, if the cameras are above the user's eyes, he will see the
world from a vantage point slightly taller than he is used to.
Video blending offers the following advantages over optical
blending:
1. Flexibility in composition strategies:
A basic problem with optical see-through is that the virtual objects do not completely
obscure the real-world objects, because the optical combiners allow light from both
virtual and real sources. Building an optical see-through HMD that can selectively
shut out the light from the real world is difficult. Any filter that would selectively
block out light must be placed in the optical path at a point where the image is in
focus, which obviously cannot be the user's eye. Therefore, the optical system must
have two places where the image is in focus: at the user's eye and at the point of the
hypothetical filter. This makes the optical design much more difficult and complex.
No existing optical see-through HMD blocks incoming light in this fashion. Thus, the
virtual objects appear ghost-like and semi-transparent. This damages the illusion of
reality because occlusion is one of the strongest depth cues. In contrast, video see-
through is far more flexible about how it merges the real and virtual images. Since
both the real and virtual are available in digital form, video see-through compositors
can, on a pixel-by-pixel basis, take the real, or the virtual, or some blend between the
two to simulate transparency.
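The pixel-by-pixel compositing that video blending makes possible can be sketched with the depth comparison mentioned earlier: the nearer surface wins at each pixel, so real objects can occlude virtual ones and vice versa. Flat lists stand in for image and depth buffers:

```python
# Per-pixel depth compositing sketch: with a depth value available for
# both the camera image and the rendered graphics, the nearer surface is
# chosen at every pixel, giving correct mutual occlusion.

def depth_composite(real_px, real_depth, virt_px, virt_depth):
    """All inputs are flat lists of equal length; depths in metres."""
    out = []
    for rp, rd, vp, vd in zip(real_px, real_depth, virt_px, virt_depth):
        out.append(vp if vd < rd else rp)   # nearer pixel occludes the other
    return out
```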
2. Wide field-of-view:
Distortions in optical systems are a function of the radial distance away from the
optical axis. The further one looks away from the center of the view, the larger the
distortions get. A digitized image taken through a distorted optical system can be
undistorted by applying image processing techniques to unwarp the image, provided
that the optical distortion is well characterized. This requires a significant amount of
computation, but this constraint will be less important in the future as computers
become faster. It is harder to build wide field-of-view displays with optical see-
through techniques. Any distortions of the user's view of the real world must be
corrected optically, rather than digitally, because the system has no digitized image of
the real world to manipulate. Complex optics are expensive and add weight to the
HMD. Wide field-of-view systems are an exception to the general trend of optical
approaches being simpler and cheaper than video approaches.
3. Real and virtual view delays can be matched:
Video offers an approach for reducing or avoiding problems caused by temporal
mismatches between the real and virtual images. Optical see-through HMDs offer an
almost instantaneous view of the real world but a delayed view of the virtual. This
temporal mismatch can cause problems. With video approaches, it is possible to delay
the video of the real world to match the delay from the virtual image stream.
4. Additional registration strategies:
In optical see-through, the only information the system has about the user's head
location comes from the head tracker. Video blending provides another source of
information: the digitized image of the real scene. This digitized image means that
video approaches can employ additional registration strategies unavailable to optical
approaches.
5. Easier to match the brightness of the real and virtual objects:
Both optical and video technologies have their roles, and the choice of technology
depends upon the application requirements. Many of the assembly and
repair prototypes use optical approaches, possibly because of the cost and safety
issues. If successful, the equipment would have to be replicated in large numbers to
equip workers on a factory floor. In contrast, most of the prototypes for medical
applications use video approaches, probably for the flexibility in blending real and
virtual and for the additional registration strategies offered.
2. Tracking and Orientation: The biggest challenge facing developers of
augmented reality is the need to know where the user is located in reference to his or her
surroundings. There's also the additional problem of tracking the movement of users'
eyes and heads. A tracking system has to recognize these movements and project the
graphics related to the real-world environment the user is seeing at any given
moment. Currently both video see-through and optical see-through displays
typically have lag in the overlaid material due to the tracking technologies currently
available. Tracking is of two types:
a. Indoor Tracking: Tracking is easier in small spaces than in large spaces. Trackers
typically have two parts: one worn by the tracked person or object and the other built into
the surrounding environment, usually within the same room. In optical trackers, the
targets (LEDs or reflectors, for instance) can be attached to the tracked person or to
the object, and an array of optical sensors can be embedded in the room's ceiling.
Alternatively, the tracked users can wear the sensors, and the targets can be fixed to the
ceiling. By calculating the distance to each visible target, the sensors can determine
the user's position and orientation. Researchers at the University of North Carolina at
Chapel Hill have developed a very precise system that works within 500 square feet. The
HiBall Tracking System is an optoelectronic tracking system made of two parts: six
user-mounted optical sensors, and infrared light-emitting diodes (LEDs) embedded in
special ceiling panels. The system uses the known locations of the LEDs, the known
geometry of the user-mounted optical sensors and a special algorithm to compute and
report the user's position and orientation. The system resolves linear motions of less
than 0.2 millimeters and angular motions of less than 0.03 degrees. It has an update rate
of more than 1500 Hz, and latency is kept at about one millisecond. In everyday life,
people rely on several senses, including what they see. In a similar fashion, "hybrid
trackers" draw on several sources of sensory information. For example, the wearer of
an AR display can be equipped with inertial sensors (gyroscopes and accelerometers)
to record changes in head orientation. Combining this information with data from
optical, video or ultrasonic devices greatly improves the accuracy of tracking.
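A hybrid tracker of the kind described can be sketched as a complementary filter: the gyroscope rate is integrated for fast response, and the result is pulled gently toward the absolute orientation reported by the optical (or ultrasonic) reference. The gain, rates and single-axis simplification are illustrative, not taken from any particular product:

```python
# Complementary-filter sketch for one orientation axis (degrees): fuse a
# fast-but-drifting gyroscope with a slow absolute reference tracker.

def fuse(angle, gyro_rate, reference_angle, dt, alpha=0.98):
    """One filter step: dead-reckon with the gyro, then correct toward
    the absolute reference. alpha close to 1 trusts the gyro short-term."""
    integrated = angle + gyro_rate * dt
    return alpha * integrated + (1 - alpha) * reference_angle

def track(gyro_rates, reference_angle, dt=0.001, start=0.0):
    """Run the filter over a stream of gyro samples against a fixed reference."""
    angle = start
    for rate in gyro_rates:
        angle = fuse(angle, rate, reference_angle, dt)
    return angle
```

The same structure extends to three axes and to accelerometer-based gravity references; the point is that the fast sensor supplies responsiveness while the slow one removes drift.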
b. Outdoor Tracking: Here head orientation is determined with a commercially
available hybrid tracker that combines gyroscopes and accelerometers with
magnetometers that measure the earth's magnetic field. For position tracking we take
advantage of a high-precision version of the increasingly popular Global Positioning
System (GPS) receiver. A GPS receiver can determine its position by monitoring radio
signals from navigation satellites. GPS receivers have an accuracy of about 10 to 30
meters. An augmented reality system would be worthless if the graphics projected
were of something 10 to 30 meters away from what you were actually looking at.
Users can get better results with a technique known as differential GPS. In this method,
the mobile GPS receiver also monitors signals from another GPS receiver and a radio
transmitter at a fixed location on the earth. This transmitter broadcasts corrections
based on the difference between the stationary GPS antenna's known and computed
positions. By using these signals to correct the satellite signals, differential GPS
can reduce the margin of error to less than one meter. The system is able to achieve
centimeter-level accuracy by employing real-time kinematic GPS, a more
sophisticated form of differential GPS that also compares the phases of the signals at
the fixed and mobile receivers. Trimble Navigation reports that they have increased
the precision of their Global Positioning System (GPS) by replacing local reference
stations with what they term a Virtual Reference Station (VRS). This new VRS will
enable users to obtain centimeter-level positioning without local reference stations;
it can achieve long-range, real-time kinematic (RTK) precision over greater distances
via wireless communications wherever they are located. The real-time kinematic
technique is a way to use GPS measurements to generate positioning within one to
two centimeters (0.39 to 0.79 inches). RTK is often used as the key component in
navigational systems or automatic machine guidance. Unfortunately, GPS is not the
ultimate answer to position tracking. The satellite signals are relatively weak and
easily blocked by buildings or even foliage. This rules out useful tracking indoors or in
places like midtown Manhattan, where rows of tall buildings block most of the sky.
GPS tracking works well in wide open spaces with relatively low buildings.
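The differential-GPS correction can be sketched in two steps: the base station compares its surveyed position with its own GPS fix and broadcasts the error, and the rover subtracts that same error from its fix. Coordinates are simplified to flat (east, north) metres for illustration; real corrections are applied per satellite, not per position:

```python
# Differential-GPS sketch: both receivers see nearly the same satellite
# errors, so the base station's measured error also corrects the rover.

def correction(base_known, base_measured):
    """Error vector the base station broadcasts (measured minus surveyed)."""
    return (base_measured[0] - base_known[0],
            base_measured[1] - base_known[1])

def corrected_fix(rover_measured, corr):
    """Rover applies the broadcast correction to its own measurement."""
    return (rover_measured[0] - corr[0], rover_measured[1] - corr[1])
```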
3. Mobile Computing Power: For a wearable augmented reality system,
there is still not enough computing power to create stereo 3-D graphics. So
researchers are using whatever they can get out of laptops and personal computers,
for now. Laptops are just now starting to be equipped with graphics processing units
(GPUs). Toshiba recently added an NVIDIA GPU to their notebooks that is able to process
more than 17 million triangles per second and 286 million pixels per second, which
can enable GPU-intensive programs, such as 3D games. But notebooks still lag far
behind: NVIDIA has developed a custom 300-MHz 3-D graphics processor for
Microsoft's Xbox game console that can produce 150 million polygons per second,
and polygons are more complicated than triangles. So you can see how far mobile
graphics chips have to go before they can create smooth graphics like the ones you
see on your home video-game system.
WORKING PRINCIPLE
Block Diagram:
1. Tracking: Getting the right information at the right time and the right place is the key in
all these applications. Personal digital assistants such as the Palm and the Pocket PC can
provide timely information using wireless networking and Global Positioning System (GPS)
receivers that constantly track the handheld devices.
2. Environment Sensing: It is the process of viewing or sensing the real-world scenes or
even the physical environment, which can be done either by using an optical combiner, a video
combiner or simply a retinal view.
3. Visualization and Rendering : Some emerging trends in the recent development of
human-computer interaction (HCI) can be observed. The trends are augmented reality,
computer supported cooperative work, ubiquitous computing, and heterogeneous user
interface. AR is a method for visual improvement or enrichment of the surrounding
environment by overlaying spatially aligned computer-generated information onto a human's
view (eyes).
4. Display: This corresponds to head-mounted devices where images are formed. Many
objects that do not exist in the real world can be put into this environment, and users can view
and examine these objects. Properties such as complexity, physical properties, etc. are
just parameters in the simulation.
Block diagram: Tracking → Environment Sensing → Visualization and Rendering → Display
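The four stages above can be sketched as a single per-frame loop. The stage functions below are hypothetical placeholders standing in for a real tracker, camera, and renderer, not an actual AR API:

```python
# Minimal sketch of the four-stage AR loop: track -> sense -> render -> display.
# Every stage implementation here is a hypothetical stub for illustration.

from dataclasses import dataclass

@dataclass
class Pose:
    x: float; y: float; z: float; heading: float  # user position and view direction

def track() -> Pose:
    """Tracking: estimate the user's pose (in practice from GPS, IMU, or vision)."""
    return Pose(0.0, 0.0, 1.6, 90.0)  # placeholder reading

def sense_environment(pose):
    """Environment sensing: capture the real-world view (camera frame stub)."""
    return {"frame": "camera image", "pose": pose}

def render_overlay(scene):
    """Visualization and rendering: generate graphics aligned to the pose."""
    return f"annotations drawn at heading {scene['pose'].heading}"

def display(real, virtual):
    """Display: combine real view and virtual overlay (optical or video combiner)."""
    return f"{real['frame']} + {virtual}"

# One iteration of the loop; a real system repeats this for every video frame.
pose = track()
scene = sense_environment(pose)
overlay = render_overlay(scene)
frame = display(scene, overlay)
print(frame)
```

In a real system the loop rate and the end-to-end latency of these four stages determine how well the virtual annotations stay registered to the real scene.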
ANALYSIS
Only recently have the capabilities of real-time video image processing, computer graphics
systems and new display technologies converged to make possible the display of a virtual
graphical image correctly registered with a view of the 3D environment surrounding the
user. Researchers working with augmented reality systems have proposed them as solutions
in many domains, ranging from entertainment to military training. Many of these domains,
such as medicine, have also been proposed for traditional virtual reality systems.
1. Medical: This domain is viewed as one of the more important for augmented reality
systems. Most of the medical applications deal with image guided surgery. Pre-operative
imaging studies, such as CT or MRI scans, of the patient provide the surgeon with the
necessary view of the internal anatomy. From these images the surgery is planned.
Visualization of the path through the anatomy to the affected area where, for example, a
tumor must be removed is done by first creating a 3D model from the multiple views and
slices in the preoperative study. Being able to accurately register the images at this point will
enhance the performance of the surgical team and eliminate the need for the painful and
cumbersome stereotactic frames.
2. Entertainment: A simple form of augmented reality has been in use in the entertainment
and news business for quite some time. Whenever we are watching the evening weather
report the weather reporter is shown standing in front of changing weather maps. In the studio
the reporter is actually standing in front of a blue or green screen. This real image is
augmented with computer generated maps using a technique called chroma-keying. It is also
possible to create a virtual studio environment so that the actors appear to be positioned
in a studio with computer-generated decor. Here the environments are carefully
modeled ahead of time, and the cameras are calibrated and precisely tracked. For some
applications, augmentations are added solely through real-time video tracking. Delaying the
video broadcast by a few video frames eliminates the registration problems caused by system
latency. In sports broadcasts, the predictable environment (uniformed players on a green, white,
and brown field) lets the system use custom chroma-keying techniques to draw the yellow
first-down line only on the field rather than over the players. With similar approaches,
advertisers can embellish broadcast video with virtual ads and product placements.
3. Military Training: The military has been using displays in cockpits that present
information to the pilot on the windshield of the cockpit or the visor of their flight helmet.
This is a form of augmented reality display. By equipping military personnel with helmet-
mounted visor displays or a special-purpose rangefinder, the activities of other units
participating in the exercise can be imaged. In wartime, the display of the real battlefield
scene could be augmented with annotation information or highlighting to emphasize hidden
enemy units.
4. Engineering Design: Imagine that a group of designers are working on the model of a
complex device for their clients. The designers and clients want to do a joint design review
even though they are physically separated. If each of them had a conference room that was
equipped with an augmented reality display this could be accomplished. The physical
prototype that the designers have mocked up is imaged and displayed in the client's
conference room in 3D. The clients can walk around the display, looking at different aspects
of it.
5. Robotics and Telerobotics: In the domain of robotics and telerobotics an augmented
display can assist the user of the system. A telerobotic operator uses a visual image of the
remote workspace to guide the robot. Annotation of the view would still be useful, just as it is
when the scene is in front of the operator. There is an added potential benefit: the operator can
plan and preview a motion on a virtual version of the robot, and the real robot motion could
then be executed directly, which in a telerobotic application would eliminate any oscillations
caused by long delays to the remote site.
6. Manufacturing, Maintenance and Repair: Recent advances in computer interface
design, and the ever increasing power and miniaturization of computer hardware, have
combined to make the use of augmented reality possible in demonstration test beds for
building construction, maintenance and renovation. When maintenance technicians
approach a new or unfamiliar piece of equipment, instead of opening several repair manuals
they could put on an augmented reality display. In this display the image of the equipment
would be augmented with annotations and information pertinent to the repair. The military
has developed a wireless vest worn by personnel that is attached to an optical see-through
display. The wireless connection allows the soldier to access repair manuals and images of
the equipment. Future versions might register those images on the live scene and provide
animation to show the procedures that must be performed.
7. Consumer Design: Virtual reality systems are already used for consumer design. Using
perhaps more of a graphics system than virtual reality, the typical home store will show you
a graphical picture of what a new deck will look like before you add it to your house. In
some high-tech beauty shops today you can see what a new hair style would look like on a
digitized image of yourself. But with an advanced augmented reality system you would be
able to see the view change as you moved. If the dynamics of hair are included in the
description of the virtual object, you would also see the motion of your hair as your head
moved.
8. Augmented Mapping: Paper maps can be brought to life using hardware that adds up-to-
the-minute information, photography and even video footage. Using AR techniques, a system
can be implemented that augments an ordinary tabletop map with additional information by
projecting it onto the map's surface. Such a system would help emergency workers, and a
simulation has been developed that projects live information about flooding and other natural
calamities. The system uses an overhead camera and image-recognition software on a
connected computer to identify the region from the map's topographical features. An
overhead projector then overlays relevant information, such as the location of a traffic
accident or even the position of a moving helicopter, onto the map.
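The chroma-keying technique mentioned in the entertainment application above can be sketched with plain NumPy: pixels close to the key colour (the studio's green screen) are replaced by the corresponding pixels of the computer-generated background. The key colour and distance threshold below are illustrative assumptions, not broadcast-grade values:

```python
import numpy as np

def chroma_key(foreground, background, key=(0, 255, 0), threshold=100):
    """Replace pixels near the key colour with the background image.

    foreground, background: HxWx3 uint8 arrays of identical shape.
    threshold: maximum Euclidean RGB distance to count as 'key' (assumed value).
    """
    diff = foreground.astype(np.int32) - np.array(key, dtype=np.int32)
    distance = np.sqrt((diff ** 2).sum(axis=-1))   # per-pixel distance to key colour
    mask = distance < threshold                    # True where the green screen shows
    composite = foreground.copy()
    composite[mask] = background[mask]             # weather map appears "behind" the reporter
    return composite

# Toy example: a 2x2 "studio" frame whose right column is green screen.
fg = np.array([[[200, 50, 50], [0, 255, 0]],
               [[200, 50, 50], [10, 250, 5]]], dtype=np.uint8)
bg = np.full((2, 2, 3), 128, dtype=np.uint8)       # flat grey "weather map"
out = chroma_key(fg, bg)
print(out[0, 1], out[0, 0])  # keyed pixel takes the background; reporter pixel is kept
```

Production systems add refinements such as soft mask edges and spill suppression, but the core idea is exactly this per-pixel colour-distance test.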
Advantages and Limitations
Advantages:
 Augmented Reality is set to revolutionize the mobile user experience, as gesture
and touch (multi-modal interaction) did in mobile phones. This will redefine the mobile
user experience for the next generation, making mobile search invisible and reducing
search effort for users.
 Augmented Reality, like multi-modal interaction (gestural interfaces) has a long
history of usability research, analysis and experimentation and therefore has a solid
history as an interface technique.
 Augmented Reality improves mobile usability by acting as the interface itself,
requiring little interaction. Imagine turning on your phone or pressing a button where
the space, people, objects around you are “sensed” by your mobile device- giving you
location based or context sensitive information on the fly.
Limitations:
 Current performance levels (speed) on today’s iPhone or similar touch devices like
the Google G1 will take a few generations to make Augmented Reality feasible as a
general interface technique accessible to the general public.
 Content may obscure or narrow a user’s interests or tastes. For example, knowing
where McDonald’s or Starbucks is in Paris or Rome might not interest users as much
as “off the beaten track information” that you might seek out in travel experiences.
 Privacy control will become a bigger issue than at today’s information-saturation
levels. Walking up to a stranger or a group of people might reveal status, thoughts
(tweets), or other information that usually comes only with an introduction, causing
unwarranted breaches of privacy.
MODIFICATIONS
 Size of the head gear (HMD):
At times it is very inconvenient to wear large head gear and go out somewhere.
Instead, the gear could be converted into small contact lenses, through which the user
could operate a computer in front of the eye at all times.
 More accurate sensing:
For a better experience the sensors must be accurate and well calibrated because the
output experience which is provided to the user totally depends upon the action of the
sensors. So it is preferred for the sensors to have minimum error percentage in data
capturing.
 Thin film battery:
The device must be wireless for the convenience of the user and we also need battery
for the power supply and that battery must be thin filmed to cut the size and weight of
the devices.
 Vision aid:
AR devices could be used as an aid for people with poor vision, such as myopia or
hypermetropia, by zooming in or out of whatever the user sees via the device, as
required by the user. For colour-blind users, the device could simply mark a coloured
object and show the name of the colour in text form.
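The colour-naming aid for colour-blind users described above reduces to mapping a sampled pixel to the nearest named colour before overlaying the name as text. The palette and the nearest-colour rule here are illustrative assumptions, not a standard:

```python
# Illustrative sketch: label a sampled RGB pixel with the nearest colour name,
# as a colour-blindness aid might do before rendering the name as overlay text.
# The palette below is an assumption for demonstration, not a standard.

PALETTE = {
    "red":    (255, 0, 0),
    "green":  (0, 255, 0),
    "blue":   (0, 0, 255),
    "yellow": (255, 255, 0),
    "white":  (255, 255, 255),
    "black":  (0, 0, 0),
}

def colour_name(rgb):
    """Return the palette entry with the smallest squared RGB distance."""
    def dist2(ref):
        return sum((a - b) ** 2 for a, b in zip(rgb, ref))
    return min(PALETTE, key=lambda name: dist2(PALETTE[name]))

# Traffic-light colours that a red-green colour-blind user may not distinguish:
print(colour_name((230, 40, 30)))   # -> red
print(colour_name((40, 200, 60)))   # -> green
```

A deployed aid would use a perceptual colour space and a far richer palette, but the overlay logic stays the same: sample, classify, draw the label.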
CONCLUSION
Augmented reality (AR) is far behind Virtual Environments in maturity. Several commercial vendors
sell complete, turnkey Virtual Environment systems. However, no commercial vendor currently sells
an HMD-based Augmented Reality system. A few monitor-based "virtual set" systems are available,
but today AR systems are primarily found in academic and industrial research laboratories.
The first deployed HMD-based AR systems will probably be in the application of aircraft
manufacturing. Both Boeing and McDonnell Douglas are exploring this technology. The former uses
optical approaches, while the latter is pursuing video approaches. Boeing has performed trial runs
with workers using a prototype system but has not yet made any deployment decisions. Annotation
and visualization applications in restricted, limited range environments are deployable today,
although much more work needs to be done to make them cost effective and flexible.
Applications in medical visualization will take longer. Prototype visualization aids have been used on
an experimental basis, but the stringent registration requirements and ramifications of mistakes will
postpone common usage for many years. AR will probably be used for medical training before it is
commonly used in surgery.
The next generation of combat aircraft will have helmet-mounted sights with graphics registered to
targets in the environment. These displays, combined with short-range steerable missiles that can
shoot at targets off-boresight, give a tremendous combat advantage to pilots in dogfights. Instead
of having to be directly behind his target in order to shoot at it, a pilot can now shoot at anything
within a 60-90 degree cone of his aircraft's forward centerline. Russia and Israel currently have
systems with this capability, and the U.S. is expected to field the AIM-9X missile with its associated
helmet-mounted sight in 2002.
Augmented Reality is a relatively new field, where most of the research efforts have occurred in the
past four years. Because of the numerous challenges and unexplored avenues in this area, AR will
remain a vibrant area of research for at least the next several years. After the basic problems with
AR are solved, the ultimate goal will be to generate virtual objects that are so realistic that they are
virtually indistinguishable from the real environment. Photorealism has been demonstrated in
feature films, but accomplishing this in an interactive application will be much harder. Lighting
conditions, surface reflections, and other properties must be measured automatically, in real time.
FUTURE SCOPE
 AR based medical training
 AR based 3D sketching kits for children
 AR based 3D CAD software
 AR based simulating software
 AR based tracking devices
 Fear conquering device
 Vision aid
 Could be used in driverless cars for better assistance
Fig 8: Futuristic AR Interface [Ref 8]
REFERENCES
1. Augmented Reality, Wikipedia. http://en.wikipedia.org/wiki/Augmented_reality
2. Augmented Reality: A Practical Guide (2008). http://media.pragprog.com/titles/cfar/intro.pdf
3. http://arcadia.eafit.edu.co/Publications/AugmentedRealityIADATEnglish.pdf
4. http://upcommons.upc.edu/eprints/bitstream/2117/9839/1/IEM_number16_WN.2.pdf
5. Jochim, S.: Augmented Reality in Modern Education. International Conference on Engineering
Education & Research, Korea (2009). http://robot.kut.ac.kr/papers/DeveEduVirtual.pdf
6. N. Norouzi et al., "Walking Your Virtual Dog: Analysis of Awareness and Proxemics with
Simulated Support Animals in Augmented Reality," 2019 IEEE International Symposium on
Mixed and Augmented Reality (ISMAR), Beijing, China, 2019, pp. 157-168.
7. S. Thanyadit, P. Punpongsanon and T. Pong, "ObserVAR: Visualization System for Observing
Virtual Reality Users using Augmented Reality," 2019 IEEE International Symposium on Mixed
and Augmented Reality (ISMAR), Beijing, China, 2019, pp. 258-268.
8. Pearson, E., Bohman, P. (Eds.): Proceedings of the World Conference on Educational
Multimedia, Hypermedia.
9. Dix, A., Finlay, J., Abowd, G., Beale, R.: Human-Computer Interaction, Third Edition.
Pearson/Prentice Hall Europe (2004).
10. Valenzuela, D., Shrivastava, P.: Interview as a Method for Qualitative Research (presentation).
http://www.public.asu.edu/~kroel/www500/Interview%20Fri.pdf
11. Thomas, W.: A Review of Research on Project Based Learning.
http://www.bobpearlman.org/BestPractices/PBL_Research.pdf
12. Shtereva, K., Ivanova, M., Raykov, P.: Project Based Learning in Microelectronics: Utilizing
ICAPP. Interactive Computer Aided Learning.
13. K. Kim, M. Billinghurst, G. Bruder, H. B. Duh and G. F. Welch, "Revisiting Trends in
Augmented Reality Research: A Review of the 2nd Decade of ISMAR (2008–2017)," IEEE
Transactions on Visualization and Computer Graphics, vol. 24, no. 11, pp. 2947-2962, Nov. 2018.
14. J. Alves, B. Marques, M. Oliveira, T. Araújo, P. Dias and B. S. Santos, "Comparing Spatial and
Mobile Augmented Reality for Guiding Assembling Procedures with Task Validation," 2019
IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC),
Porto, Portugal, 2019, pp. 1-6.
15. M. G. de S. Ribeiro, I. L. Mazuecos, F. Marinho and A. N. G. dos Santos, "Agile Explorations
in AR," IECON 2019 - 45th Annual Conference of the IEEE Industrial Electronics Society,
Lisbon, Portugal, 2019, pp. 2902-2909.
16. A. Erickson, K. Kim, R. Schubert, G. Bruder and G. Welch, "Is It Cold in Here or Is It Just Me?
Analysis of Augmented Reality Temperature Visualization for Computer-Mediated
Thermoception," 2019 IEEE International Symposium on Mixed and Augmented Reality
(ISMAR), Beijing, China, 2019, pp. 202-211.
17. L. Qian, J. Y. Wu, S. P. DiMaio, N. Navab and P. Kazanzides, "A Review of Augmented
Reality in Robotic-Assisted Surgery," IEEE Transactions on Medical Robotics and Bionics,
vol. 2, no. 1, pp. 1-16, Feb. 2020.

Figure sources:
1) Fig 1 - https://www.digitalartsonline.co.uk [Ref 1]
2) Fig 2 - https://whatis.techtarget.com [Ref 2]
3) Fig 3 - https://www.bustle.com [Ref 3]
4) Fig 4 - https://www.oberlo.in [Ref 4]
5) Fig 5 - https://researchgate.net [Ref 5]
6) Fig 6 - https://www.techrepublic.com [Ref 6]
7) Fig 7 - https://www.re-flekt.com [Ref 7]
8) Fig 8 - https://www.wired.com [Ref 8]
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationBeyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdf
 

  • 5. 5 LIST OF FIGURES

    Fig 1: AR Based Workspace
    Fig 2: Pokemon Go Game Interface
    Fig 3: Houzz Application Interface
    Fig 4: Youcam Makeup Application Interface
    Fig 5: Projection Based AR Dialer
    Fig 6: Optical see-through HMD conceptual diagram
    Fig 7: A conceptual diagram of a video see-through HMD
    Fig 8: Futuristic AR Interface
  • 6. 6 INTRODUCTION

Technology has advanced to the point where realism in virtual reality is very achievable. However, in our obsession to reproduce the world and human experience in virtual space, we overlook the most important aspect of what makes us who we are: our reality. It isn't enough just to trick the eye or fool the body and mind; one must capture the imagination in order to create truly compelling experiences.

On the spectrum between virtual reality, which creates immersive, computer-generated environments, and the real world, augmented reality is closer to the real world. Augmented reality adds graphics, sounds, haptics and smell to the natural world as it exists. We can expect video games to drive the development of augmented reality, but this technology will have countless applications. Everyone from tourists to military troops will benefit from the ability to place computer-generated graphics in their field of vision. Augmented reality will truly change the way we view the world.

Picture yourself walking or driving down the street. With augmented-reality displays, which will eventually look much like a normal pair of glasses, informative graphics will appear in your field of view and audio will coincide with whatever you see. These enhancements will be refreshed continually to reflect the movements of your head. In this report, we will take a look at this future technology, its components and how it will be used.

When the term Augmented Reality (AR) was coined in the early nineties, it described the application of virtual objects within physical reality. Early industrial systems combined the techniques of Sutherland and Sproull's first optical see-through Head Mounted Display (HMD), developed in the late 1960s, with complex, real-time computer-generated wiring diagrams and manuals. Both were registered with each other, and the manuals were embedded within the actual aircraft for intensely detailed procedures.
Augmented reality (AR) refers to computer displays that add virtual information to a user's sensory perceptions. Most AR research focuses on see-through devices, usually worn on the head, that overlay graphics and text on the user's view of his or her surroundings. In general, AR superimposes graphics over a real-world environment in real time.

Getting the right information at the right time and the right place is the key in all these applications. Personal digital assistants such as the Palm and the Pocket PC can provide timely information using wireless networking and Global Positioning System (GPS) receivers that constantly track the handheld devices. But what makes augmented reality different is how the information is presented: not on a separate display but integrated with the user's perceptions. This kind of interface minimizes the extra mental effort that a user has to expend when switching his or her attention back and forth between real-world tasks and a computer screen. In augmented reality, the user's view of the world and the computer interface literally become one.

Augmented reality is far more advanced than any technology you've seen in television broadcasts, although early versions of augmented reality are starting to appear in televised races and football games. These systems display graphics for only one point of view. Next-generation augmented-reality systems will display graphics for each viewer's perspective.
  • 7. 7 OVERVIEW

DEFINITION:

Augmented reality (AR) is a field of computer research which deals with the combination of the real world and computer-generated data. AR refers to computer displays that add virtual information to a user's sensory perceptions. It is a method for the visual improvement or enrichment of the surrounding environment by overlaying spatially aligned, computer-generated information onto the human view (eyes).

Augmented Reality was introduced as the opposite of virtual reality: instead of immersing the user into a synthesized, purely informational environment, the goal of AR is to augment the real world with information-handling capabilities. AR research focuses on see-through devices, usually worn on the head, that overlay graphics and text on the user's view of his or her surroundings. In general, AR superimposes graphics over a real-world environment in real time. An AR system adds virtual computer-generated objects, audio and other sense enhancements to a real-world environment in real time. These enhancements are added in such a way that the viewer cannot tell the difference between the real and the augmented world.

PROPERTIES:

An AR system has the following properties:
1. It combines real and virtual objects in a real environment.
2. It runs interactively, and in real time.
3. It registers (aligns) real and virtual objects with each other.

This definition does not limit AR to particular display technologies, such as a head-mounted display (HMD), nor to the sense of sight: AR can potentially apply to all senses, including hearing, touch, and smell.

HISTORY:

The beginnings of AR, as we define it, date back to Sutherland's work in the 1960s, which used a see-through HMD to present 3D graphics. However, only over the past decade has there been enough work to refer to AR as a research field. In 1997, Azuma published a survey that defined the field, described many problems, and summarized the developments up to that point.
Since then, AR's growth and progress have been remarkable. In the late 1990s, several conferences on AR began, including the International Workshop and Symposium on Augmented Reality, the International Symposium on Mixed Reality, and the Designing Augmented Reality Environments workshop. Some well-funded organizations formed that focused on AR, notably the Mixed Reality Systems Lab in Japan and the Arvika consortium in Germany.
  • 8. 8 SOME EXAMPLES:

Augmented reality uses existing reality and physical objects to trigger computer-generated enhancements over the top of reality, in real time. Essentially, AR is a technology that lays computer-generated images over a user's view of the real world. These images typically take shape as 3D models, videos and information. How they are overlaid depends on the nature of the experience and on the hardware you're viewing the experience on. The simplest way is to use your phone, where what you see through the camera has digital elements added to it.

There are plenty of opportunities for designers and brands to explore the possibilities of augmented reality and how it can enhance creative work, provide entertainment or benefit society. Tech giants such as Microsoft, Google and Apple are experimenting with AR, as is everything from children's books to 3D modelling for gaming. The abundance of free content-creation apps is democratising AR, which means anyone (not just developers) can create their own AR experiences.

Fig 1: AR Based Workspace [Ref 1]
  • 9. 9 1. AR Game (Pokemon Go):

Pokémon GO, considered the breakthrough AR app for gaming, uses a smartphone's camera, gyroscope, clock and GPS to enable a location-based augmented reality environment. A map of the current environment displays on the screen, and a rustle of grass indicates the presence of a Pokémon; a tap of the touchscreen brings up the capture display. In AR mode, the screen displays the Pokémon in the user's real-world environment. The algorithm is as follows:

- Location Tracking: Much like Google Maps or Waze, Pokémon GO tracks your phone's location using GPS, integrating this information with an in-game map.
- Deciding Where (And When) Pokémon Appear: The game uses local geographic data to place the Pokémon in appropriate habitats. It also uses your phone's clock to track the time, so if you're out hunting at night, you're more likely to see fairy or nocturnal types.
- Visualizing The Pokémon: The game uses your phone's camera to place an image of a Pokémon within your surroundings, and the GPS, accelerometer, and compass give the game an idea of which direction your phone is pointing.

In this way, ultimately, a Pokémon is caught.

Fig 2: Pokemon Go Game Interface [Ref 2]
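The placement logic in the last step can be sketched in a few lines: compute the compass bearing from the user's GPS fix to the object's coordinates, compare it with the phone's heading, and map the angular difference onto the screen. This is a minimal illustration of the general idea, not Niantic's actual code; the field of view, screen width and coordinates below are assumed example values.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from the user (lat1, lon1) to an object (lat2, lon2)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

def screen_x(bearing, heading, fov_deg=60.0, width_px=1080):
    """Horizontal pixel position of the object, or None if it lies outside
    the camera's horizontal field of view."""
    # Signed angle between where the phone points and where the object lies, in (-180, 180].
    delta = (bearing - heading + 180.0) % 360.0 - 180.0
    if abs(delta) > fov_deg / 2:
        return None  # object is off-screen
    return round(width_px / 2 + (delta / (fov_deg / 2)) * (width_px / 2))

# A creature due east of the user, phone heading 80 degrees:
b = bearing_deg(22.4800, 88.4000, 22.4800, 88.4100)  # roughly 90 degrees (east)
print(screen_x(b, heading=80.0))                      # a column in the right half of the screen
```

Turning the heading away from the object (say to 200 degrees) makes `screen_x` return `None`, which is why the creature disappears from view when you point the phone elsewhere.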
  • 10. 10 2. AR Based Furniture Application (Houzz App):

A great platform for selling home goods and furniture, Houzz is one of the top apps for planning interior layouts and design. Primarily a home-improvement app, Houzz has ecommerce functionality, allowing users to browse and buy products in-app.

The "View in My Room" feature uses the existing sensors and camera in the iPhone to produce AR images, placing the selected furniture at a desired spot within the user's room in real time. It even goes as far as showing what the product will look like in different lighting. The user can move the furniture around in 3D space, resize it to fit the room (which can be a bit of an issue, given that the size of a furniture item is important and not something the user can change after the fact), and then take a snapshot of the scene to share with others (including the home-remodelling professionals also found on Houzz). This scene doesn't stay anchored in the room, though: if the user moves the phone or tablet around, everything else moves, too. So this is essentially just a basic 3D editor with the camera image projected in the background.

Fig 3: Houzz Application Interface [Ref 3]
  • 11. 11 3. AR Based Makeup Application (YouCam App):

YouCam Makeup is an AR-based application for applying makeup to the face, enhancing the appearance of the person in real time. In terms of technology, YouCam Makeup and other cosmetics apps are built on three core components:

- Face-tracking algorithms to detect and track the face for a real-time experience.
- Neural networks to extract the needed areas of the face, e.g. lips, hair, skin, eyebrows, etc.
- A 3D renderer that enables realistic makeup representation in terms of colour, lighting and texture (glossy, matte, etc.).

In proprietary makeover solutions, these three components are joined into one virtual-makeup pipeline.

Fig 4: Youcam Makeup Application Interface [Ref 4]
  • 12. 12 4. Projection Based AR:

Projection-based AR feels more attractive (at least as of now) than an AR app you can install on your phone. As is obvious from its name, projection-based AR functions by projecting onto objects. What makes it interesting is the wide array of possibilities. One of the simplest is the projection of light onto a surface.

Speaking of lights, surfaces and AR: did we ever think that those lines on your fingers (which divide each finger into three parts) could create 12 buttons? A look at the image below quickly shows what we're talking about. The picture depicts one of the simplest uses of projection-based AR, where light is projected onto a surface and the interaction is done by touching the projected surface with a hand. The system detects where the user has touched the surface by differentiating between an expected (or known) projection image and the projection as altered by the interference of the user's hand. Projection-based AR can build you a castle in the air, or a dialer on your hand.

Fig 5: Projection Based AR Dialer [Ref 5]
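The touch-detection idea described above, comparing the expected projection with what the camera actually observes, can be sketched with NumPy. This is a toy illustration on synthetic grayscale frames, not a production hand-segmentation algorithm; the 4x3 dialer grid and the difference threshold are assumptions.

```python
import numpy as np

def touched_button(expected, observed, rows=4, cols=3, threshold=40):
    """Return the (row, col) of the projected button whose area differs most
    from the expected projection, or None if nothing occludes the surface."""
    diff = np.abs(observed.astype(int) - expected.astype(int))
    h, w = diff.shape
    best, best_score = None, 0.0
    for r in range(rows):
        for c in range(cols):
            cell = diff[r * h // rows:(r + 1) * h // rows,
                        c * w // cols:(c + 1) * w // cols]
            score = cell.mean()  # average brightness change inside this button
            if score > threshold and score > best_score:
                best, best_score = (r, c), score
    return best

# Simulate a hand covering the middle-right button of a 4x3 dialer grid.
expected = np.full((120, 90), 200, dtype=np.uint8)   # plain projected surface
observed = expected.copy()
observed[30:60, 60:90] = 40                          # dark occlusion in cell (1, 2)
print(touched_button(expected, observed))            # → (1, 2)
```

Real systems would add camera calibration and lighting compensation, but the core operation stays the same: look for the cell where the projection no longer matches expectations.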
  • 13. 13 DIFFERENT AR MECHANISMS:

An AR system tracks the position and orientation of the user's head so that the overlaid material can be aligned with the user's view of the world. Through this process, known as registration, graphics software can place a three-dimensional image of a tea cup, for example, on top of a real saucer and keep the virtual cup fixed in that position as the user moves about the room. AR systems employ some of the same hardware technologies used in virtual reality research, but there's a crucial difference: whereas virtual reality brashly aims to replace the real world, augmented reality respectfully supplements it.

Augmented reality is still in an early stage of research and development at various universities and high-tech companies. Eventually, possibly by the end of this decade, we will see the first mass-marketed augmented reality system, which one researcher calls "the Walkman of the 21st century". What augmented reality attempts to do is not only superimpose graphics over a real environment in real time, but also change those graphics to accommodate the user's head and eye movements, so that the graphics always fit the perspective. Here are the three components needed to make an augmented-reality system work:

1. Head-mounted display
2. Tracking system
3. Mobile computing power

1. Head-Mounted Display:

Just as monitors allow us to see text and graphics generated by computers, head-mounted displays (HMDs) will enable us to view graphics and text created by augmented-reality systems. There are two basic types of HMDs:

a. Optical see-through display:

A simple approach to an optical see-through display employs a mirror beam splitter: a half-silvered mirror that both reflects and transmits light. If properly oriented in front of the user's eye, the beam splitter can reflect the image of a computer display into the user's line of sight, yet still allow light from the surrounding world to pass through.
Such beam splitters, which are called combiners, have long been used in head-up displays for fighter-jet pilots (and, more recently, for drivers of luxury cars). Lenses can be placed between the beam splitter and the computer display to focus the image so that it appears at a comfortable viewing distance. If a display and optics are provided for each eye, the view can be in stereo. Sony makes a see-through display that some researchers use, called the "Glasstron".

Fig 6: Optical see-through HMD conceptual diagram [Ref 6]
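The role of the focusing lens can be illustrated with the thin-lens equation, 1/f = 1/d_o + 1/d_i: a display placed just inside the lens's focal length forms a magnified virtual image much farther away, which is what makes the graphics comfortable to view. The focal length and display distance below are made-up example values, not the Glasstron's actual optics.

```python
def virtual_image_distance(f_mm, d_o_mm):
    """Thin-lens equation solved for the image distance: 1/d_i = 1/f - 1/d_o.
    A negative result means a virtual image on the display's side of the
    lens, which is exactly what an HMD eyepiece wants."""
    return 1.0 / (1.0 / f_mm - 1.0 / d_o_mm)

# A display 25 mm from a 30 mm focal-length eyepiece:
d_i = virtual_image_distance(30.0, 25.0)
print(round(d_i))  # -150: a virtual image about 15 cm away, far easier to focus on than 25 mm
```

Moving the display closer to the focal plane pushes the virtual image farther out, so designers can tune the apparent distance of the overlaid graphics.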
  • 14. 14 b. Video see-through displays:

A video see-through display uses video-mixing technology, originally developed for television special effects, to combine the image from a head-worn camera with synthesized graphics. The merged image is typically presented on an opaque head-worn display. With careful design the camera can be positioned so that its optical path is close to that of the user's eye; the video image thus approximates what the user would normally see. As with optical see-through displays, a separate system can be provided for each eye to support stereo vision.

Video composition can be done in more than one way. A simple way is to use chroma-keying, a technique used in many video special effects. The background of the computer graphics image is set to a specific colour, say green, which none of the virtual objects use. Then the combining step replaces all green areas with the corresponding parts from the video of the real world. This has the effect of superimposing the virtual objects over the real world. A more sophisticated composition would use depth information at each pixel of the real-world images; it could combine the real and virtual images by a pixel-by-pixel depth comparison. This would allow real objects to occlude virtual objects and vice versa.

Comparison of optical see-through and video see-through displays:

Each approach to see-through display design has its pluses and minuses. Optical see-through systems allow the user to see the real world with full resolution and field of view. But the overlaid graphics in current optical see-through systems are not opaque and therefore cannot completely obscure the physical objects behind them. As a result, the superimposed text may be hard to read against some backgrounds, and three-dimensional graphics may not produce a convincing illusion. Furthermore, although the eye focuses physical objects depending on their distance, virtual objects are all focused in the plane of the display.
This means that a virtual object that is intended to be at the same position as a physical object may have a geometrically correct projection, yet the user may not be able to view both objects in focus at the same time.

In video see-through systems, virtual objects can fully obscure physical ones and can be combined with them using a rich variety of graphical effects. There is no discrepancy between how the eye focuses virtual and physical objects, because both are viewed in the same plane. The limitations of current video technology, however, mean that the quality of the visual experience of the real world is significantly decreased, essentially to the level of the synthesized graphics, with everything focused at the same apparent distance. At present, a video camera and display are no match for the human eye.

Fig 7: A conceptual diagram of a video see-through HMD [Ref 7]
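Both composition strategies for video see-through, chroma-keying and per-pixel depth comparison, can be sketched in a few lines of NumPy. This is a toy illustration on tiny synthetic frames; the pure-green key value is an assumption.

```python
import numpy as np

GREEN = np.array([0, 255, 0], dtype=np.uint8)  # key colour that no virtual object uses

def chroma_key(graphics, camera):
    """Wherever the rendered graphics are pure green, show the camera image."""
    is_key = np.all(graphics == GREEN, axis=-1)
    return np.where(is_key[..., None], camera, graphics)

def depth_composite(graphics, g_depth, camera, c_depth):
    """Per-pixel depth test: the nearer surface wins, so real objects can
    occlude virtual ones and vice versa."""
    virtual_wins = g_depth < c_depth
    return np.where(virtual_wins[..., None], graphics, camera)

# 2x2 test frame: one red virtual pixel, the rest keyed out to the camera.
camera = np.full((2, 2, 3), 100, dtype=np.uint8)      # real-world video
graphics = np.zeros((2, 2, 3), dtype=np.uint8)
graphics[...] = GREEN
graphics[0, 0] = [255, 0, 0]                           # a red virtual pixel
out = chroma_key(graphics, camera)
print(out[0, 0], out[1, 1])  # virtual pixel kept; keyed pixel shows the camera
```

The depth variant needs per-pixel range data for the real scene, which is exactly why the report calls it the "more sophisticated" option: the keying version needs only colour.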
  • 15. 15 An optical approach has the following advantages over a video approach:

1. Simplicity: Optical blending is simpler and cheaper than video blending. Optical approaches have only one "stream" of video to worry about: the graphic images. The real world is seen directly through the combiners, and that time delay is generally a few nanoseconds. Video blending, on the other hand, must deal with separate video streams for the real and virtual images, and the two streams must be properly synchronized or temporal distortion results. Also, optical see-through HMDs with narrow-field-of-view combiners offer views of the real world that have little distortion. Video cameras almost always have some amount of distortion that must be compensated for, along with any distortion from the optics in front of the display devices. Since video requires cameras and combiners that optical approaches do not need, video will probably be more expensive and complicated to build than optical-based systems.

2. Resolution: Video blending limits the resolution of what the user sees, both real and virtual, to the resolution of the display devices. With current displays, this resolution is far less than the resolving power of the fovea. Optical see-through also shows the graphic images at the resolution of the display devices, but the user's view of the real world is not degraded. Thus, video reduces the resolution of the real world, while optical see-through does not.

3. Safety: Video see-through HMDs are essentially modified closed-view HMDs. If the power is cut off, the user is effectively blind, which is a safety concern in some applications. In contrast, when power is removed from an optical see-through HMD, the user still has a direct view of the real world. The HMD then becomes a pair of heavy sunglasses, but the user can still see.

4. No eye offset: With video see-through, the user's view of the real world is provided by the video cameras.
In essence, this puts his "eyes" where the video cameras are. In most configurations, the cameras are not located exactly where the user's eyes are, creating an offset between the cameras and the real eyes. The distance separating the cameras may also not be exactly the same as the user's interpupillary distance (IPD). This difference between camera locations and eye locations introduces displacements from what the user sees compared to what he expects to see. For example, if the cameras are above the user's eyes, he will see the world from a vantage point slightly taller than he is used to.
  • 16. 16 Video blending offers the following advantages over optical blending:

1. Flexibility in composition strategies: A basic problem with optical see-through is that the virtual objects do not completely obscure the real-world objects, because the optical combiners allow light from both virtual and real sources. Building an optical see-through HMD that can selectively shut out the light from the real world is difficult. Any filter that would selectively block out light must be placed in the optical path at a point where the image is in focus, which obviously cannot be the user's eye. Therefore, the optical system must have two places where the image is in focus: at the user's eye and at the point of the hypothetical filter. This makes the optical design much more difficult and complex, and no existing optical see-through HMD blocks incoming light in this fashion. Thus, the virtual objects appear ghost-like and semi-transparent. This damages the illusion of reality, because occlusion is one of the strongest depth cues. In contrast, video see-through is far more flexible about how it merges the real and virtual images. Since both the real and the virtual are available in digital form, video see-through compositors can, on a pixel-by-pixel basis, take the real, or the virtual, or some blend between the two to simulate transparency.

2. Wide field of view: Distortions in optical systems are a function of the radial distance away from the optical axis: the further one looks from the center of the view, the larger the distortions get. A digitized image taken through a distorted optical system can be undistorted by applying image-processing techniques to unwarp the image, provided that the optical distortion is well characterized. This requires a significant amount of computation, but this constraint will be less important in the future as computers become faster. It is harder to build wide field-of-view displays with optical see-through techniques.
Any distortions of the user's view of the real world must be corrected optically, rather than digitally, because the system has no digitized image of the real world to manipulate. Complex optics are expensive and add weight to the HMD. Wide field-of-view systems are an exception to the general trend of optical approaches being simpler and cheaper than video approaches.

3. Real and virtual view delays can be matched: Video offers an approach for reducing or avoiding problems caused by temporal mismatches between the real and virtual images. Optical see-through HMDs offer an almost instantaneous view of the real world but a delayed view of the virtual, and this temporal mismatch can cause problems. With video approaches, it is possible to delay the video of the real world to match the delay from the virtual image stream.

4. Additional registration strategies: In optical see-through, the only information the system has about the user's head location comes from the head tracker. Video blending provides another source of information: the digitized image of the real scene. This digitized image means that
  • 17. 17 video approaches can employ additional registration strategies unavailable to optical approaches.

5. Easier matching of the brightness of real and virtual objects.

Both optical and video technologies have their roles, and the choice of technology depends upon the application requirements. Many of the assembly and repair prototypes use optical approaches, possibly because of the cost and safety issues: if successful, the equipment would have to be replicated in large numbers to equip workers on a factory floor. In contrast, most of the prototypes for medical applications use video approaches, probably for the flexibility in blending real and virtual and for the additional registration strategies offered.

2. Tracking and Orientation:

The biggest challenge facing developers of augmented reality is the need to know where the user is located in reference to his or her surroundings. There's also the additional problem of tracking the movement of the user's eyes and head. A tracking system has to recognize these movements and project the graphics related to the real-world environment the user is seeing at any given moment. Currently, both video see-through and optical see-through displays typically have lag in the overlaid material due to the tracking technologies available. Tracking is of two types:

a. Indoor Tracking:

Tracking is easier in small spaces than in large spaces. Trackers typically have two parts: one worn by the tracked person or object, and the other built into the surrounding environment, usually within the same room. In optical trackers, the targets, LEDs or reflectors for instance, can be attached to the tracked person or object, and an array of optical sensors can be embedded in the room's ceiling. Alternatively, the tracked users can wear the sensors, and the targets can be fixed to the ceiling. By calculating the distance to each visible target, the sensors can determine the user's position and orientation.
Researchers at the University of North Carolina at Chapel Hill have developed a very precise system that works within 500 square feet. The HiBall Tracking System is an optoelectronic tracking system made of two parts: six user-mounted optical sensors, and infrared light-emitting diodes (LEDs) embedded in special ceiling panels. The system uses the known locations of the LEDs, the known geometry of the user-mounted optical sensors and a special algorithm to compute and report the user's position and orientation. The system resolves linear motion of less than 0.2 millimeters and angular motion of less than 0.03 degrees. It has an update rate of more than 1500 Hz, and latency is kept at about one millisecond.

In everyday life, people rely on several senses, including what they see. In a similar fashion, "hybrid trackers" draw on several sources of sensory information. For example, the wearer of an AR display can be equipped with inertial sensors (gyroscopes and accelerometers) to record changes in head orientation. Combining this information with data from optical, video or ultrasonic devices greatly improves the accuracy of tracking.
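A common way to fuse the inertial sensors mentioned above is a complementary filter: the gyroscope is trusted over short intervals for responsiveness, while a slower absolute reference (here, an accelerometer-derived tilt reading) gently corrects its drift. This is a generic textbook sketch, not the HiBall's algorithm; the blend factor of 0.98 and the sensor values are assumed for illustration.

```python
def complementary_filter(gyro_rates, tilt_refs, dt=0.01, alpha=0.98, angle0=0.0):
    """Fuse gyroscope angular rates (deg/s) with noisy absolute tilt
    readings (deg) into a drift-corrected angle estimate."""
    angle = angle0
    history = []
    for rate, ref in zip(gyro_rates, tilt_refs):
        # Integrate the gyro for fast response, then pull gently toward
        # the absolute reference to cancel accumulated drift.
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * ref
        history.append(angle)
    return history

# Stationary head, but the gyro has a +5 deg/s bias; the tilt reference reads 0.
est = complementary_filter([5.0] * 500, [0.0] * 500)
print(round(est[-1], 2))  # settles near 2.45 deg instead of drifting toward 25 deg
```

With pure gyro integration the estimate would drift by 25 degrees over these five seconds; the absolute reference bounds the error, which is exactly the benefit of hybrid tracking.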
  • 18. 18 b. Outdoor Tracking:

Here, head orientation is determined with a commercially available hybrid tracker that combines gyroscopes and accelerometers with magnetometers that measure the earth's magnetic field. For position tracking, we take advantage of a high-precision version of the increasingly popular Global Positioning System receiver. A GPS receiver can determine its position by monitoring radio signals from navigation satellites. Ordinary GPS receivers have an accuracy of about 10 to 30 meters; an augmented reality system would be worthless if the projected graphics depicted something 10 to 30 meters away from what you were actually looking at.

Users can get better results with a technique known as differential GPS. In this method, the mobile GPS receiver also monitors signals from another GPS receiver and a radio transmitter at a fixed location on the earth. This transmitter broadcasts a correction based on the difference between the stationary GPS antenna's known and computed positions. By using these signals to correct the satellite signals, differential GPS can reduce the margin of error to less than one meter. The system is able to achieve centimeter-level accuracy by employing real-time kinematic GPS, a more sophisticated form of differential GPS that also compares the phases of the signals at the fixed and mobile receivers.

Trimble Navigation reports that they have increased the precision of their global positioning system by replacing local reference stations with what they term a Virtual Reference Station (VRS). This VRS will enable users to obtain centimeter-level positioning without local reference stations; it can achieve long-range, real-time kinematic (RTK) precision over greater distances via wireless communications wherever they are located. The real-time kinematic technique is a way to use GPS measurements to generate positioning within one to two centimeters (0.39 to 0.79 inches).
RTK is often used as the key component in navigation systems or automatic machine guidance. Unfortunately, GPS is not the ultimate answer to position tracking. The satellite signals are relatively weak and easily blocked by buildings or even foliage. This rules out useful tracking indoors or in places like midtown Manhattan, where rows of tall buildings block most of the sky. GPS tracking works well in wide-open spaces with relatively low buildings.

3. Mobile Computing Power:

For a wearable augmented reality system, there is still not enough computing power to create stereo 3D graphics, so researchers are using whatever they can get out of laptops and personal computers for now. Laptops are just now starting to be equipped with graphics processing units (GPUs): Toshiba recently added to its notebooks an NVIDIA GPU that is able to process more than 17 million triangles per second and 286 million pixels per second, which can enable GPU-intensive programs such as 3D games. But notebooks still lag far behind: NVIDIA has developed a custom 300 MHz 3D graphics processor for Microsoft's Xbox game console that can produce 150 million polygons per second, and polygons are more complicated than triangles. So you can see how far mobile graphics chips have to go before they can create smooth graphics like the ones you see on your home video-game system.
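The differential-GPS correction described earlier reduces to simple arithmetic: the base station knows its true position, so the error in its computed fix approximates the error affecting a nearby mobile receiver at the same moment. A minimal per-axis sketch, using made-up local east/north coordinates rather than real survey data:

```python
def dgps_correct(mobile_computed, base_computed, base_known):
    """Apply a differential-GPS correction: subtract the base station's
    measured error (computed minus known position) from the mobile fix,
    axis by axis."""
    return tuple(
        m - (bc - bk)
        for m, bc, bk in zip(mobile_computed, base_computed, base_known)
    )

# The base station's fix is off by (+8.0, -5.0) meters; the mobile receiver
# is assumed to see roughly the same atmospheric error.
base_known = (1000.0, 2000.0)
base_computed = (1008.0, 1995.0)
mobile_fix = (1508.0, 2495.0)
print(dgps_correct(mobile_fix, base_computed, base_known))  # → (1500.0, 2500.0)
```

RTK goes further by comparing carrier-phase measurements at both receivers, but the broadcast-a-correction structure is the same.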
  • 19. 19 WORKING PRINCIPLE

Block Diagram: Tracking → Environment Sensing → Visualization and Rendering → Display

1. Tracking: Getting the right information at the right time and the right place is the key in all these applications. Personal digital assistants such as the Palm and the Pocket PC can provide timely information using wireless networking and Global Positioning System (GPS) receivers that constantly track the handheld devices.

2. Environment Sensing: This is the process of viewing or sensing the real-world scene or physical environment, which can be done using an optical combiner, a video combiner, or simply the retinal view.

3. Visualization and Rendering: Several trends can be observed in the recent development of human-computer interaction (HCI): augmented reality, computer-supported cooperative work, ubiquitous computing, and heterogeneous user interfaces. AR is a method for visual improvement or enrichment of the surrounding environment, overlaying spatially aligned, computer-generated information onto the human's view.

4. Display: This corresponds to the head-mounted device where the images are formed. Many objects that do not exist in the real world can be put into this environment, and users can view and examine them; properties such as complexity and physical behavior are just parameters in the simulation.
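The four stages of the block diagram can be sketched as a single loop iteration, with each stage reduced to a stub. This is an illustrative skeleton only; all function names and data values here are invented for the example.

```python
# Minimal sketch of one pass through the AR pipeline from the block
# diagram: Tracking -> Environment Sensing -> Visualization and
# Rendering -> Display. Real systems run this loop many times a second.

def track():
    """Tracking: estimate the user's pose (position + orientation)."""
    return {"position": (12.5, 3.0, 1.7), "heading_deg": 48.0}

def sense_environment():
    """Environment sensing: capture the real-world view (camera frame)."""
    return {"frame": "camera_frame_0001"}

def render(pose, scene):
    """Visualization and rendering: align virtual content with the pose."""
    label = {"text": "Coffee shop 30 m ahead", "anchor": pose["position"]}
    return {"frame": scene["frame"], "overlays": [label]}

def display(composite):
    """Display: present the combined image on the head-mounted display."""
    return f"{composite['frame']} + {len(composite['overlays'])} overlay(s)"

# One iteration of the loop.
out = display(render(track(), sense_environment()))
print(out)  # camera_frame_0001 + 1 overlay(s)
```

The key point the diagram makes is the data flow: rendering consumes both the tracked pose and the sensed scene, and only the composited result reaches the display.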
  • 20. 20 ANALYSIS

Only recently have the capabilities of real-time video image processing, computer graphics systems, and new display technologies converged to make possible the display of a virtual graphical image correctly registered with a view of the 3-D environment surrounding the user. Researchers working with augmented reality systems have proposed them as solutions in many domains, ranging from entertainment to military training. Many of these domains, such as medicine, have also been proposed for traditional virtual reality systems.

1. Medical: This domain is viewed as one of the more important ones for augmented reality systems. Most medical applications deal with image-guided surgery. Pre-operative imaging studies of the patient, such as CT or MRI scans, provide the surgeon with the necessary view of the internal anatomy, and the surgery is planned from these images. Visualization of the path through the anatomy to the affected area where, for example, a tumor must be removed is done by first creating a 3-D model from the multiple views and slices in the pre-operative study. Being able to accurately register the images at this point would enhance the performance of the surgical team and eliminate the need for painful and cumbersome stereotactic frames.

2. Entertainment: A simple form of augmented reality has been in use in the entertainment and news business for quite some time. Whenever we watch the evening weather report, the reporter is shown standing in front of changing weather maps. In the studio, the reporter is actually standing in front of a blue or green screen. This real image is augmented with computer-generated maps using a technique called chroma-keying. It is also possible to create a virtual studio environment so that actors appear to be positioned in a studio with computer-generated decor.
In this approach, the environments are carefully modeled ahead of time, and the cameras are calibrated and precisely tracked. For some applications, augmentations are added solely through real-time video tracking. Delaying the video broadcast by a few video frames eliminates the registration problems caused by system latency. Furthermore, the predictable environment (uniformed players on a green, white, and brown field) lets the system use custom chroma-keying techniques to draw the yellow line only on the field rather than over the players. With similar approaches, advertisers can embellish broadcast video with virtual ads and product placements.

3. Military Training: The military has been using displays in cockpits that present information to the pilot on the windshield of the cockpit or the visor of the flight helmet. This is a form of augmented reality display. By equipping military personnel with helmet-mounted visor displays or a special-purpose rangefinder, the activities of other units participating in an exercise can be imaged. In wartime, the display of the real battlefield scene could be augmented with annotations or highlighting to emphasize hidden enemy units.
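The chroma-keying idea described above, replacing pixels close to the blue- or green-screen key colour with pixels from a second image, can be illustrated on a toy scale. The pixel values and threshold below are made up, and production keyers work on full frames with far more careful colour-distance and edge handling.

```python
# Illustrative chroma-key compositing: any foreground pixel whose colour
# is close to the key colour (the studio's green screen) is replaced by
# the corresponding background pixel (the computer-generated weather map).

def chroma_key(foreground, background, key=(0, 255, 0), threshold=100):
    """Replace key-coloured pixels in `foreground` with `background` pixels."""
    def distance(a, b):
        # Euclidean distance in RGB space between two pixels.
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    return [bg if distance(fg, key) < threshold else fg
            for fg, bg in zip(foreground, background)]

# Two tiny "images" as flat pixel lists: the reporter in front of a
# green screen, and the weather map to key in behind them.
studio = [(200, 180, 160), (0, 255, 0), (0, 250, 5)]      # skin, green, green
weather_map = [(10, 10, 80), (10, 10, 80), (10, 10, 80)]  # blue map pixels

composited = chroma_key(studio, weather_map)
print(composited)  # [(200, 180, 160), (10, 10, 80), (10, 10, 80)]
```

The skin-toned pixel survives because it is far from the key colour, while both near-green pixels are swapped for the map, which is exactly what lets the sports-broadcast systems above draw the yellow line on the field but not over the players.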
  • 21. 21 4. Engineering Design: Imagine that a group of designers is working on the model of a complex device for their clients. The designers and clients want to do a joint design review even though they are physically separated. If each of them had a conference room equipped with an augmented reality display, this could be accomplished. The physical prototype that the designers have mocked up is imaged and displayed in the client's conference room in 3-D, and the clients can walk around the display, looking at its different aspects.

5. Robotics and Telerobotics: In the domain of robotics and telerobotics, an augmented display can assist the user of the system. A telerobotic operator uses a visual image of the remote workspace to guide the robot. Annotation of the view is still useful, just as it is when the scene is in front of the operator, and there is an added potential benefit: a planned motion can be previewed in the augmented view and then executed directly, which in a telerobotics application would eliminate the oscillations caused by long delays to the remote site.

6. Manufacturing, Maintenance and Repair: Recent advances in computer interface design, and the ever-increasing power and miniaturization of computer hardware, have combined to make the use of augmented reality possible in demonstration test beds for building construction, maintenance, and renovation. When a maintenance technician approaches a new or unfamiliar piece of equipment, instead of opening several repair manuals they could put on an augmented reality display in which the image of the equipment is augmented with annotations and information pertinent to the repair. The military has developed a wireless vest, worn by personnel, that is attached to an optical see-through display. The wireless connection allows the soldier to access repair manuals and images of the equipment. Future versions might register those images on the live scene and provide animation to show the procedures that must be performed.
7. Consumer Design: Virtual reality systems are already used for consumer design. Using perhaps more of a graphics system than true virtual reality, the typical home store, when you want to add a new deck to your house, will show you a graphical picture of what the deck will look like. In some high-tech beauty shops today you can see what a new hairstyle would look like on a digitized image of yourself. With an advanced augmented reality system, however, you would be able to see the view change as you moved, and if the dynamics of hair were included in the description of the virtual object, you would also see the motion of your hair as your head moved.

8. Augmented Mapping: Paper maps can be brought to life using hardware that adds up-to-the-minute information, photography, and even video footage. Using AR techniques, a system can be implemented that augments an ordinary tabletop map with additional information by projecting it onto the map's surface. It could help emergency workers; researchers have developed a simulation that projects live information about flooding and other natural calamities. The system makes use of an overhead camera and image-recognition software on a connected computer to identify the region from the map's topographical features. An overhead projector then overlays relevant information, such as the location of a traffic accident or even the position of a moving helicopter, onto the map.
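The overlay step of the augmented-mapping system can be sketched as a coordinate conversion: a geographic fix (say, the moving helicopter's GPS position) is mapped to the projector pixel that lies over the right spot on the tabletop map. The map bounds and projector resolution below are made-up values, and a real system would use the camera-derived registration (typically a full homography) rather than this simple linear mapping.

```python
# Sketch of the map-overlay step: convert a (lat, lon) fix into the
# projector pixel that falls on the corresponding point of the map.

def geo_to_pixel(lat, lon, bounds, size):
    """Linearly map (lat, lon) inside `bounds` to a pixel in a `size` image."""
    (lat_min, lat_max), (lon_min, lon_max) = bounds
    width, height = size
    x = (lon - lon_min) / (lon_max - lon_min) * width
    y = (lat_max - lat) / (lat_max - lat_min) * height  # pixel y grows downward
    return round(x), round(y)

bounds = ((51.0, 52.0), (-1.0, 1.0))  # map covers 51-52 N, 1 W to 1 E
size = (800, 400)                     # projector resolution in pixels

# A helicopter reported at 51.5 N, 0.0 E lands in the middle of the map.
print(geo_to_pixel(51.5, 0.0, bounds, size))  # (400, 200)
```

Redrawing the overlay each time a new fix arrives is what makes a moving vehicle appear to crawl across the paper map.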
  • 22. 22 Advantages and Limitations

Advantages:
 Augmented Reality is set to revolutionize the mobile user experience, as gesture and touch (multi-modal interaction) did in mobile phones. This will redefine the mobile user experience for the next generation, making mobile search invisible and reducing search effort for users.
 Augmented Reality, like multi-modal interaction (gestural interfaces), has a long history of usability research, analysis, and experimentation, and therefore has a solid foundation as an interface technique.
 Augmented Reality improves mobile usability by acting as the interface itself, requiring little explicit interaction. Imagine turning on your phone or pressing a button, and the space, people, and objects around you are "sensed" by your mobile device, giving you location-based or context-sensitive information on the fly.

Limitations:
 Current performance levels (speed) on today's iPhone or similar touch devices like the Google G1 will take a few generations to make Augmented Reality feasible as a general interface technique accessible to the general public.
 Content may obscure or narrow a user's interests or tastes. For example, knowing where McDonald's or Starbucks is in Paris or Rome might not interest users as much as the "off the beaten track" information that you might seek out in travel experiences.
 Privacy control will become a bigger issue than with today's information-saturation levels. Walking up to a stranger or a group of people might reveal status, thoughts (tweets), or other information that usually comes with an introduction, causing unwarranted breaches of privacy.
  • 23. 23 MODIFICATIONS

 Size of the head gear (HMD): At times it is very inconvenient to wear large head gear and go out somewhere. Instead, the gear could be shrunk into small contact lenses, through which the user could operate a computer in front of the eye all the time.
 More accurate sensing: For a better experience the sensors must be accurate and well calibrated, because the experience delivered to the user depends entirely on what the sensors report. The sensors should therefore have the minimum possible error in data capture.
 Thin-film battery: The device must be wireless for the convenience of the user, and it needs a battery for the power supply; that battery should be thin-film to cut the size and weight of the device.
 Vision aid: AR devices could serve as an aid for people with poor vision, such as myopia or hypermetropia, by zooming whatever the user sees in or out as required. For colour-blind users, the device could simply mark a coloured object and show the name of the colour in text form.
  • 24. 24 CONCLUSION

Augmented reality (AR) is far behind virtual environments in maturity. Several commercial vendors sell complete, turnkey virtual environment systems, but no commercial vendor currently sells an HMD-based augmented reality system. A few monitor-based "virtual set" systems are available, but today AR systems are primarily found in academic and industrial research laboratories. The first deployed HMD-based AR systems will probably be in aircraft manufacturing. Both Boeing and McDonnell Douglas are exploring this technology; the former uses optical approaches, while the latter is pursuing video approaches. Boeing has performed trial runs with workers using a prototype system but has not yet made any deployment decisions. Annotation and visualization applications in restricted, limited-range environments are deployable today, although much more work needs to be done to make them cost-effective and flexible. Applications in medical visualization will take longer. Prototype visualization aids have been used on an experimental basis, but the stringent registration requirements and the ramifications of mistakes will postpone common usage for many years; AR will probably be used for medical training before it is commonly used in surgery. The next generation of combat aircraft will have helmet-mounted sights with graphics registered to targets in the environment. These displays, combined with short-range steerable missiles that can shoot at targets off-boresight, give a tremendous combat advantage to pilots in dogfights. Instead of having to be directly behind his target in order to shoot at it, a pilot can now shoot at anything within a 60-90 degree cone of his aircraft's forward centerline. Russia and Israel currently have systems with this capability, and the U.S. is expected to field the AIM-9X missile with its associated helmet-mounted sight in 2002.
Augmented reality is a relatively new field, where most of the research effort has occurred in the past four years. Because of the numerous challenges and unexplored avenues in this area, AR will remain a vibrant area of research for at least the next several years. After the basic problems with AR are solved, the ultimate goal will be to generate virtual objects so realistic that they are virtually indistinguishable from the real environment. Photorealism has been demonstrated in feature films, but accomplishing it in an interactive application will be much harder: lighting conditions, surface reflections, and other properties must be measured automatically, in real time.
  • 25. 25 FUTURE SCOPE

 AR-based medical training
 AR-based 3D sketching kits for children
 AR-based 3D CAD software
 AR-based simulation software
 AR-based tracking devices
 Fear-conquering devices
 Vision aids
 Better assistance in driverless cars

Fig 8: Futuristic AR Interface [Ref 8]
  • 26. 26 REFERENCES

1. Augmented reality, Wikipedia: http://en.wikipedia.org/wiki/Augmented_reality
2. Augmented Reality: A Practical Guide (2008): http://media.pragprog.com/titles/cfar/intro.pdf; http://arcadia.eafit.edu.co/Publications/AugmentedRealityIADATEnglish.pdf
3. International Conference on Engineering Education & Research, Korea (2009): http://upcommons.upc.edu/eprints/bitstream/2117/9839/1/IEM_number16_WN.2.pdf
4. Jochim, S.: Augmented Reality in Modern Education. In E. Pearson & P. Bohman (Eds.), Proceedings of World Conference on Educational Multimedia, Hypermedia. http://robot.kut.ac.kr/papers/DeveEduVirtual.pdf
5. N. Norouzi et al., "Walking Your Virtual Dog: Analysis of Awareness and Proxemics with Simulated Support Animals in Augmented Reality," 2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Beijing, China, 2019, pp. 157-168.
6. S. Thanyadit, P. Punpongsanon and T. Pong, "ObserVAR: Visualization System for Observing Virtual Reality Users using Augmented Reality," 2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Beijing, China, 2019, pp. 258-268.
7. Dix, J., Finlay, J., Abowd, D., Beale, R.: Human-Computer Interaction, Third Edition, Pearson: Prentice Hall Europe (2004).
8. Valenzuela, D., Shrivastava, P.: Interview as a Method for Qualitative Research. Presentation: http://www.public.asu.edu/~kroel/www500/Interview%20Fri.pdf
9. Thomas, W.: A Review of Research on Project Based Learning, March: http://www.bobpearlman.org/BestPractices/PBL_Research.pdf
10. Shtereva, K., Ivanova, M., Raykov, P.: Project Based Learning in Microelectronics: Utilizing ICAPP. Interactive Computer Aided Learning.
  • 27. 27 11. K. Kim, M. Billinghurst, G. Bruder, H. B. Duh and G. F. Welch, "Revisiting Trends in Augmented Reality Research: A Review of the 2nd Decade of ISMAR (2008–2017)," IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 11, pp. 2947-2962, Nov. 2018.
12. J. Alves, B. Marques, M. Oliveira, T. Araújo, P. Dias and B. S. Santos, "Comparing Spatial and Mobile Augmented Reality for Guiding Assembling Procedures with Task Validation," 2019 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Porto, Portugal, 2019, pp. 1-6.
13. M. G. de S. Ribeiro, I. L. Mazuecos, F. Marinho and A. N. G. dos Santos, "Agile Explorations in AR," IECON 2019 - 45th Annual Conference of the IEEE Industrial Electronics Society, Lisbon, Portugal, 2019, pp. 2902-2909.
14. A. Erickson, K. Kim, R. Schubert, G. Bruder and G. Welch, "Is It Cold in Here or Is It Just Me? Analysis of Augmented Reality Temperature Visualization for Computer-Mediated Thermoception," 2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Beijing, China, 2019, pp. 202-211.
15. L. Qian, J. Y. Wu, S. P. DiMaio, N. Navab and P. Kazanzides, "A Review of Augmented Reality in Robotic-Assisted Surgery," IEEE Transactions on Medical Robotics and Bionics, vol. 2, no. 1, pp. 1-16, Feb. 2020.

Figure sources:
1) Fig 1 - https://www.digitalartsonline.co.uk [Ref 1]
2) Fig 2 - https://whatis.techtarget.com [Ref 2]
3) Fig 3 - https://www.bustle.com [Ref 3]
4) Fig 4 - https://www.oberlo.in [Ref 4]
5) Fig 5 - https://researchgate.net [Ref 5]
6) Fig 6 - https://www.techrepublic.com [Ref 6]
7) Fig 7 - https://www.re-flekt.com [Ref 7]
8) Fig 8 - https://www.wired.com [Ref 8]