MAJOR PROJECT
MID-SEM EVALUATION REPORT
Topic: INTERIOR DESIGN ANDROID APP BASED ON
AUGMENTED REALITY
Panel Members:                    Submitted By:
Dr. Chetna Dabas                  Shyam Gupta 10103575
Mr. Mahendra Gurue                Vinyas Gupta 10103676
Submitted to:
Dr. Dharamveer Singh
DECLARATION
We hereby declare that this submission is our own work and that, to the
best of our knowledge and belief, it contains no material previously
published or written by another person, nor material which has been
accepted for the award of any other degree or diploma of the university
or other institute of higher learning, except where due acknowledgement
has been made in the text.
NAME                              SIGNATURE:
Shyam Gupta (10103575)
Vinyas Gupta (10103636)
CERTIFICATE
This is to certify that the work titled “_____________ App” submitted by
Shyam Gupta and Vinyas Gupta of Jaypee Institute of Information
Technology University, Noida has been carried out under my
supervision. This work has not been submitted partially or wholly to any
other University or Institute for the award of this or any other degree or
diploma.
Signature of Supervisor …..……………………..
Name of Supervisor: Dr. Dharamveer Singh
Date
ACKNOWLEDGEMENT
We would like to take this opportunity to thank our major project mentor
Dr. Dharamveer Singh for his valuable guidance and encouragement
throughout the project.
He has helped us throughout the project development phase. He has
supervised and guided us to complete the project successfully. He has
been a great support in solving our difficulties that we faced and in
improving the project.
1.Introduction
Augmented reality is a live, direct or indirect view of a physical, real-world environment
whose elements are augmented by computer-generated sensory input such as sound, video,
graphics or GPS data. Augmented Reality is a type of virtual reality that aims to duplicate
the world's environment in a computer, whereas virtual reality (VR) is a virtual space in
which players immerse themselves and exceed the bounds of physical reality. Augmented
reality adds information and meaning to a real object or place; it is characterized by the
incorporation of artificial or virtual elements into the physical world, as shown by the
live feed of the camera, in real time. Common types of augmented reality include
projection, recognition, location and outline.
Projection: The most common type of augmented reality, projection uses virtual
imagery to augment what you see live. Some mobile devices can track movements and
sounds with a camera and then respond. Virtual or projection keyboards, which one can
project onto almost any flat surface and use, are examples of augmented reality devices
that use interactive projection.
Recognition: Recognition is a type of augmented reality that uses the recognition of shapes,
faces or other real-world items to provide supplementary virtual information to the user in
real time. A handheld device such as a smartphone with the proper software could use
recognition to read product barcodes and provide relevant information such as reviews
and prices, or to read faces and then provide links to a person's social networking profiles.
Location: Location-based AR uses GPS technology to instantaneously provide relevant
directional information. For example, one can use a smartphone with GPS to determine
one's location, and then have onscreen arrows superimposed over a live image of what
lies ahead, pointing in the direction one needs to go. This technology can also be
used to locate nearby public transportation stations.
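As a concrete illustration of the location type, the on-screen arrow direction can be derived from the user's GPS fix and the target's coordinates. The sketch below is illustrative only (the function names are our own, not part of any AR SDK) and uses the standard great-circle initial-bearing formula:

```python
import math

def initial_bearing(lat1, lon1, lat2, lon2):
    """Initial compass bearing (degrees, 0 = north) from point 1 to point 2.

    Standard great-circle bearing formula; inputs are decimal degrees.
    """
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360.0

def arrow_rotation(bearing_deg, compass_heading_deg):
    """Angle to rotate the on-screen arrow so it points toward the target,
    given the device's current compass heading."""
    return (bearing_deg - compass_heading_deg) % 360.0
```

A due-east target from the equator, for instance, yields a bearing of 90 degrees; subtracting the device heading then gives the arrow's screen rotation.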
Outline: Outline is a type of augmented reality that merges the outline of the human body
or a part of the body with virtual materials, allowing the user to pick up and otherwise
manipulate objects that do not exist in reality. One example of this can be found at some
museums and science centers in the form of virtual volleyball. Although the player can stand
and move on an actual court, the ball is projected on a wall behind him, and he can control
it with an outline of himself, which is also projected on the wall.
Using the concept of augmented reality, our project focuses on creating a very useful
Android mobile-based application. The idea is to allow the user to view a virtual object in
the real world. The user provides six images of the object: the front, back, top, bottom,
left and right side pictures. These are placed onto a 3D cube which makes up the complete
virtual object. Thus an extended environment is created through the amalgamation of the
real world and the generated object, and it appears as though the real-world object and the
virtual object coexist within the environment. In order to use this application, the user
first needs to acquire a marker: a piece of paper with black and white markings, used to
display the augmented object on the mobile phone's screen. Marker-based augmented reality
uses a camera and a visual marker which determines the centre, orientation, and range of
its spherical coordinate system. Once the marker is present, one can view augmented
objects. Virtual object interaction is another added feature wherein the user can rotate
or change the orientation of the object according to his requirements.
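The six-image-to-cube idea above can be sketched as a small data structure. The face names and validation below are illustrative assumptions, not the application's actual code:

```python
# Fixed order of the six user-supplied views mapped onto the cube faces.
CUBE_FACES = ("front", "back", "top", "bottom", "left", "right")

def build_cube_textures(images):
    """images: dict mapping face name -> image path (or pixel buffer).

    Returns the textures in the fixed face order, raising if any face
    is missing, so the cube is never rendered partially textured.
    """
    missing = [face for face in CUBE_FACES if face not in images]
    if missing:
        raise ValueError("missing cube faces: %s" % ", ".join(missing))
    return [images[face] for face in CUBE_FACES]
```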
VIRTUAL REALITY
The term "artificial reality", coined by Myron Krueger, has been in use since the 1970s;
however, the origin of the term "virtual reality" can be traced back to the French
playwright, poet, actor, and director Antonin Artaud. Virtual Reality is a computerized
simulation of natural or imaginary reality. Often the user of VR is fully or partially
immersed in the environment. Full immersion refers to someone using a machine to shield
herself from the real world. Virtual Environment (VE) is the term used to describe the
scene created by any computer program in which the user plays an interactive role within
the context of the computer-generated three-dimensional world. The user represents an actor within the system
and has an essential presence within the virtual world. Virtual reality (VR) is a term
that applies to computer-simulated environments that can simulate physical presence in
places in the real world, as well as in imaginary worlds.
Advantages:
Many different fields can use VR as a way to train students without actually putting
anyone in harm's way. This includes the fields of medicine, law enforcement, architecture
and aviation. VR also helps those who cannot get out of the house to experience a much fuller life.
Doctors are using VR to help reteach muscle movement such as walking and grabbing as
well as smaller physical movements such as pointing. The doctors use the malleable
computerized environments to increase or decrease the motion needed to grab or move an
object. This also helps record exactly how quickly a patient is learning and recovering.
Total immersion within environment.
Increased user presence perception within system.
Facilitated production of an entirely designed environment.
Existing technologies for advanced interaction.
Real-time graphical environment generation possible.
Disadvantages:
The hardware needed to create a fully immersive VR experience is still cost-prohibitive. The
technology for such an experience is still new and experimental. VR is becoming much more
commonplace, but programmers are still grappling with how to interact with virtual
environments. The idea of escapism is commonplace among those that use VR
environments, and people often live in the virtual world instead of dealing with the real one.
One worry is that as VR environments become much higher quality and immersive, they
will become attractive to those wishing to escape real life. Another concern is VR
training. Training with a VR environment does not have the same consequences as training
and working in the real world. This means that even if someone does well with simulated
tasks in a VR environment, that person might not do well in the real world.
Not suited to real-world interaction.
Despite advances in technology equipment is still expensive.
Auxiliary senses not stimulated.
Possible lag in motion-display system.
III. AUGMENTED REALITY
Augmented Reality (AR), also known as Mixed Reality, aims to combine virtual and real
scenes so that virtual objects appear to belong to the real world. Because it integrates
virtual and real scenes, many applications of Augmented Reality are emerging in fields
such as education, medical treatment and entertainment.
Fig 1: Real desk with virtual lamp and two virtual
chairs [5].
Figure 1 shows an example of what this might look like. It shows a real desk with a real phone. Inside this
room there are also a virtual lamp and two virtual chairs. Note that the objects are combined in 3-D, so that
the virtual lamp covers the real table, and the real table covers parts of the two virtual chairs. AR can be
thought of as the "middle ground" between VE (completely synthetic) and telepresence (completely real).
Goals of Augmented Reality:
To challenge the impossible.
To create virtual environments for a richer user experience.
To integrate it into daily lives to help the masses.
To achieve feats which are limited in real world.
To enhance imagination of youths.
Types of Augmented reality:
There are two types of simple augmented reality: marker-based, which uses cameras and
visual cues, and markerless, which uses positional data such as a mobile's GPS and
compass (Johnson et al., 2010).
Marker based
Augmented Reality (AR) markers are images that can be detected by a camera and used
with software as the location for virtual assets placed in a scene. Most are black and white, though
colours can be used as long as the contrast between them can be properly recognized by a
camera. Simple augmented reality markers can consist of one or more basic shapes made
up of black squares against a white background. More elaborate markers can be created
using simple images that are still read properly by a camera, and these codes can even take
the form of tattoos.
Fig 2: A Simple Marker
A camera is used with AR software to detect augmented reality markers as the location for
virtual objects. The result is that an image can be viewed, even live, on a screen and
digital assets are placed into the scene at the location of the markers. Limitations on the
types of augmented reality markers that can be used are based on the software that
recognizes them. While they need to remain fairly simple for error correction, they can
include a wide range of different images. The simplest types of augmented reality markers
are black and white images that consist of two-dimensional (2D) barcodes.
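Reading such a 2D-barcode marker ultimately reduces to turning a grid of black and white cells into an identifier. A minimal illustrative sketch follows; a real system would additionally verify the border cells and try all four rotations, which is omitted here:

```python
def decode_marker(grid):
    """Decode a binary marker grid (list of rows of 0/1, 1 = black cell)
    into an integer ID by reading the cells row-major as bits.

    Illustrative only: border checking and rotation handling, which real
    marker systems perform, are omitted.
    """
    marker_id = 0
    for row in grid:
        for cell in row:
            marker_id = (marker_id << 1) | cell
    return marker_id
```

For example, the 2x2 grid [[1, 0], [0, 1]] reads as the bit string 1001, i.e. ID 9.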
Marker less
In marker-less augmented reality the image is gathered through the Internet and displayed
at a specific location (which can be obtained using GPS). The application doesn't require a
marker to display the content. It is more interactive than marker-based augmentation.
Fig 3: Marker Less AR
The only real difference from a consumer's perspective is that the surface the object is
sculpted on doesn't have to carry a printed marker.
IV. MARKER DESIGN, DETECTION AND RECOGNITION
METHOD
Markers are square, consisting of a thick black border and black graphics within a white
internal region. The advantage of using black and white is that it makes it easy to separate
the marker from the background in a grabbed frame. The internal region of a marker
encodes its identifier. In terms of projective geometry, square markers in the real world are
generally no longer square after projection onto the image plane; in other words, the
internal graphics of markers often appear distorted. When recognizing them, image
unwrapping is necessary.[4]
The procedure of unwrapping the image is shown in Fig. 4.
Fig 4:Procedure of unwrapping marker image to find ID
The calculation of marker unwrapping can be described as follows: the four corners of a
marker, (x_i, y_i), i = 1..4, are acquired after detecting the grabbed frame, and the
positions of the four corners in the marker plane are known as (X_i, Y_i). The homography
matrix H relating them,
s [x_i, y_i, 1]^T = H [X_i, Y_i, 1]^T,    (1)
can then be calculated. Using H, points in the internal region of the marker can be
unwrapped to their formal (fronto-parallel) positions.[4]
After that, the unwrapped image is either matched against templates (template-matching
method) or decoded (code-decoding method), respectively.
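The homography computation sketched above can be made concrete with the standard Direct Linear Transform (DLT). The following NumPy sketch is illustrative, not the code of any particular marker library: it estimates H from the four corner correspondences and then uses its inverse to unwrap image points back to the marker plane.

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Estimate the 3x3 homography H with dst ~ H @ src (homogeneous),
    from four or more point correspondences via the DLT algorithm.

    Each correspondence contributes two rows of the linear system A h = 0;
    the solution is the right singular vector of A with smallest singular value.
    """
    A = []
    for (X, Y), (x, y) in zip(src_pts, dst_pts):
        A.append([-X, -Y, -1, 0, 0, 0, x * X, x * Y, x])
        A.append([0, 0, 0, -X, -Y, -1, y * X, y * Y, y])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # normalize so H[2,2] = 1

def unwarp_point(H, x, y):
    """Map a detected image point back to marker-plane coordinates via H^{-1}."""
    p = np.linalg.inv(H) @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

Given the unit square as marker-plane corners and the detected image corners, `unwarp_point` recovers the fronto-parallel position of any interior pixel, which is exactly the unwrapping step used before template matching or decoding.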
V. PROBLEM STATEMENT
The proposed system aims to provide an environment that will help the users to place
artificial 2D as well as 3D objects into real world through the use of AR Markers. The
proposed system also allows the user to decide, where to place the object in real world.
Once the object has been placed in the scene, it will be displayed accurately according to
the perspective in the original scene, which is especially challenging in the case of 3D
virtual objects. The proposed system solves the problem of viewpoint tracking and
virtual object interaction. The main advantage of the proposed system is that it is
customer-oriented rather than product- or service-oriented, thus allowing users to augment
a product of their choice. There is also an option to move the virtual object in the virtual
space with a marker.
VI.WHY ANDROID OS
Since 2010, more and more stress has been placed on the usage of Free and
Open Source Software (FOSS).
Android is leading the current OS market, as shown in Figure 5, because it is open source
and developed by a consortium of more than 86 leading multinational companies called the
Open Handset Alliance (OHA). Android is also stated to be one of the most rapidly growing
technologies, and more and more applications are being developed and modified by
third-party users.
Fig 5: Smartphone OS
market shares
Moreover, the Android OS is user friendly and has great performance and processing power. Thus, the
proposed system is being developed for the most rapidly emerging and flexible OS: Android.
VII. PROPOSED SYSTEM ARCHITECTURE
Fig 6: Architecture Block
diagram
An AR application is composed of the following core components:
Camera
The camera component ensures that every preview frame is captured and passed efficiently to the
tracker. The developer only has to initialize the camera to start and stop capturing. The camera frame is
automatically delivered in a device-dependent image format and size.
Image Converter
The pixel format converter converts from the camera format (e.g., YUV12) to a format suitable for
OpenGL ES rendering (e.g., RGB565) and for tracking (e.g., luminance) internally. This conversion also
includes downsampling to have the camera image in different resolutions available in the converted frame
stack.
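The pixel-format conversion step can be illustrated for a single pixel. The integer BT.601-style coefficients below are a common approximation and an assumption for illustration; the SDK's actual internal conversion may differ:

```python
def yuv_to_rgb565(y, u, v):
    """Convert one YUV pixel (integer BT.601-style approximation, assumed
    for illustration) to a packed 16-bit RGB565 value.

    RGB565 keeps 5 bits of red, 6 of green, 5 of blue, which is the kind
    of format suitable for OpenGL ES rendering mentioned above.
    """
    c, d, e = y - 16, u - 128, v - 128
    clamp = lambda n: max(0, min(255, n))
    r = clamp((298 * c + 409 * e + 128) >> 8)
    g = clamp((298 * c - 100 * d - 208 * e + 128) >> 8)
    b = clamp((298 * c + 516 * d + 128) >> 8)
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)
```

Video white (Y=235, U=V=128) packs to 0xFFFF and video black (Y=16, U=V=128) to 0x0000, which is a quick sanity check on the coefficients.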
Tracker
The tracker component contains the computer vision algorithms that detect and track real-world objects in
camera video frames. Based on the camera image, different algorithms take care of detecting new targets
or markers and evaluating virtual buttons. The results are stored in a state object that is used by the video
background renderer and can be accessed from application code. The tracker can load multiple datasets
at the same time and activate them.
Video Background Renderer
The video background renderer module renders the camera image stored in the state object. The
performance of the background video rendering is optimized for specific devices.
Application Code
Application developers must initialize all the above components and perform three key steps in the
application code. For each processed frame, the state object is updated and the application's render
method is called. The application developer must:
1. Query the state object for newly detected targets, markers or updated states of these elements
2. Update the application logic with the new input data
3. Render the augmented graphics overlay
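The three steps above can be sketched as a per-frame loop. The class and method names below are stand-ins for illustration, not Vuforia API names:

```python
# Hypothetical per-frame update loop mirroring the three steps above.
class ARApplication:
    def __init__(self, tracker, renderer):
        self.tracker = tracker
        self.renderer = renderer
        self.scene = {}                       # marker id -> virtual object

    def on_frame(self, frame):
        # 1. Query the state object for detected targets/markers.
        state = self.tracker.process(frame)
        # 2. Update the application logic with the new input data.
        for marker_id, pose in state.items():
            if marker_id in self.scene:
                self.scene[marker_id]["pose"] = pose
        # 3. Render the augmented graphics overlay on the camera frame.
        self.renderer.draw(frame, self.scene)
        return state
```

In the real SDK the tracker and renderer are provided components; here they are any objects exposing `process(frame)` and `draw(frame, scene)`.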
Device Databases
Device databases are created using the online Target Manager. The downloaded device target database
assets contain an XML configuration file that allows the developer to configure certain trackable features
and a binary file that contains the trackable database. These assets are compiled by the application
developer into the app installer package and used at runtime by the Vuforia SDK.
Cloud Databases
Cloud databases can be created using the Target Manager or using the Vuforia Web Services API.
Targets are queried at runtime using the cloud recognition feature, which performs a visual
search in the cloud using camera images sent by the application. In addition to the target data, the provisioned targets can
contain metadata which is returned upon query.
User-Defined Targets
A fundamentally different supported approach is the user-defined targets feature. Rather than preparing
targets offline, this feature allows for creating targets on the fly from the current camera
image. A builder component is called to trigger the creation of a new user-target. The returned target is
cached, but retained only for a given AR session.
PROJECT CONSTRAINTS
Augmented reality still has some challenges to overcome. Augmented Reality systems are expected to run
in real-time so that a user will be able to move freely within the scene and see a properly rendered augmented image.
The application
will be built for mobile phones, which usually have low screen dimensions and resolution. It also adds additional
stress on the OS, because augmentation requires high processing power. Developers of the application are
supposed to have a thorough knowledge of the Android OS (which the application is developed for) and the
Windows OS (which the application is developed in). Developers are also supposed to be familiar with the
ADK (Android Development Kit) and Eclipse.
APPLICATION AREAS
A picture is worth a thousand words. The applications of this project are well understood
from the snaps below, which show virtual objects in a real-world environment.
Medical Science [11], Fashion, Gaming [13], Product Information [14]
Literature Survey
2.1 Sources for formulation of the problem statement
Books:
Lester Madden
Professional Augmented Reality Browsers for Smartphones: Programming for junaio, Layar
and Wikitude (Wrox Programmer to Programmer)
Publication Date: June 7, 2011 | ISBN-10: 1119992818 | ISBN-13: 978-1119992813 | Edition: 1
Tools:
● Unity3d Pro
● Qualcomm Vuforia SDK
● Android Development Tools
● Eclipse
● Samsung GT-N7100
2.2 Summary of relevant papers
1. Title of paper: Interior Design in Augmented Reality Environment
Authors: Viet Toan Phan
Year of Publication: 2010
Summary:
This paper proposes a marker based augmented reality application using Android
operating system which will help to combine virtual objects with the real environment
facilitating various applications as mentioned in the paper. The main advantage is the use
of low-cost devices as compared to costly head-mounted display devices. Secondly, with the
help of this project one need not buy a product to see how it will suit one's environment.
In future, images of objects from various views could be fetched directly from vendors'
websites, modelled into 3D objects and augmented. Multiple objects could also be
augmented, which is currently a major challenge.
Web link: http://www.ijcaonline.org/volume5/number5/pxc3871290.pdf
2. Title of paper: A Study on Tangible AR for interior design
Authors: Viet Toan Phan and Prof. Seung Yeon Choo
Year of Publication: 2010
Summary: This research examined virtual furniture and adjustment work to create a new
design method using Augmented Reality technology for Interior Design education.
In particular, AR technology opens up many new research fields in engineering and
architecture. In an AR environment, design work can become more lively, convenient,
and intelligent. Plus, design work and manufacturing can be conducted at the same time
and in close relationship with each other. With AR, the virtual products of graphic
technology are not only for simulation but also obtain real higher values.
Furthermore, AR technology can become a new animated simulation tool for interior
design, allowing the user to see a mixed AR scene through an HMD, video display, or PDA.
It is also anticipated that the interactive potential can be increased according to the user's
needs.
Web link: http://www.cse.iitb.ac.in/~pb/cs626-449-2009/prev-years-other-things-
nlp/sentiment-analysis-opinion-mining-pang-lee-omsa-published.pdf.
3. Title of paper: Handheld Augmented Reality in Civil Engineering
Authors: Gerhard Schall
Year of Publication: 2009
Publishing details: Comprehensive exam paper
Summary: An overview of current developments in handheld Augmented Reality in civil
engineering was given. Furthermore, a location- and context-aware handheld AR system
was presented, which addresses the workflow optimisation of common field tasks with
utilities. By means of a more intuitive way of information conveying remarkable time
saving can be achieved employing such a system. Potential fields of application were
outlined. A first fully functional prototype of our handheld AR system Vidente is
available and provides a set of tools for direct user interaction with the presented
information on buried utility assets. Further improvements will focus on advanced global
tracking using DGPS / RTK-GPS employing terrestrial correction data services while
keeping an ergonomically acceptable form factor.
Web link: http://www.icg.tu-graz.ac.at/Members/schall/rosus/download
4. Title of paper: Marker Based Augmented Reality Using Android OS
Authors: Mr. Raviraj S. Patkar, Mr. S. Pratap Singh, Ms. Swati V. Birje
Year of Publication: 2013
Summary: Augmented Reality or AR is an emerging technology in which one's
perception of the real-time environment is enhanced by superimposing computer-
generated information such as graphical, textual, or audio content, as well as objects onto
a display screen. The proposed application is an android mobile based application which
will be compatible with all the existing and upcoming versions of the operating system.
The idea is to allow the user to view the virtual object in the real world using a marker
based AR system. The user could provide images of the object which would be the front,
back, top, bottom, left and right side pictures of the object. They will be placed onto a
3D cube which will make up the complete virtual object. Thus an extended environment
will be created through the amalgamation of real world and generated object and it will
appear as though the real-world object and virtual object coexist within the environment.
The advantage of this application compared to already existing 2D applications is that it
displays the object in 3D and enables the user to rotate it virtually. It is inexpensive,
as the user need not actually purchase the object to see how it fits in the environment;
instead he can try it before the purchase itself.
Web link: http://cs.stanford.edu/courses/cs224n/2009/fp/16.pdf
2.3 Summary of field survey
The Interior Design Domain
The world of virtual objects in an AR application incorporates the information that makes
sense at the application level and is relevant to the users. Conceptually, this is a shared
database of logical objects, the "model". Because of the replicated architecture, the model is
available as a copy in each instance of the application. The structure and type of the model is
dependent on the application. In the interior design example we deal with a model that stores
geometric data for a set of furniture. Each model object represents a piece of furniture, and
maintains geometric transformations and visual attributes relevant to the object. The model
objects are organized hierarchically, so that it is possible to select and interact with groups of
furniture.
An interactive representation of the model in the user interface is called a "view". Views are
created based on a specific interpretation of the model information. The interpretation is
determined by the type of view, the type of model data, and the context of the interface. As an
example, consider the furniture model of the interior design application and two views, a
graphics rendering of the furniture and a browser for the items in the model. The rendering
creates geometric primitives appropriate for the type of furniture, and geometric
transformation and attributes are used directly to customize the presentation. On the other
hand the browser creates a list of labels taken from the model objects and ignores all
geometric information. Both views have presentation parameters that are not bound by the
model interpretation and can be used for local customization of the interface.
The interpretation is not so straightforward for non-geometric model information.
Highlighting a view object, for example, can be used to indicate a current selection in the
model, but different views are likely to use different methods to show the highlight. This type
of feedback becomes more complicated in multi-user applications with local selections at each
site. In a distributed application, information about the state and actions of remote users
becomes part of the model. It is important to give each user an awareness of who is
participating and what other participants are doing. The interior design application makes use
of an object browser that is capable of showing remote selections. In general, it can be a
challenging task for the interface builder to find a concrete visualization for some abstract
structure or behavior in the model.
The model-view mechanism is well known in the area of user interface construction, and used
to implement a separation between interface and application functionality. The importance of
separability for modularity and independent development of interface components has been
recognized even in non-distributed environments, where it is considered good software
design. Yet separability becomes an indispensable architectural feature for distributed
interfaces, at least for those with a need for object-level sharing. By using the model-view
paradigm, we achieve the necessary independence between the conceptual objects of the
global model and the objects and manipulations of a particular interactive view.
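The model-view separation described above can be sketched in a few lines: the shared furniture model notifies its attached views on every change, and each view interprets the model in its own way. The class names below are illustrative, not taken from any particular toolkit:

```python
class FurnitureModel:
    """Shared model: each furniture item keeps its geometric/visual state."""

    def __init__(self):
        self.items = {}    # name -> {"pos": (x, y), "color": str}
        self.views = []

    def attach(self, view):
        self.views.append(view)

    def _notify(self):
        # Every change is broadcast to all registered views.
        for view in self.views:
            view.update(self)

    def add(self, name, pos, color):
        self.items[name] = {"pos": pos, "color": color}
        self._notify()

    def move(self, name, pos):
        self.items[name]["pos"] = pos
        self._notify()

class BrowserView:
    """A view that ignores geometry and lists only the item labels,
    like the object browser described above."""

    def __init__(self):
        self.labels = []

    def update(self, model):
        self.labels = sorted(model.items)
```

A rendering view would instead read each item's position and color; both views stay independent of the model's internals, which is the separability argued for above.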
New tools:
AR Tools:
1. Qualcomm Vuforia SDK: A very sophisticated SDK for handling AR-based apps, with
support for various objects.
2. Unity3d Pro: Used instead of an OpenGL renderer, to render graphics and scenes to the
device.
3. ARCamera Prefab: Replacing the main camera can ensure that we can use the device
camera as an input device.
4. Node.js: For networking and connections management to our cloud database.
5. Open Source Cloud Service like Hadoop, CloudFoundry: for storing image targets
database.
Relevant current/open problems:
Environment: How can we deal with dynamic aspects (color, illumination) of environments?
While (indirectly) some work has been performed on visual patterns, in general the structure,
colors, and illumination conditions in an environment are ignored or adapted for manually. For
example, dynamically adaptable color schemes that adjust to the environment conditions could
be of great benefit to solve some of the object segmentation and depth problems that are caused
by the environment.
Capturing: How do high-definition and HDR cameras coupled with improved display resolution
change perception on small devices? These camera types are currently attracting interest: they
are suitable for solving perceptual problems associated with resolution mismatches, and the
improvement of the color gamut and contrast. However, the perceptual consequences of using
HDR cameras with non-HDR displays should be carefully studied, since skewed colors can be
counterproductive.
Capturing: How can we design systems with dynamic FOV, and what effects do they have? The
FOV mismatch introduced by using wide-angle lenses with small FOV displays causes scene
distortion. This could be addressed through dynamic FOV (e.g., by using liquid lens technology).
Similarly, (software) methods that adapt to the actual position of the eye relative to the display
could prove useful. It is unknown, though, if such methods are achievable and if they will cause
perceptual disturbances.
Augmentation: How can we further improve AR methods to minimize depth-ordering
problems? X-ray vision is useful to look through objects in the real scene. However, depth
ordering and scene understanding in such systems still requires improvement: one direction that
may yield benefits is multi-view perception. Similarly, label placement in highly cluttered
environments still suffers from depth ordering problems. Layout and design can also be
improved: apt associations need to be implemented that uniquely bind a label to an object.
Cues that specify potentially disambiguating information related to the real world (e.g., a street
address) might be one possibility in cluttered city environments.
Display: Can we parameterize video and rendering quality to pixel density, to support
"perceptually correct" AR? In particular, improvements in camera capturing quality and pixel
density will make it possible to use very high resolution imagery on very small screens, but, to
what extent do we need to change the image's visual representation to maximize its
understandability? Additionally, what is the maximum disparity between video and rendering
resolution before noticeable perceptual problems arise? And, is it possible to parameterize the
offset effects between video and rendering, for example with respect to mismatches or
abstractions? Finally, how much rendering fidelity is truly needed? For example, depth does not seem to
be affected much by fidelity (see Section 4.3, rendering and resolution mismatch).
Display: What is the weighting of perceptual issues among different display devices? One of
the most pressing questions is the actual effect each problem has on the various display types:
comparative evaluations are required to generate a per-device weighting of perceptual
problems, which would be particularly useful for determining those problems that should be
tackled first. In the next section, we provide an initial overview of the differences between the
various platforms.
User: What are the effects of the dual-view situation on perception and cognition in AR
systems? In particular, handheld and see-through devices introduce a dual-view situation,
which may help to verify ambiguous cues obtained from display content. However, its true
effects are unknown; for example, disparity plane switching is expected to be
counterproductive, but are the advantages of dual-view more important, and how could we
possibly minimize the effects of disparity plane switching?
User: What are the effects of combinations of these problems on the perceptual pipeline? A
single problem can have effects on different stages, as evidenced by our repeated mentions of
some issues in multiple sections; for example, sunlight can make capturing, display, and user
perception difficult. What may be even more important is the actual combination of problems
that accumulate through the pipeline: for instance, low-resolution capturing may affect multiple
subsequent stages in the perceptual pipeline, and problems may become worse at each stage. The
question is how much the accumulation affects perceptual problems on different platforms.
3.5 Task division among group members
Task Done by
Finding and reading papers and books Vinyas and Shyam
Node.js Shyam
Cloud Services and their usage Shyam and Vinyas
Unity3d Shyam and Vinyas
Vuforia SDK Shyam and Vinyas
3DS MAX Shyam
GUI Vinyas
Accuracy and other measurements Vinyas
4. Analysis, Design and Modeling
4.1 Overall description of the project
In the case of interior design, the designer essentially applies the three basic principles of interior
design: color, scale, and proportion within a predetermined space. Thus, the proposed AR system
is focused on giving the user the flexibility to design using these three basic principles.
Therefore, in the proposed AR environment, the user is able to adjust the properties of virtual
furniture and create different arrangements in a real environment.
The era of mobile web applications has only just begun, and there is still a long way for it to go.
Development of mobile web applications is expected to emphasize the following aspects:
1) More and more sensors will be added to mobile phones, so new APIs exposing those
capabilities will bring brand-new applications to users.
2) Multimedia capabilities will be enhanced, and rendering engines will support more media
types such as Flash and SVG.
3) Dedicated Integrated Development Environments (IDEs) will improve to accelerate
application development; visual programming and JavaScript debugging will be among their
most important functions.
4.2 Functional requirements and Non-Functional requirements
4.2.1 Functional requirements
Functional requirements define the fundamental actions that the software must perform in
accepting and processing inputs and in generating outputs.
4.2.2 Non-Functional requirements
4.2.2.1 Error handling
● The product shall handle expected and unexpected errors in ways that prevent loss of
information and long periods of downtime.
4.2.2.2 Performance Requirements
● The user should be able to enter a multi-line review.
● The user must be able to try the system with more than one review in a single run.
4.2.2.3 Safety Requirements
● Use of the system shall not cause any harm to human users.
4.2.2.4 Security Requirements
● The system will use a secured database.
● Any user can read information, but cannot edit or modify anything except the
information he/she enters.
4.2.2.5 Reliability
The system should ensure that user actions are performed correctly, as the user requires:
● The system should be as accurate as possible and easy to use.
● The system must not produce output for empty input; it should recover and react robustly
to the error by sending a message back to the user.
4.2.2.6 Correctness
All algorithms implemented in the system should be correct, meaning that they perform as
required. The testing phase ensures the correctness of the software by exercising all possible
cases and matching their output against the documentation.
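A minimal sketch of the empty-input rule above (the class and method names are hypothetical, not the project's actual code): validation rejects an empty review with a message back to the user instead of producing output.

```java
// Hypothetical input validation: reject empty or blank input with a
// user-facing message instead of silently producing output.
public class ReviewValidator {
    public static String validate(String review) {
        if (review == null || review.trim().isEmpty()) {
            return "Error: review must not be empty.";
        }
        return "OK";
    }
}
```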
Dependency Details
Software Requirements:

Component | Version
JDK | Java SE 7u25
Eclipse IDE | Latest version
Android SDK Downloader | Android SDK Tools revision 22
Android ADT | Latest version for SDK Tools rev. 22
Android SDK Tools | Android SDK Tools revision 22
Android SDK platform support | Android SDK Platform tools revision 17
Cygwin Environment | Latest version (1.7.20-1)
Android NDK | Android NDK r8e
Microsoft Xbox Kinect | v2.18
Hardware Requirements:
The main hardware is a computer system that acts as server, data centre, and front end for the
user:
● PC, 2.0 GHz or higher.
● 3 GB RAM or higher.
● 10 GB disk space or higher.
● Internet access.
● Operating system: Windows Vista or higher.
● Depth-sensing camera such as the Kinect or a PrimeSense sensor.
● An Android device with a good camera, preferably with auto-focus.
Testing
Testing Plan
A. Plan
Table 1: Testing Plan

Type of Test | Performed (Yes/No) | Comments | Software Component
Requirements Testing | Yes | Modules and the integrated system were tested to ensure they meet all the requirements. | 1. Initialization 2. Camera focus 3. Marker detection 4. Object tracking
Unit Testing | Yes | Modules were developed as separate units and tested so as to avoid errors migrating into the system after integration. | 1. Single marker implementation 2. Initial test 3. Frame marker implementation
Integration Testing | Yes | Developing all modules separately makes integration testing an essential requirement. | 1. ImageTarget 2. UserDefined Target 3. Cloud Reco 4. Touch events
Performance Testing | No | At the current stage, focus was on meeting the requirements and feature development. | —
GUI Testing | Yes | The GUI of the application is almost as important as any other feature. | 1. UserDefined Target menu
Security Testing | No | At the current stage, feature development is given more preference. | —
Stress Testing | No | At the current stage, focus was on meeting the requirements and feature development. | —
Load Testing | No | At the current stage, focus was on meeting the requirements and feature development. | —
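The single-marker unit tests listed above can be sketched as plain assertions against a stand-in for the detection module (all names here are hypothetical; the real project's tests ran against Unity and the Vuforia tracker):

```java
// Hypothetical stand-in for the marker-detection module, used only to
// show how a single-marker unit test can be expressed as assertions.
public class MarkerDetector {
    // Returns the number of markers "detected" in a frame description.
    // A real detector analyzes camera pixels; this stub just counts
    // occurrences of the token "MARKER" in a synthetic frame string.
    public static int detect(String frame) {
        int count = 0, idx = 0;
        while ((idx = frame.indexOf("MARKER", idx)) != -1) {
            count++;
            idx += "MARKER".length();
        }
        return count;
    }
}
```

The value of writing the tests this way is that each module (detection, tracking, touch handling) can be exercised in isolation before integration, which is exactly the motivation given in the table.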
B. Team Details
Table 2: Test Team Details

Role | Name | Specific Responsibilities
Test Case Design and Development | Vinyas & Shyam | Planning and executing various test cases for functional and non-functional points of the application; identifying the appropriate techniques and guidelines to implement the required tests.
Testing | Vinyas & Shyam | Responsible for conducting unit, integration and GUI testing for the application.
Debugging | Shyam | Finding and correcting errors found after running the test cases.
C. Test Schedule
Table 3: Test Schedule

Activity | Start Date | Completion Date | Hours | Comments
ImageTarget | Apr 19, 2014 | Apr 19, 2014 | 1 hr | Successfully done
Camera Integration | Apr 19, 2014 | Apr 19, 2014 | 1 hr | Successfully done
Initial User Test | Apr 19, 2014 | Apr 19, 2014 | 3 hrs | Successfully done
Test User Defined Module | Apr 20, 2014 | Apr 20, 2014 | 3 hrs | Successfully done
Video Playback | Apr 20, 2014 | Apr 20, 2014 | 2 hrs | Successfully done
Content Selection | Apr 20, 2014 | Apr 20, 2014 | 1 hr | Successfully done
Content Sequencing | Apr 20, 2014 | Apr 20, 2014 | 2 hrs | Successfully done
Touch Drag and Drop | Apr 21, 2014 | Apr 21, 2014 | 4 hrs | Successfully done
Model Selection and Rendering | Apr 21, 2014 | Apr 21, 2014 | 3 hrs | Successfully done
VS Movement | Apr 21, 2014 | Apr 21, 2014 | 2 hrs | Successfully done
4.5 Risk Analysis and Mitigation Plan

Risk ID | Description of Risk | Risk Area | Prob. (P) | Impact (I) | RE (P*I) | Selected for mitigation (S) (Y/N) | Mitigation plan if S is 'Y' | Contingency plan, if any
1 | Marker not visible | Requirement risk | M(3) | H(5) | 15 | Y | Adjust the camera of the device to accommodate the marker. | —
2 | Same marker, multiple instances | Project scope | L(1) | H(5) | 5 | Y | Remove one of the markers, as only one marker can be associated with a model. | —
3 | Flash on camera | Hardware risk | M(3) | H(5) | 15 | Y | Flash should be kept off, as it can create problems in the detection of the marker. | —
4 | Occlusion management | Project scope | L(1) | M(3) | 3 | N | — | Occlusion is managed; the camera input can be placed a bit higher to enable focus on the target.
5 | Unity Web Player not initialized | Software risk | M(3) | H(5) | 15 | Y | The Android devices should have the Unity Web Player installed. | —
6 | App does not run / compatibility issues | Requirement risk | L(1) | M(3) | 3 | N | — | The software required to run the program will be checked and reinstalled (if needed) to run the program successfully.
7 | Software package integration: part of the code doesn't run | Development environment risk | H(5) | H(5) | 25 | Y | Implement each part separately and successfully; after combining, thoroughly test the system. | —
8 | Lack of knowledge of the project domain, leading to improper implementation and testing | Testing environment risk and personnel related | H(5) | H(5) | 25 | Y | Did an adequate amount of literature survey, research and discussion with faculty to minimize the risk. | —
9 | Kinect not initialized with Unity | Hardware risk | H(5) | M(3) | 15 | Y | The Kinect plugin for Unity should be initialized before compiling the package. | —
10 | User Defined Target Builder | Testing environment risk | H(5) | H(5) | 25 | N | — | The selected target should have a proper pattern so as to obtain a properly trackable image.
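The risk exposure column is simply RE = P × I on the L=1 / M=3 / H=5 scale used above. As an illustrative sketch (hypothetical names; the report's own Y/N mitigation choices were made case by case rather than by a fixed cutoff), one possible threshold-based selection rule could look like:

```java
// Illustrative sketch: risk exposure RE = probability * impact on the
// L=1 / M=3 / H=5 scale used in the risk table. Names are hypothetical.
public class RiskExposure {
    public static int re(int probability, int impact) {
        return probability * impact;
    }

    // One possible rule: select a risk for mitigation when its
    // exposure meets a threshold.
    public static boolean selectForMitigation(int probability, int impact, int threshold) {
        return re(probability, impact) >= threshold;
    }
}
```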
References
[1] Lester Madden, Professional Augmented Reality Browsers for Smartphones: Programming for
junaio, Layar and Wikitude (Wrox Programmer to Programmer). June 7, 2011. ISBN-10:
1119992818, ISBN-13: 978-1119992813.
[2] Dominic Cushnan, Developing AR Games for Android and iOS. September 2013. ISBN-10:
1783280034.
[3] Viet Toan Phan, "Interior Design in Augmented Reality Environment," 2010. Web link:
http://www.ijcaonline.org/volume5/number5/pxc3871290.pdf
[4] Viet Toan Phan and Prof. Seung Yeon Choo, "A Study on Tangible AR for Interior Design,"
2010. Web link: http://www.cse.iitb.ac.in/~pb/cs626-449-2009/prev-years-other-things-nlp/
sentiment-analysis-opinion-mining-pang-lee-omsa-published.pdf
[5] Gerhard Schall, "Handheld Augmented Reality in Civil Engineering," 2009. Web link:
http://www.icg.tu-graz.ac.at/Members/schall/rosus/download
[6] Mr. Raviraj S. Patkar, Mr. S. Pratap Singh and Ms. Swati V. Birje, "Marker Based Augmented
Reality Using Android OS," 2013. Web link:
http://nlp.stanford.edu/courses/cs224n/2009/fp/16.pdf
[7] https://developer.vuforia.com/resources/dev-guide/knowledge-base-articles
[8] J. C. Martinez and P. G. Ioannou, "General-purpose systems for effective construction
simulation," Journal of Construction Engineering.
  • 1. MAJOR PROJECT MID-SEM EVALUATION REPORT Topic:INTERIOR DESIGN ANDROID APP BASED ON AUGMENTED REALITY Panel Members: Submitted By: Dr. ChetnaDabasShyamGupta 10103575 Mr. MahendraGurue Vinyas Gupta 10103676 Submitted to: Dr.Dharamveer Singh
  • 2. DECLARATION We hereby declare that this submission is our own work and that, to the best of my knowledge and belief, it contains no material previously published or written by another person nor material which has been accepted for the award of any other degree or diploma of the university or other institute of higher learning, except where due acknowledgment has been made in the text. NAME SIGNATURE: ShyamGupta(10103575) VinyasGupta(10103636)
  • 3. CERTIFICATE This is to certify that the work titled “_____________ App” submitted by Shyan Gupta and Vinyas Gupta of Jaypee Institute of Information Technology University, Noida has been carried out under my supervision. This work has not been submitted partially or wholly to any other University or Institute for the award of this or any other degree or diploma. Signature of Supervisor …..…………………….. Name of Supervisor Dr.Dharamveer Singh Date
  • 4. ACKNOWLEDGEMENT We would like to take this opportunity to thank our major mentor Dr. Dharamveer Singh for his valuable guidance and encouragement throughout the project. He has helped us throughout the project development phase. He has supervised and guided us to complete the project successfully. He has been a great support in solving our difficulties that we faced and in improving the project.
  • 5.
  • 6. 1.Introduction Augmented reality is a live, direct or indirect, view of a physical, real-world environment whose elements areaugmented by computer-generated sensory input such as sound, video, graphics or GPS data . Augmented Reality is a type of virtual reality that aims to duplicate the world's environment in a computer. Virtual reality (VR) is a virtual space in which players immerse themselves into that space and exceed the bounds of physical reality. It adds information and meaning to a real object or place. Augmented reality is characterized by the incorporation of artificial or virtual elements into the physical world as shown by the live feed of the camera, in real-time. Common types of augmented reality include projection, recognition, location and outline. Projection: It is the most common type of augmented reality, projection uses virtual imagery to augment what you see live. Some mobile devices can track movements and sounds with a camera and then respond. Virtual or projection keyboards, which one can project onto to almost any flat surface and use, are examples of augmented reality devices that use interactive projection. Recognition: Recognition is a type of augmented reality that uses the recognition of shapes, faces or other real world items to provide supplementary virtual information to the user in real-time. A handheld device such as a smart phone with the proper software could use recognition to read product bar codes and provide relevant information such as reviews and prices or to read faces and then provide links to a person's social networking profiles. Location: Uses GPS technology to instantaneously provide you with relevant directional information. For example, one can use a smart phone with GPS to determine his location, and then have onscreen arrows superimposed over a live image of what's in front of the user and point him in the direction of where you need to go. 
This technology can also be used to locate nearby public transportation stations. Outline: Outline is a type of augmented reality that merges the outline of the human body or a part of the body with virtual materials, allowing the user to pick up and otherwise manipulate objects that do not exist in reality. One example of this can be found at some museums and science centers in the form of virtual volleyball. Although the player can stand and move on an actual court, the ball is projected on a wall behind him, and he can control it with an outline of himself, which is also projected on the wall Using the concept of augmented reality our project focuses on creating a very useful android mobile based application. The idea is to allow the user to view the virtual object in the real world. The user could provide images of the object which would be the front, back, top, bottom, and left and right side pictures of the object. They will be placed onto a 3D cube which will make up the complete virtual object. Thus an extended environment will be created through the
  • 7. amalgamation of real world and generated object and it will appear as though the real-world object and virtual object coexist within the environment. In order to use this application, the user will first need to acquire a marker. A marker is a piece of paper with black and white markings. This is used to display the augmented object on your mobile phone‘s screen. Marker-based augmented reality uses a camera and a visual marker which determines the centre, orientation, and range of its spherical coordinate system. Once the marker is present one can view augmented objects. Virtual object interaction is another added feature wherein the user can rotate or change the orientation of the object according to his requirements.
  • 8. VIRTUAL REALITY The term "artificial reality", coined by Myron Krueger, has been in use since the 1970s; however, the origin of the term "virtual reality" can be traced back to the French playwright, poet, actor, and director Antonin Artaud. Virtual Reality is a computerized simulation of natural or imaginary reality. Often the user of VR is fully or partially immersed in the environment. Full immersion refers to someone using a machine to shield herself from the real world. VE is the term used to describe the scene created by any computer program in which the user plays an interactive role within the context of the computer generated three dimensional world. The user represents an actor within the system and has an essential presence within the virtual world. Virtual reality (VR) is a term that applies to computer-simulated environments that can simulate physical presence in places in the real world, as well as in imaginary worlds. Advantages: Many different fields can use VR as a way to train students without actually putting anyone in harm's way. This includes the fields of medicine, law enforcement, architecture and aviation. VR also helps those that can't get out of the house experience a much fuller life. Doctors are using VR to help reteach muscle movement such as walking and grabbing as well as smaller physical movements such as pointing. The doctors use the malleable computerized environments to increase or decrease the motion needed to grab or move an object. This also helps record exactly how quickly a patient is learning and recovering. Total immersion within environment. Increased user presence perception within system. Facilitated production of an entirely designed environment. Existing technologies for advanced interaction. Real-time graphical environment generation possible.
  • 9. Disadvantages: The hardware needed to create a fully immersed VR experience is still cost prohibitive. The technology for such an experience is still new and experimental. VR is becoming much more commonplace but programmers are still grappling with how to interact with virtual environments. The idea of escapism is common place among those that use VR environments and people often live in the virtual world instead of dealing with the real one. One worry is that as VR environments become much higher quality and immersive, they will become attractive to those wishing to escape real life. Another concern is VR training. Training with a VR environment does not have the same consequences as training and working in the real world. This means that even if someone does well with simulated tasks in a VR environment, that person might not do well in the real world. Not suited to real-world interaction. Despite advances in technology equipment is still expensive. Auxiliary senses not stimulated. Possible lag in motion-display system.
  • 10. III. AUGMENTED REALITY Augmented Reality (AR), also known as Mixed Reality, aims to combine virtual and real scene together to achieve that virtual ones are belong to the real world. Being characteristic of integration of virtual and real scene, many applications of Augmented Reality are emerging, such as in field of education, medical treatment and entertainment. Fig 1: Real desk with virtual lamp and two virtual chairs [5]. Figure 1 shows an example of what this might look like. It shows a real desk with a real phone. Inside this room there are also a virtual lamp and two virtual chairs. Note that the objects are combined in 3-D, so that the virtual lamp covers the real table, and the real table covers parts of the two virtual chairs. AR can be thought of as the "middle ground" between VE (completely synthetic) and telepresence (completely real).
  • 11. Goals of Augmented Reality: To challenge the impossible. To create virtual environment for a more rich user experience. To integrate it into daily lives to help the masses. To achieve feats which are limited in real world. To enhance imagination of youths. Types of Augmented reality: There are two types of simple augmented reality: marker-based which uses cameras and visual cues, and marker less which use positional data such as a mobile's GPS and compass. (Johnson et al, 2010) Marker based Different types of Augmented Reality (AR) markers are images that can be detected by a camera and used with software as the location for virtual assets placed in a scene. Most are black and white, though colours can be used as long as the contrast between them can be properly recognized by a camera. Simple augmented reality markers can consist of one or more basic shapes made up of black squares against a white background. More elaborate markers can be created using simple images that are still read properly by a camera, and these codes can even take the form of tattoos. Fig 2: A Simple Marker
  • 12. A camera is used with AR software to detect augmented reality markers as the location for virtual objects. The result is that an image can be viewed, even live, on a screen and digital assets are placed into the scene at the location of the markers. Limitations on the types of augmented reality markers that can be used are based on the software that recognizes them. While they need to remain fairly simple for error correction, they can include a wide range of different images. The simplest types of augmented reality markers are black and white images that consist of two-dimensional (2D) barcodes. Marker less In marker-less augmented reality the image is gathered through internet and displayed on any specific location (can be gathered using GPS). The application doesn‘t require a marker to display the content. It is more interactive than marker based augmentation. Fig 3:Marker Less AR The only real difference from a consumer‘s perspective is that the surface the object is sculpted on doesn‘t have to have that.
  • 13. IV.MARKER DESIGN , DETECTION AND RECOGNITION METHOD Markers are square and constituting of black thick border and black graphics within its white internal region. The advantage of using black and white colour is to separate marker from background in grabbed frame easily. Internal region of a marker marks identifier of it. In term of projective geometry, square markers in real world could not be square after projecting onto image plane, in other words, internal graphics in markers often display in distortion. When recognizing them, image unwrapping is necessary.[4] The procedure of unwrapping image is shown in Fig.4. Fig 4:Procedure of unwrapping marker image to find ID The calculation of marker unwrapping could be described as follows: as four corners of a marker are acquired after detecting grabbed frame. Positions of four corners are known in the real world as . Homography matrix H could be calculated in (1). By H points in internal region of marker could be unwrapped to formal one.[4] After that, unwrapping image are used to match templates in matching method or decode in code-decoding method respectively.
  • 14. V. PROBLEM STATEMENT The proposed system aims to provide an environment that will help the users to place artificial 2D as well as 3D objects into real world through the use of AR Markers. The proposed system also allows the user to decide, where to place the object in real world. Once the object has been placed in the scene, it will be displayed accurately according to the perspective in the original scene, which is especially challenging in the case of 3D virtual objects. The proposed system solves the problem of viewpoint tracking and virtual object interaction. The main advantage of the proposed system is that, it is customer oriented and not product or service oriented thus allowing the users to augment a product of their wish. There would be option to move the Virtual object in the virtual space with a marker. VI.WHY ANDROID OS Since the advent of 2010 more and more stress has been given on the usage of Free and Open Source Software (FOSS). Android is leading the current O.S market as shown in figure 5, because it is open source and developed by a consortium of more than 86 leading M.N.C‘s called Open Handset Allowance (O.H.A). Android also is stated as one the most rapidly growing technologies. More and more applications have been developed and modified by third party user. Fig 5: Smartphone OS market shares Moreover, the Android O.S is user friendly. It has a great performance and processing power. Thus, the proposed system is being developed for the most rapidly emerging and flexible O.S- ―ANDROID‖
  • 15. VII. PROPOSED SYSTEM ARCHITECTURE Fig 6: Architecture Block diagram An AR application is composed of the following core components: Camera The camera component ensures that every preview frame is captured and passed efficiently to the tracker. The developer only has to initialize the camera to start and stop capturing. The camera frame is automatically delivered in a device-dependent image format and size. Image Converter The pixel format converter converts from the camera format (e.g., YUV12) to a format suitable for OpenGL ES rendering (e.g., RGB565) and for tracking (e.g., luminance) internally. This conversion also includes downsampling to have the camera image in different resolutions available in the converted frame stack. Tracker The tracker component contains the computer vision algorithms that detect and track real-world objects in camera video frames. Based on the camera image, different algorithms take care of detecting new targets or markers and evaluating virtual buttons. The results are stored in a state object that is used by the video background renderer and can be accessed from application code. The tracker can load multiple datasets at the same time and activate them.
  • 16. Video Background Renderer The video background renderer module renders the camera image stored in the state object. The performance of the background video rendering is optimized for specific devices. Application Code Application developers must initialize all the above components and perform three key steps in the application code. For each processed frame, the state object is updated and the application's render method is called. The application developer must: 1. Query the state object for newly detected targets, markers or updated states of these elements 2. Update the application logic with the new input data 3. Render the augmented graphics overlay Device Databases Device databases are created using the online Target Manager. The downloaded device target database assets contain an XML configuration file that allows the developer to configure certain trackable features, and a binary file that contains the trackable database. These assets are compiled by the application developer into the app installer package and used at runtime by the Vuforia SDK. Cloud Databases Cloud databases can be created using the Target Manager or the Vuforia Web Services API. Targets are queried at runtime using the cloud recognition feature, which performs a visual search in the cloud using camera images sent to it. In addition to the target data, the provisioned targets can contain metadata which is returned upon query. User-Defined Targets A fundamentally different supported approach is the user-defined targets feature. Rather than preparing targets ahead of time, this feature allows targets to be created on the fly from the current camera image. A builder component is called to trigger the creation of a new user-defined target. The returned target is cached, but retained only for a given AR session.
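The three per-frame steps above can be sketched as follows. This is not the Vuforia API; `StateObject` and `onFrame` are stand-in names for the pattern the text describes (query newly detected targets, update application logic, render the overlays).

```java
import java.util.*;

// Illustrative per-frame application loop. Names are assumptions made for
// this sketch; a real SDK delivers the state object via its own callbacks.
public class FrameLoop {
    /** Minimal stand-in for the tracker's state object. */
    public static class StateObject {
        public final List<String> detectedTargets;
        public StateObject(List<String> t) { detectedTargets = t; }
    }

    /** One frame: the three steps from the text. Returns overlay names drawn this frame. */
    public static List<String> onFrame(StateObject state) {
        // 1. Query the state object for the targets detected in this frame.
        List<String> visible = new ArrayList<>(state.detectedTargets);
        // 2. Update application logic (here: sort for a stable draw order).
        Collections.sort(visible);
        // 3. "Render" the overlays: here we just return what would be drawn.
        return visible;
    }
}
```

The key point the architecture makes is that the application code only ever reads the state object; detection and tracking happen upstream in the tracker component.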
  • 17. PROJECT CONSTRAINTS Augmented reality still has some challenges to overcome. Augmented reality systems are expected to run in real time so that a user can move freely within the scene and see a properly rendered augmented image. The application will be built for mobile phones, which usually have small screen dimensions and low resolution. Augmentation also places additional stress on the OS because it requires high processing power. Developers of the application are expected to have a thorough knowledge of the Android OS (which the application is developed for) and the Windows OS (which the application is developed on). Developers are also expected to be familiar with the ADK (Android Development Kit) and Eclipse. APPLICATION AREAS A picture is worth a thousand words: the applications of this project are well understood from the snapshots below, which show virtual objects in real-world environments. Medical Science [11] Fashion Gaming [13] Product Information [14]
  • 18. Literature Survey 2.1 List all the sources for formulation of problem statement Books: Lester Madden, Professional Augmented Reality Browsers for Smartphones: Programming for junaio, Layar and Wikitude (Wrox Programmer to Programmer), Publication Date: June 7, 2011 | ISBN-10: 1119992818 | ISBN-13: 978-1119992813 | Edition: 1 Tools: ● Unity3d Pro ● Qualcomm Vuforia SDK ● Android Development Tools ● Eclipse ● Samsung GT-N7100 2.2 Summary of relevant papers 1. Title of paper: Interior Design in Augmented Reality Environment Authors: Viet Toan Phan Year of Publication: 2010 Summary: This paper proposes a marker-based augmented reality application using the Android operating system which helps combine virtual objects with the real environment, facilitating the various applications mentioned in the paper. The main advantage is the use of low-cost devices as compared to costly head-mounted display devices. Secondly, with the help of this project the user need not buy a product first to see how it will suit the environment. In future, images of objects from various views could be fetched directly from vendors' websites, modelled into 3D objects, and augmented. Augmenting multiple objects, currently a major challenge, will also be supported.
  • 19. Web link: http://www.ijcaonline.org/volume5/number5/pxc3871290.pdf 2. Title of paper: A Study on Tangible AR for interior design Authors: Viet Toan Phan and Prof. Seung Yeon Choo Year of Publication: 2010 Summary: This research examined virtual furniture and adjustment work to create a new design method using augmented reality technology for interior design education. In particular, AR technology opens up many new research fields in engineering and architecture. In an AR environment, design work can become more lively, convenient, and intelligent. Design work and manufacturing can also be conducted at the same time and in close relationship with each other. With AR, the virtual products of graphics technology are not only for simulation but also attain real, higher value. Furthermore, AR technology can become a new animated simulation tool for interior design, allowing the user to see a mixed AR scene through an HMD, video display, or PDA. It is also anticipated that the interactive potential can be increased according to the user's needs. Web link: http://www.cse.iitb.ac.in/~pb/cs626-449-2009/prev-years-other-things-nlp/sentiment-analysis-opinion-mining-pang-lee-omsa-published.pdf 3. Title of paper: Handheld Augmented Reality in Civil Engineering Authors: Gerhard Schall Year of Publication: 2009 Publishing details: Comprehensive exam paper Summary: An overview of current developments in handheld augmented reality in civil engineering is given. Furthermore, a location- and context-aware handheld AR system is presented, which addresses the workflow optimisation of common field tasks with utilities. By means of a more intuitive way of conveying information, remarkable time savings can be achieved with such a system. Potential fields of application are outlined. A first fully functional prototype of the handheld AR system Vidente is
  • 20. available and provides a set of tools for direct user interaction with the presented information on buried utility assets. Further improvements will focus on advanced global tracking using DGPS / RTK-GPS employing terrestrial correction data services while keeping an ergonomically acceptable form factor. Web link: http://www.icg.tu-graz.ac.at/Members/schall/rosus/download 4. Title of paper: Marker Based Augmented Reality Using Android OS Authors: Mr. Raviraj S. Patkar, Mr. S. Pratap Singh, Ms. Swati V. Birje Year of Publication: 2013 Summary: Augmented reality (AR) is an emerging technology in which one's perception of the real-time environment is enhanced by superimposing computer-generated information such as graphical, textual, or audio content, as well as objects, onto a display screen. The proposed application is an Android mobile application which is compatible with all existing and upcoming versions of the operating system. The idea is to allow the user to view a virtual object in the real world using a marker-based AR system. The user provides images of the object: the front, back, top, bottom, left and right side pictures. These are placed onto the faces of a 3D cube, which makes up the complete virtual object. Thus an extended environment is created through the amalgamation of the real world and the generated object, and it appears as though the real-world object and virtual object coexist within the environment. The advantage of this application over existing 2D applications is that it displays the object in 3D and enables the user to rotate it virtually. It is also inexpensive, as the user need not actually purchase the object to see how it fits in the environment; instead, he can try it before the purchase. Web link: http://cs.stanford.edu/courses/cs224n/2009/fp/16.pdf
  • 21. 2.3 Summary of field survey The Interior Design Domain The world of virtual objects in an AR application incorporates the information that makes sense at the application level and is relevant to the users. Conceptually, this is a shared database of logical objects, the "model". Because of the replicated architecture, the model is available as a copy in each instance of the application. The structure and type of the model depend on the application. In the interior design example we deal with a model that stores geometric data for a set of furniture. Each model object represents a piece of furniture, and maintains geometric transformations and visual attributes relevant to the object. The model objects are organized hierarchically, so that it is possible to select and interact with groups of furniture. An interactive representation of the model in the user interface is called a "view". Views are created based on a specific interpretation of the model information. The interpretation is determined by the type of view, the type of model data, and the context of the interface. As an example, consider the furniture model of the interior design application and two views: a graphics rendering of the furniture and a browser for the items in the model. The rendering creates geometric primitives appropriate for the type of furniture, and geometric transformations and attributes are used directly to customize the presentation. On the other hand, the browser creates a list of labels taken from the model objects and ignores all geometric information. Both views have presentation parameters that are not bound by the model interpretation and can be used for local customization of the interface. The interpretation is not so straightforward for non-geometric model information. Highlighting a view object, for example, can be used to indicate a current selection in the model, but different views are likely to use different methods to show the highlight.
This type of feedback becomes more complicated in multi-user applications with local selections at each
  • 22. site. In a distributed application, information about the state and actions of remote users becomes part of the model. It is important to give each user an awareness of who is participating and what other participants are doing. The interior design application makes use of an object browser that is capable of showing remote selections. In general, it can be a challenging task for the interface builder to find a concrete visualization for some abstract structure or behavior in the model. The model-view mechanism is well known in the area of user interface construction, and is used to implement a separation between interface and application functionality. The importance of separability for modularity and independent development of interface components has been recognized even in non-distributed environments, where it is considered good software design. Yet separability becomes an indispensable architectural feature for distributed interfaces, at least for those with a need for object-level sharing. By using the model-view paradigm, we achieve the necessary independence between the conceptual objects of the global model and the objects and manipulations of a particular interactive view. New tools: AR Tools: 1. Qualcomm Vuforia SDK: A sophisticated SDK for handling AR-based apps, with support for various target types. 2. Unity3d Pro: Used instead of a plain OpenGL renderer to render graphics and scenes on the device. 3. ARCamera Prefab: Replacing the main camera ensures that the device camera can be used as an input device. 4. Node.js: For networking and connection management to our cloud database. 5. Open source cloud services such as Hadoop or Cloud Foundry: for storing the image target database.
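The model-view separation described above can be sketched in a few lines: one shared furniture model, and two views that interpret it independently (a browser that keeps only the labels, and a renderer that uses only the geometry). Class and method names are illustrative, not taken from any real framework.

```java
import java.util.*;

// Minimal sketch of the model-view separation: the model holds both label
// and geometry; each view reads only the part relevant to its interpretation.
public class ModelView {
    /** One model object: a piece of furniture with a label and geometric data. */
    public static class Furniture {
        public final String label;
        public final double x, y, scale;
        public Furniture(String label, double x, double y, double scale) {
            this.label = label; this.x = x; this.y = y; this.scale = scale;
        }
    }

    /** "Browser" view: lists labels and ignores all geometric information. */
    public static List<String> browserView(List<Furniture> model) {
        List<String> labels = new ArrayList<>();
        for (Furniture f : model) labels.add(f.label);
        return labels;
    }

    /** "Rendering" view: uses only geometry, summarised here as total floor area. */
    public static double renderedArea(List<Furniture> model, double baseArea) {
        double total = 0;
        for (Furniture f : model) total += baseArea * f.scale * f.scale;
        return total;
    }
}
```

Because neither view owns the model, a distributed version of the application can replicate the model and attach different views at each site, which is exactly the separability argument made above.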
  • 23. List some relevant current/open problems. Environment: How can we deal with dynamic aspects (color, illumination) of environments? While (indirectly) some work has been performed on visual patterns, in general the structure, colors, and illumination conditions in an environment are ignored or adapted for manually. For example, dynamically adaptable color schemes that adjust to the environment conditions could be of great benefit in solving some of the object segmentation and depth problems that are caused by the environment. Capturing: How do high-definition and HDR cameras coupled with improved display resolution change perception on small devices? These camera types are currently attracting interest: they are suitable for solving perceptual problems associated with resolution mismatches, and for improving the color gamut and contrast. However, the perceptual consequences of using HDR cameras with non-HDR displays should be carefully studied, since skewed colors can be counterproductive. Capturing: How can we design systems with dynamic FOV, and what effects do they have? The FOV mismatch introduced by using wide-angle lenses with small-FOV displays causes scene distortion. This could be addressed through dynamic FOV (e.g., by using liquid lens technology). Similarly, (software) methods that adapt to the actual position of the eye relative to the display could prove useful. It is unknown, though, if such methods are achievable and if they will cause perceptual disturbances. Augmentation: How can we further improve AR methods to minimize depth-ordering problems? X-ray vision is useful to look through objects in the real scene. However, depth ordering and scene understanding in such systems still require improvement: one direction that may yield benefits is multi-view perception. Similarly, label placement in highly cluttered environments still suffers from depth-ordering problems.
Layout and design can also be improved: apt associations need to be implemented that uniquely bind a label to an object. Cues that specify potentially disambiguating information related to the real world (e.g., a street address) might be one possibility in cluttered city environments.
  • 24. Display: Can we parameterize video and rendering quality to pixel density, to support "perceptually correct" AR? In particular, improvements in camera capturing quality and pixel density will make it possible to use very high resolution imagery on very small screens; but to what extent do we need to change the image's visual representation to maximize its understandability? Additionally, what is the maximum disparity between video and rendering resolution before noticeable perceptual problems arise? And is it possible to parameterize the offset effects between video and rendering, for example with respect to mismatches or abstractions? Finally, how much rendering fidelity is truly needed? For example, depth does not seem to be affected much by fidelity (see Section 4.3, rendering and resolution mismatch). Display: What is the weighting of perceptual issues among different display devices? One of the most pressing questions is the actual effect each problem has on the various display types: comparative evaluations are required to generate a per-device weighting of perceptual problems, which would be particularly useful for determining those problems that should be tackled first. In the next section, we provide an initial overview of the differences between the various platforms. User: What are the effects of the dual-view situation on perception and cognition in AR systems? In particular, handheld and see-through devices introduce a dual-view situation, which may help to verify ambiguous cues obtained from display content. However, its true effects are unknown; for example, disparity-plane switching is expected to be counterproductive, but are the advantages of dual view more important, and how could we possibly minimize the effects of disparity-plane switching? User: What are the effects of combinations of these problems on the perceptual pipeline?
A single problem can have effects on different stages, as evidenced by our repeated mentions of some issues in multiple sections; for example, sunlight can make capturing, display, and user perception difficult. What may be even more important is the actual combination of problems that accumulate through the pipeline: for instance, low-resolution capturing may affect multiple subsequent stages in the perceptual pipeline, and problems may become worse at each stage. The question is how much this accumulation affects perceptual problems on different platforms.
  • 25. 3.5 Task division among group members (Task: Done by) Finding and reading papers and books: Vinyas and Shyam; Node.js: Shyam; Cloud services and their usage: Shyam and Vinyas; Unity3d: Shyam and Vinyas; Vuforia SDK: Shyam and Vinyas; 3DS MAX: Shyam; GUI: Vinyas; Accuracy and other measurements: Vinyas
  • 26. 4. Analysis, Design and Modeling 4.1 Overall description of the project In the case of interior design, the designer essentially applies the three basic principles of interior design: color, scale, and proportion, within a predetermined space. Thus, the proposed AR system is focused on giving the user the flexibility to design using these three basic principles. Therefore, in the proposed AR environment, the user is able to adjust the properties of virtual furniture and create different arrangements in a real environment. The era of mobile web applications has just started, and it has a long way to go. Development of mobile web applications will emphasize the following aspects: 1) More and more sensors will be added to mobile phones, so new APIs that use those capabilities will bring brand-new applications to users. 2) Multimedia capabilities will be enhanced, and engines will support more multimedia types such as Flash and SVG. 3) Dedicated Integrated Development Environments (IDEs) will be improved to accelerate application development; visual programming and JavaScript debugging will be the most important functions of the IDE.
  • 27. 4.2 Functional requirements and Non-Functional requirements 4.2.1 Functional requirements Functional requirements define the fundamental actions that must take place in the software in accepting and processing the inputs and in processing and generating the outputs.
  • 28. 4.2.2 Non-Functional requirements 4.2.2.1 Error handling ● The product shall handle expected and unexpected errors in ways that prevent loss of information and long downtime periods. 4.2.2.2 Performance Requirements ● The user should be able to enter a multiple-line review. ● The user must be able to try the system on more than one review in a single run. 4.2.2.3 Safety Requirements ● System use shall not cause any harm to human users. 4.2.2.4 Security Requirements ● The system will use a secured database. ● Any user can read information, but cannot edit or modify anything except the information he/she enters. 4.2.2.5 Reliability The system should ensure that user actions are performed correctly, as the user requires: ● The system should be as accurate as possible and easy to use. ● The system must not give an output for empty input; it should recover and react robustly to the error by sending a message back to the user. 4.2.2.6 Correctness All algorithms implemented in the system should be correct, meaning they perform as required. The testing phase ensures the correctness of the software by trying all possible cases and matching their output with the documentation.
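The empty-input reliability requirement above amounts to a simple validation step before any processing happens. A minimal sketch (the class and method names are illustrative):

```java
// Illustrative input validation for the reliability requirement: empty input
// must never produce an output; instead a message is sent back to the user.
public class InputValidator {
    /** Returns an error message for blank input, or null when the input is usable. */
    public static String validate(String input) {
        if (input == null || input.trim().isEmpty())
            return "Input is empty; please enter a value.";
        return null;  // input accepted, processing may proceed
    }
}
```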
  • 29. Dependency Details Software Requirements: Component Version JDK Java SE 7u25 Eclipse IDE Latest version Android SDK Downloader Android SDK Tools revision 22 Android ADT Latest version that is for SDK tools rev 22 Android SDK Tools Android SDK Tools revision 22 Android SDK platform support Android SDK Platform tools revision 17 Cygwin Environment Latest version 1.7.20-1 Android NDK Android NDK r8e Microsoft XBOX KINECT v2.18
  • 30. Hardware Requirements: The only hardware is computer systems, which will act as server, data centre, and front end for the user: ● PC 2.0 GHz or higher. ● 3 GB RAM or higher. ● 10 GB disk space or higher. ● Internet access. ● Operating system: Windows Vista or higher. ● Depth-sensing camera such as Kinect or PrimeSense. ● An Android device with a good camera, preferably with autofocus. Testing Testing Plan A. Plan Table 1: Testing Plan Type of Test Test Performed (Yes/No) Comments Software Component Requirements Testing Yes Modules and the integrated system were tested to ensure they meet all the requirements. 1. Initialization 2. Camera focus 3. Marker detection 4. Object tracking Unit Testing Yes Modules were developed as a 1. Single marker implementation
separate unit, and tested so as to avoid errors migrating into the system after integration. 2. Initial Test 3. Frame Marker Implementation Integration Testing Yes Developing all modules separately makes integration testing an essential requirement. 1. ImageTarget 2. UserDefined Target 3. Cloud Reco 4. Touch Events Performance Testing No At the current stage, focus was on meeting the requirements and feature development. GUI Testing Yes As a user-facing application, the GUI is almost as important as any other feature. 1. UserDefined Target Menu Security Testing No At the current stage, feature development is given more preference. Stress Testing No At the current stage, focus was on meeting the requirements and feature development. Load Testing No At the current stage, focus was on meeting the requirements and feature development.
  • 32. B. Team Details Table 2: Test Team Details Role Name Specific Responsibilities Test Case Design and Development Vinyas & Shyam Planning and executing various test cases for functional and non-functional points of the application; identifying the appropriate techniques and guidelines to implement the required tests. Testing Vinyas & Shyam Responsible for conducting unit, integration and GUI testing for the application. Debugging Shyam Finding and correcting errors found after running the test cases. C. Test Schedule Table 3: Test Schedule Activity Start Date Completion Date Hours Comments ImageTarget Apr 19, 2014 Apr 19, 2014 1 hr. Successfully Done Camera Integration Apr 19, 2014 Apr 19, 2014 1 hr. Successfully Done Initial User Test Apr 19, 2014 Apr 19, 2014 3 hrs. Successfully
  • 33. Done Test User Defined Module Apr 20, 2014 Apr 20, 2014 3 hrs. Successfully Done Video Playback Apr 20, 2014 Apr 20, 2014 2 hrs. Successfully Done Content Selection Apr 20, 2014 Apr 20, 2014 1 hr. Successfully Done Content Sequencing Apr 20, 2014 Apr 20, 2014 2 hrs. Successfully Done Touch Drag and Drop Apr 21, 2014 Apr 21, 2014 4 hrs. Successfully Done Model Selection and Rendering Apr 21, 2014 Apr 21, 2014 3 hrs. Successfully Done VS Movement Apr 21, 2014 Apr 21, 2014 2 hrs. Successfully Done
  • 34. ● 4.5 Risk Analysis and Mitigation Plan Risk ID Description of Risk Risk Area Prob. (P) Impact (I) RE (P*I) Risk Selected for mitigation (S) (Y/N) Mitigation plan if S is 'Y' Contingency plan if any 1 Marker not visible Requirement Risk M(3) H(5) 15 Y Adjust the camera of the device to accommodate the marker 2 Same marker, multiple instances Project Scope L(1) H(5) 5 Y Remove one of the markers, as only one marker can be associated with a model 3 Flash on camera Hardware Risk M(3) H(5) 15 Y Flash should be kept off, as it can create problems in detection of the marker 4 Occlusion management Project Scope L(1) M(3) 3 N Occlusion is managed; one can place the
camera input a bit higher to enable focus on the target 5 Unity Web Player not initialized Software Risk M(3) H(5) 15 Y The Android devices should have the Unity Web Player installed. 6 App does not run / compatibility issues Requirement Risk L(1) M(3) 3 N In this situation the software required to run the program will be checked and reinstalled (if necessary) to successfully run the program. 7 Software package integration → part of code doesn't run Development Environment Risk H(5) H(5) 25 Y Implemented each part separately and successfully, and after combining
  • 36. thoroughly tested the system. 8 Lack of knowledge of the project domain → improper implementation and testing Testing Environment Risk and Personnel Related H(5) H(5) 25 Y Did an adequate amount of literature survey, research and discussion with faculty to minimize the risk. 9 Kinect not initialized with Unity Hardware Risk H(5) M(3) 15 Y The Kinect plugin for Unity should be initialized before compiling the package 10 User Defined Target Builder Testing Environment Risk H(5) H(5) 25 N The selected target should have a proper pattern so as to obtain a proper trackable image.
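The risk-exposure figures in the table follow RE = P * I on the report's L(1)/M(3)/H(5) scale, with high-exposure risks selected for mitigation. A small sketch of that calculation (names are illustrative):

```java
import java.util.*;

// Sketch of the risk-exposure calculation used in the table above:
// RE = probability rating (P) times impact rating (I), with ratings on
// the report's scale of L = 1, M = 3, H = 5.
public class RiskExposure {
    /** Risk exposure: probability rating times impact rating. */
    public static int exposure(int p, int i) { return p * i; }

    /** Returns (sorted) descriptions of risks whose exposure meets the threshold. */
    public static List<String> selectForMitigation(Map<String, int[]> risks, int threshold) {
        List<String> selected = new ArrayList<>();
        for (Map.Entry<String, int[]> e : risks.entrySet()) {
            int[] pi = e.getValue();  // pi[0] = P rating, pi[1] = I rating
            if (exposure(pi[0], pi[1]) >= threshold) selected.add(e.getKey());
        }
        Collections.sort(selected);
        return selected;
    }
}
```

With a threshold of 15, rows 1, 3, 5 and 7 through 10 of the table would be flagged, matching the rows the report marks for mitigation.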
  • 37. References [1] Lester Madden, Professional Augmented Reality Browsers for Smartphones: Programming for junaio, Layar and Wikitude (Wrox Programmer to Programmer). Publication Date: June 7, 2011 | ISBN-10: 1119992818 | ISBN-13: 978-1119992813 [2] Dominic Cushnan, Developing AR Games for Android and iOS. Publication Date: September 2013 | ISBN-10: 1783280034 [3] Interior Design in Augmented Reality Environment, Viet Toan Phan | Year of Publication: 2010 | Web link: http://www.ijcaonline.org/volume5/number5/pxc3871290.pdf [4] A Study on Tangible AR for interior design, Viet Toan Phan and Prof. Seung Yeon Choo | Year of Publication: 2010 | Web link: http://www.cse.iitb.ac.in/~pb/cs626-449-2009/prev-years-other-things-nlp/sentiment-analysis-opinion-mining-pang-lee-omsa-published.pdf [5] Handheld Augmented Reality in Civil Engineering, Gerhard Schall | Year of Publication: 2009 | Web link: http://www.icg.tu-graz.ac.at/Members/schall/rosus/download [6] Marker Based Augmented Reality Using Android OS, Mr. Raviraj S. Patkar, Mr. S. Pratap Singh, Ms. Swati V. Birje | Year of Publication: 2013 | Web link: http://nlp.stanford.edu/courses/cs224n/2009/fp/16.pdf [7] https://developer.vuforia.com/resources/dev-guide/knowledge-base-articles [8] General-purpose systems for effective construction simulation, JC Martinez, PG Ioannou, Journal of Construction Engineering