A lecture on research directions in Augmented Reality as part of the COSC 426 class on AR. Taught by Mark Billinghurst from the HIT Lab NZ at the University of Canterbury.
3. The Future is with us
It takes at least 20 years for new technologies to go from the lab to the lounge.
“The technologies that will significantly affect our lives over the next 10 years have been around for a decade. The future is with us. The trick is learning how to spot it. The commercialization of research, in other words, is far more about prospecting than alchemy.”
Bill Buxton, Oct 11th 2004
5. Research Directions
Components
Markerless tracking, hybrid tracking
Displays, input devices
Tools
Authoring tools, user-generated content tools
Applications
Interaction techniques/metaphors
Experiences
User evaluation, novel AR/MR experiences
7. Occlusion with See-through HMD
The Problem
Occluding real objects with virtual
Occluding virtual objects with real
(Figure: real scene vs. current see-through HMD)
8. ELMO (Kiyokawa 2001)
Occlusive see-through HMD
Masking LCD
Real-time range finding
10. ELMO Design
(Diagram: virtual images from the LCD are merged with the real world via an optical combiner; an LCD mask and depth sensing sit in the optical path)
Use LCD mask to block real world
Depth sensing for occluding virtual images
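The two mechanisms above can be sketched as a per-pixel test (a minimal illustration, not Kiyokawa's implementation): where the virtual surface is closer than the sensed real depth, draw the virtual pixel and opacify the LCD mask there; elsewhere let the real world through.

```python
import numpy as np

def composite_with_occlusion(real_depth, virtual_rgb, virtual_depth):
    """Per-pixel occlusion in the spirit of ELMO: a virtual pixel wins
    (and the LCD mask blocks the real world behind it) only where the
    virtual surface is closer than the depth sensed for the real scene."""
    show_virtual = virtual_depth < real_depth
    lcd_mask = show_virtual.astype(np.uint8)            # 1 = opaque mask cell
    overlay = np.where(show_virtual[..., None], virtual_rgb, 0)
    return overlay, lcd_mask
```

In a real system the mask would be warped to the LCD panel's resolution and viewpoint; the comparison itself is the core idea.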
14. Mobile BuildAR
Ideal authoring tool
Develop on PC, deploy on handheld
(Pipeline diagram: BuildAR on the PC produces an AR scene as XML; an AR Player on the mobile phone, built on Edgelib/stbES for Symbian/Windows Mobile and Python, plays it back)
17. Future Directions
Interaction Techniques
Input techniques
3D vs. 2D input
Pen/buttons/gestures
Collaboration techniques
Simultaneous access to AR content
User studies…
24. Lucid Touch
Microsoft Research & Mitsubishi Electric Research Labs
Wigdor, D., Forlines, C., Baudisch, P., Barnwell, J., Shen, C.
LucidTouch: A See-Through Mobile Device
In Proceedings of UIST 2007, Newport, Rhode Island, October 7-10, 2007, pp. 269–278.
25.
26. Auditory Modalities
Auditory
auditory icons
earcons
speech synthesis/recognition
Nomadic Radio (Sawhney)
- combines spatialized audio
- auditory cues
- speech synthesis/recognition
27. Gestural interfaces
1. Micro-gestures (unistroke, SmartPad)
2. Device-based gestures (tilt-based examples)
3. Embodied interaction (EyeToy)
28. Haptic Modalities
Haptic interfaces
Simple uses in mobiles? (vibration instead of ringtone)
Sony’s Touchengine
- physiological experiments show you can perceive two stimuli 5 ms apart, and spaced as little as 0.2 microns
(Diagram: piezoelectric actuator built from n layers of 4 μm film, 28 μm total, driven by voltage V)
31. Multimodal Input
Combining speech and gesture builds on the strength of
each
Speech – mode selection, group selection
Gesture – direct manipulation
Key problem
Command disambiguation
- “Move that chair” - which chair?
Use statistical methods for disambiguation
Speech and gesture recognition provide multiple possibilities –
need to look for most probable
SenseShapes detect object of interest (Olwal 2003)
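The statistical fusion step can be sketched in a few lines (a toy illustration, not the SenseShapes implementation): speech and gesture recognizers each assign probabilities to candidate objects, and the fused score picks the most probable referent.

```python
def disambiguate(speech_scores, gesture_scores):
    """Fuse per-object probabilities from speech and gesture recognition
    and return the most probable referent (a naive product model)."""
    candidates = set(speech_scores) | set(gesture_scores)
    fused = {obj: speech_scores.get(obj, 0.0) * gesture_scores.get(obj, 0.0)
             for obj in candidates}
    return max(fused, key=fused.get)

# "Move that chair" with two chairs in the scene: speech narrows the type,
# the pointing gesture favours chair_2.
speech = {"chair_1": 0.5, "chair_2": 0.5, "table": 0.05}
gesture = {"chair_1": 0.2, "chair_2": 0.7, "table": 0.1}
print(disambiguate(speech, gesture))  # → chair_2
```

A real system would combine full n-best lists and temporal alignment; the argmax over joint probabilities is the underlying principle.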
39. Example: Visualizing Sensor Networks
Rauhala et al. 2007 (Linköping)
Network of Humidity Sensors
ZigBee wireless communication
Use Mobile AR to Visualize Humidity
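The visualization step amounts to mapping each sensor's reading to an overlay drawn at the node's position. A minimal sketch (function name and colour scheme invented for illustration): relative humidity maps to a colour from red (dry) to blue (humid).

```python
def humidity_to_rgb(rh):
    """Map relative humidity (0-100 %RH) to an (R, G, B) overlay colour:
    red for dry, blue for humid, clamped to the valid range."""
    t = max(0.0, min(100.0, rh)) / 100.0
    return (int(255 * (1 - t)), 0, int(255 * t))
```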
40.
41.
42. Example: Sensor Input for AR Interaction
UbiComp sensor
Light, temp, motion, sound
RF connection
AR software plug-in
Sensor input interacting with AR applications
43. uPart USB Bridge
Particle: http://particle.teco.edu (idle: 16 hours)
49. UCAM: Architecture
(Architecture diagram: ubi-UCAM. wear-UCAM pairs a wearSensor (light, IR receiver, BioSensor) with a wearService and UserProfileManager; each service — ubiTrack, MRWindow, ubiTV — combines a User Conditional Context (#1–#3) with an Integrator, Manager, Interpreter and ServiceProvider, alongside media and light services. The context interface carries What/When/How and Where/When (Tag-it, PDA, couch sensor, door sensor); the network interface carries Who/What/When/How over BAN/PAN (BT) and TCP/IP (Discovery, Control, Event) on the operating system; vr-UCAM connects in turn.)
50. Ubiquitous
(Diagram: Weiser's terminal-to-ubiquitous axis crossed with Milgram's reality–virtual reality continuum, locating Desktop AR, Mobile AR, UbiComp, VR, Ubi AR and Ubi VR. From: Joe Newman)
51. Future Directions
Massive Multiuser
Handheld AR for the first time allows extremely high
numbers of AR users
Requires
New types of applications/games
New infrastructure (server/client/peer-to-peer)
Content distribution…
57. Leveraging Web 2.0
Content retrieval using HTTP
XML encoded meta information
KML placemarks + extensions
Queries
Based on location (from GPS, image recognition)
Based on situation (barcode markers)
Queries also deliver tracking feature databases
Everybody can set up an AR 2.0 server
Syndication:
Community servers for end-user content
Tagging
AR client subscribes to arbitrary number of feeds
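The query-and-retrieve loop above can be sketched as follows (the URL scheme and parameter names are invented for illustration; the KML parsing uses the standard OGC namespace): the client sends its GPS position over HTTP and receives KML placemarks to overlay.

```python
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

def build_query(base_url, lat, lon, radius_m=500):
    """Compose a hypothetical AR 2.0 content query for a GPS position."""
    return base_url + "?" + urlencode({"lat": lat, "lon": lon, "radius": radius_m})

KML_NS = "{http://www.opengis.net/kml/2.2}"

def parse_placemarks(kml_text):
    """Extract (name, lon, lat) tuples from a KML response."""
    root = ET.fromstring(kml_text)
    marks = []
    for pm in root.iter(KML_NS + "Placemark"):
        name = pm.findtext(KML_NS + "name")
        coords = pm.findtext(f"{KML_NS}Point/{KML_NS}coordinates").strip()
        lon, lat, *_ = (float(c) for c in coords.split(","))
        marks.append((name, lon, lat))
    return marks
```

Because the payload is plain KML over HTTP, anyone can stand up such a server, which is exactly the "AR 2.0" point.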
58. Content
Content creation and delivery
Content creation pipeline
Delivering previously unknown content
Streaming of
Data (objects, multi-media)
Applications
Distribution
How do users learn about all that content?
How do they access it?
59. Twitter 360
Twitter 360: http://www.twitter-360.com
AR to geo-locate Tweets around you
Better than Google Maps?
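Geo-locating a tweet in an AR view boils down to computing the compass bearing from the user to the tweet's coordinates and comparing it with the phone's heading. A minimal sketch (not Twitter 360's code) using the standard initial great-circle bearing formula:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees
    clockwise from north; used to place a geo-located item on screen."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
```

The tweet is then rendered at the screen offset proportional to (bearing − compass heading), which is what makes the AR view arguably more intuitive than a flat map.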
60. Scaling Up
AR on a City Scale
Using mobile phone as ubiquitous sensor
MIT Senseable City Lab
http://senseable.mit.edu/