The document summarizes Jean Vanderdonckt's upcoming lecture on gestural interaction. It will cover the psychological, hardware, software, usage, social and user experience dimensions of gestural interaction. On the psychological dimension, it discusses definitions of gestures and theories of gesture types. On the hardware dimension, it outlines paradigms of contact-based and contact-less gesture interaction. On the software dimension, it provides an overview of gesture recognition algorithms such as Rubine, Siger, LVS and nearest neighbor classification.
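The nearest-neighbor classification mentioned above can be illustrated with a small sketch (this is not the Rubine, SIGeR, or LVS algorithms themselves, and the templates are made up for illustration): a candidate stroke is resampled to a fixed number of points, normalized for position and scale, and matched to the closest stored template by average point-to-point distance.

```python
import math

def resample(points, n=32):
    """Resample a stroke to n evenly spaced points along its path."""
    pts = list(points)
    total = sum(math.dist(pts[i - 1], pts[i]) for i in range(1, len(pts)))
    interval = total / (n - 1)
    out, acc, i = [pts[0]], 0.0, 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= interval:
            t = (interval - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)   # continue measuring from the new point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:        # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def normalize(points):
    """Translate the centroid to the origin and scale to unit size."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    pts = [(x - cx, y - cy) for x, y in points]
    size = max(max(abs(x) for x, _ in pts), max(abs(y) for _, y in pts)) or 1.0
    return [(x / size, y / size) for x, y in pts]

def distance(a, b):
    """Average point-to-point Euclidean distance between two paths."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def classify(stroke, templates):
    """Return the label of the nearest stored template."""
    probe = normalize(resample(stroke))
    return min(templates, key=lambda name: distance(probe, templates[name]))

# Hypothetical templates: a horizontal line and a vertical line.
templates = {
    "horizontal": normalize(resample([(0, 0), (10, 0)])),
    "vertical": normalize(resample([(0, 0), (0, 10)])),
}
print(classify([(1, 1), (9, 1.2)], templates))  # horizontal
```

The same normalize-then-compare structure underlies the $1-style unistroke recognizers often covered alongside Rubine's feature-based classifier.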
This document provides an overview of biometrics technologies. It begins with an introduction to biometrics and then discusses the history of biometrics from ancient Egyptians and Chinese using fingerprints to modern systems being developed in the 1970s. The document outlines key characteristics biometrics must have such as universality and permanence. It then classifies and describes various biometric technologies including fingerprint, face, iris, voice, and signature recognition. Application examples are presented for areas like gaming, television control, and accessibility switches. The document concludes that biometrics provide a user-friendly way to interact with devices without passwords while continuing to develop as an emerging field.
Lecture 4 from the COMP 4010 course on AR/VR. This lecture reviews optical tracking for AR and starts discussion about interaction techniques. This was taught by Mark Billinghurst at the University of South Australia on August 17th 2021.
Lecture 10 in the COMP 4010 Lectures on AR/VR from the University of South Australia. This lecture is about VR Interface Design and Evaluating VR interfaces. Taught by Mark Billinghurst on October 12, 2021.
Lecture 9 of the COMP 4010 course on AR/VR. This lecture is about AR Interaction methods. Taught on October 2nd 2018 by Mark Billinghurst at the University of South Australia.
Lecture 5 in the COMP 4010 class on Augmented and Virtual Reality. This lecture was about AR Interaction and Prototyping methods. Taught by Mark Billinghurst on August 24th 2021 at the University of South Australia.
Virtual, augmented, and mixed reality technologies were discussed. Virtual reality immerses users in simulated environments while augmented reality enhances the real world with computer-generated perceptions. Mixed reality merges real and virtual worlds. Augmented reality was defined and examples of marker-based and markerless augmented reality were provided. Applications of augmented reality discussed included medical, entertainment, education, and more. Both advantages such as improved learning and interaction, and disadvantages including privacy concerns were noted.
Gesture recognition technology uses cameras to read human body movements and gestures as a form of input to control devices and applications. A camera captures gestures like hand movements and facial expressions and sends that data to a computer for interpretation. Gesture recognition allows humans to interact with machines naturally without physical devices by using gestures to control cursors, activate menus, or control games and other applications. There are different methods for capturing and interpreting gestures including using wired gloves, depth cameras, stereo cameras, single cameras, or motion controllers.
Lecture 9 of the COMP 4010 course in AR/VR from the University of South Australia. This was taught by Mark Billinghurst on October 5th, 2021. This lecture describes VR input devices, VR systems and rapid prototyping tools.
COMP 4010 Lecture 9 providing an overview of Augmented Reality Technology. Taught by Mark Billinghurst on October 8th 2019 at the University of South Australia.
Gesture recognition technology allows for control of devices through hand and body motions. It works by using cameras, sensors and algorithms to interpret gestures and movements. Key applications include controlling smart TVs with hand motions, sign language translation, and assisting disabled individuals. Challenges include variations between individuals, reading motions accurately due to lighting and noise, and lack of standardized gesture languages.
Augmented reality enhances one's current perception of reality by superimposing computer-generated images over a user's view of the real world. The goal of AR is to enhance performance and perception while making it difficult to distinguish between real and virtual elements. AR works by adding virtual objects to real world scenes and potentially removing real world objects. Key components include devices that can project virtual enhancements onto the real world. Applications span industries like aviation, business, education, and healthcare. While AR augments reality, virtual reality aims to replace it with a fully immersive computer-generated environment. AR may become widely used in daily life through new interaction interfaces.
Lecture 12 in the COMP 4010 course on AR/VR. This lecture was about research directions in AR/VR and in particular display research. This was taught by Mark Billinghurst on September 26th 2021 at the University of South Australia.
The document discusses hand gesture recognition. It defines what gestures are and how gesture recognition works by interpreting human gestures through mathematical algorithms. This allows humans to interact with machines naturally without devices. Examples of applications include controlling a smart TV with hand movements and using gestures for gaming. The document outlines the hardware and software needed for gesture recognition, including a webcam, processor, RAM, and operating system. It also provides an overview of the module structure involved in identifying and applying gestures as inputs.
The final lecture in the 2021 COMP 4010 class on AR/VR. This lecture summarizes further research directions and trends in AR and VR. It was taught by Mark Billinghurst on November 2nd 2021 at the University of South Australia.
There are three main types of authentication: something you know, something you have, and something you are. Biometrics uses biological and behavioral characteristics to identify individuals, such as fingerprints, iris patterns, voice, gait, and signatures. Some common biometric technologies are fingerprint, face, iris, vein, voice, and signature recognition. Biometrics can be used for applications like access control, time/attendance tracking, airports, ATMs, and more. While biometrics provide security benefits, they also have disadvantages like cost, accuracy issues, and privacy concerns. The field continues to evolve as costs decrease and convenience increases.
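As a toy illustration of "something you are" verification (the feature vectors and threshold below are purely hypothetical, not any specific biometric system): a probe sample's features are compared against the enrolled template, and access is granted only when they are close enough. Tightening the threshold reduces false accepts at the cost of more false rejects, which is the accuracy trade-off noted above.

```python
import math

def verify(probe, enrolled, threshold=0.5):
    """Accept the probe if its feature vector is within `threshold`
    of the enrolled template (Euclidean distance)."""
    return math.dist(probe, enrolled) <= threshold

# Hypothetical 4-dimensional template captured at enrollment.
enrolled = [0.12, 0.80, 0.33, 0.54]
print(verify([0.10, 0.82, 0.30, 0.55], enrolled))  # True: same user, small drift
print(verify([0.90, 0.10, 0.70, 0.05], enrolled))  # False: impostor sample
```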
Empathic Computing and Collaborative Immersive Analytics, by Mark Billinghurst
This document discusses empathic computing and collaborative immersive analytics. It notes that while fields like scientific and information visualization are well established, little research has looked at collaborative visualization specifically. Collaborative immersive analytics combines mixed reality, visual analytics and computer-supported cooperative work. Empathic computing aims to develop systems that allow sharing experiences, emotions and perspectives using technologies like virtual and augmented reality with physiological sensors. Applying these concepts could enhance communication and understanding for collaborative immersive analytics tasks.
This document discusses gesture recognition. It begins by introducing gesture recognition and its evolution from graphical user interfaces using mice and keyboards. It then defines different types of gestures including iconic, deictic, metaphoric, and beat gestures. The document outlines the basic working of a gesture recognition system and different types of gesture sensing technologies like hand gesture recognition, facial gesture recognition, sign language recognition, and vision-based techniques. It discusses input devices used for gesture tracking and various applications of gesture recognition like socially assistive robotics, sign language translation, virtual controllers, and remote control. Finally, it addresses challenges in gesture recognition like lack of a universal gesture language and issues with robustness.
A brain-computer interface (BCI), also called a direct neural interface (DNI), is a direct communication pathway between an enhanced or wired brain and an external device. DNIs are often directed at researching, mapping, assisting, augmenting, or repairing human cognitive or sensory-motor functions.
This PowerPoint presentation discusses multitouch interaction technology. It provides an overview of hardware, software, user interfaces, market applications, gesture types, and implementation of multitouch. It describes several touch screen technologies including capacitive, resistive, surface acoustic wave, infrared, and optical. Examples of multitouch gestures like tap, pan, pinch zoom are presented. Current and future uses and markets of multitouch include interactive displays, tables, mobile devices. Research continues to enhance multitouch with 3D interaction and larger surfaces.
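The pinch-zoom gesture mentioned above reduces to simple geometry: the zoom factor is the ratio of the current finger separation to the initial separation, and the pan offset is the movement of the midpoint between the two touches. A minimal sketch (the function and point format are illustrative, not any particular framework's touch API):

```python
import math

def pinch_update(p1_start, p2_start, p1_now, p2_now):
    """Compute a zoom scale and pan offset from two touch points.

    Scale is current finger separation over initial separation;
    pan is the displacement of the midpoint between the fingers.
    """
    d_start = math.dist(p1_start, p2_start)
    d_now = math.dist(p1_now, p2_now)
    scale = d_now / d_start if d_start else 1.0

    mid_start = ((p1_start[0] + p2_start[0]) / 2, (p1_start[1] + p2_start[1]) / 2)
    mid_now = ((p1_now[0] + p2_now[0]) / 2, (p1_now[1] + p2_now[1]) / 2)
    pan = (mid_now[0] - mid_start[0], mid_now[1] - mid_start[1])
    return scale, pan

# Fingers move apart symmetrically: 2x zoom, no pan.
scale, pan = pinch_update((100, 100), (200, 100), (50, 100), (250, 100))
print(scale, pan)  # 2.0 (0.0, 0.0)
```

Real multitouch frameworks deliver these points as touch events; the arithmetic per frame is the same.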
This document provides an introduction to extended reality technologies from Mark Billinghurst, the director of the Empathic Computing Lab at the University of South Australia. It outlines Billinghurst's background and research interests. It then provides an overview of the class, including assignments, equipment available, and the lecture schedule. The lecture schedule covers topics such as augmented reality, virtual reality, the metaverse, and the history of AR/VR.
Lecture 8 of the COMP 4010 course taught at the University of South Australia. This lecture provides an introduction to VR technology. Taught by Mark Billinghurst on September 14th 2021 at the University of South Australia.
A lecture on VR systems and graphics given as part of the COMP 4026 AR/VR class taught at the University of South Australia. This lecture was taught by Bruce Thomas on August 20th 2019.
Lecture 11 of the COMP 4010 class on Augmented Reality and Virtual Reality. This lecture is about VR applications and was taught by Mark Billinghurst on October 19th 2021 at the University of South Australia.
Fingerprint sensor and its application, by Arnab Podder
This document presents a seminar on fingerprint sensors and their applications. It discusses the overview and working of fingerprint sensors, including the different types of fingerprint patterns and sensors. Some key applications of fingerprint sensors mentioned are voter registration, border control, device security, and digital payments. The document outlines advantages such as security and ease of use, as well as disadvantages relating to image quality. It concludes by discussing future applications of fingerprint authentication in areas like banking and government services.
COMP 4010 Lecture 6 on Virtual Reality. This time focusing on Interaction Design for VR and rapid prototyping tools. Taught by Bruce Thomas at the University of South Australia on September 3rd 2019. Slides by Mark Billinghurst
This document discusses interaction design principles and processes for designing virtual reality interfaces. It begins by defining interaction design and discussing needs analysis methods like learning from users, analogous settings, and experts. Ideation techniques like brainstorming and sketching VR interfaces are presented. Design considerations like affordances, metaphors, and physical ergonomics are covered. Prototyping tools like Sketchbox, A-Frame and Unity EditorVR are introduced. The document concludes by discussing evaluation methods like usability testing and field studies.
Augmented Reality in Multi-Dimensionality: Design for Space, Motion, Multiple..., by Shalin Hai-Jew
Augmented reality (AR), the use of digital overlays over physical space, manifests in a wide range of spaces (indoor, outdoor, virtual) and in a variety of ways: in real space with unaided human vision, in headgear, in smart glasses, on mobile devices, and others. There are various authoring technologies that enable the making of AR experiences for various users. This work uses a particular tool (Adobe Aero®) to explore ways to build AR for multiple dimensions, including the fourth dimension (motion, changes over time).
Based on the respective purposes of the AR experience, some basic heuristics are captured for (1) space design, (2) motion design, (3) multiple perception design (sight, smell, taste, sound, touch), and (4) virtual and tangible interactivity.
Lecture 9 from a course on Mobile Based Augmented Reality Development taught by Mark Billinghurst and Zi Siang See on November 29th and 30th 2015 at Johor Bahru in Malaysia. This lecture describes principles for effective Interface Design for Mobile AR applications. Look for the other 9 lectures in the course.
This document discusses various techniques for prototyping augmented reality interfaces, including sketching, storyboarding, wireframing, mockups, and video prototyping. Low-fidelity techniques like sketching and paper prototyping allow for rapid iteration and exploring interactions at early stages. Higher-fidelity techniques like interactive mockups and video prototypes communicate the look and feel of the final product and allow for user testing. A variety of tools are presented for different stages of prototyping, from sketching and interactive modeling in VR, to scene assembly using drag-and-drop tools, to final mockups using design software. Case studies demonstrate applying these techniques from initial concepts through to higher-fidelity prototypes.
This document summarizes a lecture on interaction design for augmented reality. It discusses several types of AR interfaces including: (1) AR information browsers that allow viewing and manipulating virtual content registered in the real world, (2) 3D AR interfaces that allow interacting with and manipulating 3D virtual objects, and (3) tangible interfaces that use physical objects to interact with and control virtual objects. It also presents case studies of specific AR applications and discusses design principles for AR interaction including using physical affordances, feedback, and natural mappings.
Learning The Rules to Break Them: Designing for the Future of VR, by Michael Harris
The VR developer space is riddled with a myriad of design guides, advice, and prohibitions. This talk will provide a survey of the current state of best practices for VR design and discuss how this new human-computer interface provides unique opportunities and challenges for designers. With three years experience developing for every commercially available VR and AR platform, the speaker will also address some unique lessons learned experimenting with this new space and discuss how bending or breaking these emerging design paradigms might unlock exciting new possibilities for the future of VR interfaces. By the end of this talk, participants will have:
Explored the extent to which VR interfaces relate to and differ from more traditional human-computer interfaces.
Received a comprehensive overview and analysis of current emerging VR design paradigms.
Explored the potential for the future of VR interfaces through the practical experiences gained from several years spent in VR design.
COMP 4010 - Lecture 1: Introduction to Virtual Reality, by Mark Billinghurst
Lecture 1 of the VR/AR class taught by Mark Billinghurst and Bruce Thomas at the University of South Australia. This lecture provides an introduction to VR and was taught on July 26th 2016.
Lecture 11 from the 2017 COMP 4010 course on AR and VR at the University of South Australia. This lecture was on AR applications and was taught by Mark Billinghurst on October 26th 2017.
Lecture 6 on the COMP4010 course on AR/VR. This lecture describes prototyping tools for developing interactive prototypes for AR experiences. The lecture was taught on August 31st 2020 by Mark Billinghurst at the University of South Australia
VSMM 2016 Keynote: Using AR and VR to create Empathic Experiences, by Mark Billinghurst
Keynote talk given by Mark Billinghurst at the VSMM 2016 conference on October 19th 2016. This talk was about how AR and VR can be used to create Empathic Computing experiences.
Some of my recent research topics at the Meta-Perception group at the Ishikawa-Watanabe laboratory (http://www.k2.t.u-tokyo.ac.jp/index-e.html)
- The Physical Cloud
- Zero-delay, Zero-mismatch spatial AR with Laser Sensing Display
- Augmented Perception
COMP 4010 Lecture 12 - Research Directions in AR and VR, by Mark Billinghurst
COMP 4010 lecture on research directions in AR and VR, taught by Mark Billinghurst on November 2nd 2017 at the University of South Australia. This is the final lecture in the 2017 COMP 4010 course on AR and VR.
Keynote speech given by Mark Billinghurst at the ISS 2022 conference. Presented on November 22nd, 2022. This keynote outlines some research opportunities in the Metaverse.
Lecture 2 in the 2022 COMP 4010 Lecture series on AR/VR and XR. This lecture is about human perception for AR/VR/XR experiences. This was taught by Mark Billinghurst at the University of South Australia in 2022.
Keynote talk by Mark Billinghurst at the 9th XR-Metaverse conference in Busan, South Korea. The talk was given on May 20th, 2024. It discusses progress on achieving the Metaverse vision laid out in Neal Stephenson's book, Snow Crash.
These are slides from the Defence Industry event organized by the Australian Research Centre for Interactive and Virtual Environments (IVE). This was held on April 18th 2024, and showcased IVE research capabilities to the South Australian Defence industry.
This is a guest lecture given by Mark Billinghurst at the University of Sydney on March 27th 2024. It discusses some future research directions for Augmented Reality.
Presentation given by Mark Billinghurst at the 2024 XR Spring Summer School on March 7 2024. This lecture talks about different evaluation methods that can be used for Social XR/AR/VR experiences.
Empathic Computing: Delivering the Potential of the Metaverse, by Mark Billinghurst
Invited guest lecture by Mark Billinghurst given at the MIT Media Laboratory on November 21st 2023. This was given as part of Professor Hiroshi Ishii's class on Tangible Media.
Empathic Computing: Capturing the Potential of the Metaverse, by Mark Billinghurst
This document discusses empathic computing and its relationship to the metaverse. It defines key elements of the metaverse like virtual worlds, augmented reality, mirror worlds, and lifelogging. Research on the metaverse is still fragmented across these areas. The document outlines a vision for empathic computing systems that allow sharing experiences, emotions, and environments through technologies like virtual reality, augmented reality, and sensor data. Examples are given of research projects exploring collaborative VR experiences and AR/VR systems for remote collaboration and communication. The goal is for technology to support more natural and implicit understanding between people.
Talk to Me: Using Virtual Avatars to Improve Remote Collaboration, by Mark Billinghurst
The document discusses using virtual avatars to improve remote collaboration. It provides background on communication cues used in face-to-face interactions versus remote communication. It then discusses early experiments using augmented reality for remote conferencing dating back to the 1990s. The document outlines key questions around designing effective virtual bodies for collaboration and discusses various technologies that have been developed for remote collaboration using augmented reality, virtual reality, and mixed reality. It summarizes several studies that have evaluated factors like avatar representation, sharing of different communication cues, and effects of spatial audio and visual cues on collaboration tasks.
Empathic Computing: Designing for the Broader Metaverse - Mark Billinghurst
1) The document discusses the concept of empathic computing and its application to designing for the broader metaverse.
2) Empathic computing aims to develop systems that allow people to share what they are seeing, hearing, and feeling with others through technologies like augmented reality, virtual reality, and physiological sensors.
3) Potential research directions are explored, like using lifelogging data in VR, bringing elements of the real world into VR, and developing systems like "Mini-Me" avatars that can convey non-verbal communication cues to facilitate remote collaboration.
Lecture 6 of the COMP 4010 course on AR/VR. This lecture is about designing AR systems. This was taught by Mark Billinghurst at the University of South Australia on September 1st 2022.
Lecture 4 in the 2022 COMP 4010 lecture series on AR/VR. This lecture is about AR Interaction techniques. This was taught by Mark Billinghurst at the University of South Australia in 2022.
This document discusses augmented reality technology and visual tracking methods. It covers how humans perceive reality through their senses like sight, hearing, touch, etc. and how virtual reality systems use input and output devices. There are different types of visual tracking including marker-based tracking using artificial markers, markerless tracking using natural features, and simultaneous localization and mapping which builds a model of the environment while tracking. Common tracking technologies involve optical, magnetic, ultrasonic, and inertial sensors. Optical tracking in augmented reality uses computer vision techniques like feature detection and matching.
This document discusses how metaverse concepts can be applied to corporate learning and leadership development. It defines the metaverse and outlines its key components: virtual worlds, augmented reality, mirror worlds, and lifelogging. Traditional corporate learning is described as instructor-led, group-based, and discrete. The document proposes applying metaverse concepts like learning in the flow of work, just-in-time learning, and adaptive personalized learning. Specific applications explored are virtual reality for skills and soft skills training, augmented reality for hands-on training, lifelogging for adaptive training, and mirror worlds for capturing real-world tasks.
Empathic Computing: Developing for the Whole Metaverse - Mark Billinghurst
A keynote speech given by Mark Billinghurst at the Centre for Design and New Media at IIIT-Delhi on June 16th 2022. This presentation is about how Empathic Computing can be used to develop for the entire range of the Metaverse.
Keynote speech by Mark Billinghurst at the Workshop on Transitional Interfaces in Mixed and Cross-Reality, at the ACM ISS 2021 Conference. Given on November 14th 2021.
Lecture 11 of the COMP 4010 class on Augmented Reality and Virtual Reality. This lecture is about VR applications and was taught by Mark Billinghurst on October 19th 2021 at the University of South Australia
4. The Incredible Disappearing Computer
• 1960–70s: Room
• 1970–80s: Desk
• 1980–90s: Lap
• 1990–2000s: Hand
• 2010–: Head
5. Graphical User Interfaces
• Separation between real and digital worlds
• WIMP (Windows, Icons, Menus, Pointer) metaphor
6. Making Interfaces Invisible
• Rekimoto, J. and Nagao, K. 1995. The world through the computer: computer augmented interaction with real world environments.
7. Internet of Things (IoT)..
• Embed computing and sensing in real world
• Smart objects, sensors, etc..
8. Virtual Reality (VR)
• Users immersed in Computer Generated environment
• HMD, gloves, 3D graphics, body tracking
9. Augmented Reality (AR)
• Virtual Images blended with the real world
• See-through HMD, handheld display, viewpoint tracking, etc..
10. Milgram’s Mixed Reality (MR) Continuum
• Continuum from the Real World to the Virtual World, with Augmented Reality and Virtual Reality in between
• Mixed Reality: "...anywhere between the extrema of the virtuality continuum."
• P. Milgram and F. Kishino (1994) A Taxonomy of Mixed Reality Visual Displays
15. Creating the Illusion of Reality
• Fooling human perception by using
technology to generate artificial sensations
• Computer generated sights, sounds, smell, etc
16. Reality vs. Virtual Reality
• In a VR system there are input and output devices
between human perception and action
17. Example: Birdly - http://www.somniacs.co/
• Create illusion of flying like a bird
• Multisensory VR experience
• Visual, audio, wind, haptic
20. A Human Information Processing Model
• High level staged model from Wickens and Carswell (1997)
• Relates perception, cognition, and physical ergonomics
21. 1. Design for Perception
• Need to understand perception to design AR/VR systems
• Visual perception
• Many types of visual cues (stereo, oculomotor, etc.)
• Auditory system
• Binaural cues, vestibular cues
• Somatosensory
• Haptic, tactile, kinesthetic, proprioceptive cues
• Chemical Sensing System
• Taste and smell
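As a rough worked example of one of the visual cues above, binocular (stereo) disparity can be approximated with the standard small-angle formula. The 64 mm interpupillary distance is a typical adult average, an assumption for illustration; the sketch shows why stereo is a strong depth cue near the viewer but fades with distance.

```python
import math

IPD = 0.064  # interpupillary distance in metres (typical adult average; assumption)

def disparity_deg(z_near, z_far, ipd=IPD):
    """Approximate binocular disparity (degrees) between two objects at
    depths z_near and z_far (metres), via the small-angle formula
    eta ~= ipd * (1/z_near - 1/z_far)."""
    return math.degrees(ipd * (1.0 / z_near - 1.0 / z_far))

# Disparity at arm's length vs. far away: stereo is much weaker at distance
print(disparity_deg(0.5, 1.0))    # large disparity up close (~3.7 deg)
print(disparity_deg(10.0, 20.0))  # tiny disparity far away (~0.18 deg)
```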
26. 2. Design for Cognition
• Design for Working and Long-term memory
• Working memory
• Short-term storage, limited capacity (~5–9 items)
• Long term memory
• Memory recall triggered by associative cues
• Situational Awareness
• Model of current state of user’s environment
• Used for wayfinding, object interaction, spatial awareness, etc..
• Provide cognitive cues to help with situational awareness
• Landmarks, procedural cues, map knowledge
• Support both ego-centric and exo-centric views
27. Design for Micro Interactions in AR
▪ Design interaction for less than a few seconds
• Tiny bursts of interaction
• One task per interaction
• One input per interaction
▪ Benefits
• Use limited input
• Minimize interruptions
• Reduce attention fragmentation
28. Make it Glanceable
• Seek to rigorously reduce information density. Successful designs afford recognition, not reading.
• [Figure: side-by-side examples of bad vs. good glanceable designs]
29. Reduce Information Chunks
You are designing for recognition, not reading. Reducing the total # of information
chunks will greatly increase the glanceability of your design.
• [Figure: two layouts compared by required eye movements, at ~230 ms each]
• 3-chunk design: chunk 1 takes 1–2 movements (460 ms), chunks 2 and 3 one movement each (230 ms each); total ~920 ms
• 5(–6)-chunk design: chunks 1–3 one movement each (230 ms each), chunk 4 three movements (690 ms), chunk 5 two movements (460 ms); total ~1,840 ms
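The per-eye-movement timing above (~230 ms per movement) can be folded into a quick back-of-envelope glance-time estimate. This is a minimal sketch; the movement counts per chunk are taken from the slide's example layouts, and any other counts are assumptions.

```python
# Each eye movement (saccade + fixation) is taken as ~230 ms, per the slide.
MS_PER_EYE_MOVEMENT = 230

def glance_time_ms(movements_per_chunk):
    """Estimate total scan time for a layout, given the number of eye
    movements each information chunk requires."""
    return sum(movements_per_chunk) * MS_PER_EYE_MOVEMENT

# 3-chunk design: chunk 1 needs ~2 movements, chunks 2 and 3 one each
simple = glance_time_ms([2, 1, 1])       # 920 ms
# 5-chunk design: denser layout needs more movements overall
dense = glance_time_ms([1, 1, 1, 3, 2])  # 1840 ms
print(simple, dense)
```

Halving the chunk count roughly halves the scan time, which is the slide's point about glanceability.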
30. Navigation
• How we move from place to place within an environment
• The combination of travel with wayfinding
• Wayfinding: cognitive component of navigation
• Travel: motor component of navigation
31. Wayfinding – Making Cognitive Maps
• Goal of Wayfinding is to build Mental Model (Cognitive Map)
• Types of spatial knowledge in a mental model
• landmark knowledge, procedural knowledge, map-like knowledge
• Creating a mental model
• study a map, explore the space, explore a copy of the space
• Problem: Sometimes perceptual judgments are incorrect within VR
33. Support Wayfinding in VR
• Provide Landmarks
• An obvious, distinct and non-mobile object
• Seen from several locations (e.g. tall)
• Audio beacons can also serve as landmarks
• Use Maps
• Copy real world maps
• Ego-centric vs. Exocentric map cues
• World in Miniature
• Map based navigation
34. Situation Awareness: Ego-centric and Exo-centric views
• Combining ego-centric and exo-centric cues for better situational awareness
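Converting between the exo-centric (map/world) frame and the ego-centric (user) frame is the core computation behind combined cues like a rotating mini-map. A minimal sketch, assuming 2D world coordinates with heading 0° = north (+y); the coordinate conventions are assumptions, not from the lecture:

```python
import math

def egocentric_bearing(user_pos, user_heading_deg, landmark_pos):
    """Convert an exo-centric (world-map) landmark position into an
    ego-centric bearing: degrees the user must turn (+ = right) to face it.
    Positions are (x, y) in world coordinates; heading 0 deg faces +y."""
    dx = landmark_pos[0] - user_pos[0]
    dy = landmark_pos[1] - user_pos[1]
    world_bearing = math.degrees(math.atan2(dx, dy))  # bearing from north
    # Wrap the relative angle into (-180, 180]
    return (world_bearing - user_heading_deg + 180) % 360 - 180

# User at origin facing north; landmark due east -> turn 90 deg right
print(egocentric_bearing((0, 0), 0, (10, 0)))   # 90.0
# User facing east; landmark due north -> turn 90 deg left
print(egocentric_bearing((0, 0), 90, (0, 10)))  # -90.0
```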
36. 3. Design for Ergonomics
• Design for the human motion range
• Consider human comfort and natural posture
• Design for physical input
• Coarse and fine scale motions, gripping and grasping
• Avoid “Gorilla arm syndrome” from holding arm pose
37. Gorilla Arm in AR/VR
• Design interface to reduce mid-air gestures
39. XRgonomics
• Uses physiological model to calculate ergonomic interaction cost
• Difficulty of reaching points around the user
• Customizable for different users
• Programmable API, Hololens demonstrator
• GitHub Repository
• https://github.com/joaobelo92/xrgonomics
Evangelista Belo, J. M., Feit, A. M., Feuchtner, T., & Grønbæk, K. (2021, May). XRgonomics: Facilitating the Creation of Ergonomic 3D Interfaces. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-11).
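The idea of an "ergonomic interaction cost" for reachable points can be sketched with a toy distance-based proxy. This is NOT the XRgonomics physiological model; the shoulder position, arm length, and comfort radius below are illustrative assumptions only.

```python
import math

def reach_cost(point, shoulder=(0.0, 1.4, 0.0), arm_length=0.7,
               comfort_radius=0.45):
    """Toy ergonomic-cost proxy for reaching a 3D point (metres).
    Distance from the shoulder, penalising points outside a comfortable
    radius; points beyond arm's reach are unreachable (inf).
    All parameters are illustrative assumptions."""
    d = math.dist(point, shoulder)
    if d > arm_length:
        return math.inf                       # out of reach
    if d <= comfort_radius:
        return d / comfort_radius             # cheap: close to the body
    # Linearly growing penalty toward full arm extension
    return 1.0 + (d - comfort_radius) / (arm_length - comfort_radius)

print(reach_cost((0.0, 1.4, 0.3)))  # inside comfort zone: low cost
print(reach_cost((0.0, 1.4, 0.6)))  # near full extension: higher cost
print(reach_cost((0.0, 1.4, 1.0)))  # beyond arm's reach: inf
```

A system like XRgonomics replaces this proxy with a physiological model and exposes the costs through an API, but the query pattern (cost per candidate UI position) is the same.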
42. New Tools for Human Factors
• New types of sensors
• EEG, ECG, GSR, etc
• Sensors integrated in HMD
• Integrated into HMD faceplate, straps
• Data processing and capture tools
• iMotions, etc
• AR/VR Analytics tools
• Cognitive3D, etc
43. Project Galea: Multiple Physiological Sensors in HMD
• Incorporate range of sensors in HMD faceplate and over head
• EMG – muscle movement
• EOG – Eye movement
• EEG – Brain activity
• EDA – Skin conductance; PPG – Heart rate
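As a minimal sketch of how one of these signals becomes a usable measure, heart rate can be estimated from the intervals between detected PPG peaks. Real pipelines filter the raw signal and reject motion artefacts first; this assumes clean peak timestamps are already available.

```python
def heart_rate_bpm(peak_times):
    """Estimate heart rate (beats per minute) from PPG peak timestamps
    in seconds. Simplified: assumes peaks were already detected on a
    clean, artefact-free signal."""
    if len(peak_times) < 2:
        raise ValueError("need at least two peaks")
    # Inter-beat intervals between consecutive peaks
    intervals = [b - a for a, b in zip(peak_times, peak_times[1:])]
    mean_ibi = sum(intervals) / len(intervals)
    return 60.0 / mean_ibi

# Peaks 0.8 s apart correspond to 75 beats per minute
print(heart_rate_bpm([0.0, 0.8, 1.6, 2.4]))  # 75.0
```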
45. Cognitive3D
• Capture and analytics for VR
• Multiple sensory input (eye tracking, HR, EEG, body movement, etc)
47. Example: Adaptive VR based on Workload
• VR training systems adapt in
real-time based on cognitive load
• Goal to induce the best level of
performance gain
Dey, A., Chatburn, A., & Billinghurst, M. (2019, March). Exploration of an EEG-based cognitively adaptive training system in virtual reality. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) (pp. 220-226). IEEE.
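The adapt-in-real-time loop described above can be sketched as a simple control rule: raise difficulty when the user is under-loaded, lower it when overloaded. This is a hypothetical rule for illustration, not the one from Dey et al.; the thresholds and the normalised workload index are assumptions (the actual system derives workload from EEG).

```python
def adapt_difficulty(level, workload, low=0.3, high=0.7,
                     min_level=0, max_level=20):
    """Hypothetical adaptation step: 'workload' is a normalised cognitive
    load index in [0, 1] (assumed to come from an EEG pipeline).
    Under-loaded -> harder task; overloaded -> easier; otherwise hold.
    Level range 0-20 mirrors the experimental task below."""
    if workload < low:
        level += 1          # under-loaded: increase difficulty
    elif workload > high:
        level -= 1          # overloaded: ease off
    return max(min_level, min(max_level, level))

# Simulated session: a stream of workload readings drives the level
level = 10
for w in [0.2, 0.2, 0.5, 0.9, 0.8, 0.4]:
    level = adapt_difficulty(level, w)
print(level)  # 10
```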
49. Experimental Task
● Search task
○ Search multiple times in 5 minutes
● Target selection increasingly difficult
○ number of objects, different colors, shapes, and movement
○ increasing difficulty levels (0–20)
53. Results – Response Time
[Chart: response time (sec.) across increasing difficulty levels]
● No difference in response time between the easiest and hardest levels
54. Results – Time Frequency Representation
● Task Load
○ Significant alpha synchronisation in the hardest difficulty levels of the task when compared to the easiest difficulty levels
● [Figure panels: Easiest, Hardest, Difference]
55. Lessons Learned
● Similar reaction time but increased brain activity, showing increased cognitive effort at higher levels to sustain performance
● Adaptive VR training can increase the user’s cognitive load without affecting task performance
● First demo of the use of real-time EEG signals to adapt the complexity of the training stimuli in VR
56. Conclusions
• AR/VR makes the computer invisible
• Altering human perception
• Using Human Information Processing Model for Design
• Consider Perception, Cognition, Ergonomic elements
• New tools becoming available
• Physiological sensors, sensor enhanced HMDs
• Data collection, analytics software
• Directions for Research
• New models, application/validation studies, novel interfaces