2016 Program

   Special Topics


This special topic will cover the technologies and applications in the emerging area of augmented and virtual reality (AR/VR). The sessions will bring together scientists, engineers, business professionals, market analysts, and industry leaders involved in AR/VR technologies, products, applications, advanced developments, and emerging trends.

AR/VR DISPLAY SYSTEMS I

Tuesday, May 24

11:10 am–12:30 pm

Room 103

Chair:
N. Balram
Ricoh Innovations Corp., Mountain View, CA, USA
Co-Chair:
W. L. Hendrick
Rockwell Collins Optronics, Carlsbad, CA, USA
Session 3.1


A Multi-Plane Volumetric Optical See-Through Head-Mounted 3D Display

An optical see-through head-mounted display (HMD) employing a multi-plane volumetric technique for augmented-reality applications has been designed. By using a stack of fast-switching polymer-stabilized liquid-crystal scattering shutters, the HMD provides correct depth information and solves the accommodation-vergence conflict.

S. Liu,
Y. Li,
X. Li,
P. Zhou,
N. Rong,
Y. Yuan,
S. Huang,
W. Lu,
Y. Su
Shanghai Jiao Tong University, Shanghai, P. R. China

Session 3.2


Near-to-Eye Waveguide Display Based on Holograms

An optical waveguide with couplers based on volume holograms provides a promising solution for compact near-to-eye displays due to its light weight and the multiple degrees of design freedom of the holograms. Several examples and prototypes of near-to-eye waveguide displays based on holograms will be presented.

J. Han,
J. Liu,
Y. Wang
Beijing Institute of Technology, Beijing, P. R. China

Session 3.3


Study on the Field-of-View Properties for a Holographic Waveguide Display System

The field-of-view (FOV) properties and effects of a reflecting volume holographic grating (VHG) in conical mounting were studied in detail when used in an optical see-through head-mounted display. Also, two methods to enlarge the FOV and improve the color uniformity for a reflecting VHG were analyzed and simulated.

Y. Weng,
Y. Zhang,
X. Li
Southeast University, Nanjing, P. R. China

Session 3.4


Switchable Lens for 3D Displays, Augmented Reality, and Virtual Reality

A switchable lens based on a twisted-nematic liquid-crystal cell and a polarization-dependent component will be presented. This switchable lens offers a fast response time, low chromatic aberration, and a low operation voltage. Its potential applications include wearable virtual-reality, augmented-reality, and other head-mounted-display devices.

Y.-H. Lee,
F. Peng,
S.-T. Wu
University of Central Florida, Orlando, FL, USA

AR/VR DISPLAY SYSTEMS II

Tuesday, May 24

2:00–3:20 pm

Room 103

Chair:
W. Cummings
Microsoft, Clinton, WA, USA
Co-Chair:
K. Käläntär
Global Optical Solutions, Tokyo, Japan
Session 8.1


Invited Paper:
The Avegant Glyph: Optical Design Considerations and Approach to Near-to-Eye Displays

The Avegant Glyph is a new near-to-eye display product designed for mobile media consumption that combines audio, a 3D display, head tracking, and other features. The evolution of the optical microdisplay system, including novel illumination and projection optics that allow two complete projection engines to fit within the headband of the Glyph, will be described.

D. S. Dewald
Ergo Engineering, Addison, TX, USA

A. T. Evans,
N. Welch,
A. Gross,
G. Hill
Avegant Corp., Redwood City, CA, USA

Session 8.2


Invited Paper:
Hyper-Realistic Head-Up-Display Systems for Medical Application

Augmented-reality technologies such as head-up displays (HUDs) can be significant and useful for medical applications; accordingly, several technologies, such as the head-dome-projector monocular HUD, have been developed. A HUD for magnetic-resonance-imaging systems has been developed to reduce patients' anxiety during examination.

T. Sasaki,
A. Hotta,
T. Murata,
S. Uehara,
H. Okumura
Toshiba Corporate R&D Center, Kawasaki, Japan

Session 8.3


Hybrid Modulation for Near-Zero Display Latency

Binary displays for virtual reality can achieve low latency by integrating view tracking with modulation. A hybrid modulator applied to an AMOLED display at an update rate of 1.7 kHz will be discussed. The perceived image was observed to have nearly zero latency with minimal gray-scale artifacts.

T. Greer,
J. Spjut,
D. Luebke,
T. Whitted
NVIDIA Research, Durham, NC, USA

Session 8.4


Invited Paper:
Pixels towards Pixies: Post-Multimedia Interactions with Air-Based Media

A method for realizing a new expression of computer graphics that expands their malleability towards physical materials in the real world is proposed. The method utilizes light fields and acoustic fields that are calculated, generated, and controlled by computers. The results, such as new aerial haptic interaction, aerial touch displays, 3D manipulation of objects, new material-expression displays, etc., will be discussed. The resulting system can be applied to display technologies such as computer-generated graphics, entertainment computing, and human interfaces. Finally, the advantages and limitations of the method will be discussed.

Y. Ochiai
University of Tsukuba, Ibaraki, Japan

WEARABLE AR/VR APPLICATIONS

Tuesday, May 24

3:40–5:10 pm

Room 103

Chair:
S. Jones
Nulumina Corp., Newcastle, WA, USA
Co-Chair:
L. Palmateer
Rovi Corp., San Francisco, CA, USA
Session 14.1


Augmented-Reality and Virtual-Reality Smart Eyewear: Forecasts for the Next Decade

The market for near-to-eye computers (including AR and VR smart glasses) is projected to reach approximately $30 billion in 2026. The focus will be on how the market for eye-worn wearables is expected to evolve over the next decade.

H. Zervos
IDTechEx, Inc., Boston, MA, USA

Session 14.2


Invited Paper:
Enabling Technologies for Wearable Smart Headsets

To be widely adopted, smart headsets should be fashionable, comfortable, and provide useful apps. Technologies that are necessary for smart headsets but not available from smart phones, including microdisplays, small optics, high-energy-density batteries, and a reliable voice interface, will be discussed.

H. K. Choi
Kopin Corp., Westborough, MA, USA

Session 14.3


Eyeglasses-Type Wearable Device Using a Multi-Mirror Array

An easy-to-wear eyeglasses-type wearable device that utilizes a multi-mirror array (MMA) to reflect the image, transmitted from a small projection unit within the frame, to the wearer’s eye has been developed. It provides the wearer with a digital image while maintaining clear visibility and is light and comfortable to minimize fatigue.

T. Tsuruyama,
S. Uehara,
M. Baba
Toshiba Corp., Kawasaki, Japan

Session 14.4


Invited Paper:
A Diffractive LCD Backlight Approach to Dynamic Light-Field Displays

A novel approach to LCD backlighting that allows for the generation of dynamic light fields through a single LCD panel is introduced. The backlight is based on diffractive nanostructures that are 90% transparent. Full control over the spatio-angular parameters of the light field has been achieved, providing new opportunities for the design of naked-eye 3D displays, head-up displays, and near-to-eye AR/VR systems.

F. Aieta,
S. Vo,
M. Ma,
A. Niederberger,
D. Fattal
Leia, Inc., Menlo Park, CA, USA

Session 14.5


Late-News Paper:
Retinal Imaging Laser Eyewear with Focus-Free and Augmented Reality

Retinal Imaging Laser Eyewear contains a miniature laser projector inside the frame that provides the wearer with digital image information through the pupil, using the retina as a screen. This compact universal-design eyewear features a “focus-free” and “augmented-reality” image independent of the wearer's visual acuity and point of focus.

M. Sugawara,
M. Suzuki,
N. Miyauchi
QDLaser, Inc., Kanagawa, Japan

MIXED-REALITY APPLICATIONS

Wednesday, May 25

9:00 am–10:20 am

Room 103

Chair:
A. Abileah
Adi-Displays Consulting LLC, Portland, OR, USA
Co-Chair:
P. Coni
THALES Avionics, Le Haillan, France
Session 21.1


3D Multitouch and Connected Displays for Future Interactive and Collaborative Display Systems

The latest results of an investigation that brings together intuitive 3D multitouch interaction, immersive visualization, and advanced collaboration capabilities will be summarized. These results provide solutions for better understanding 2D and 3D data and enhance the collective perception of digital information, which is of the utmost importance for many industries.

J.-B. de la Rivière,
J. Castet
Immersion SA, Bordeaux, France

Session 21.2


Exploring 3D Interactive Performance Animation for VR/AR Applications Using Low-Cost Motion Capture

In virtual-reality/augmented-reality design scenarios, high functionality with versatile interaction is expected. A robust 3D interactive approach has been investigated by using a low-cost motion-capture device. The approach combines hand-motion capture, interactive locomotion control, and user-editing animation. This inspires various designs for consumer-level VR/AR applications.

Y. Peng,
W. Heidrich
University of British Columbia, Vancouver, BC, Canada

C. Su
Zhejiang University, Hangzhou, P. R. China

Session 21.3


A 3D Interactive System Based on Vision Computing of Direct-Flective Cameras

A bare-finger 3D air-touch interactive technology for portable devices has been developed. By using a direct-flective optical design for the camera modules, the interactive range extends from 1.5 to 50 cm above the entire surface of the display. Moreover, the motion-based vision computing, distinct from skin-color detection, used to determine the position of the fingertips will be discussed. The mean position errors were less than 1 cm. This accuracy realizes a camera-based interactive system allowing for near-distance 3D air-touch functionality. Therefore, floating 3D images can be touched and interacted with, potentially creating a more applicable and intuitive human-machine interface.

X. Li,
C.-H. Chen,
Y.-C. Hsu,
Y.-P. Huang
National Chiao Tung University, Hsinchu, Taiwan, ROC

Session 21.4


Portable Reference Images (PRI) for Augmented-Reality and Virtual-Reality Displays

As AR/VR migrates to smaller form factors with limited storage and compute power, the scene data driving the displays must be stored compactly and processed rapidly to generate positive user experiences. The Portable Reference Image (PRI) is a product specification developed to provide fluid visualization within these dynamic constraints.

K. A. Abeloe,
J. Berglund
Integrity Applications, Inc., Carlsbad, CA, USA

B. O’Neal
Naval Air Weapons Center Weapons Division, China Lake, CA, USA

AUGMENTED REALITY AND VIRTUAL REALITY

Wednesday, May 25

10:40 am–11:40 am

Room 103

Chair:
A. Bhowmik
Intel Corp., Santa Clara, CA, USA
Session 28.1


Invited Paper:
Why Focus Cues (Blur and Accommodation) Matter

Stereoscopic displays present different images to the two eyes and thereby create a compelling 3D sensation. However, such displays cause a host of perceptual and ergonomic problems. These problems occur because some of the presented depth cues (i.e., perspective and binocular disparity) specify the intended 3D scene, while focus cues (blur and accommodation) specify the fixed distance of the display itself. These problems are particularly acute in VR and AR displays. A stereoscopic display that circumvents these problems has been developed. Using this display, how incorrect focus cues affect visual perception, visual performance, and, most importantly, visual comfort was investigated. The ability to perceive correct depth ordering is significantly improved when focus cues are correct. The ability to binocularly fuse stimuli is substantially improved when the vergence-accommodation conflict is minimized. The level of comfort is significantly increased when the vergence-accommodation conflict is zero. Suggestions on ways to create nearly correct focus cues in practical stereoscopic displays, particularly VR and AR, will be presented.

M. S. Banks
University of California at Berkeley, Berkeley, CA, USA

Session 28.2


Invited Paper:
A Simple Method to Reduce Accommodation Fatigue in Virtual-Reality and Augmented-Reality Displays

Available head-mounted displays for augmented reality and virtual reality provide many of the cues that a viewer senses in the real world, but not the important accommodation cue. Approaches to solve this problem have been considered, but proposed solutions come at significant cost in terms of image quality and system complexity. A simple system, based on electronic liquid-crystal lenses, is proposed. The design and performance of applicable electronic lenses will be presented. A surprising result of the proposed design is the potential for a reduced system bandwidth.

P. Bos,
L. Li,
D. Bryant,
A. Jamali
Kent State University, Kent, OH, USA

A. Bhowmik
Intel Corp., Santa Clara, CA, USA

Session 28.3


Invited Paper:
Light Fields, Focus Tunable, and Monovision Near-to-Eye Displays

Emerging virtual- and augmented-reality (VR/AR) displays must overcome the prevalent issue of visual discomfort to provide comfortable user experiences. In particular, the mismatch between vergence and accommodation cues inherent to most stereoscopic displays has been a long-standing challenge. Several display modes that promise to mitigate visual discomfort caused by the vergence-accommodation conflict (VAC) were evaluated, and user comfort as well as performance in VR/AR applications was improved. The light-field, focus-tunable, and monovision display modes, enabled by light-field and focus-tunable near-to-eye displays, were investigated and their effectiveness evaluated.

G. Wetzstein
Stanford University, Stanford, CA, USA

AUGMENTED REALITY AND VIRTUAL REALITY 3D-SENSING TECHNOLOGY

Wednesday, May 25

3:30 pm–4:50 pm

Room 103

Chair:
A. Bhowmik
Intel Corp., Santa Clara, CA, USA
Session 35.1


Invited Paper:
Real-Time 3D-Sensing Technologies and Applications in Interactive and Immersive Devices

Recent advances in real-time 3D-sensing technologies provided by the Intel® RealSense™ cameras and software development kits, including a number of middleware libraries, will be presented. A range of novel systems applications, including a new class of interactive and immersive mixed-reality devices, will be described.

A. K. Bhowmik
Intel Corp., Santa Clara, CA, USA

Session 35.2


Invited Paper:
RGB-D Image Understanding Using Supervision Transfer

Recent advances in object recognition and scene understanding have been enabled by the availability of a large number of labeled images. Similar advances in RGB-D image understanding are hampered by the current lack of large labeled datasets in this domain. A new technique, “cross-modal distillation,” which enables the transfer of supervision from RGB to RGB-D datasets, has been developed. Representations learned from labeled RGB images were used as a supervisory signal to train representations for depth images. A 6% relative gain in object-detection performance with RGB-D images was observed, and a 20% relative improvement was obtained when using only the depth image.

S. Gupta,
J. Hoffman,
J. Malik
University of California at Berkeley, Berkeley, CA, USA

Session 35.3


Invited Paper:
Industrial Deployments of a Full-Featured Head-Mounted Augmented-Reality System and the Incorporation of a 3D-Sensing Platform

The deployment of the DAQRI SMART HELMET™ (DSH) in an industrial environment will be discussed. The deployment shows that data that was contextually filtered, categorized, and displayed in the relevant physical space yielded the greatest benefit. Those findings, as well as how the embedded Intel® RealSense™ camera technology can further refine the context of data, will be described.

P. Greenhalgh,
B. Mullins
DAQRI, Los Angeles, CA, USA

A. Grunnet-Jepsen,
A. K. Bhowmik
Intel Corp., Santa Clara, CA, USA

Session 35.4


Invited Paper:
A Wide-Field-of-View Head-Mounted Display and Its Effects on Search Performance in Augmented Reality

The development of a wide-field-of-view (FOV) optical see-through (OST) head-mounted display (HMD) has been a challenge for decades. Naturally, the actual effects of a wide-FOV OST display on the perception of augmentations have not been widely studied either. A wide-FOV head-mounted projective display (HMPD) using a hyperboloidal semi-transparent mirror and a semi-transparent retroreflective screen, called a wearable HHMPD, was previously proposed. The main features of the wearable HHMPD include a horizontal FOV of 110° and a visual acuity of around 20/200. Using the wearable HHMPD, a user study investigating the effects of a wide FOV in AR was performed. Results show that target discovery rates consistently drop with in-view labeling and increase with in-situ labeling as the display angle approaches a 100° FOV. Past this point, the performances of the two view-management methods begin to converge, suggesting equivalent discovery rates at approximately a 130° FOV. Results also indicate that users exhibited lower discovery rates for targets appearing in peripheral vision and that FOV has little impact on response time and mental workload. A summary of the two studies will be provided.

K. Kiyokawa
Osaka University, Osaka, Japan