Display Week 2015's Symposium includes a special track on Imaging Technologies and Applications, featuring invited papers on imaging technologies, products, applications, advanced developments, and emerging trends. This focused track will bring together scientists, engineers, business professionals, market analysts, academics, and industry leaders pioneering the end-to-end chain of imaging-to-display technologies and applications. Admission to this track is included with symposium admission.
|
1.1
Light-Field Imaging
Kurt Akeley
Lytro
|
Commercial light-field imaging systems that support photography, depth analysis, and other applications are becoming available. The development of light-field photography over the past century will be summarized, and issues such as depth of field and final-image resolution will be briefly described. Future photographic and imaging opportunities will be identified.
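For illustration only (not part of the presentation): a light field captured as an array of sub-aperture views can be synthetically refocused by shifting each view in proportion to its aperture position and averaging. The minimal sketch below assumes a hypothetical 4D array lf[u, v, y, x] of sub-aperture images.

```python
import numpy as np

def refocus(lf, alpha):
    """Shift-and-add synthetic refocusing of a 4D light field.

    lf    : array of shape (U, V, H, W), sub-aperture images indexed by
            aperture position (u, v); hypothetical layout, for illustration.
    alpha : refocus parameter; 0 keeps the original focal plane, other
            values move the synthetic focal plane nearer or farther.
    """
    U, V, H, W = lf.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its distance from the
            # aperture center, then accumulate.
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(lf[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)
```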
|
|
1.2
Switchable Liquid-Crystal Micro Lens Arrays for the Light-Field Camera Application
Yutaka Nakai
Toshiba Corp.
|
Two types of gradient-index liquid-crystal micro lens arrays (LC-MLAs) for light-field camera applications are reported. The LC-MLAs enable the capture both of compound-eye images that can be refocused at various positions, with accompanying depth information, and of clear 2D images that require no reconstruction.
|
|
1.3
Interactive Systems and Applications Based on Depth-Imaging and 3D-Sensing Technology
Achintya K Bhowmik
Intel
|
Recent breakthrough developments in depth-imaging and 3D computer-vision techniques are enabling real-time acquisition, reconstruction, and understanding of the 3D environment. Key technologies spanning embedded sensors, algorithms, and system integration will be presented, and an array of immersive and interactive applications based on depth imaging and 3D sensing will be demonstrated.
|
|
1.4
Scene Understanding from RGB-D Images
Jitendra Malik
UC Berkeley
|
The objective of this work is to align objects in an RGB-D image with 3D models from a library. The pipeline detects and segments objects, estimates coarse pose with a convolutional neural network, and then inserts the rendered model into the scene.
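As a rough, hypothetical sketch of such a pipeline (the detectors, network, and model library from the talk are not reproduced here; every stage is a stub), the control flow might look like the following.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    mask: object        # segmentation mask of the detected object
    category: str       # object class label
    coarse_pose: tuple  # rough rotation/translation estimate

def detect_and_segment(rgbd):
    """Hypothetical detector/segmenter over an RGB-D frame."""
    return [Candidate(mask=None, category="chair", coarse_pose=(0.0, 0.0, 0.0))]

def estimate_pose_cnn(rgbd, candidate):
    """Hypothetical CNN stage that refines the coarse pose estimate."""
    return candidate.coarse_pose

def align_model(library, candidate, pose):
    """Hypothetical retrieval of the best-matching 3D model, rendered at the pose."""
    return {"model": library.get(candidate.category), "pose": pose}

def scene_understanding(rgbd, library):
    # Detect/segment objects, estimate coarse pose with a CNN, then place
    # the rendered library model back into the scene.
    placements = []
    for candidate in detect_and_segment(rgbd):
        pose = estimate_pose_cnn(rgbd, candidate)
        placements.append(align_model(library, candidate, pose))
    return placements
```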
|
|
2.1
On the Duality of Compressive Imaging and Display
Gordon Wetzstein
Stanford University
|
Light-field cameras and displays have been treated distinctly in the literature. Despite significant differences in the signal-processing tools employed, there is a natural duality between compressive cameras and displays. An intuitive interpretation of the optical systems and optimization schemes of modern compressive light-field imaging systems will be derived.
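As a hedged illustration of the duality, in generic notation not taken from the paper: a compressive light-field camera recovers a light field from coded sensor measurements, while a compressive (e.g., multilayer) display synthesizes mask patterns whose optical combination reproduces a target light field, and both reduce to structurally similar optimization problems.

```latex
% Compressive capture: recover the light field \ell from coded measurements i,
% where \Phi models the optical coding (e.g., a mask) and \Psi is a prior
% (sparsity, low rank, or a learned dictionary).
\[ \min_{\ell} \; \lVert i - \Phi \ell \rVert_2^2 + \lambda\, \Psi(\ell) \]
% Compressive display: choose layer patterns m_1, \dots, m_K whose optical
% combination P(m_1, \dots, m_K) best reproduces a target light field \ell.
\[ \min_{m_1, \dots, m_K} \; \lVert \ell - P(m_1, \dots, m_K) \rVert_2^2 \]
```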
|
|
2.2
Image Systems Simulation
Joyce Farrell
Stanford University
|
A computational software environment will be described for modeling the complete image-processing pipeline of an imaging system, including the spectral and spatial properties of scenes, image formation, sensor capture, and display rendering. The extension of the software environment to model human optics and retinal image processing will also be discussed.
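The actual software environment is not reproduced here; as a minimal, hypothetical stand-in, the sketch below simulates only the sensor-capture stage of such a pipeline, converting scene spectral radiance into noisy digital values under assumed spectral response, exposure, gain, and noise parameters.

```python
import numpy as np

def simulate_capture(radiance, wavelengths, qe, exposure_s, gain,
                     read_noise_std, rng=None):
    """Toy scene-to-sensor simulation: spectral radiance -> noisy digital values.

    radiance       : (H, W, N) scene spectral radiance samples (arbitrary units)
    wavelengths    : (N,) sample wavelengths in nm
    qe             : (N,) sensor quantum efficiency at those wavelengths
    exposure_s     : exposure time in seconds
    gain           : electrons-to-digital-number conversion gain
    read_noise_std : read-noise standard deviation in electrons
    """
    rng = np.random.default_rng() if rng is None else rng
    # Integrate radiance against the spectral response to get expected electrons.
    electrons = exposure_s * np.trapz(radiance * qe, wavelengths, axis=-1)
    # Photon (shot) noise plus additive read noise.
    electrons = rng.poisson(np.clip(electrons, 0, None)) \
        + rng.normal(0.0, read_noise_std, electrons.shape)
    # Quantize to 12-bit digital numbers.
    return np.clip(np.round(electrons * gain), 0, 4095).astype(np.uint16)
```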
|
|
2.3
Computational Diffractive Sensing and Imaging: Using Optics for Computing and Computing for Optics
David Stork
Rambus
|
Computational diffractive sensors and imagers eschew conventional lenses and curved mirrors, relying instead on application-specific diffraction gratings affixed to CMOS image sensors. The resulting non-conventional optical signals are processed computationally to yield an image or a measurement of the visual scene (visual motion, point localization, barcode payload, face presence, etc.).
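For illustration only (not the authors' method): recovering an image from a lensless, grating-based measurement is often posed as regularized linear inversion against a calibrated sensing matrix. The sketch below assumes such a matrix A is available and uses a simple closed-form Tikhonov-regularized solve.

```python
import numpy as np

def reconstruct(measurement, A, lam=1e-2):
    """Tikhonov-regularized inversion of a linear diffractive-sensing model.

    measurement : (M,) flattened sensor reading
    A           : (M, N) calibrated sensing matrix mapping scene pixels to
                  sensor responses (hypothetical; obtained by calibration)
    lam         : regularization weight trading noise suppression vs. detail
    """
    # Solve min_x ||A x - y||^2 + lam * ||x||^2 in closed form.
    N = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(N), A.T @ measurement)
```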
|
|
2.4
Rethinking the Imaging Chain for Energy-Efficient Privacy-Preserving Continuous Mobile Vision
Robert LiKamWa
Rice University
|
Current mobile imaging chains are ill-suited for wearable vision analytics due to their high power consumption and privacy concerns. An in-imager analog vision processor that exports a low-bandwidth irreversibly encoded signal, generating vision features before analog-to-digital conversion, is proposed.
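As a purely digital stand-in for the proposed analog processing (the real design operates before analog-to-digital conversion), the sketch below reduces a frame to per-block gradient-energy features from which the original pixels cannot be recovered; the block size and feature choice are illustrative assumptions.

```python
import numpy as np

def block_gradient_features(frame, block=16):
    """Digital stand-in for an in-imager feature extractor: reduce a frame to
    per-block horizontal/vertical gradient energy. Pixel values cannot be
    recovered from the output, and bandwidth drops by roughly block**2 / 2.
    """
    gy, gx = np.gradient(frame.astype(np.float64))
    H, W = frame.shape
    H, W = H - H % block, W - W % block
    feats = []
    for g in (gx[:H, :W], gy[:H, :W]):
        # Sum squared gradients inside each block.
        g2 = g ** 2
        blocks = g2.reshape(H // block, block, W // block, block).sum(axis=(1, 3))
        feats.append(blocks)
    return np.stack(feats, axis=-1)
```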
|
|
3.1
The Importance of Focus Cues in 3D Displays
Martin Banks
UC Berkeley
|
Stereoscopic displays present different images to the two eyes and thereby create a compelling three-dimensional (3D) sensation, but they typically present focus cues that conflict with the depicted depth. In a series of experiments, how focus cues affect 3D shape perception, visual performance, and, most importantly, visual comfort has been investigated. Guidelines for minimizing these adverse effects are offered, and foreseeable display technologies that may eventually eliminate them altogether will be described.
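One common way to quantify the focus-cue conflict in such displays, not stated in the abstract but standard in the literature, is the dioptric difference between the accommodation distance of the screen and the vergence distance specified by disparity.

```latex
% d_a: accommodation (screen) distance, d_v: vergence distance, in meters.
\[ \text{conflict (diopters)} = \left| \frac{1}{d_a} - \frac{1}{d_v} \right| \]
```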
|
|
3.2
A Multiview 3D Holochat System
David Fattal
LEIA
|
A lightweight holographic video-chat system composed of a multi-camera rig streaming live content onto a LEIA 64-view full-parallax 3D display will be presented. The system auto-calibrates the camera views to a simple target and is suitable for real-time communication over a peer-to-peer network. The zoom level and the location of the focal (zero-disparity) plane can be adjusted in software, and the depth of field can be adjusted by a simple change of the camera baseline.
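For context (generic rectified-camera geometry, not taken from the paper), the on-screen disparity of a point at depth Z, for focal length f in pixels, baseline B, and a zero-disparity plane at depth Z_0, is:

```latex
% f: focal length in pixels, B: camera baseline, Z_0: depth of the
% zero-disparity (focal) plane, Z: depth of the imaged point.
\[ d = f\,B \left( \frac{1}{Z} - \frac{1}{Z_0} \right) \]
% Shifting Z_0 moves the focal plane; reducing B compresses the disparity
% range and thereby extends the usable depth of field.
```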
|
|
3.3
Immersive Virtual Reality on the Desktop -- System Integration of a Stereoscopic Display and Image-Based Tracking System
Dave Chavez
zSpace
|
Essential elements of a virtual-reality system include a stereoscopic display that can deliver high-quality projections based on user head position, as well as some form of interaction. The delivery of a convincing and comfortable virtual-reality experience requires a minimal level of fidelity in tracking and display technology, which has only recently been achievable. The demands of the imaging system in the realization of a virtual-reality system will be discussed in the context of overall system comfort, performance, and value.
|
|
3.4
High-Dynamic-Range Imaging for Consumer Applications
Jim Helman
MovieLabs
|
A perceptually tuned electro-optic transfer function will be discussed, along with the reasons the studios chose it as the basis for mastering and distributing high-dynamic-range, wide-color-gamut video for home entertainment. The history of transfer functions, from camera-referred ones that depend on the camera's opto-electric transfer function to display-referred ones based on the electro-optic characteristics of the mastering display and viewing environment, will also be presented.
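The abstract does not name the function; a widely cited example of a perceptually tuned EOTF is the SMPTE ST 2084 "PQ" curve, sketched below for illustration only (constants as published in the standard; verify against the specification before any real use).

```python
import numpy as np

# SMPTE ST 2084 ("PQ") constants; offered only as a representative example of
# a perceptually tuned EOTF, since the abstract does not name the curve.
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_eotf(signal):
    """Map a normalized PQ code value in [0, 1] to absolute luminance in cd/m^2."""
    e = np.power(np.clip(signal, 0.0, 1.0), 1.0 / M2)
    y = np.power(np.maximum(e - C1, 0.0) / (C2 - C3 * e), 1.0 / M1)
    return 10000.0 * y

def pq_inverse_eotf(luminance):
    """Map absolute luminance in cd/m^2 back to a normalized PQ code value."""
    y = np.power(np.clip(luminance, 0.0, 10000.0) / 10000.0, M1)
    return np.power((C1 + C2 * y) / (1.0 + C3 * y), M2)
```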
|
|