Patent classifications
H04N13/302
STEREOSCOPIC IMAGE DISPLAY DEVICE
According to one embodiment, a stereoscopic image display device includes a three-dimensional pixel unit, a backlight, and an arithmetic/control circuit. The three-dimensional pixel unit includes a plurality of pixel cells that are formed of an optical material having electrically changeable optical characteristics, are arranged in a mutually separated manner and in a three-dimensional manner, and are electrically connected with transparent wiring patterns. The backlight is configured to emit illumination light to the three-dimensional pixel unit. The arithmetic/control circuit is configured to control the plurality of pixel cells individually via the wiring patterns on the basis of input three-dimensional image data to cause the three-dimensional pixel unit to function as a transmissive hologram.
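The abstract above describes driving individually addressable pixel cells so the unit functions as a transmissive hologram. As an illustrative sketch only (not the patent's disclosed method), a standard computer-generated-hologram approach computes the phase pattern at the pixel plane by superposing spherical wavefronts from the 3D scene points; the function names and the point format `(x, y, z, amplitude)` here are assumptions for illustration:

```python
import numpy as np

def hologram_phase(points, grid, wavelength=633e-9):
    """Superpose spherical wavefronts from 3D scene points onto a pixel grid
    and return the phase pattern a transmissive modulator would display."""
    X, Y = grid
    field = np.zeros(X.shape, dtype=complex)
    k = 2 * np.pi / wavelength  # wavenumber
    for x, y, z, amp in points:
        # distance from each pixel to the point source
        r = np.sqrt((X - x) ** 2 + (Y - y) ** 2 + z ** 2)
        field += amp * np.exp(1j * k * r) / r
    return np.angle(field)  # phase in [-pi, pi]

# usage: a 32x32 pixel grid, one point source 10 cm behind the panel
X, Y = np.meshgrid(np.linspace(-1e-3, 1e-3, 32), np.linspace(-1e-3, 1e-3, 32))
phase = hologram_phase([(0.0, 0.0, 0.1, 1.0)], (X, Y))
```

Each pixel cell's electrically changeable optical characteristic would then be set to approximate its entry in the computed phase pattern.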
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM
An information processing apparatus (30) includes: an estimation unit (33B) that estimates a crosstalk amount based on a relative positional relationship between a viewing position of a viewer of a display device (10) and a pixel on a screen of the display device (10); and a generation unit (33C) that generates an image to be displayed on the display device (10) by correcting a value of each of a plurality of pixels of the screen based on the crosstalk amount.
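A common way to correct for a known crosstalk amount between two views is to invert the mixing matrix so that, after the display leaks each view into the other, the observed images match the intended ones. The sketch below illustrates that general technique; the leakage model in `estimate_crosstalk` (linear in angular offset, with a hypothetical `falloff` constant) is an assumption for illustration, not the estimator disclosed in the abstract:

```python
import numpy as np

def estimate_crosstalk(view_offset_deg, falloff=0.05):
    """Hypothetical model: leakage grows with angular offset from the sweet spot."""
    return min(0.5, falloff * abs(view_offset_deg))

def correct_views(left, right, eps):
    """Pre-correct left/right images so that, after mutual leakage eps,
    the observed views equal the intended ones."""
    M = np.array([[1.0, eps], [eps, 1.0]])        # display mixing matrix
    Minv = np.linalg.inv(M)                        # its inverse undoes the leak
    stacked = np.stack([left, right])              # shape (2, H, W)
    corrected = np.tensordot(Minv, stacked, axes=1)
    return np.clip(corrected[0], 0, 1), np.clip(corrected[1], 0, 1)
```

For moderate `eps` and mid-range pixel values the correction stays inside the displayable range; near saturation the clipping step limits how much crosstalk can be cancelled.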
DEVICE TO CREATE AND DISPLAY FREE SPACE HOLOGRAM
A unique method and a device to generate free space “pop-out” and “sink-in” holograms are disclosed herein. The hologram disclosed herein does not use any special medium, mirrors, reflective screens, or wearables such as headgear and special glasses. The hologram disclosed herein can be created in free space, outer space, or in air, without any optical components other than the special display screen of the hologram device. This device demonstrates a free space hologram, hologram Augmented Reality, and hologram Virtual Reality. Also disclosed herein is a camera capable of hologram-quality images, equipped with a smart lens that mimics the human eye by changing its aperture according to the light intensity, as the pupil of the human eye does, to focus on and capture “pop-out” and “sink-in” hologram images. The audio incorporated with the device provides multi-dimensional, multi-directional audio effects.
Systems and methods for a lensed display
A lensed display system includes a projector configured to project an image. The lensed display system includes a lens formed of a glass or crystalline material. The lens includes a first surface, wherein at least a portion of the first surface includes a coating that is configured to display the projected image. The lens includes a second surface, wherein the second surface comprises a transparent curved surface that is configured to face toward a user and to enable the user to view the image projected onto the coating through the transparent curved surface.
SOUND-GENERATING DEVICE, DISPLAY DEVICE, SOUND-GENERATING CONTROLLING METHOD, AND SOUND-GENERATING CONTROLLING DEVICE
A sound-generating device, a display device, a sound-generating controlling method, and a sound-generating controlling device are provided. The sound-generating device includes: a reflection plate that includes a first sound wave reflection face oriented towards a first direction; and a plurality of main loudspeakers distributed in an array in a preset three-dimensional space, the preset three-dimensional space being located on the side of the first sound wave reflection face that faces the first direction. The plurality of main loudspeakers include first main loudspeakers whose sound-generating direction is the first direction, and second main loudspeakers whose sound-generating direction is a second direction opposite to the first direction; sound waves emitted by the second main loudspeakers are transmitted to the first sound wave reflection face and can be reflected by it.
Layered scene decomposition codec system and methods
A system and methods for a CODEC driving a real-time light field display for multi-dimensional video streaming, interactive gaming, and other light field display applications are provided, applying a layered scene decomposition strategy. Multi-dimensional scene data is divided into a plurality of data layers whose depth extents increase with the distance between a given layer and the plane of the display. Data layers are sampled using a plenoptic sampling scheme and rendered using hybrid rendering, such as perspective and oblique rendering, to encode light fields corresponding to each data layer. The resulting compressed (layered) core representation of the multi-dimensional scene data is produced at predictable rates, then reconstructed and merged at the light field display in real time by applying view synthesis protocols, including edge adaptive interpolation, to reconstruct pixel arrays in stages (e.g. columns then rows) from reference elemental images.
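The layering described above assigns thicker layers to content farther from the display plane. A minimal sketch of one way to compute such bounds, assuming a geometric growth factor (the `growth` parameter and function name are illustrative, not taken from the patent):

```python
def layer_bounds(near, far, n_layers, growth=2.0):
    """Partition [near, far] into n_layers depth slabs whose thickness
    grows geometrically with distance from the display plane at `near`."""
    total = sum(growth ** i for i in range(n_layers))
    t0 = (far - near) / total          # thickness of the nearest layer
    bounds, z = [], near
    for i in range(n_layers):
        z_next = z + t0 * growth ** i  # each slab is `growth` x thicker
        bounds.append((z, z_next))
        z = z_next
    return bounds

# usage: split a depth range of 0-1 m into four layers
print(layer_bounds(0.0, 1.0, 4))
```

Coarser sampling of distant layers is what makes the compressed core representation predictable in size: per-layer sampling density can be matched to the plenoptic sampling limits of each depth slab.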