Patent classifications
H04N2013/405
DISPLAY CONTROL METHOD FOR 3D DISPLAY SCREEN, AND MULTI-VIEWPOINT 3D DISPLAY DEVICE
The present disclosure relates to the field of 3D images, and discloses a display control method for a multi-viewpoint 3D display screen, comprising: acquiring identity features of a user; and performing 3D display for the user when the identity features meet predefined conditions. The method can authenticate the user before performing 3D display, thereby solving the problem of a single, fixed 3D display mode and improving the flexibility of the 3D display mode. The present disclosure further discloses a multi-viewpoint 3D display device, a computer-readable storage medium, and a computer program product.
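The gating step described above (acquire identity features, then enable 3D display only when they meet a condition) can be sketched minimally; the function and parameter names below are illustrative assumptions, not taken from the patent.

```python
def select_display_mode(identity_features, authorized_features):
    """Toy sketch of the abstract's control flow (names assumed): perform
    3D display for the user only when the acquired identity features meet
    the condition (here, membership in an authorized set); otherwise fall
    back to 2D display."""
    if identity_features in authorized_features:
        return "3D"
    return "2D"
```

Membership in a set stands in here for whatever matching the patent's "conditions" actually involve (e.g., a biometric comparison).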
TELEPRESENCE THROUGH OTA VR BROADCAST STREAMS
Techniques are described for expanding and/or improving the Advanced Television Systems Committee (ATSC) 3.0 television protocol to robustly deliver next-generation broadcast television services. Telepresence is provided through over-the-air (OTA) virtual reality (VR) broadcast streams.
CORRECTION OF A HALO IN A DIGITAL IMAGE AND DEVICE FOR IMPLEMENTING SAID CORRECTION
The object of the invention is a method (400) for correcting a halo (H) in a digital image (1) captured using photogrammetry in a 3-D modeling studio, the halo being generated through the interaction of light originating from a light source (L3, L4, L5, L6) in the studio with the optics of the shooting device, and manifesting as a local lightening of the digital image. The method comprises the steps of generating (410) a light intensity map (M) characterizing the light source in terms of spatial distribution and light intensity, providing (420) a convolution kernel specific to the shooting device, calculating (430) a convolution product of the light intensity map and the kernel to obtain a corrective value map (CVM), and subtracting the corrective value map from the digital image pixel by pixel to produce a corrected image (Icorr) in which the halo is no longer present.
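As a rough illustration of these steps, the sketch below (assuming NumPy arrays and zero padding at the borders, neither of which the abstract specifies) builds the corrective value map by convolving the intensity map with the kernel and subtracts it pixel by pixel:

```python
import numpy as np

def correct_halo(image, intensity_map, kernel):
    """Sketch of steps 410-430: convolve the light intensity map M with the
    device-specific kernel to obtain the corrective value map (CVM), then
    subtract the CVM from the image pixel by pixel to yield Icorr."""
    kh, kw = kernel.shape
    # 'Same'-size convolution with zero padding (an assumption; the patent
    # does not state how image borders are handled).
    padded = np.pad(intensity_map, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    flipped = kernel[::-1, ::-1]  # flip the kernel for a true convolution
    cvm = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            cvm[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
    # Remove the corrective value map, clipping so pixels stay non-negative.
    return np.clip(image - cvm, 0.0, None)
```

In practice the inner loops would be replaced by a library convolution; they are written out here only to make the correspondence with steps 430 and the final subtraction explicit.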
SYSTEMS AND METHODS FOR PROJECTING IMAGES FROM LIGHT FIELD DISPLAYS BASED ON POSITIONAL TRACKING DATA
Systems and methods presented herein include light field displays configured to display primary autostereoscopic images and to simultaneously project light rays toward display devices (e.g., either reflective devices or cameras) to display secondary autostereoscopic images via the display devices. The light rays projected from the light field displays are controlled by a control system based at least in part on positional tracking data (e.g., position, orientation, and/or movement) of the display devices and/or of a portion of humans associated with the display devices, which may be detected via sensors of the display devices and/or via cameras disposed about a physical environment within which the display devices and the humans are located. Specifically, the control system calculates light field vector functions for light rays to be projected toward each individual display device based at least in part on positional tracking data for that particular display device and/or its associated human.
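One way to picture that last step: under a simple linear-motion model (an assumption for illustration; the abstract only states that light field vector functions are calculated from positional tracking data), per-device ray directions could be derived as below, with all names hypothetical.

```python
import numpy as np

def rays_toward_devices(emitter_points, device_tracks, latency=0.0):
    """For each tracked device (position, velocity), predict where it will
    be after `latency` seconds and return unit ray directions from every
    emitter point on the light field display toward that position."""
    emitter_points = np.asarray(emitter_points, dtype=float)   # shape (N, 3)
    rays = {}
    for device_id, (position, velocity) in device_tracks.items():
        # Linear motion prediction from the positional tracking data.
        target = np.asarray(position, float) + latency * np.asarray(velocity, float)
        offsets = target - emitter_points                      # shape (N, 3)
        rays[device_id] = offsets / np.linalg.norm(offsets, axis=1, keepdims=True)
    return rays
```

A real light field display would map these directions onto its pixel/lenslet addressing; that hardware-specific step is outside the scope of this sketch.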
SYSTEMS AND METHODS FOR PROJECTING IMAGES FROM LIGHT FIELD DISPLAYS BASED ON REFLECTED LIGHT RAYS
Systems and methods presented herein include light field displays configured to display primary autostereoscopic images and to simultaneously project (e.g., in real time, while displaying their own primary autostereoscopic images) light rays toward display devices (e.g., either reflective devices or cameras) to display secondary autostereoscopic images via the display devices. The light rays projected from the light field displays are controlled by a control system based at least in part on positional data (e.g., position, orientation, and/or movement) of the display devices, which may be determined by the control system based at least in part on detection of light rays that are reflected off the display devices.
Multi-directional backlight, multi-user multiview display, and method
A multi-directional backlight and a multi-user multiview display provide emitted light and associated multiview images having different mutually exclusive angular ranges and different user-specific view zones. The multi-directional backlight includes first and second multiview backlights, each of which includes multibeam elements configured to provide emitted light having directional light beams with directions corresponding to view directions of a respective multiview image. The emitted light provided by the first multiview backlight has a first angular range that is mutually exclusive of a second angular range of emitted light provided by the second multiview backlight, at respective first and second convergence distances. The multi-user multiview display includes a first multiview display configured to provide a first multiview image to a first user in a first view zone and a second multiview display configured to provide a second multiview image to a second user in a second view zone.
MEDICAL DEVICE WITH A DISPLAY AND WITH A PROCESSING UNIT AND METHOD THEREFOR
The disclosure relates to a medical device with a display and with a processing unit, and to a method therefor. The processing unit is configured to detect states of one or more technical units of the medical device. The processing unit is further configured to control the display on the basis of the detected states in order to output states of the medical device. The display is an autostereoscopic display. Based on an evaluation of the detected states, the processing unit drives the display in such a way that a first state is visually highlighted in the 3D representation, while a second state is not visually highlighted in the 3D representation.
Multi-view display control
A multi-view display controller determines view angles for each view of a multi-view media content for each viewer watching a multi-view display. The view angles determined for a viewer collectively define a viewer cone within which the views are projected toward that viewer. Media data of the multi-view media content is output together with information of the determined view angles to the multi-view display, in order to allow each viewer to have the same experience of the displayed media content regardless of where the viewer is positioned relative to the multi-view display.
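A minimal sketch of the angle determination (2-D top-down geometry, with function and parameter names assumed for illustration): center a fixed-width viewer cone on the line from the display to the viewer, so the same set of views reaches the viewer wherever they stand.

```python
import math

def viewer_cone_angles(viewer_pos, display_center, num_views, cone_width_deg):
    """Return one view angle per view (in degrees, measured from the display
    normal), spread evenly across a cone aimed at the viewer's position.
    Requires num_views >= 2; positions are (x, z) pairs in the floor plane."""
    dx = viewer_pos[0] - display_center[0]
    dz = viewer_pos[1] - display_center[1]
    center = math.degrees(math.atan2(dx, dz))  # direction toward the viewer
    step = cone_width_deg / (num_views - 1)
    return [center - cone_width_deg / 2 + i * step for i in range(num_views)]
```

For a viewer directly in front of the display, this yields angles symmetric about the display normal; as the viewer moves sideways, the whole cone rotates to follow them.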
Apparatus for adapting a presentation of media content according to a position of a viewing apparatus
A system that incorporates teachings of the present disclosure may include, for example, a non-transitory computer-readable storage medium having computer instructions to detect a first position of a viewing apparatus, wherein the viewing apparatus enables viewing of media programs, obtain a media program in a first viewing perspective that conforms to the first position, present the media program with the first viewing perspective for viewing by way of the viewing apparatus, and transmit to the viewing apparatus a first audio signal corresponding to the first viewing perspective. The storage medium can also have computer instructions to detect that the viewing apparatus has moved to a second position, obtain the media program in a second viewing perspective according to the second position, and present the media program with the second viewing perspective for viewing by way of the viewing apparatus. Other embodiments are disclosed and contemplated.