Patent classifications
H04N2213/005
Vehicle display system for low visibility objects and adverse environmental conditions
A display system for use in a vehicle is disclosed, including a camera configured to capture frames corresponding to a field of view. The camera is in communication with a processing unit configured to receive, from the camera, data representative of objects within the captured frames. A display in communication with the processing unit is configured to display objects based on the data representative of the captured frames received by the processing unit. Multi-frame capture is utilized with differential inter-frame illumination to provide low-visibility object detection and display. Adverse environmental conditions in which image display is enhanced include fog, rain, snow, dust, nighttime, and high-glare scenarios.
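The differential inter-frame illumination idea can be sketched as follows; the function name, threshold, and toy scene are illustrative assumptions, not taken from the patent. A frame lit by the vehicle's own pulsed source is differenced against an adjacent unlit frame, so ambient glare and backscatter that appear in both frames cancel out:

```python
import numpy as np

def differential_frames(lit: np.ndarray, unlit: np.ndarray, threshold: int = 20) -> np.ndarray:
    """Subtract an unilluminated frame from an illuminated one.

    Ambient light (glare, fog backscatter) appears in both frames and
    cancels; only objects brightened by the vehicle's own pulsed source
    remain above the threshold.
    """
    diff = lit.astype(np.int16) - unlit.astype(np.int16)
    mask = diff > threshold                  # pixels brightened by the pulse
    return np.where(mask, diff, 0).astype(np.uint8)

# Toy 4x4 scene: uniform ambient glare of 100, one object at rows/cols 1-2
ambient = np.full((4, 4), 100, dtype=np.uint8)
lit = ambient.copy()
lit[1:3, 1:3] += 80                          # object reflects the active pulse
out = differential_frames(lit, ambient)      # object pixels survive, glare does not
```

In practice the two frames would come from consecutive captures in the multi-frame sequence, with the illumination source toggled between them.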
DUAL MODE DEPTH ESTIMATOR
A system-on-chip is provided which is configured for real-time depth estimation of video data. The system-on-chip includes a monoscopic depth estimator configured to perform monoscopic depth estimation from monoscopic-type video data, and a stereoscopic depth estimator configured to perform stereoscopic depth estimation from stereoscopic-type video data. The system-on-chip is reconfigurable to perform either the monoscopic depth estimation or the stereoscopic depth estimation on the basis of configuration data defining a selected depth estimation mode. Both depth estimators include shared circuits which are instantiated in hardware and reconfigurable to account for differences in the functionality of the circuit in each depth estimator.
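A software sketch of the mode-selection idea follows; the patent describes reconfigurable hardware circuits, and the names and toy depth cues below are hypothetical stand-ins:

```python
from dataclasses import dataclass

@dataclass
class DepthConfig:
    mode: str  # configuration data selecting "mono" or "stereo"

def shared_clamp(depths):
    # stands in for a shared circuit reused by both estimators
    return [max(0.0, min(1.0, d)) for d in depths]

def mono_depth(frame):
    # toy monoscopic cue: darker pixels are assumed farther away
    return shared_clamp([1.0 - p for p in frame])

def stereo_depth(left, right):
    # toy stereoscopic cue: depth falls off with pixel disparity
    return shared_clamp([1.0 / (1.0 + abs(l - r)) for l, r in zip(left, right)])

def build_estimator(cfg: DepthConfig):
    # "reconfigure" according to the selected depth estimation mode
    return mono_depth if cfg.mode == "mono" else stereo_depth
```

The point of the shared circuit is that both modes route their intermediate results through the same hardware, reconfigured per mode rather than duplicated.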
Display apparatus and operating method of the same
A method of operating a display apparatus is provided. The operating method includes calculating a confidence of a result obtained by performing light field rendering on each of pixels of a display panel based on positions of a left eye and a right eye, determining a weight kernel of a corresponding pixel based on the confidence, and adjusting a brightness of a pixel corresponding to each of the left eye and the right eye based on the weight kernel.
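The confidence-to-brightness pipeline can be sketched minimally; the clamp used as the weight kernel and the function names are assumptions for illustration, not the patent's actual kernel:

```python
def weight_kernel(confidence: float) -> float:
    # hypothetical kernel: clamp the rendering confidence into [0, 1]
    return max(0.0, min(1.0, confidence))

def adjust_brightness(left_px: float, right_px: float,
                      left_conf: float, right_conf: float):
    # dim the pixel seen by each eye in proportion to how confident the
    # light field rendering was for that eye's position
    return (left_px * weight_kernel(left_conf),
            right_px * weight_kernel(right_conf))
```

Pixels whose rendering result is uncertain for a given eye position are dimmed rather than displayed at full brightness, reducing visible crosstalk.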
Three-Dimensional Image Sensors
A single image sensor includes an array of uniformly and continuously spaced light-sensing pixels in conjunction with a plurality of lenses that focus light reflected from an object onto a plurality of different pixel regions of the image sensor, each lens focusing light on a different one of the pixel regions. A controller, including a processor and an object detection module, is coupled to the single image sensor and configured to analyze the pixel regions to generate a three-dimensional (3D) image of the object from a plurality of images obtained with the image sensor, generate a depth map that calculates depth values for pixels of at least the object, detect 3D motion of the object using the depth values, create a 3D model of the object based on the 3D image, and track 3D motion of the object based on the 3D model.
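Depth from two pixel regions under adjacent lenses reduces to the standard stereo relation; this minimal sketch assumes a pinhole model, with parameter values chosen purely for illustration:

```python
def depth_from_disparity(x_left: float, x_right: float,
                         focal_px: float, baseline_m: float) -> float:
    """Pinhole stereo relation Z = f * B / d, applied to the same feature
    imaged under two adjacent lenses of the single sensor.

    x_left/x_right: feature column in each pixel region (pixels)
    focal_px: lens focal length expressed in pixels
    baseline_m: spacing between the two lens centers (meters)
    """
    disparity = x_left - x_right
    if disparity <= 0:
        return float("inf")  # no measurable shift: effectively infinite depth
    return focal_px * baseline_m / disparity
```

Repeating this per feature across the pixel regions yields the depth map the controller uses for 3D motion detection and model tracking.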
METHODS AND APPARATUS FOR AN ACTIVE PULSED 4D CAMERA FOR IMAGE ACQUISITION AND ANALYSIS
An active-pulsed four-dimensional camera system that utilizes a precisely-controlled light source produces spatial information and human-viewed or computer-analyzed images. The acquisition of four-dimensional optical information is performed at a sufficient rate to provide accurate image and spatial information for in-motion applications where the camera is in motion and/or objects being imaged, detected and classified are in motion. Embodiments allow for the reduction or removal of image-blocking conditions like fog, snow, rain, sleet, and dust from the processed images. Embodiments provide for operation in daytime or nighttime conditions and can be utilized for day or night full-motion video capture with features like shadow removal. Multi-angle image analysis is taught as a method for classifying and identifying objects and surface features based on their optical reflective characteristics.
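The core range measurement behind an active-pulsed camera is the pulse round trip; a minimal sketch, with the function name as an illustrative assumption:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def pulse_range(round_trip_s: float) -> float:
    # Distance from a precisely timed active pulse: the light travels to
    # the object and back, so halve the round-trip time.
    return SPEED_OF_LIGHT * round_trip_s / 2.0
```

Because the emitted pulse is precisely controlled, returns arriving earlier than expected (scatter from fog, snow, or rain between the camera and the target) can be gated out, which is what enables the image-blocking-condition removal the abstract describes.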
Privacy image generation system
A privacy image generation system may use a light field camera that includes an array of cameras, or one or more RGBZ cameras, to capture images and display images according to a selected privacy mode. The privacy mode may include a blur background mode and a background replacement mode and can be automatically selected based on the meeting type, participants, location, and device type. A region of interest and/or one or more objects of interest (e.g., one or more persons in a foreground) is determined, and the privacy image generation system is configured to clearly show the region/object of interest and obscure or replace the background according to the selected privacy mode. The displayed image includes the region/objects of interest clearly shown (e.g., in focus), with any objects in the background of the combined image shown having a limited depth of field (e.g., blurry/not in focus) and/or with the background replaced with another image and/or fill.
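Both privacy modes hinge on a per-pixel depth channel separating foreground from background. A minimal sketch, assuming a Z channel from the RGBZ/light field camera; the threshold and the crude mean-value "blur" are illustrative stand-ins:

```python
import numpy as np

def apply_privacy(image, depth, fg_max_depth, replacement=None):
    """Keep near pixels sharp; obscure or replace the rest.

    depth: per-pixel Z channel from the light field / RGBZ camera
    fg_max_depth: hypothetical cutoff separating foreground from background
    """
    fg = depth < fg_max_depth                # region/object of interest
    if replacement is not None:              # background-replacement mode
        return np.where(fg, image, replacement)
    # blur-background mode; a real system would apply a shallow
    # depth-of-field blur, not a flat mean value
    obscured = np.full_like(image, int(image.mean()))
    return np.where(fg, image, obscured)
```

Mode selection (blur vs. replace) then reduces to whether a replacement image/fill is supplied.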
Frame compatible depth map delivery formats for stereoscopic and auto-stereoscopic displays
Stereoscopic video data and corresponding depth map data for stereoscopic and auto-stereoscopic displays are coded using a coded base layer and one or more coded enhancement layers. Given a 3D input picture and corresponding input depth map data, a side-by-side and a top-and-bottom picture are generated based on the input picture. Using an encoder, the side-by-side picture is coded to generate a coded base layer. Using the encoder and a texture reference processing unit (RPU), the top-and-bottom picture is encoded to generate a first enhancement layer, wherein the first enhancement layer is coded based on the base layer stream. Using the encoder and a depth-map RPU, depth data for the side-by-side picture are encoded to generate a second enhancement layer, wherein the second enhancement layer is coded based on the base layer. Alternative single, dual, and multi-layer depth map delivery systems are also presented.
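The side-by-side and top-and-bottom pictures are frame-compatible packings of the two views; a sketch using simple column/row decimation (the actual filtering before decimation is codec-specific and omitted here):

```python
import numpy as np

def pack_side_by_side(left, right):
    # horizontally decimate each view, then place them side by side so the
    # packed (base layer) frame keeps the resolution of one original view
    return np.hstack([left[:, ::2], right[:, ::2]])

def pack_top_and_bottom(left, right):
    # vertically decimate each view, then stack them top over bottom
    return np.vstack([left[::2, :], right[::2, :]])
```

Because the two packings discard complementary samples, the top-and-bottom enhancement layer can restore resolution lost in the side-by-side base layer, which is why it is predicted from the base layer stream.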
PRIVACY IMAGE GENERATION
A privacy image generation system may use a light field camera that includes an array of cameras, or one or more RGBZ cameras, to capture images and display images according to a selected privacy mode. The privacy mode may include a blur background mode that can be automatically selected based on the meeting type, participants, location, and device type. A region of interest and/or one or more objects of interest (e.g., one or more persons in a foreground) is determined, and the privacy image generation system is configured to clearly show the region/object of interest and obscure or replace the background by combining multiple images. The displayed image includes the region/objects of interest clearly shown (e.g., in focus), with any objects in the background of the combined image shown having a limited depth of field (e.g., blurry/not in focus) and/or blurred due to the combination of the multiple images.
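The "blur by combining multiple images" effect is a synthetic-aperture refocus: views from the camera array are shifted to align the foreground plane and averaged. A minimal 1D-shift sketch (the per-view shifts would come from camera geometry; names are illustrative):

```python
import numpy as np

def synthetic_aperture(views, shifts):
    """Average several camera-array views after shifting each to align the
    foreground plane: aligned foreground pixels reinforce and stay sharp,
    while background pixels land in different places per view and blur."""
    acc = np.zeros(views[0].shape, dtype=float)
    for view, shift in zip(views, shifts):
        acc += np.roll(np.asarray(view, dtype=float), shift, axis=1)
    return acc / len(views)
```

This is why the light field array obscures the background without any explicit blur filter: the limited depth of field emerges from the combination itself.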
Near-eye display apparatus and method of displaying three-dimensional images
A near-eye display apparatus is provided for displaying a three-dimensional image to a user. The apparatus includes an image projecting means to project pairs of images associated with different cross-sectional planes of the three-dimensional image; at least one optical display arrangement including a plurality of optical elements, wherein each of the plurality of optical elements is operable to be switched between a first optical state and a second optical state; a control arrangement that is operable to control the at least one optical display arrangement to separately switch each optical element from the first optical state to the second optical state, and the image projecting means to project a separate pair of images on each optical element in the second optical state; and at least one optical device that allows display of the three-dimensional image to each eye of the user.
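The control sequence amounts to time-multiplexing the cross-sectional planes; a sketch with hypothetical callback names standing in for the hardware interfaces:

```python
def display_volume(image_pairs, set_element_state, project):
    """One refresh cycle: switch each optical element into the second
    (active) state in turn, keep all other elements in the first
    (pass-through) state, and project that plane's image pair onto the
    active element."""
    n = len(image_pairs)
    for active in range(n):
        for element in range(n):
            set_element_state(element, "second" if element == active else "first")
        project(active, image_pairs[active])
```

Run fast enough per refresh, the sequentially displayed planes fuse into a single three-dimensional image for the viewer.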
Method for sub-range based coding of a depth lookup table
The invention relates to a method (200) for sub-range based coding of a depth lookup table (300), the depth lookup table comprising depth values of a 3D video sequence, the depth values being constrained within a range (301). The method comprises: partitioning (201) the range (301) into a plurality of sub-ranges, a first sub-range (303) comprising a first set of the depth values and a second sub-range (305) comprising a second set of the depth values; and coding (203) the depth values of each of the sub-ranges of the depth lookup table (300) separately according to a predetermined coding rule.
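The partition-then-code-separately scheme can be sketched as follows; equal-width sub-ranges and offsets-from-sub-range-start are an illustrative stand-in for the patent's "predetermined coding rule":

```python
def code_depth_lut(depth_values, num_subranges, value_range=(0, 255)):
    """Partition the constrained depth range into equal sub-ranges and code
    each sub-range's values separately, here as small offsets from the
    sub-range start (smaller numbers cost fewer bits to signal)."""
    lo, hi = value_range
    width = (hi - lo + 1) // num_subranges
    coded = []
    for k in range(num_subranges):
        start = lo + k * width
        members = sorted(v for v in depth_values if start <= v < start + width)
        coded.append([v - start for v in members])  # per-sub-range offsets
    return coded
```

Coding each sub-range independently exploits the fact that depth lookup tables tend to cluster values, so per-sub-range offsets stay small even when the full range is wide.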