Patent classifications
H04N2213/005
Method for controlling multi-field of view image and electronic device for supporting the same
An electronic device is provided. The electronic device includes a memory configured to store an image of a multi-field of view (multi-FOV) including an image of a first FOV and an image of a second FOV, a display configured to output the image of the multi-FOV, and a processor electrically connected with the memory and the display. The processor is configured to output the image of the first FOV on the display, verify at least one event which meets a condition from the image of the second FOV, and provide a notification corresponding to the event in connection with the image of the first FOV, the image being output on the display.
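The control flow in the abstract above can be sketched as a simple loop: always show the first-FOV frame, and attach a notification whenever the hidden second-FOV frame satisfies the event condition. All names here (the frame lists, the brightness-based event predicate) are illustrative assumptions, not taken from the patent.

```python
def monitor_multi_fov(first_fov_frames, second_fov_frames, event_condition):
    """Pair each displayed first-FOV frame with an optional notification
    raised by the corresponding (non-displayed) second-FOV frame."""
    results = []
    for shown, hidden in zip(first_fov_frames, second_fov_frames):
        notification = "event in second FOV" if event_condition(hidden) else None
        results.append((shown, notification))
    return results

# Toy usage: frames are stand-in mean-brightness values; an "event" is
# any second-FOV frame brighter than 128 (an arbitrary threshold).
first = [10, 20, 30]
second = [50, 200, 90]
out = monitor_multi_fov(first, second, lambda f: f > 128)
```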
METHOD FOR DISPLAYING, ON A 2D DISPLAY DEVICE, A CONTENT DERIVED FROM LIGHT FIELD DATA
The present disclosure concerns a method for displaying, on a 2D display device, a content derived from 4D light field data, based on a viewing position of a user. The 4D light field data corresponds to data acquired by either several cameras or by a plenoptic device. The method comprises: obtaining a volume in front of said 2D display device in which no disocclusions are present, said volume being defined according to optical and geometric parameters of an acquisition device that has acquired said 4D light field data, a size of a screen of said 2D display device, and an anchor plane in said content, said anchor plane being perceived as static in case of movement of the user relative to said 2D display device; determining a modified volume from said volume, comprising modifying a size of said volume, for modifying possible movements of a user positioned within the modified volume compared to movements of said user within said volume; providing means for guiding said user within said modified volume according to said viewing position of said user.
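One way to picture the guidance step above: shrink the disocclusion-free volume by a scale factor, test whether the tracked viewing position lies inside it, and return a per-axis vector pointing back toward the volume. This is a minimal sketch under assumed conventions (an axis-aligned box volume, a uniform scale factor); the patent does not specify these details.

```python
def guide_user(position, volume_center, volume_half_extent, scale=0.8):
    """Return (inside, guidance): whether the viewing position lies in the
    scaled disocclusion-free box, and a per-axis correction vector that is
    zero on axes where the position is already inside the box."""
    guidance = []
    inside = True
    for p, c, h in zip(position, volume_center, volume_half_extent):
        lo, hi = c - scale * h, c + scale * h
        if p < lo:
            guidance.append(lo - p)   # move in the positive direction
            inside = False
        elif p > hi:
            guidance.append(hi - p)   # move in the negative direction
            inside = False
        else:
            guidance.append(0.0)
    return inside, guidance

# Viewer is 0.6 m too far from the screen along z (all units illustrative).
inside, g = guide_user((0.0, 0.0, 2.0), (0.0, 0.0, 1.0), (0.5, 0.5, 0.5))
```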
Dual mode depth estimator
A system-on-chip is provided which is configured for real-time depth estimation of video data. The system-on-chip includes a monoscopic depth estimator configured to perform monoscopic depth estimation from monoscopic-type video data, and a stereoscopic depth estimator configured to perform stereoscopic depth estimation from stereoscopic-type video data. The system-on-chip is reconfigurable to perform either the monoscopic depth estimation or the stereoscopic depth estimation on the basis of configuration data defining a selected depth estimation mode. Both depth estimators include shared circuits which are instantiated in hardware and reconfigurable to account for differences in the functionality of the circuit in each depth estimator.
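The mode selection described above, where configuration data picks one of two depth estimators, can be sketched as follows. The estimators here are deliberately trivial stand-ins (the real circuits are hardware-instantiated); only the reconfiguration pattern is the point.

```python
def mono_depth(frame):
    # Toy monoscopic cue: treat brighter pixels as nearer (illustrative only).
    return [255 - v for v in frame]

def stereo_depth(left, right):
    # Toy stereoscopic cue: per-pixel disparity between the two views.
    return [l - r for l, r in zip(left, right)]

def configure_estimator(config):
    """Return the estimator selected by the configuration data, mirroring
    the reconfigurable mono/stereo mode switch."""
    mode = config["mode"]
    if mode == "mono":
        return lambda data: mono_depth(data)
    if mode == "stereo":
        return lambda data: stereo_depth(*data)
    raise ValueError("unknown depth estimation mode: " + mode)
```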
Image processing method and apparatus
An image processing method includes obtaining multiple video frames, where the multiple video frames are collected from the same scene at different angles; determining a depth map of each video frame according to corresponding pixels among the multiple video frames; and supplementing background missing regions of the multiple video frames according to the depth maps of the multiple video frames, to obtain supplemented video frames of the multiple video frames and depth maps of the multiple supplemented video frames. The method also includes generating an alpha image of each video frame according to an occlusion relationship between each of the multiple video frames and the supplemented video frame of each video frame in a background missing region, and generating a browsing frame at a specified browsing angle according to the multiple video frames, the supplemented video frames of the multiple video frames, and the alpha images of the multiple video frames.
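The role of the alpha image in the final step is standard alpha compositing: where alpha is 1 the original frame is kept, and where alpha is 0 the supplemented (background-filled) frame shows through. A minimal per-pixel sketch, with frames reduced to flat lists of intensities for illustration:

```python
def composite(frame, supplemented, alpha):
    """Alpha-blend a video frame over its supplemented frame:
    out = alpha * frame + (1 - alpha) * supplemented, per pixel,
    with alpha in [0, 1]."""
    return [a * f + (1 - a) * s for f, s, a in zip(frame, supplemented, alpha)]

# Pixel 0 is fully foreground; pixel 1 falls in a background-missing
# region, so the supplemented frame's value is used instead.
blended = composite([100, 200], [10, 20], [1.0, 0.0])
```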
Object identification and material assessment using optical profiles
An image processing system with a camera system and a processing system analyzes a series of images of a scene to assess an optical property of an object in the scene. A surface of the object is differentiable from other objects and has a common point that can be identified and analyzed in the series of images, captured at different distances from the camera system to the object and different angular orientations of the camera system relative to the object. A set of characteristic values of the image pixels corresponding to the common point is determined, including a point intensity value, a distance from the camera system to the common point, a normal vector for the common point, and an angular orientation between an optical path of the image pixel and the normal vector. The set of characteristic values is used to create an optical profile of the common point that is compared to a set of predefined characteristic profiles to identify the object. In embodiments, the camera system is an active camera, and the operating wavelength includes near infrared.
Camera Module and Extended Reality System Using the Same
A camera module, for a head-mounted display (HMD), includes a first optical module for tracking a hand motion of a user; a second optical module for reconstructing a hand gesture or a step of the user and a space; a third optical module for establishing a three-dimensional (3D) virtual object; and a control unit for integrating the first optical module, the second optical module and the third optical module to virtualize a body behavior of the user; wherein the camera module is rotatable to maximize a tracking range.
Versatile 3-D picture format
A 3-D picture signal is provided as follows. An image and depth components having a depth map for the image are provided; the depth map includes depth indication values. A depth indication value relates to a particular portion of the image and indicates a distance between the viewer and an object at least partially represented by that particular portion of the image. The 3-D picture signal conveys the 3-D picture according to a 3D format having image frames encoding the image. Extra frames (D, D) are encoded that provide the depth components and further data for use in rendering based on the image and the depth components. The extra frames are encoded using spatial and/or temporal subsampling of the depth components and the further data, while the extra frames are interleaved with the image frames in the signal in a Group of Pictures (GOP) coding structure.
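The interleaving idea above can be sketched at the stream level: temporally subsample the depth components, then insert each surviving extra frame after its image frame. The subsampling factor and the `("I", …)`/`("D", …)` tagging are illustrative assumptions, not the patent's actual frame syntax.

```python
def subsample_and_interleave(image_frames, depth_frames, factor=2):
    """Keep one depth ("extra") frame per `factor` image frames and
    interleave it into the image-frame stream, GOP-style."""
    stream = []
    for i, img in enumerate(image_frames):
        stream.append(("I", img))
        if i % factor == 0:               # temporal subsampling of depth
            stream.append(("D", depth_frames[i]))
    return stream

stream = subsample_and_interleave([1, 2, 3, 4], [10, 20, 30, 40])
```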
HYBRID DEPTH SENSING PIPELINE
An apparatus for hybrid tracking and mapping is described herein. The apparatus includes logic to determine a plurality of depth sensing techniques. The apparatus also includes logic to vary the plurality of depth sensing techniques based on a camera configuration. Additionally, the apparatus includes logic to generate a hybrid tracking and mapping pipeline based on the depth sensing techniques and the camera configuration.
METHODS AND APPARATUS FOR AN ACTIVE PULSED 4D CAMERA FOR IMAGE ACQUISITION AND ANALYSIS
An active-pulsed four-dimensional camera system that utilizes a precisely-controlled light source produces spatial information and human-viewed or computer-analyzed images. The acquisition of four-dimensional optical information is performed at a sufficient rate to provide accurate image and spatial information for in-motion applications where the camera is in motion and/or objects being imaged, detected and classified are in motion. Embodiments allow for the reduction or removal of image-blocking conditions like fog, snow, rain, sleet, and dust from the processed images. Embodiments provide for operation in daytime or nighttime conditions and can be utilized for day or night full-motion video capture with features like shadow removal. Multi-angle image analysis is taught as a method for classifying and identifying objects and surface features based on their optical reflective characteristics.
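The spatial information from an active pulsed camera ultimately rests on time-of-flight ranging: a light pulse travels to the surface and back, so range is half the round-trip time multiplied by the speed of light. A minimal sketch of that relation (the function name is illustrative):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_round_trip(t_seconds):
    """Range to a reflecting surface from the round-trip time of a
    light pulse: R = c * t / 2 (the pulse covers the distance twice)."""
    return C * t_seconds / 2.0

# A 1-microsecond round trip corresponds to roughly 150 m of range.
r = range_from_round_trip(1e-6)
```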
Three-dimensional image sensors
A single image sensor includes an array of uniformly and continuously spaced light-sensing pixels in conjunction with a plurality of lenses that focus light reflected from an object onto a plurality of different pixel regions of the image sensor, each lens focusing light on a different one of the pixel regions. A controller, including a processor and an object detection module, is coupled to the single image sensor to analyze the pixel regions, generate a three-dimensional (3D) image of the object through a plurality of images obtained with the image sensor, generate a depth map that calculates depth values for pixels of at least the object, detect 3D motion of the object using the depth values, create a 3D model of the object based on the 3D image, and track 3D motion of the object based on the 3D model.
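Depth maps from multiple lens views over one sensor typically come from the standard pinhole triangulation relation: depth is focal length times baseline divided by disparity between two views of the same point. A sketch of that relation under assumed pinhole-camera conventions (the specific numbers below are illustrative):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole-model depth: Z = f * B / d, where f is the focal length in
    pixels, B the baseline between two lens views in metres, and d the
    disparity of the matched point in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# f = 700 px, lens baseline 10 cm, disparity 35 px -> 2 m depth.
z = depth_from_disparity(700.0, 0.1, 35.0)
```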