Patent classifications
H04N5/2226
Obtaining image data of an object in a scene
A method and processor system are provided which analyze a depth map, which may be obtained from a range sensor capturing depth information of a scene, to identify where an object is located in the scene. Accordingly, a region of interest may be identified in the scene which includes the object, and image data may be selectively obtained of the region of interest, rather than of the entire scene containing the object. This image data may be acquired by an image sensor configured for capturing visible light information of the scene. By only selectively obtaining the image data within the region of interest, rather than all of the image data, improvements may be realized in the computational complexity of any further processing of the image data, in the storage of the image data and/or in the transmission of the image data.
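The region-of-interest step described above can be sketched as a depth-threshold-and-crop routine. The function names, the rectangular ROI shape and the fixed near/far depth band below are illustrative assumptions, not the claimed method:

```python
import numpy as np

def region_of_interest(depth_map, near, far):
    """Bounding box (r0, r1, c0, c1) of pixels whose depth lies in [near, far]."""
    mask = (depth_map >= near) & (depth_map <= far)
    if not mask.any():
        return None
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return r0, r1 + 1, c0, c1 + 1

def crop_to_object(image, depth_map, near, far):
    """Selectively obtain only the image data inside the ROI."""
    roi = region_of_interest(depth_map, near, far)
    if roi is None:
        return image  # no object in the depth band; fall back to the full frame
    r0, r1, c0, c1 = roi
    return image[r0:r1, c0:c1]
```

Downstream processing, storage or transmission then operates on the (typically much smaller) cropped array instead of the full frame.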
Integrated Camera System Having Two Dimensional Image Capture and Three Dimensional Time-of-Flight Capture With A Partitioned Field of View
An apparatus is described that includes an integrated two-dimensional image capture and three-dimensional time-of-flight depth capture system. The three-dimensional time-of-flight depth capture system includes an illuminator to generate light. The illuminator includes arrays of light sources. Each of the arrays is dedicated to a particular different partition within a partitioned field of view of the illuminator.
IMAGE PROCESSING APPARATUS, IMAGE PICKUP APPARATUS, AND IMAGE PROCESSING METHOD
Provided is an image processing apparatus, including: an acquisition unit configured to acquire information on a layer boundary in tomographic structure of a current subject to be inspected; a determination unit configured to determine a depth range relating to a current en-face image of the subject to be inspected based on information indicating a depth range relating to a past en-face image of the subject to be inspected and the information on the layer boundary; and a generation unit configured to generate the current en-face image through use of data within the depth range relating to the current en-face image among pieces of three-dimensional data acquired for the current subject to be inspected.
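As a rough illustration of generating an en-face image from a depth range of three-dimensional data: the boundary-relative update rule, the (x, y, z) axis layout and the averaging projection below are assumptions for the sketch, not the apparatus's actual method:

```python
import numpy as np

def determine_depth_range(past_range, past_boundary_z, current_boundary_z):
    """Shift a past en-face depth range so it tracks the current layer boundary
    (illustrative rule: the range is assumed fixed relative to the boundary)."""
    offset = current_boundary_z - past_boundary_z
    return past_range[0] + offset, past_range[1] + offset

def generate_en_face(volume, depth_range):
    """Project a 3-D volume (x, y, z) to a 2-D en-face image by averaging
    only the data within the given depth range along z."""
    z0, z1 = depth_range
    return volume[:, :, z0:z1].mean(axis=2)
```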
Multi-Baseline Camera Array System Architectures for Depth Augmentation in VR/AR Applications
Embodiments of the invention provide a camera array imaging architecture that computes depth maps for objects within a scene captured by the cameras, using a near-field sub-array of cameras to compute depth to near-field objects and a far-field sub-array of cameras to compute depth to far-field objects. In particular, the baseline distance between cameras in the near-field sub-array is less than the baseline distance between cameras in the far-field sub-array in order to increase the accuracy of the depth map. Some embodiments provide a near-IR illumination light source for use in computing depth maps.
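The relationship between baseline and depth accuracy that motivates the two sub-arrays can be sketched with the standard stereo error model; the pixel-level disparity noise and the numeric values below are illustrative, not from the disclosure:

```python
def depth_error(z_m, focal_px, baseline_m, disparity_noise_px=1.0):
    """Approximate depth uncertainty of a stereo pair:
    dz ~= z^2 * dd / (f * B), so error grows quadratically with
    distance and shrinks as the baseline B widens."""
    return (z_m ** 2) * disparity_noise_px / (focal_px * baseline_m)

# At far range, the wide-baseline (far-field) sub-array is more accurate:
near_err = depth_error(10.0, focal_px=1000.0, baseline_m=0.05)  # narrow baseline
far_err = depth_error(10.0, focal_px=1000.0, baseline_m=0.20)   # wide baseline
```

Conversely, a narrow baseline keeps near-field disparities small enough to match reliably, which is why the near-field sub-array uses the shorter baseline.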
A METHOD FOR CODING SPACE INFORMATION IN CONTINUOUS DYNAMIC IMAGES
A coding method for space information in continuous dynamic images is provided, which includes the following steps: parsing and extracting space information data, constructing a space information data packet, and coding the space information data packet. According to the coding method of the present invention, the physical parameters of a camera, such as its lens, position and orientation, as well as space depth information in a plurality of continuous dynamic images, can be recorded and stored in real time. The coded and stored camera parameters and space depth information can then be applied to virtual simulation and graphic vision enhancement scenarios, such that in many application scenarios, such as movie and television shooting, advertising production, and personal video vlogs, a plurality of rich and integrated graphic and text enhancement effects can be implanted in real time, thereby improving the final image display effect.
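A minimal sketch of the "construct and code a space information data packet" step, assuming one packet per frame with a fixed binary layout of frame index, lens focal length, camera position/orientation and a depth range; the field set and byte layout are hypothetical, not the patented format:

```python
import struct

# Hypothetical per-frame layout: uint32 frame index, then nine little-endian
# float32 fields (focal length, 3-vector position, 3-vector orientation,
# near/far depth range) -- 40 bytes per packet.
_FMT = "<If3f3f2f"

def encode_space_packet(frame_idx, focal_mm, position, orientation,
                        depth_near, depth_far):
    """Pack one frame's camera parameters and depth range into bytes."""
    return struct.pack(_FMT, frame_idx, focal_mm,
                       *position, *orientation, depth_near, depth_far)

def decode_space_packet(payload):
    """Unpack a packet back into named fields."""
    vals = struct.unpack(_FMT, payload)
    return {
        "frame": vals[0],
        "focal_mm": vals[1],
        "position": vals[2:5],
        "orientation": vals[5:8],
        "depth_range": vals[8:10],
    }
```

A stream of such packets, one per frame, could be stored alongside the video so a compositor can re-create the camera and insert graphics with correct perspective and occlusion.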
VARIED DEPTH DETERMINATION USING STEREO VISION AND PHASE DETECTION AUTO FOCUS (PDAF)
Disclosed are systems, methods, and non-transitory computer-readable media for varied depth determination using stereo vision and phase detection auto focus (PDAF). Computer stereo vision (stereo vision) is used to extract three-dimensional information from digital images. To utilize stereo vision, two optical sensors are displaced horizontally from one another and used to capture images depicting two differing views of a real-world environment from two different vantage points. The relative depth of the objects captured in the images is determined using triangulation by comparing the relative positions of the objects in the two images. For example, the relative positions of matching objects (e.g., features) identified in the captured images are used along with the known orientation of the optical sensors (e.g., the distance between the optical sensors, the vantage points of the optical sensors) to estimate the depth of the objects.
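The triangulation described above can be sketched as a brute-force scanline match followed by the standard disparity-to-depth conversion, assuming rectified images; the function names and the SSD cost are illustrative, not the disclosed method:

```python
import numpy as np

def match_disparity(left_row, right_row, x, patch=5, max_disp=32):
    """Brute-force SSD match of a patch from the left scanline against
    candidate positions in the right scanline (rectified images assumed)."""
    ref = left_row[x:x + patch].astype(float)
    best_d, best_cost = 0, float("inf")
    for d in range(min(max_disp, x) + 1):
        cand = right_row[x - d:x - d + patch].astype(float)
        cost = float(np.sum((ref - cand) ** 2))
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulate depth from disparity: z = f * B / d."""
    return focal_px * baseline_m / disparity_px
```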
ACTIVE MARKER RELAY SYSTEM FOR PERFORMANCE CAPTURE
An active marker relay system is provided to operate responsive active markers coupled to an object in a live action scene for performance capture, via a trigger unit that relays energy pulse information to the responsive active markers. Using simple sensors, the responsive active markers sense control energy pulses projected from the trigger unit. In return, the responsive active markers produce energy pulses that emulate at least one characteristic of the control energy pulses, such as a particular pulse rate or wavelength of energy. The reactivity of the responsive active markers to the control energy pulses enables simple control of the responsive active markers through the trigger unit.
READOUT CIRCUIT AND METHOD FOR TIME-OF-FLIGHT IMAGE SENSOR
A time-of-flight device comprises a pixel array including an array of pixel circuits, wherein a column of the array includes: a first pixel circuit including a first photodiode, a first capacitor and a second capacitor coupled to the first photodiode, and a second pixel circuit including a second photodiode, a third capacitor and a fourth capacitor coupled to the second photodiode, a first signal line coupled to the first capacitor, a second signal line coupled to the second capacitor, a third signal line coupled to the third capacitor, a fourth signal line coupled to the fourth capacitor, a first switch circuitry, a second switch circuitry, a first comparator coupled to the first signal line and the third signal line through the first switch circuitry, and a second comparator coupled to the second signal line and the fourth signal line through the second switch circuitry.
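The two-capacitors-per-photodiode structure is characteristic of a demodulating ToF pixel. As background, the common four-phase method recovers distance from four phase-shifted charge samples; the formula below is that generic technique, not taken from the disclosed circuit, and the tap-to-phase assignment is an assumption:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(q0, q90, q180, q270, mod_freq_hz):
    """Distance from four phase-shifted charge samples (4-phase ToF):
    phase = atan2(Q270 - Q90, Q0 - Q180), depth = c * phase / (4*pi*f)."""
    phase = math.atan2(q270 - q90, q0 - q180) % (2 * math.pi)
    return (C * phase) / (4 * math.pi * mod_freq_hz)
```

Reading out pairs of capacitors differentially through comparators, as the claim describes, is one way to obtain exactly these phase-difference quantities.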
Wide-angle 3D sensing
Aspects of the present disclosure relate to depth sensing using a device. An example device includes a first light projector configured to project light towards a second light projector configured to project light towards the first light projector. The example device includes a reflective component positioned between the first and second light projectors, the reflective component configured to redirect the light projected by the first light projector onto a first portion of a scene and to redirect the light projected by the second light projector onto a second portion of the scene, and the first and second portions of the scene being adjacent to one another and non-overlapping relative to one another. The example device includes a receiver configured to detect reflections of redirected light projected by the first and second light projectors.
Visual, depth and micro-vibration data extraction using a unified imaging device
A unified imaging device is used for detecting and classifying objects in a scene, including their motion and micro-vibrations. A plurality of images of the scene is received from an imaging sensor of the unified imaging device, which comprises a light source adapted to project onto the scene a predefined structured light pattern constructed of a plurality of diffused light elements. One or more objects present in the scene are classified by visually analyzing the image(s); depth data of the object(s) is extracted by analyzing the position of diffused light element(s) reflected from the object(s); and micro-vibration(s) of the object(s) are identified by analyzing a change in a speckle pattern of the reflected diffused light element(s) in at least some consecutive images. The classification, the depth data and the micro-vibration data are all derived from analyses of images captured by the same imaging sensor and are hence inherently registered in a common coordinate system.
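The micro-vibration step, analyzing speckle-pattern change across consecutive images, can be illustrated with a simple frame-difference score; this metric is a stand-in for the sketch, and the device's actual speckle analysis may differ:

```python
import numpy as np

def microvibration_score(speckle_patches):
    """Mean absolute frame-to-frame intensity change over a sequence of
    speckle patches cropped around one reflected diffused light element.
    A static surface scores ~0; micro-vibration decorrelates the speckle
    between consecutive frames and raises the score."""
    diffs = [np.abs(b.astype(float) - a.astype(float)).mean()
             for a, b in zip(speckle_patches, speckle_patches[1:])]
    return float(np.mean(diffs))
```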