Patent classifications
G06T7/596
DYNAMIC-BASELINE IMAGING ARRAY WITH REAL-TIME SPATIAL DATA CAPTURE AND FUSION
Spatial image data captured at plural camera modules is fused into rectangular prism coordinates to support rapid processing and efficient network communication. The rectangular prism spatial imaging data is remapped to a truncated pyramid at render time to align with a spatial volume encompassed by a superset of imaging devices. A presentation of a reconstructed field of view is provided with near and far field image capture from the plural imaging devices.
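The prism-to-frustum remap described above can be sketched as a simple coordinate transform. The abstract does not give the mapping; the linear widening of the cross-section from the near plane to the far plane below is an assumed, hypothetical model, and all names are illustrative.

```python
def prism_to_frustum(x, y, z, near_w, far_w, depth):
    """Map a point from normalized rectangular-prism coordinates
    (x, y in [-0.5, 0.5], z in [0, 1]) into a truncated-pyramid (frustum)
    volume whose square cross-section widens linearly from width near_w
    at z=0 to far_w at z=1, spanning `depth` along the view axis.
    Hypothetical mapping, not taken from the patent."""
    w = near_w + (far_w - near_w) * z   # cross-section width at this depth
    return (x * w, y * w, z * depth)
```

Storing samples in the prism keeps addressing uniform for processing and transmission; only the render step pays for the perspective-shaped volume.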
METHOD AND APPARATUS FOR PROCESSING IMAGE, ELECTRONIC DEVICE, AND STORAGE MEDIUM
Disclosed are a method and apparatus for processing an image, an electronic device and a storage medium. A specific implementation comprises: acquiring a matching association relationship of a feature point in each to-be-modeled image frame in a to-be-modeled image frame set, a plurality of to-be-modeled image frames in the to-be-modeled image frame set belonging to at least two different to-be-modeled image sequences; determining a first feature point set of each to-be-modeled image frame based on the matching association relationship, the first feature point set including a first feature point, the first feature point matching a corresponding feature point in a to-be-modeled image frame in a different to-be-modeled image sequence; and selecting, based on the number of first feature points in the first feature point set of each to-be-modeled image frame, a to-be-modeled image frame from the to-be-modeled image frame set for a three-dimensional reconstruction.
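The selection step above amounts to counting, per frame, the feature points matched across sequences and ranking frames by that count. A minimal sketch, assuming a flat list of pairwise matches and a frame-to-sequence lookup (both data shapes are assumptions, not from the abstract):

```python
def select_frames(matches, frame_seq, k):
    """matches: list of (frame_a, point_a, frame_b, point_b) feature matches.
    frame_seq: dict mapping frame id -> sequence id.
    Count, per frame, the distinct feature points matched to a frame in a
    *different* sequence (the "first feature points"), then return the k
    frames with the most such points.  Illustrative only."""
    counts = {}
    for fa, pa, fb, pb in matches:
        if frame_seq[fa] != frame_seq[fb]:   # cross-sequence match only
            counts.setdefault(fa, set()).add(pa)
            counts.setdefault(fb, set()).add(pb)
    ranked = sorted(counts, key=lambda f: len(counts[f]), reverse=True)
    return ranked[:k]
```

Frames rich in cross-sequence matches are the ones that tie the separate sequences together, which is why they are favored for the reconstruction.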
DENSE DEPTH COMPUTATIONS AIDED BY SPARSE FEATURE MATCHING
A system for dense depth computation aided by sparse feature matching generates a first image using a first camera, a second image using a second camera, and a third image using a third camera. The system generates a sparse disparity map using the first image and the third image by (1) identifying a set of feature points within the first image and a set of corresponding feature points within the third image, and (2) identifying feature disparity values based on the set of feature points and the set of corresponding feature points. The system also applies the first image, the second image, and the sparse disparity map as inputs for generating a dense disparity map.
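The sparse-disparity step can be sketched directly: with a rectified first/third pair, the disparity at each matched feature is simply the horizontal offset between corresponding points. The function and data layout below are illustrative, not the patent's implementation.

```python
def sparse_disparity(feat_first, feat_third, width, height):
    """Build a sparse disparity map from matched feature points.
    feat_first / feat_third: lists of (x, y) pixel coordinates, index-aligned
    so feat_first[i] corresponds to feat_third[i] (a rectified camera pair
    is assumed, so matched points share a row).  Returns a height x width
    grid holding a disparity at feature pixels and None elsewhere."""
    disp = [[None] * width for _ in range(height)]
    for (x1, y1), (x3, _y3) in zip(feat_first, feat_third):
        disp[y1][x1] = x1 - x3   # horizontal shift between the image pair
    return disp
```

This sparse map, together with the first and second images, then serves as input to the dense-disparity stage.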
PLATE RECONSTRUCTION OF OBSCURED VIEWS OF A MAIN IMAGING DEVICE USING CAPTURE DEVICE INPUTS OF THE SAME SCENE
An imagery processing system obtains capture inputs from capture devices that might have capture parameters and characteristics that differ from those of a main imagery capture device. By normalizing outputs of those capture devices, potentially arbitrary capture devices could be used for reconstructing portions of a scene captured by the main imagery capture device when reconstructing a plate of the scene to replace an object in the scene with what the object obscured in the scene. Reconstruction could be of one main image, a stereo pair of images, or some number, N, of images where N>2.
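One simple form the normalization of auxiliary capture-device outputs could take is gain matching, so an arbitrary device's frames become photometrically comparable to the main camera's before reconstruction. This is an assumed model for illustration only; the patent does not specify the normalization.

```python
def normalize_to_main(aux, main):
    """Gain-normalize an auxiliary capture so its mean intensity matches
    the main camera's frame.  aux / main: flat lists of pixel intensities.
    A hypothetical single-gain exposure model; real normalization would
    also handle resolution, color response, and geometry."""
    gain = (sum(main) / len(main)) / (sum(aux) / len(aux))
    return [p * gain for p in aux]
```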
Electronic device moved based on distance from external object and control method thereof
An electronic device is disclosed. The electronic device includes a sensor, an actuator, and a processor. The sensor is configured to sense at least one external object in a direction of 360 degrees outside the electronic device. The actuator is configured to allow the electronic device to move or yaw. The processor is configured to verify an angle corresponding to a location of the at least one external object among the 360 degrees and a distance between the at least one external object and the electronic device using the sensor. When the distance is not within a specified range, the processor is also configured to move the electronic device in a direction corresponding to the angle using the actuator such that the distance falls within the specified range.
Information processing apparatus and method, vehicle, and information processing system
An automobile-mounted imaging apparatus and a computer readable storage medium for detecting a distance to at least one object. The apparatus comprises circuitry configured to select, based on at least one condition, at least two images to use for detecting the distance to the at least one object from images captured by at least three cameras. Alternatively or additionally, the apparatus comprises circuitry configured to select, based on at least one condition, two cameras of at least three cameras for detecting the distance to the at least one object. Alternatively or additionally, the apparatus comprises circuitry configured to determine, based on at least one condition, which of at least two cameras from among at least three cameras capturing images to use for detecting the distance to the at least one object.
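A plausible instance of such a selection condition is baseline choice by object distance: a wide baseline gives better depth resolution for far objects, while a narrow baseline reduces occlusion and matching problems up close. The abstract leaves the condition open, so the rule below is an assumed example.

```python
def select_pair(cameras, object_distance, threshold):
    """Pick two of at least three cameras for stereo ranging.
    cameras: dict name -> position along the mounting baseline (meters).
    Assumed condition: far objects (>= threshold) use the widest-baseline
    pair; near objects use the narrowest adjacent pair.  Illustrative."""
    ordered = sorted(cameras, key=cameras.get)
    if object_distance >= threshold:
        return (ordered[0], ordered[-1])          # widest baseline
    return min(zip(ordered, ordered[1:]),          # narrowest adjacent pair
               key=lambda p: cameras[p[1]] - cameras[p[0]])
```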
Image processing apparatus, system that generates virtual viewpoint video image, control method of image processing apparatus and storage medium
The aim is to prevent an object that should exist in a virtual viewpoint video image from disappearing. The image processing apparatus generates three-dimensional shape data on a moving object from images based on image capturing from a plurality of viewpoints and outputs the data to the apparatus that generates a virtual viewpoint video image. Then, in a case where it is not possible to generate three-dimensional shape data on an object that behaves as the moving object during a part of the period of the image capturing, three-dimensional shape data on the object generated in the past is output instead to the apparatus that generates the virtual viewpoint video image.
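The fallback behavior reduces to a per-object cache of the last successfully generated shape. A minimal sketch, with shape data represented as opaque values and all names assumed:

```python
def shape_for_frame(current_shapes, cache, object_id):
    """Return the 3-D shape data to output for one object in one frame.
    current_shapes: dict object_id -> shape generated this frame, or None
    when generation failed.  On failure, fall back to the most recent
    successfully generated shape, as the abstract describes.  The cache
    dict is updated in place with each good shape.  Sketch only."""
    shape = current_shapes.get(object_id)
    if shape is not None:
        cache[object_id] = shape       # remember the latest good shape
        return shape
    return cache.get(object_id)        # past shape, or None if never seen
```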
Method and Apparatus for Obtaining Extended Depth of Field Image and Electronic Device
A method and an apparatus for obtaining an extended depth of field image, and an electronic device are disclosed. The method includes: determining a target focal length range based on an initial focal length (101), where the target focal length range includes the initial focal length; obtaining a plurality of images of a photographed object for a plurality of focal lengths in the target focal length range (102); and registering and fusing the plurality of images to obtain an extended depth of field image (103). The target focal length range of concern to a user is selected based on the initial focal length, so that there is no need to obtain images of all focal lengths of the lens, and the quantity of obtained images and the processing time of registration and fusion can be reduced.
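The fusion step (103) is classically done by picking, per pixel, the stack image that is locally sharpest. The abstract does not specify the fusion method; the gradient-energy sharpness measure below is a common, assumed stand-in, and registration is taken as already done.

```python
def fuse_focal_stack(stack):
    """Naive all-in-focus fusion: for each pixel, take the value from the
    stack image with the largest local gradient energy (a simple
    sharpness measure).  stack: list of equal-size 2-D grids (lists of
    lists of intensities).  Illustrative sketch, not the patent's method."""
    h, w = len(stack[0]), len(stack[0][0])

    def sharpness(img, y, x):
        s = 0
        for dy, dx in ((0, 1), (1, 0), (0, -1), (-1, 0)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                s += (img[y][x] - img[ny][nx]) ** 2
        return s

    return [[max(stack, key=lambda im: sharpness(im, y, x))[y][x]
             for x in range(w)] for y in range(h)]
```

Restricting the stack to the target focal length range shrinks both the capture count and the cost of this per-pixel selection.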
Depth acquisition device and depth acquisition method for providing a corrected depth image
A depth acquisition device includes a memory and a processor performing: acquiring, from the memory, intensities of infrared light measured, as an infrared light image, by imaging with infrared light emitted from a light source and reflected on a subject by pixels in an imaging element; generating a depth image by calculating the distance to the subject for each pixel based on the intensities; acquiring, from the memory, a visible light image generated by imaging, with visible light, the substantially same scene from the substantially same viewpoint at the substantially same timing as those of the infrared light image; detecting a lower reflection region showing an object having a lower reflectivity from the infrared light image in accordance with the infrared light image and the visible light image; correcting a corresponding lower reflection region in the depth image in accordance with the visible light image; and outputting the depth image with the corrected lower reflection region.
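The correction step can be sketched as filling the unreliable region with an estimate drawn from reliable depths just outside it. The patent guides this with the visible light image; the border-mean fill below is a simplified, assumed stand-in for that guidance.

```python
def correct_low_reflection(depth, region):
    """Replace depth values inside a detected low-reflection region with
    the mean of valid depths bordering the region.  depth: 2-D grid
    (list of lists); region: set of (y, x) pixels flagged unreliable.
    Simplified sketch; the patent's correction is visible-light guided."""
    h, w = len(depth), len(depth[0])
    border = []
    for y, x in region:
        for dy, dx in ((0, 1), (1, 0), (0, -1), (-1, 0)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in region:
                border.append(depth[ny][nx])
    fill = sum(border) / len(border)   # mean of reliable border depths
    out = [row[:] for row in depth]
    for y, x in region:
        out[y][x] = fill
    return out
```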
Image capturing apparatus, monitoring system, image processing apparatus, image capturing method, and non-transitory computer readable recording medium
There is provided an image capturing apparatus that captures a plurality of images, calculates a three-dimensional position from the plurality of images, and outputs the plurality of images and information about the three-dimensional position. The image capturing apparatus includes an image capturing unit, a camera parameter storage unit, a position calculation unit, a position selection unit, and an image complementing unit. The image capturing unit outputs the plurality of images using at least three cameras. The camera parameter storage unit stores in advance camera parameters including occlusion information. The position calculation unit calculates three dimensional positions of a plurality of points. The position selection unit selects a piece of position information relating to a subject area that does not have an occlusion, and outputs selected position information. The image complementing unit generates a complementary image, and outputs the complementary image and the selected position information.