Patent classifications
G06T7/596
Determining ranges by imaging devices with dynamic baseline reconfiguration
An aerial vehicle may be outfitted with two or more digital cameras that are mounted to a track, a rail or another system for accommodating relative motion between the cameras. A baseline distance between the cameras may be established by repositioning one or more of the cameras. Images captured by the cameras may be processed to recognize one or more objects therein, and to determine ranges to such objects by stereo triangulation techniques. The baseline distances may be varied by moving one or more of the cameras, and ranges to objects may be determined using images captured by the cameras at each of the baseline distances.
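The stereo triangulation step in this abstract reduces to the pinhole relation Z = f·B/d, where B is the reconfigurable baseline. A minimal sketch of that relation (function and parameter names are illustrative, not from the patent):

```python
def range_from_disparity(focal_px, baseline_m, disparity_px):
    """Range to an object via stereo triangulation: Z = f * B / d.

    focal_px: focal length in pixels; baseline_m: camera separation in
    meters; disparity_px: horizontal shift of the object between images.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# For the same observed disparity, a wider baseline corresponds to a
# proportionally longer range, which is why varying B extends the
# usable ranging envelope.
near = range_from_disparity(focal_px=1000.0, baseline_m=0.2, disparity_px=40.0)  # ~5 m
far = range_from_disparity(focal_px=1000.0, baseline_m=0.6, disparity_px=40.0)   # ~15 m
```

In practice the matching error in pixels is roughly constant, so a longer baseline yields a smaller relative range error at a given distance.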
Parameterizing 3D scenes for volumetric viewing
A target view to a 3D scene depicted by a multiview image is determined. The multiview image comprises sampled views at sampled view positions distributed throughout a viewing volume. Each sampled view comprises a wide-field-of-view (WFOV) image and a WFOV depth map as seen from a respective one of the sampled view positions. The target view is used to select, from the sampled views, a set of sampled views. A display image is caused to be rendered on a display of a wearable device. The display image is generated based on the WFOV image and WFOV depth map of each sampled view in the set of sampled views.
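The selection step above is not specified in the abstract; one plausible rule is to pick the k sampled view positions nearest the target view position in the viewing volume. A sketch under that assumption (all names are illustrative):

```python
import math

def select_sampled_views(target_pos, sampled_positions, k=4):
    """Pick the k sampled views closest to the target view position.

    target_pos and sampled_positions are (x, y, z) points inside the
    viewing volume; returns indices into sampled_positions, nearest first.
    """
    order = sorted(range(len(sampled_positions)),
                   key=lambda i: math.dist(target_pos, sampled_positions[i]))
    return order[:k]

# A target view near the first and third sampled positions selects those two.
chosen = select_sampled_views((0.0, 0.0, 0.0),
                              [(1.0, 0, 0), (5.0, 0, 0), (0.5, 0, 0)], k=2)
```

The WFOV images and depth maps of the chosen views would then be warped and blended toward the target view position.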
REAL-TIME TRACKING FOR THREE-DIMENSIONAL IMAGING
A system, comprising: a stereoscopic camera configured to acquire multiple pairs of images of a surface; a display; and a processor configured to: sequentially acquire multiple image pairs of the surface from the camera; incrementally construct a 3D model from the image pairs concurrently with the sequential image acquisition, by: for each currently acquired image pair, registering the currently acquired image pair to a location on the 3D model, and adding the currently acquired image pair to the 3D model when: a) the registration succeeds and b) a delta of the registered image pair exceeds a threshold; rendering the incremental construction of the 3D model on the display; and concurrently tracking the incremental construction by displaying a graphic indicator that simultaneously indicates: i) the registered location, ii) when the viewing distance is within a focal range, and iii) when the viewing distance is not within the focal range.
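The accept/reject rule in the claim (add a pair only when registration succeeds and its delta exceeds a threshold) can be sketched as a single update step. The callables, return conventions, and threshold value below are assumptions for illustration; the claim fixes only the rule itself:

```python
DELTA_THRESHOLD = 0.1  # assumed value; the claim leaves the threshold open

def update_model(model, image_pair, register, delta_of):
    """One step of incremental 3D model construction.

    register(model, pair) returns a location on the model, or None on
    failure; delta_of(model, pair, location) measures how much new
    surface the pair would add. Returns (model, location, added).
    """
    location = register(model, image_pair)
    if location is None:
        return model, None, False       # a) registration failed: discard
    if delta_of(model, image_pair, location) <= DELTA_THRESHOLD:
        return model, location, False   # b) redundant view: track, don't add
    return model + [(image_pair, location)], location, True
```

The returned location is what the claimed graphic indicator would track on the rendered model between acquisitions.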
Electronic device for generating 360-degree three-dimensional image and method therefor
The present disclosure relates to an electronic device that captures a plurality of images using a plurality of cameras, generates a left-eye-view spherical image and a right-eye-view spherical image by classifying each of the plurality of images as a left-eye-view image or a right-eye-view image, obtains depth information using the generated left-eye-view and right-eye-view spherical images, and generates a 360-degree three-dimensional image whose three-dimensional effect is controlled using the obtained depth information, and to an image processing method therefor.
Light field based reflection removal
A method of processing light field images for separating a transmitted layer from a reflection layer. The method comprises capturing a plurality of views at a plurality of viewpoints with different polarization angles; obtaining an initial disparity estimation for a first view using SIFT-flow, and warping the first view to a reference view; optimizing an objective function comprising a transmitted layer and a secondary layer using an Augmented Lagrange Multiplier (ALM) with an Alternating Direction Minimizing (ADM) strategy; updating the disparity estimation for the first view; repeating the steps of optimizing the objective function and updating the disparity estimation until the change in the objective function between two consecutive iterations is below a threshold; and separating the transmitted layer and the secondary layer using the disparity estimation for the first view.
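The outer loop of this method (alternate layer optimization and disparity refinement until the objective stalls) can be sketched independently of the ALM/ADM inner solver. Both callables below are placeholders for the patent's solvers; only the stopping rule follows the abstract:

```python
def separate_layers(objective, update_disparity, disparity0,
                    tol=1e-4, max_iter=50):
    """Alternate layer optimization with disparity refinement.

    objective(disparity) returns (value, transmitted, secondary);
    update_disparity(disparity, transmitted, secondary) refines the
    disparity given the current layer estimates. Iteration stops when
    the objective changes by less than tol between consecutive passes.
    """
    disparity, prev = disparity0, float("inf")
    for _ in range(max_iter):
        value, transmitted, secondary = objective(disparity)
        if abs(prev - value) < tol:
            break
        prev = value
        disparity = update_disparity(disparity, transmitted, secondary)
    return transmitted, secondary, disparity
```

With a toy objective that shrinks as the disparity is halved, the loop converges well within the iteration cap.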
Method and system for determining plant leaf surface roughness
Provided are a method and system for determining plant leaf surface roughness. The method includes: acquiring a plurality of continuously captured zoomed-in leaf images by using a zoom microscope image capture system; determining a feature match set according to the zoomed-in leaf images; removing images for which the number of feature matches in the feature match set is less than a second set threshold, to obtain n screened images; combining the n screened images to obtain a combined grayscale image; and determining plant leaf surface roughness according to the combined grayscale image. In the present disclosure, a plurality of zoomed-in leaf images are first acquired directly, quickly and accurately by the zoom microscope image capture system; the zoomed-in leaf images are then screened and combined to form a combined grayscale image; finally, the three-dimensional roughness of the plant leaf surface is determined quickly and accurately according to the combined grayscale image.
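The screening step and a roughness readout can be sketched as follows. The abstract does not specify the roughness formula; the arithmetic mean deviation Sa is one common areal roughness parameter and is assumed here, as are all names:

```python
def screen_images(images, match_counts, min_matches):
    """Keep only images whose feature-match count reaches the threshold.

    images and match_counts are parallel lists; min_matches plays the
    role of the 'second set threshold' in the abstract.
    """
    return [img for img, m in zip(images, match_counts) if m >= min_matches]

def surface_roughness_sa(height_map):
    """Arithmetic mean roughness Sa of a combined grayscale height map,
    treating gray level as a proxy for surface height (an assumption)."""
    n = len(height_map) * len(height_map[0])
    mean = sum(sum(row) for row in height_map) / n
    return sum(abs(v - mean) for row in height_map for v in row) / n
```

In the patented pipeline the combined grayscale image would come from merging the screened focus-stack images before this readout.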
Depth camera resource management
The described implementations relate to managing depth cameras. One example can include a depth camera that includes an emitter for illuminating a scene with light and a sensor for sensing light reflected from the scene. The example can also include a resource-conserving camera control component configured to determine when the scene is static by comparing captures and/or frames of the scene from the sensor. The resource-conserving camera control component can operate the depth camera in resource-constrained modes while the scene remains static.
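The static-scene test amounts to comparing consecutive frames and switching modes after the scene has been quiet for a while. A minimal sketch, where the tolerances, streak length, and mode names are assumptions rather than values from the patent:

```python
def scene_is_static(prev_frame, curr_frame, pixel_tol=2, changed_fraction=0.01):
    """Declare the scene static when few depth pixels changed.

    Frames are flat lists of depth values; a pixel counts as changed
    when it moves by more than pixel_tol.
    """
    changed = sum(1 for a, b in zip(prev_frame, curr_frame)
                  if abs(a - b) > pixel_tol)
    return changed / len(prev_frame) <= changed_fraction

def choose_mode(static_streak, streak_needed=30):
    """Drop into a resource-constrained mode after a run of static frames,
    e.g. lowering emitter power or frame rate."""
    return "low_power" if static_streak >= streak_needed else "full_rate"
```

Requiring a streak of static frames, rather than a single quiet comparison, keeps brief sensor noise from toggling the emitter.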
DUAL-MODE DATA CAPTURE SYSTEM FOR COLLISION DETECTION AND OBJECT DIMENSIONING
A dual-mode data capture system includes a capture controller, a point cloud generator, a collision detector, a plurality of cameras viewing a capture volume, and a motion sensor to generate a detection signal when an object arrives at a capture position within the volume. The controller: activates a subset of cameras in a collision detection mode to capture sequences of images of the volume; responsive to receiving the detection signal, activates the cameras in a dimensioning mode to capture a synchronous set of images of the capture position. The collision detector: determines whether the sequences of images indicate a potential collision; and responsive to detection of a potential collision, generates a warning. The point cloud generator: receives the synchronous set of images and generates a point cloud representing the object based on the synchronous set of images, for use in determining dimensions of the object.
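The controller's two modes form a small state machine: a camera subset streams in collision-detection mode, and the motion sensor's detection signal switches every camera to a synchronized dimensioning capture. A sketch of that switching logic, with class and method names invented for illustration:

```python
class CaptureController:
    """Minimal sketch of the dual-mode capture controller."""

    def __init__(self, cameras, collision_subset):
        self.cameras = set(cameras)
        self.subset = set(collision_subset)  # cameras used for collision detection
        self.mode = "collision"

    def on_detection_signal(self):
        """Motion sensor reports an object at the capture position:
        activate all cameras for a synchronous dimensioning capture."""
        self.mode = "dimensioning"
        return sorted(self.cameras)

    def active_cameras(self):
        """Cameras that should be capturing in the current mode."""
        return sorted(self.subset if self.mode == "collision" else self.cameras)
```

Running only the subset in collision mode keeps bandwidth low until an object actually needs to be dimensioned.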
METHODS AND APPARATUS FOR GENERATING A THREE-DIMENSIONAL RECONSTRUCTION OF AN OBJECT WITH REDUCED DISTORTION
Methods, systems, and computer readable media for generating a three-dimensional reconstruction of an object with reduced distortion are described. In some aspects, a system includes at least two image sensors, at least two projectors, and a processor. Each image sensor is configured to capture one or more images of an object. Each projector is configured to illuminate the object with an associated optical pattern and from a different perspective. The processor is configured to perform the acts of receiving, from each image sensor, for each projector, images of the object illuminated with the associated optical pattern and generating, from the received images, a three-dimensional reconstruction of the object. The three-dimensional reconstruction has reduced distortion due to the received images of the object being generated when each projector illuminates the object with an associated optical pattern from a different perspective.
METHOD AND DEVICE FOR OBTAINING 3D IMAGES
A method and device are provided for obtaining a 3D image. The method includes sequentially projecting a plurality of beams to an object, each of the plurality of projected beams corresponding to a respective one of a plurality of sectors included in a pattern; detecting a plurality of beams reflected off of the object corresponding to the plurality of projected beams; identifying time-of-flight (ToF) of each of the plurality of projected beams based on the plurality of detected beams; identifying a distortion of the pattern, which is caused by the object, based on the plurality of detected beams; and generating a depth map for the object based on the distortion of the pattern and the ToF of each of the plurality of projected beams, wherein the plurality of detected beams are commonly used to identify the ToF and the distortion of the pattern.
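The claim's key point is that the same detected beams feed two depth cues: time-of-flight and pattern distortion. The abstract does not state how the two cues are combined into one depth map; a per-pixel weighted average is one plausible fusion rule, sketched here with illustrative names:

```python
def fused_depth(tof_depth, pattern_depth, tof_weight=0.5):
    """Blend per-pixel ToF depth with structured-light (pattern) depth.

    tof_depth and pattern_depth are parallel lists of per-pixel depths in
    the same units; tof_weight trades ToF's long-range stability against
    the pattern cue's short-range precision. The averaging rule is an
    assumption, not the claimed combination.
    """
    if not 0.0 <= tof_weight <= 1.0:
        raise ValueError("tof_weight must lie in [0, 1]")
    w = tof_weight
    return [w * t + (1 - w) * p for t, p in zip(tof_depth, pattern_depth)]
```

Because both cues come from one set of detected beams, no extra exposure is needed to obtain the second measurement.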