Patent classifications
G06T2207/20088
VIDEO PROCESSING TECHNIQUE FOR 3D TARGET LOCATION IDENTIFICATION
A method and system for determining the location of objects using a plurality of full motion video cameras where the location is based on the intersecting portions of a plurality of three-dimensional shapes that are generated from the video data provided by the cameras. The three-dimensional shapes include two-dimensional shapes that contain predetermined traits in each of the frames of the video signals.
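The intersection idea in this abstract can be sketched with coarse voxel volumes standing in for the back-projected three-dimensional shapes. This is an illustrative toy, not the patent's method: each camera's volume is simulated as a set of voxel coordinates, and the target estimate is the centroid of the voxels common to all cameras.

```python
# Hedged sketch: locate a target as the intersection of per-camera 3D volumes.
# Each camera's back-projected shape is simulated as a set of voxel
# coordinates; all names are illustrative.

def intersect_volumes(volumes):
    """Return the voxels common to every camera's volume."""
    common = set(volumes[0])
    for vol in volumes[1:]:
        common &= set(vol)
    return common

def centroid(voxels):
    """Average the common voxel coordinates to estimate the target location."""
    n = len(voxels)
    xs, ys, zs = zip(*voxels)
    return (sum(xs) / n, sum(ys) / n, sum(zs) / n)

# Two toy camera volumes overlapping around (1, 1, 1)-(1, 1, 2).
cam_a = {(0, 0, 0), (1, 1, 1), (1, 1, 2), (2, 2, 2)}
cam_b = {(1, 1, 1), (1, 1, 2), (3, 3, 3)}

hit = intersect_volumes([cam_a, cam_b])
print(sorted(hit))    # [(1, 1, 1), (1, 1, 2)]
print(centroid(hit))  # (1.0, 1.0, 1.5)
```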
Device, system and method for automatic calibration of image devices
A device, system and method for automatic calibration of image devices is provided. Triplets of at least three image devices, including a projector, are in non-collinear arrangements, and pairs of the image devices have overlapping fields of view on a physical object. Pixel correspondences between the pairs are used to determine relative vectors between the image devices. Relative locations between each of the image devices are determined based on the relative vectors, with a relative distance between one pair of the image devices set to an arbitrary distance, the relative locations being further relative to a cloud-of-points representing the object. A model of the object and the cloud-of-points are aligned to transform the relative locations of each of the image devices to locations relative to the model. The projector is controlled to project onto the object based at least on the locations relative to the model.
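The final alignment step, bringing the cloud-of-points onto a model of the object so that device locations can be expressed in model coordinates, can be sketched in a translation-only toy version. A real alignment would also solve for rotation and scale (e.g. Procrustes or ICP); every name below is illustrative, not from the patent.

```python
# Hedged sketch: align a point cloud to a model by matching centroids,
# then carry device locations into the model frame. Translation only.

def mean_point(points):
    """Centroid of a list of 3D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def align_translation(cloud, model_points):
    """Translation taking the cloud centroid onto the model centroid."""
    c, m = mean_point(cloud), mean_point(model_points)
    return tuple(m[i] - c[i] for i in range(3))

def to_model_frame(device_locs, offset):
    """Apply the alignment translation to device locations."""
    return [tuple(p[i] + offset[i] for i in range(3)) for p in device_locs]

# A toy cloud and the same shape shifted by (10, 0, 0) in the model frame.
cloud = [(0, 0, 0), (2, 0, 0), (1, 3, 0)]
model = [(10, 0, 0), (12, 0, 0), (11, 3, 0)]

offset = align_translation(cloud, model)
print(offset)                             # (10.0, 0.0, 0.0)
print(to_model_frame([(1, 1, 0)], offset))  # [(11.0, 1.0, 0.0)]
```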
THREE-DIMENSIONAL RECONSTRUCTION METHOD
Provided is a three-dimensional reconstruction method of reconstructing a three-dimensional model from multi-view images. The method includes: selecting two frames from the multi-view images; calculating image information of each of the two frames; selecting a method of calculating corresponding keypoints in the two frames, according to the image information; and calculating the corresponding keypoints using the selected method.
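The method-selection step above can be sketched by computing a simple per-frame statistic and switching correspondence strategies on it. Intensity variance is used here as a stand-in for the patent's "image information", and the two method names and the threshold are assumptions for illustration only.

```python
# Hedged sketch: choose a keypoint-correspondence strategy from a simple
# texture statistic of the two selected frames.

def image_variance(pixels):
    """Population variance of a flat list of intensities."""
    n = len(pixels)
    mu = sum(pixels) / n
    return sum((p - mu) ** 2 for p in pixels) / n

def select_keypoint_method(frame_a, frame_b, threshold=100.0):
    """Low-texture frames favor optical flow; textured frames favor
    descriptor matching. Threshold and names are illustrative."""
    info = min(image_variance(frame_a), image_variance(frame_b))
    return "descriptor-matching" if info >= threshold else "optical-flow"

flat = [128] * 16          # uniform frame, no texture
textured = [0, 255] * 8    # high-contrast frame

print(select_keypoint_method(flat, flat))          # optical-flow
print(select_keypoint_method(textured, textured))  # descriptor-matching
```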
Image processing apparatus, image processing method, and non-transitory computer readable storage medium
An image processing apparatus according to the present application includes a reception unit and a specification unit. The reception unit receives image data produced through image capturing by a predetermined image capturing apparatus and including an elliptical figure. The specification unit performs projection transform of the image data so that the elliptical figure included in the image data received by the reception unit appears to be an exact circle, and specifies, based on characteristic information on the exact circle obtained through the projection transform, the exact circle to be a marker used in predetermined processing on the image data.
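The core idea, transforming the image so the detected ellipse appears as an exact circle before checking marker traits, can be illustrated in the axis-aligned case, where the projection transform reduces to an anisotropic scale. A real implementation would estimate a full homography from the ellipse parameters; this sketch only demonstrates the circularity check.

```python
# Hedged sketch: scale an axis-aligned ellipse into a circle and verify
# that the transformed points are equidistant from the center.
import math

def ellipse_to_circle_scale(major, minor):
    """Scale factors mapping an axis-aligned ellipse to a circle
    (stretch the minor axis up to the major axis)."""
    return (1.0, major / minor)

def apply_scale(points, sx, sy):
    return [(x * sx, y * sy) for x, y in points]

def max_radius_deviation(points):
    """Spread of distances from the origin; ~0 means an exact circle."""
    radii = [math.hypot(x, y) for x, y in points]
    return max(radii) - min(radii)

# Sample points on an ellipse with semi-axes 4 (x) and 2 (y).
angles = [i * math.pi / 8 for i in range(16)]
ellipse = [(4 * math.cos(a), 2 * math.sin(a)) for a in angles]

sx, sy = ellipse_to_circle_scale(4, 2)
circle = apply_scale(ellipse, sx, sy)
print(max_radius_deviation(circle))  # ~0: the points now lie on an exact circle
```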
MAPPING AND TRACKING SYSTEM WITH FEATURES IN THREE-DIMENSIONAL SPACE
LK-SURF, Robust Kalman Filter, HAR-SLAM, and Landmark Promotion SLAM methods are disclosed. LK-SURF is an image processing technique that combines Lucas-Kanade feature tracking with Speeded-Up Robust Features to perform spatial and temporal tracking using stereo images to produce 3D features that can be tracked and identified. The Robust Kalman Filter is an extension of the Kalman Filter algorithm that improves the ability to remove erroneous observations using Principal Component Analysis and the X84 outlier rejection rule. Hierarchical Active Ripple SLAM is a new SLAM architecture that breaks the traditional state space of SLAM into a chain of smaller state spaces, allowing multiple tracked objects, multiple sensors, and multiple updates to occur in linear time with linear storage with respect to the number of tracked objects, landmarks, and estimated object locations. In Landmark Promotion SLAM, only reliable mapped landmarks are promoted through various layers of SLAM to generate larger maps.
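The X84 rejection rule referenced above is a standard robust-statistics device: it discards observations whose residuals lie farther than k median absolute deviations (MADs) from the median, with k = 5.2 corresponding to roughly 3.5 standard deviations under a Gaussian model. A minimal sketch of the rule itself (not of the patent's Robust Kalman Filter):

```python
# Hedged sketch of the X84 outlier rejection rule: keep residuals within
# k * MAD of the median.

def median(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def x84_inliers(residuals, k=5.2):
    """Filter out gross errors using the median and the MAD."""
    med = median(residuals)
    mad = median([abs(r - med) for r in residuals])
    return [r for r in residuals if abs(r - med) <= k * mad]

obs = [1.0, 1.1, 0.9, 1.05, 0.95, 9.0]  # 9.0 is a gross error
print(x84_inliers(obs))  # [1.0, 1.1, 0.9, 1.05, 0.95]
```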
Object Tracking By An Unmanned Aerial Vehicle Using Visual Sensors
Systems and methods are disclosed for tracking objects in a physical environment using visual sensors onboard an autonomous unmanned aerial vehicle (UAV). In certain embodiments, images of the physical environment captured by the onboard visual sensors are processed to extract semantic information about detected objects. Processing of the captured images may involve applying machine learning techniques such as a deep convolutional neural network to extract semantic cues regarding objects detected in the images. The object tracking can be utilized, for example, to facilitate autonomous navigation by the UAV or to generate and display augmentative information regarding tracked objects to users.
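A common building block for tracking detected objects across frames is associating each existing track with the best-overlapping new detection. The greedy IoU matcher below is an illustrative sketch of that association step, not the patent's method, and the 0.3 threshold is an assumption.

```python
# Hedged sketch: greedily match tracks to detections by bounding-box
# intersection-over-union (IoU). Boxes are (x1, y1, x2, y2).

def box_area(box):
    return (box[2] - box[0]) * (box[3] - box[1])

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = box_area(a) + box_area(b) - inter
    return inter / union if union else 0.0

def associate(tracks, detections, threshold=0.3):
    """Map each track id to the index of its best unused detection."""
    matches, used = {}, set()
    for tid, tbox in tracks.items():
        best, best_iou = None, threshold
        for di, dbox in enumerate(detections):
            if di in used:
                continue
            score = iou(tbox, dbox)
            if score > best_iou:
                best, best_iou = di, score
        if best is not None:
            matches[tid] = best
            used.add(best)
    return matches

tracks = {1: (0, 0, 10, 10)}
detections = [(1, 1, 11, 11), (50, 50, 60, 60)]
print(associate(tracks, detections))  # {1: 0}
```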
Noncontact metrology probe, process for making and using same
Disclosed is a noncontact metrology probe including: a first camera including a first field of view; a second camera including a second field of view and arranged such that the second field of view overlaps the first field of view to form a prime focal volume; a third camera including a third field of view and arranged such that the third field of view overlaps the prime focal volume to form a probe focal volume; and a tracker including a tracker field of view to determine a location of the probe focal volume in the tracker field of view. Further disclosed is a process for calibrating a noncontact metrology probe, the process including: providing a noncontact metrology probe including: a first camera including a first field of view; a second camera including a second field of view; a third camera including a third field of view; and a tracker including a tracker field of view; overlapping the first field of view with the second field of view to form a prime focal volume; overlapping the prime focal volume with the third field of view to form a probe focal volume; and overlapping the tracker field of view with the probe focal volume to calibrate the noncontact metrology probe.
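The nested overlaps described above (two fields of view forming the prime focal volume, then a third narrowing it to the probe focal volume) can be illustrated with axis-aligned boxes as simplified stand-ins for camera frusta. All coordinates are illustrative.

```python
# Hedged sketch: chain axis-aligned overlaps to form the prime and probe
# focal volumes. A volume is ((xmin, ymin, zmin), (xmax, ymax, zmax)).

def overlap(box_a, box_b):
    """Axis-aligned intersection of two volumes, or None if they miss."""
    lo = tuple(max(a, b) for a, b in zip(box_a[0], box_b[0]))
    hi = tuple(min(a, b) for a, b in zip(box_a[1], box_b[1]))
    if any(l >= h for l, h in zip(lo, hi)):
        return None
    return (lo, hi)

cam1 = ((0, 0, 0), (4, 4, 4))
cam2 = ((2, 0, 0), (6, 4, 4))
cam3 = ((0, 1, 0), (4, 5, 4))

prime = overlap(cam1, cam2)   # first and second fields of view
probe = overlap(prime, cam3)  # narrowed by the third field of view
print(prime)  # ((2, 0, 0), (4, 4, 4))
print(probe)  # ((2, 1, 0), (4, 4, 4))
```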