Patent classifications
G06T7/593
Method and apparatus for sensing moving ball
Provided are an apparatus and method for sensing a moving ball, which extract a feature portion, such as a trademark or logo indicated on the ball, from consecutive images of the moving ball acquired by an image acquisition unit embodied by a predetermined camera device, and calculate a spin axis and spin amount of the moving ball based on the feature portion. Spin of the ball is thus calculated simply, rapidly, and accurately with low computational load, thereby achieving rapid and stable calculation of the ball's spin even in a relatively low-performance system. The sensing apparatus includes an image acquisition unit for acquiring consecutive images, an image processing unit for extracting a feature portion from the acquired images, and a spin calculation unit for calculating spin using the extracted feature portion.
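The spin calculation the abstract describes can be sketched as follows: given one feature's direction on the ball surface (a unit vector from the ball center) in two consecutive frames, the rotation axis follows from the cross product and the rotation angle from the dot product. This is a minimal illustrative sketch, not the patented method; the function name is hypothetical, and it assumes the feature moves in a plane perpendicular to the spin axis.

```python
import math

def spin_from_feature(p1, p2, frame_dt):
    """Estimate spin axis and spin rate (rpm) from one feature's unit
    direction on the ball surface in two consecutive frames, captured
    frame_dt seconds apart. Illustrative sketch: assumes the feature
    rotates in a plane perpendicular to the spin axis."""
    # Rotation axis ~ normalized cross product of the two directions.
    ax = (p1[1]*p2[2] - p1[2]*p2[1],
          p1[2]*p2[0] - p1[0]*p2[2],
          p1[0]*p2[1] - p1[1]*p2[0])
    n = math.sqrt(sum(c*c for c in ax))
    axis = tuple(c / n for c in ax)
    # Rotation angle between frames from the dot product.
    dot = sum(a*b for a, b in zip(p1, p2))
    angle = math.acos(max(-1.0, min(1.0, dot)))
    # Convert radians-per-frame to revolutions per minute.
    rpm = angle / (2.0 * math.pi) / frame_dt * 60.0
    return axis, rpm
```

A real implementation would average over many features and frames to reject matching noise.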
Passive wide-area three-dimensional imaging
Radar, lidar, and other active 3D imaging techniques require large, heavy sensors that consume lots of power. Passive 3D imaging techniques based on feature matching are computationally expensive and limited by the quality of the feature matching. Fortunately, there is a robust, computationally inexpensive way to generate 3D images from full-motion video acquired from a platform that moves relative to the scene. The full-motion video frames are registered to each other and mapped to the scene coordinates using data about the trajectory of the platform with respect to the scene. The time derivative of the registered frames equals the product of the height map of the scene, the projected angular velocity of the platform, and the spatial gradient of the registered frames. This relationship can be solved in (near) real time to produce the height map of the scene from the full-motion video and the trajectory.
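The stated relationship, time derivative of the registered frames equals the product of the height map, the projected angular velocity, and the spatial gradient, can be inverted per pixel. The sketch below assumes a one-dimensional gradient and a scalar angular velocity for simplicity; real systems would solve a least-squares problem over both spatial gradient components, and the function name is illustrative.

```python
def height_from_motion(dI_dt, grad_x, omega):
    """Per-pixel height map from the relation dI/dt = h * omega * dI/dx.
    dI_dt:  2-D list of temporal derivatives of the registered frames.
    grad_x: 2-D list of spatial gradients of the registered frames.
    omega:  projected angular velocity of the platform (scalar here).
    Pixels with near-zero gradient carry no height information, so
    they are set to 0.0 in this sketch."""
    eps = 1e-9
    return [[dt / (omega * gx) if abs(gx) > eps else 0.0
             for dt, gx in zip(row_t, row_x)]
            for row_t, row_x in zip(dI_dt, grad_x)]
```

Because the inversion is a per-pixel division, it is cheap enough to run in near real time, matching the abstract's claim of low computational cost.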
DEVICE AND METHOD FOR DEPTH ESTIMATION USING COLOR IMAGES
The present disclosure relates to methods and devices for performing depth estimation on image data. In one example, a device performs depth estimation on first and second images captured using one or more cameras having a color filter array. Each image of the first and second images comprises multiple color channels. Each color channel of the multiple color channels corresponds to a respective color channel of the color filter array. The device performs the depth estimation by estimating disparity from the color channels of the first and second images.
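Estimating disparity from the color channels can be illustrated with classic block matching, where the sum-of-absolute-differences (SAD) cost is accumulated over all channels before picking the best shift. This is a generic sketch of per-channel disparity estimation, not the disclosed device's algorithm; the function name and parameters are illustrative.

```python
def disparity_sad(left_channels, right_channels, x, y, max_d, win=1):
    """Estimate disparity at pixel (x, y) by block matching along the
    scanline, summing SAD costs over every color channel.
    left_channels / right_channels: lists of 2-D intensity arrays,
    one per color channel of the color filter array."""
    best_d, best_cost = 0, float('inf')
    for d in range(max_d + 1):
        cost = 0.0
        for L, R in zip(left_channels, right_channels):
            # Accumulate matching cost over a (2*win+1)^2 window.
            for dy in range(-win, win + 1):
                for dx in range(-win, win + 1):
                    cost += abs(L[y+dy][x+dx] - R[y+dy][x+dx-d])
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

Summing costs across channels makes the match more robust than using a single luminance image, since an edge faint in one channel may be strong in another.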
DEVICE AND METHOD FOR ACQUIRING DEPTH OF SPACE BY USING CAMERA
A device and method of obtaining a depth of a space are provided. The method includes obtaining a plurality of images by photographing a periphery of a camera a plurality of times while sequentially rotating the camera by a preset angle, identifying a first feature region in a first image and an n-th feature region in an n-th image, the n-th feature region being identical with the first feature region, by comparing adjacent images between the first image and the n-th image from among the plurality of images, obtaining a base line value with respect to the first image and the n-th image, obtaining a disparity value between the first feature region and the n-th feature region, and determining a depth of the first feature region or the n-th feature region based on at least the base line value and the disparity value.
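The final step, determining depth from the base line value and the disparity value, is standard pinhole triangulation. The sketch below also shows one plausible way to obtain a baseline for a camera rotated about an offset axis (the chord between the two camera positions); both function names are illustrative, and the rotation-offset geometry is an assumption, not taken from the disclosure.

```python
import math

def baseline_from_rotation(radius, angle_rad):
    """Chord length between two camera positions on a circle of the
    given radius, separated by angle_rad. One plausible baseline for a
    camera rotated by a preset angle about an offset axis (assumption)."""
    return 2.0 * radius * math.sin(angle_rad / 2.0)

def depth_from_disparity(baseline, focal_px, disparity_px):
    """Pinhole triangulation: depth = baseline * focal / disparity,
    with focal length and disparity both expressed in pixels."""
    return baseline * focal_px / disparity_px
```

For example, a 0.1 m baseline, a 1000-pixel focal length, and a 5-pixel disparity between the first and n-th feature regions give a depth of 20 m.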
System and method to simultaneously track multiple organisms at high resolution
A microscopy system includes multiple cameras working together to capture image data of a sample having a group of organisms distributed over a wide area, under the influence of an excitation instrument. A first processor is coupled to each camera to process the image data captured by that camera. Outputs from the multiple first processors are aggregated and streamed serially to a second processor for tracking the organisms. The multiple cameras capturing images from the sample, configured with 50% or more overlap, can allow 3D tracking of the organisms through photogrammetry.
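The 3D tracking through photogrammetry that the overlap enables can be sketched with midpoint triangulation: an organism detected by two overlapping cameras defines two viewing rays, and its 3D position is taken as the midpoint of the shortest segment between them. This is a generic photogrammetry sketch under assumed pinhole geometry, not the system's actual pipeline.

```python
def triangulate_2cam(c1, d1, c2, d2):
    """Midpoint triangulation of a point seen by two cameras.
    c1, c2: camera centers; d1, d2: unit viewing directions toward the
    tracked organism. Returns the midpoint of the closest approach of
    the two rays (rays must not be parallel)."""
    sub = lambda u, v: tuple(a - b for a, b in zip(u, v))
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    w = sub(c1, c2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b  # zero when the rays are parallel
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = tuple(ci + t1 * di for ci, di in zip(c1, d1))
    p2 = tuple(ci + t2 * di for ci, di in zip(c2, d2))
    return tuple((u + v) / 2.0 for u, v in zip(p1, p2))
```

With 50% or more overlap, most organisms are visible to at least two cameras at any instant, so a ray pair is almost always available for triangulation.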
Photography-based 3D modeling system and method, and automatic 3D modeling apparatus and method
The present disclosure discloses a photography-based 3D modeling system and method, and an automatic 3D modeling apparatus and method, including: (S1) attaching a mobile device and a camera to the same camera stand; (S2) obtaining multiple images used for positioning from the camera or the mobile device during movement of the stand, and obtaining a position and a direction of each photo capture point, to build a tracking map that uses a global coordinate system; (S3) generating 3D models on the mobile device or a remote server based on an image used for 3D modeling at each photo capture point; and (S4) placing the individual 3D models of all photo capture points in the global three-dimensional coordinate system based on the position and the direction obtained in S2, and connecting the individual 3D models of multiple photo capture points to generate an overall 3D model that includes multiple photo capture points.
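Step (S4), placing each capture point's individual 3D model into the global coordinate system using the position and direction from step (S2), amounts to a rigid transform of the model's vertices. The sketch below reduces the direction to a yaw angle about the vertical axis for simplicity; the function name and that reduction are assumptions for illustration, and a full implementation would use a 3D rotation.

```python
import math

def to_global(points, position, yaw_rad):
    """Map local model vertices into the global tracking coordinate
    system: rotate by the capture point's yaw about the vertical axis,
    then translate by its global position (x, y, z)."""
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    out = []
    for x, y, z in points:
        gx = c * x - s * y + position[0]
        gy = s * x + c * y + position[1]
        out.append((gx, gy, z + position[2]))
    return out
```

Applying this transform to every capture point's model expresses all of them in the single global frame of the tracking map, after which adjacent models can be connected into the overall 3D model.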