Patent classifications
G06T7/207
Evaluation value calculation device and electronic endoscope system
An electronic endoscope system includes a plotting unit which plots pixel correspondence points, which correspond to pixels that constitute an intracavitary color image that has a plurality of color components, on a target plane according to color components of the pixel correspondence points, the target plane intersecting the origin of a predetermined color space; an axis setting unit which sets a reference axis in the target plane based on pixel correspondence points plotted on the target plane; and an evaluation value calculating unit which calculates a prescribed evaluation value with respect to the captured image based on a positional relationship between the reference axis and the pixel correspondence points.
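The plot-axis-evaluate pipeline above can be sketched in a few lines. This is a minimal illustration only; the choice of target plane (here the R-G plane), the axis-fitting rule (direction of the mean point through the origin), and the distance-based evaluation value are all assumptions for the sketch, not the patent's actual method:

```python
import math

def evaluation_value(pixels):
    """Sketch: plot RGB pixel correspondence points on a target plane
    through the origin (here: the R-G plane), set a reference axis from
    the plotted points, and return an evaluation value based on the
    positional relationship between the points and that axis."""
    # Plot pixel correspondence points on the target plane (R, G components).
    pts = [(r, g) for r, g, b in pixels]
    # Set the reference axis as the direction of the mean point through the origin.
    mx = sum(p[0] for p in pts) / len(pts)
    my = sum(p[1] for p in pts) / len(pts)
    norm = math.hypot(mx, my) or 1.0
    ux, uy = mx / norm, my / norm
    # Evaluation value: mean perpendicular distance of points from the axis.
    return sum(abs(x * uy - y * ux) for x, y in pts) / len(pts)
```

Points lying exactly along one hue direction score zero; a wider spread around the reference axis yields a larger value.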
Systems, Methods, and Apparatus for Aligning Image Frames
Described examples relate to an apparatus comprising a memory for storing image frames and at least one processor. The at least one processor may be configured to receive a plurality of image frames from an image capture device and downsize each of the plurality of image frames to generate a plurality of versions of each image frame at a plurality of different sizes. The at least one processor may also be configured to determine alignment information for a first version of a first image frame. The alignment information may include a first alignment vector for identifying image data in a first version of a second image frame that corresponds to image data in the first version of the first image frame. Further, the at least one processor may be configured to determine a first initial alignment vector for identifying image data in a first version of a third image frame based on at least the first alignment vector.
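The vector-propagation idea above, where the alignment found for the second frame seeds the search for the third, can be sketched on 1-D "frames". The SAD cost, the search ranges, and the 1-D simplification are assumptions for illustration, not the claimed implementation:

```python
def downsize(frame):
    """Halve a 1-D 'frame' by averaging adjacent samples (one pyramid level)."""
    return [(frame[i] + frame[i + 1]) / 2 for i in range(0, len(frame) - 1, 2)]

def best_shift(ref, tgt, candidates):
    """Return the shift (alignment vector) minimizing mean absolute
    difference over the overlapping samples."""
    def sad(s):
        pairs = [(ref[i], tgt[i + s]) for i in range(len(ref))
                 if 0 <= i + s < len(tgt)]
        return sum(abs(a - b) for a, b in pairs) / len(pairs)
    return min(candidates, key=sad)

# Hypothetical usage: align frame2 to frame1, then seed frame3's search
# with that result (the "initial alignment vector") instead of from zero.
frame1 = [0, 0, 1, 5, 9, 5, 1, 0, 0, 0]
frame2 = frame1[2:] + [0, 0]          # frame1 shifted left by 2
frame3 = frame1[4:] + [0, 0, 0, 0]    # shifted further in the same direction
v12 = best_shift(frame1, frame2, range(-3, 4))
v13 = best_shift(frame1, frame3, range(v12 - 3, v12 + 4))
```

Seeding the search around the previous vector keeps the candidate window small even when cumulative motion is large, which is the usual payoff of this kind of coarse-to-fine, frame-to-frame propagation.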
SYSTEMS AND METHODS FOR ADDING PERSISTENCE TO SINGLE PHOTON AVALANCHE DIODE IMAGERY
A system for adding persistence to SPAD imagery is configurable to capture, using a SPAD array, a plurality of image frames. The system is configurable to capture, using an IMU, pose data associated with the plurality of image frames. The pose data includes at least respective pose data associated with each of the plurality of image frames. The system is configurable to determine a persistence term based on the pose data. The system is also configurable to generate a composite image based on the plurality of image frames, the respective pose data associated with each of the plurality of image frames, and the persistence term. The persistence term defines a contribution of each of the plurality of image frames to the composite image.
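The persistence-weighted compositing described above can be sketched as follows. The inverse-motion mapping from pose data to a persistence term and the exponential-decay weighting are illustrative assumptions; the abstract only states that the term is derived from pose data and controls each frame's contribution:

```python
def persistence_term(pose_deltas, k=1.0):
    """Hypothetical mapping: more inter-frame motion (from IMU pose data)
    -> lower persistence, so older frames contribute less when the camera
    is moving and motion blur in the composite is reduced."""
    motion = sum(pose_deltas) / len(pose_deltas)
    return 1.0 / (1.0 + k * motion)

def composite(frames, p):
    """Blend frames (index 0 = newest ... n-1 = oldest, each a list of
    pixel values) with weights p**i, so the persistence term p defines
    the contribution of each frame to the composite image."""
    weights = [p ** i for i in range(len(frames))]
    total = sum(weights)
    return [sum(w * f[j] for w, f in zip(weights, frames)) / total
            for j in range(len(frames[0]))]
```

With `p = 0` only the newest frame survives; with `p = 1` all frames average equally, which is the trade-off between noise reduction (useful for sparse SPAD counts) and motion smear.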
Auto-focus tracking for remote flying targets
A system for automatically maintaining focus while tracking remote flying objects includes an interface and processor. The interface is configured to receive two or more images. The processor is configured to determine a bounding box for an object in the two or more images; determine an estimated position for the object in a future image; and determine an estimated focus setting and an estimated pointing direction for a lens system.
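A constant-velocity extrapolation of the bounding-box center is one plausible reading of "estimated position in a future image", and the apparent-size change of the box is a common range cue for focus. Both are assumptions for this sketch; the abstract does not specify the estimator:

```python
def center(box):
    """Center of an (x, y, w, h) bounding box."""
    x, y, w, h = box
    return (x + w / 2, y + h / 2)

def predict(box_prev, box_curr):
    """Hypothetical sketch: extrapolate the object's center one frame
    ahead assuming constant velocity, and derive a focus proxy from the
    width ratio (a growing box suggests a nearer target, so the lens
    focus setting and pointing direction can be pre-adjusted)."""
    (px, py), (cx, cy) = center(box_prev), center(box_curr)
    est_pos = (2 * cx - px, 2 * cy - py)   # constant-velocity estimate
    est_focus = box_curr[2] / box_prev[2]  # stand-in for a lens focus setting
    return est_pos, est_focus
```

In practice the estimated position would feed the gimbal's pointing direction and the focus proxy would be mapped through the lens's focus-versus-range calibration.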
Wide-area motion imaging systems and methods
A wide-area motion imaging system provides 360° persistent surveillance with a camera array that is small, light-weight, and operates at low power. The camera array is mounted on a tethered drone, which can hover at heights of up to 400′, and includes small imagers fitted with lenses of different fixed focal lengths. The tether provides power, communication, and a data link from the camera array to a ground processing server that receives, processes and stores the imagery. The server also collects absolute and relative position data from a global positioning system (GPS) receiver and an inertial measurement unit (IMU) carried by the drone. The server uses this position data to correct the rolling shutter effect and to stabilize and georectify the final images, which can be stitched together and shown to a user live or in playback via a separate user interface.
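The rolling-shutter correction mentioned above can be sketched in miniature: rows of a rolling-shutter frame are read out at successive instants, so drift measured by the IMU over the readout interval skews the image row by row. The per-row circular shift and the linear readout model are assumptions for this sketch only:

```python
def correct_rolling_shutter(rows, row_shift_px):
    """Hypothetical sketch: given the horizontal drift (in pixels) that
    IMU data indicates accumulated over the full frame readout, shift
    each row back in proportion to its readout time to undo the skew."""
    n = len(rows)
    out = []
    for r, row in enumerate(rows):
        # Row r is read at fraction r/(n-1) of the readout interval.
        shift = round(row_shift_px * r / (n - 1)) if n > 1 else 0
        out.append(row[shift:] + row[:shift])  # circular shift for the sketch
    return out
```

A real pipeline would resample with sub-pixel interpolation and fold the same pose data into stabilization and georectification, as the abstract describes.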
METHOD AND SYSTEM FOR ESTIMATING MOTION OF REAL-TIME IMAGE TARGET BETWEEN SUCCESSIVE FRAMES
A method of estimating a motion of a real-time image target between successive frames according to an embodiment of the present invention is a method of estimating a motion of a real-time image target between successive frames by a motion estimation application executed by at least one processor of a terminal, including detecting a target object in a first frame image, generating a first frame-down image by downscaling the first frame image, setting a plurality of tracking points TP for the target object in the first frame-down image, obtaining a second frame image consecutive to the first frame image after a predetermined time, generating a second frame-down image by downscaling the second frame image, and tracking the target object in the second frame-down image based on the plurality of tracking points TP.
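The downscale-then-track steps above can be sketched as follows. The 2x2-average downscaling and the brute-force window search for re-locating a tracking point are illustrative stand-ins; the claimed method does not specify either:

```python
def downscale(img):
    """Generate a frame-down image: halve a 2-D image (list of rows)
    by 2x2 averaging."""
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4
             for x in range(0, len(img[0]) - 1, 2)]
            for y in range(0, len(img) - 1, 2)]

def track_point(img1, img2, pt, radius=2):
    """Hypothetical sketch: re-locate tracking point `pt` (x, y) from the
    first frame-down image in the second by searching a small window for
    the best-matching pixel value."""
    x, y = pt
    v = img1[y][x]
    best, best_err = pt, float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            nx, ny = x + dx, y + dy
            if 0 <= ny < len(img2) and 0 <= nx < len(img2[0]):
                err = abs(img2[ny][nx] - v)
                if err < best_err:
                    best, best_err = (nx, ny), err
    return best
```

Working on the downscaled frames keeps the per-point search window small, which is what makes per-frame tracking feasible in real time on a terminal's processor.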