Patent classifications
G06T2207/30212
DETECTING AND LOCATING BRIGHT LIGHT SOURCES FROM MOVING AIRCRAFT
A method and system for light source detection, comprising an aircraft carrying at least one camera. The system includes a database storing the aircraft's motion, direction, and position along with ground location information, such as the onboard navigation system database or a remotely accessible database. The system also includes a processor that accesses the database and is connected to the camera. Using image analysis and processing techniques, the processor determines the ground location corresponding to a light source from an image of that light captured by the camera: it determines the path the light traveled to the aircraft when the image was captured, and estimates the source's location as a pre-selected distance vertically above the ground along that path.
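The geometric step described above can be sketched as a ray-plane intersection: extend the camera-to-light ray until it reaches a horizontal plane a pre-selected height above flat ground (a simplifying assumption; the function name and flat-terrain model are illustrative, not the patent's implementation).

```python
def light_ground_location(aircraft_pos, ray_dir, height_above_ground):
    """Intersect the camera-to-light ray with a horizontal plane a
    pre-selected distance above the ground (terrain assumed flat, z=0).

    aircraft_pos: (x, y, z) of the camera when the image was captured.
    ray_dir: direction of the path traveled by the light, camera outward.
    Returns the estimated (x, y, z) of the light source.
    """
    x, y, z = aircraft_pos
    dx, dy, dz = ray_dir
    if dz >= 0:
        raise ValueError("ray must point downward toward the ground")
    # Solve z + t*dz = height_above_ground for the ray parameter t.
    t = (height_above_ground - z) / dz
    return (x + t * dx, y + t * dy, height_above_ground)
```

For an aircraft at 1000 m looking forward and down, the estimate lands where the ray crosses the chosen height.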
STITCHED IMAGE
Various embodiments associated with a composite image are described. In one embodiment, a handheld device comprises a launch component configured to cause a launch of a projectile. The projectile is configured to capture a plurality of images. Individual images of the plurality of images are of different segments of an area. The system also comprises an image stitch component configured to stitch the plurality of images into a composite image. The composite image is of a higher resolution than a resolution of individual images of the plurality of images.
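Stitching segment images into a higher-resolution composite can be illustrated with a minimal sketch that assumes equally sized, non-overlapping tiles laid out in a grid (a real stitcher would also register and blend overlapping images; this function and its tile layout are assumptions for illustration).

```python
def stitch_tiles(tiles):
    """Stitch a grid of equally sized image tiles into one composite.

    tiles: 2D layout of tiles; each tile is a 2D list of pixel values.
    Tiles in a row are concatenated side by side, rows are stacked,
    so the composite's pixel count exceeds any individual tile's.
    """
    composite = []
    for tile_row in tiles:
        height = len(tile_row[0])  # all tiles assumed the same height
        for r in range(height):
            # Concatenate row r of every tile in this tile row.
            composite.append([px for tile in tile_row for px in tile[r]])
    return composite
```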
INTERFERENCE DAMPING FOR CONTINUOUS GAME PLAY
Within a system for operating a virtual reality game or environment, a method is provided for identifying problematic or poorly calibrated cameras that cannot optimally identify and track game objects. The influence of these cameras on the game is reduced on the fly. Most of the time, several cameras will see a given object; their images are combined to identify the object and its location according to vectors established between cameras and trackable objects. During this combination, poorly calibrated cameras are identified, and their contribution to game play is damped, giving greater weight to images from well-calibrated cameras. As a result, stopping the game to re-calibrate these few cameras is not required, allowing continuous game play.
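The damping idea can be sketched as a confidence-weighted average of per-camera position estimates, so a poorly calibrated camera's reading is attenuated rather than discarded (the function name and weighting scheme are illustrative assumptions, not the patent's algorithm).

```python
def fuse_estimates(estimates):
    """Fuse per-camera 2D position estimates of a tracked object.

    estimates: list of ((x, y), weight) pairs, where the weight is a
    calibration-confidence score; low-confidence cameras contribute
    less instead of forcing a game stoppage for re-calibration.
    """
    total = sum(w for _, w in estimates)
    x = sum(p[0] * w for p, w in estimates) / total
    y = sum(p[1] * w for p, w in estimates) / total
    return (x, y)
```

With one well-calibrated camera weighted 3x over a suspect one, the fused position stays close to the trusted reading.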
Shearogram generation algorithm for moving platform based shearography systems
A system and method are presented for generating shearograms from raw specklegram images, which may, for example, be collected from airborne or other mobile shearography equipment. The system and method are used to detect and characterize buried mines, improvised explosive devices (IEDs), and underground tunnels, bunkers, and other structures. Among other purposes, they may also be used for rapid scanning of ship hulls and aircraft for hidden structural defects, rapid pipeline inspection, and non-contact acoustic sensing of in-water and underground sources.
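At its simplest, a shearogram highlights surface strain by comparing specklegram frames taken under different loading; the toy sketch below uses a plain absolute difference (a real moving-platform pipeline also compensates platform motion and works on phase, so this function is an illustrative assumption only).

```python
def shearogram(before, after):
    """Toy shearogram: per-pixel absolute difference between
    specklegram frames captured before and after loading, revealing
    anomalies such as strain over a buried void or defect."""
    return [[abs(a - b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(after, before)]
```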
BI-LEVEL OPTIMIZATION-BASED INFRARED AND VISIBLE LIGHT FUSION METHOD
The present invention proposes a bi-level optimization-based infrared and visible light fusion method. A paired infrared camera and visible light camera acquire the images, and the method constructs a bi-level-paradigm infrared and visible image fusion algorithm based on mathematical modeling. Binocular cameras and an NVIDIA TX2 form a high-performance computing platform, on which a high-performance solver produces a high-quality fused infrared and visible image. The system is easy to construct, since the input data can be acquired with stereo binocular infrared and visible light cameras; the program is simple to implement; and, reflecting the different imaging principles of infrared and visible light cameras, the fusion is carried out separately in an image domain and a gradient domain.
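The image-domain/gradient-domain split can be illustrated on a single 1D row: keep whichever modality has the stronger local gradient (edges from either sensor), then re-integrate the chosen gradients. This is a minimal sketch of the general idea, not the patent's bi-level optimization; the function name and anchoring choice are assumptions.

```python
def fuse_rows(ir_row, vis_row):
    """Toy 1D gradient-domain fusion of infrared and visible rows.

    At each step, take the gradient with the larger magnitude (so
    edges visible to either modality survive), then integrate the
    chosen gradients back into intensities, anchored at the infrared
    value of the first pixel.
    """
    fused = [ir_row[0]]
    for i in range(1, len(ir_row)):
        g_ir = ir_row[i] - ir_row[i - 1]
        g_vis = vis_row[i] - vis_row[i - 1]
        fused.append(fused[-1] + (g_ir if abs(g_ir) >= abs(g_vis) else g_vis))
    return fused
```

An infrared edge at one position and a visible edge at another both appear in the fused row.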
SYSTEMS, METHODS, AND DEVICES FOR UNMANNED VEHICLE DETECTION
Systems, methods, and apparatus for detecting UAVs in an RF environment are disclosed. An apparatus is constructed and configured for network communication with at least one camera. The at least one camera captures images of the RF environment and transmits video data to the apparatus. The apparatus receives RF data, generates FFT data based on the RF data, identifies at least one signal based on a first derivative and a second derivative of the FFT data, measures a direction from which the at least one signal is transmitted, and analyzes the video data. The apparatus then identifies at least one UAV to which the at least one signal is related based on the analyzed video data, the RF data, and the direction from which the at least one signal is transmitted, and controls the at least one camera based on the analyzed video data.
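Identifying a signal from the first and second derivatives of FFT data can be sketched with discrete differences: a candidate peak is where the first derivative crosses from positive to non-positive and the curvature (second derivative) is strongly negative. The function name and threshold are illustrative assumptions.

```python
def detect_signal_bins(spectrum, curvature_thresh=2.0):
    """Flag candidate signal peaks in an FFT magnitude spectrum.

    Uses the first discrete derivative (sign change: rising then
    falling) and the second derivative (sharply negative curvature)
    to separate genuine peaks from noise-floor ripple.
    """
    d1 = [spectrum[i + 1] - spectrum[i] for i in range(len(spectrum) - 1)]
    d2 = [d1[i + 1] - d1[i] for i in range(len(d1) - 1)]
    peaks = []
    for i in range(len(d2)):
        if d1[i] > 0 and d1[i + 1] <= 0 and d2[i] < -curvature_thresh:
            peaks.append(i + 1)  # bin index of the peak itself
    return peaks
```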
AERIAL VIDEO BASED POINT, DISTANCE, AND VELOCITY REAL-TIME MEASUREMENT SYSTEM
A method of determining geo-reference data for a portion of a measurement area includes providing a monitoring assembly comprising a ground station; providing an imaging assembly comprising an imaging device with a lens operably coupled to an aerial device; hovering the aerial device over a measurement area; capturing at least one image of the measurement area with the imaging device; transmitting the at least one image to the ground station using a data transmitting assembly; and scaling the at least one image to determine the geo-reference data for the portion of the measurement area by calculating the size of the lens's field-of-view (FOV) based on the distance between the imaging device and the measurement area.
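The scaling step follows from basic trigonometry: a lens with angular field of view θ hovering at distance d images a ground footprint of width 2·d·tan(θ/2), so dividing by the image width in pixels gives metres per pixel. The function below is a minimal sketch of that relation (its name and parameters are illustrative).

```python
import math

def ground_sample_distance(fov_deg, distance_m, image_width_px):
    """Metres of ground per image pixel for a nadir-pointing camera.

    The ground footprint width is 2 * d * tan(FOV/2); dividing by the
    pixel width converts pixel measurements into real distances.
    """
    footprint = 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)
    return footprint / image_width_px
```

A 90-degree lens hovering at 100 m covers 200 m of ground, so a 200-pixel-wide image resolves 1 m per pixel.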
Laser-assisted image processing
Unmanned vehicles can be terrestrial, aerial, nautical, or multi-mode. Unmanned vehicles may be used to survey a property in response to or in anticipation of damage to an object. For example, an unmanned vehicle may project a laser pattern and use information associated with the laser pattern to determine characteristics of the object.
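One common way to extract object characteristics from a projected laser pattern is structured-light triangulation; the sketch below assumes a single laser spot, a known camera-laser baseline, and a pinhole camera model (all illustrative assumptions, since the abstract does not specify the method).

```python
def laser_spot_range(baseline_m, focal_px, pixel_offset):
    """Range to a projected laser spot by triangulation.

    baseline_m: camera-to-laser separation in metres.
    focal_px: camera focal length expressed in pixels.
    pixel_offset: the spot's displacement in the image, in pixels.
    """
    return baseline_m * focal_px / pixel_offset
```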
SYSTEM AND METHOD FOR EVALUATING CAMOUFLAGE PATTERN BASED ON IMAGE ANALYSIS
According to an embodiment, a system comprises a communication module providing a communication interface, a camouflage pattern evaluation module performing an artificial intelligence-based camouflage pattern evaluation algorithm on an operation environment image and a camouflage pattern image, analyzing a similarity between the operation environment image and the camouflage pattern image, and obtaining an evaluation result of camouflage performance for the camouflage pattern in the operation environment, and a processor deriving a quantitative camouflage performance value for the evaluation result. The artificial intelligence-based camouflage performance evaluation algorithm extracts feature information for the operation environment image and the camouflage pattern image and analyzes the similarity in color, pattern, or structure between the operation environment image and the camouflage pattern image based on the extracted feature information.
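The similarity analysis between extracted feature vectors can be illustrated with cosine similarity, a standard score for comparing feature embeddings; treating it as the camouflage-performance metric here is an illustrative assumption, not the patented algorithm.

```python
import math

def cosine_similarity(env_features, pattern_features):
    """Similarity between feature vectors extracted from an operation
    environment image and a camouflage pattern image; 1.0 means the
    features align perfectly, 0.0 means they are unrelated."""
    dot = sum(x * y for x, y in zip(env_features, pattern_features))
    norm_env = math.sqrt(sum(x * x for x in env_features))
    norm_pat = math.sqrt(sum(y * y for y in pattern_features))
    return dot / (norm_env * norm_pat)
```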