Patent classifications
G06T2207/30212
IDENTIFYING, TRACKING, AND DISRUPTING UNMANNED AERIAL VEHICLES
Systems, methods, and apparatus for identifying, tracking, and disrupting UAVs are described herein. A tracking system can receive sensor data associated with an object in a particular airspace from one or more radio frequency sensors. The tracking system can analyze the sensor data relating to the object to identify a type of RF signal being used by the object. A portable countermeasure device can generate one or more disruption signals on one or more targeted bands of spectrum based on the type of RF signal being used by the object.
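The band-targeting step described in this abstract can be sketched as a lookup from detected signal type to targeted spectrum bands. This is an illustrative sketch only: the signal-type names, the band table (in GHz), and the function name are assumptions, not taken from the patent.

```python
# Hypothetical mapping from detected RF signal type to targeted
# spectrum bands, expressed as (low_GHz, high_GHz) tuples.
SIGNAL_BANDS = {
    "wifi_control": [(2.400, 2.4835), (5.725, 5.850)],
    "gps_l1": [(1.5742, 1.5768)],
    "proprietary_900mhz": [(0.902, 0.928)],
}

def select_disruption_bands(signal_type):
    """Return the targeted spectrum bands for a detected signal type."""
    if signal_type not in SIGNAL_BANDS:
        raise ValueError("unknown signal type: " + signal_type)
    return SIGNAL_BANDS[signal_type]
```

A countermeasure device would then generate disruption signals only on the returned bands rather than jamming broadband.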
THROWING POSITION ACQUISITION METHOD AND APPARATUS, COMPUTER DEVICE AND STORAGE MEDIUM
A throwing position acquisition method and apparatus, a computer device, and a storage medium. The method includes: acquiring image frames of a target video; acquiring a position of the projectile (the target object) in each image frame; acquiring a trajectory starting point position of the target object based on the projectile position in each image frame; acquiring, based on the projectile positions in at least one group of the image frames, a first height value at which the target object is thrown; and acquiring a throwing position of the target object based on the first height value and the trajectory starting point position of the target object.
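One very simplified reading of the start-point-plus-height computation might look like the following. The coordinate convention (image y increasing downward), the use of vertical extent as the first height value, and the way the two quantities are combined are all assumptions for illustration, not the patent's actual algorithm.

```python
def estimate_throwing_position(positions, scale=1.0):
    """positions: per-frame (x, y) projectile centers, y increasing downward.
    Returns an assumed (x, y) throwing position."""
    x0, y0 = positions[0]                             # trajectory starting point
    first_height = max(y for _, y in positions) - y0  # drop below the start point
    # Combine start point and height value (the weighting is an assumption):
    return x0, y0 + scale * first_height
```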
POSE DETECTION OF AN OBJECT IN A VIDEO FRAME
Aspects of the disclosure provide solutions for determining a position of an object in a video frame. Examples include: receiving a segmentation mask of an identified object in a video frame; adjusting a 3D representation of a moveable part of the object based on constraints for the moveable part; comparing the 3D model of the object to the segmentation mask of the object; determining that a match between the 3D model of the object and the segmentation mask of the object is above a threshold; and, based on the match being above the threshold, determining a position of the object.
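The model-versus-mask matching step can be illustrated with a pixel-set intersection-over-union and a threshold test. The IoU similarity measure and the 0.5 default threshold are assumed choices here, not specified by the abstract.

```python
def mask_iou(mask_a, mask_b):
    """IoU between two binary masks given as sets of (row, col) pixels."""
    a, b = set(mask_a), set(mask_b)
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def pose_accepted(rendered_mask, segmentation_mask, threshold=0.5):
    """Accept a candidate pose when the rendered-model/segmentation
    match exceeds the threshold."""
    return mask_iou(rendered_mask, segmentation_mask) > threshold
```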
Operating light sources to project patterns for disorienting visual detection systems
Methods and systems for operating one or more light sources to project adversarial patterns generated to disorient a machine learning based detection system. The methods comprise generating one or more adversarial patterns configured to disorient the machine learning based detection system, and operating one or more light sources configured to project one or more of the adversarial patterns in association with the targeted object in order to disorient the detection system.
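The pattern-generation step could be sketched as a random search that lowers a detector's confidence score. Here the `confidence` function is a toy stand-in for a real ML detector, and every name, parameter, and the search strategy itself are assumptions for illustration only.

```python
import random

def confidence(pattern):
    """Toy stand-in for the detector's confidence on the patterned object;
    a real system would query the ML detection model."""
    return abs(sum(pattern) / len(pattern) - 0.5)

def generate_adversarial_pattern(n=16, iters=200, seed=0):
    """Random search for a light-source pattern minimizing detector confidence."""
    rng = random.Random(seed)
    best = [rng.random() for _ in range(n)]
    best_score = confidence(best)
    for _ in range(iters):
        # Perturb the current best pattern, clamping intensities to [0, 1].
        cand = [min(1.0, max(0.0, v + rng.gauss(0, 0.1))) for v in best]
        score = confidence(cand)
        if score < best_score:
            best, best_score = cand, score
    return best, best_score
```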
SYSTEMS AND METHODS FOR DETERMINING A POSITION OF A SENSOR DEVICE RELATIVE TO AN OBJECT
A method and system to determine the position of a moveable platform relative to an object is disclosed. The method can include: storing, in a database, one or more synthetic models, each trained on one of one or more synthetic model datasets corresponding to one or more objects; capturing an image of the object by one or more sensors associated with the moveable platform; identifying the object by comparing the captured image of the object to the one or more synthetic model datasets; generating a first model output using a first synthetic model of the one or more synthetic models, the first model output including a first relative coordinate position and a first spatial orientation of the moveable platform; and generating a platform coordinate output and a platform spatial orientation output of the moveable platform at the first position based on the first model output.
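The identify-then-localize flow can be sketched as a nearest-dataset lookup followed by running the matching model's pose function. The feature representation, the squared-distance metric, and the per-model callables are assumptions for illustration.

```python
def identify_and_localize(image_feat, models):
    """models: dict name -> (dataset_feat, pose_fn).
    Pick the model whose synthetic dataset feature is closest to the captured
    image feature, then run it to get (position, orientation)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    name = min(models, key=lambda k: dist2(image_feat, models[k][0]))
    position, orientation = models[name][1](image_feat)
    return name, position, orientation
```

For example, with two hypothetical object models:

```python
models = {
    "drone": ((1.0, 0.0), lambda f: ((0, 0, 5), (0, 0, 0))),
    "bird": ((0.0, 1.0), lambda f: ((1, 1, 1), (0, 0, 0))),
}
name, position, orientation = identify_and_localize((0.9, 0.1), models)
```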
FIRING CUTOUT RAPID GENERATION AIDED BY MACHINE LEARNING
A system maintains a machine learning algorithm trained to identify non-targets in an environment. The system receives an image of the environment and identifies the non-targets in the image using the trained machine learning algorithm. The system then generates a firing cutout map, for overlaying on the image of the environment, based on the identified non-targets in the image.
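A toy version of the cutout-map generation, assuming the identified non-targets arrive as bounding boxes and the map is a boolean pixel grid (both representation choices are assumptions, not from the patent):

```python
def firing_cutout_map(width, height, nontarget_boxes):
    """Build a boolean grid where True marks pixels in which firing is cut
    out because a detected non-target occupies them.
    nontarget_boxes: iterable of (x0, y0, x1, y1), upper bounds exclusive."""
    cutout = [[False] * width for _ in range(height)]
    for x0, y0, x1, y1 in nontarget_boxes:
        for y in range(max(0, y0), min(height, y1)):
            for x in range(max(0, x0), min(width, x1)):
                cutout[y][x] = True
    return cutout
```

The resulting grid could then be rendered as a translucent overlay on the environment image.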
Method and Device for Passive Ranging by Image Processing
The device (1) for estimating the distance between an observer (2) and a target (4) uses at least one image generated by a digital image generator (6) from the position (P1) of the observer (2). It comprises a detection and identification unit configured to detect and identify a target (4) in the image, at least from a predetermined list of known targets; an orientation estimation unit configured to determine the orientation of the identified target in the image; and a distance estimation unit configured to measure, in the image, the length of at least one characteristic segment of the target, and to calculate the distance from said measured length, from an actual length of the segment on the target taking into account the orientation of the target, and from the spatial resolution of the digital image generator (6).
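The geometry reduces to the pinhole relation: distance ≈ actual segment length × focal length (in pixels) ÷ measured length (in pixels), with the actual length foreshortened according to the target's orientation. A sketch, assuming a simple yaw-only orientation model (the yaw-cosine correction is an assumed simplification of the patent's orientation handling):

```python
import math

def passive_range(measured_px, actual_length_m, focal_px, yaw_deg=0.0):
    """Pinhole range estimate from one characteristic segment of the target.
    The segment's apparent length shrinks with target yaw (foreshortening),
    so the known actual length is corrected before ranging."""
    effective_length = actual_length_m * math.cos(math.radians(yaw_deg))
    return effective_length * focal_px / measured_px
```

For instance, a 10 m segment seen head-on at 100 px through a 1000 px focal length implies a 100 m range; at 60° yaw the same measurement implies 50 m.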
METHODS AND APPARATI FOR INTENSIFIED VISUAL-INERTIAL ODOMETRY
Aspects of the present disclosure include methods and systems for receiving a first image, amplifying a luminous intensity of the first image to generate a second image, digitally capturing the second image, identifying a first time associated with receiving the first image or capturing the second image, acquiring spatial information of the imaging system, identifying a second time associated with acquiring the spatial information, associating the second image with the spatial information based on the first time being substantially contemporaneous with the second time, and generating at least one of a position or an orientation of the imaging system based on the second image and the spatial information.
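The timestamp-association step ("substantially contemporaneous") can be sketched as a nearest-timestamp match within a tolerance. The data layout and the tolerance parameter are assumptions for illustration.

```python
def associate(image_time, spatial_samples, tolerance):
    """spatial_samples: list of (timestamp, spatial_info) from the sensor.
    Return the spatial info whose timestamp is substantially contemporaneous
    with the image time, or None when nothing falls within the tolerance."""
    t, info = min(spatial_samples, key=lambda s: abs(s[0] - image_time))
    return info if abs(t - image_time) <= tolerance else None
```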
Method and Device for Passive Ranging by Image Processing and Use of Three-Dimensional Models
The device (1) for estimating the distance between an observer (2) and a target (4) uses at least one image generated by a digital image generator (6) from the position (P1) of the observer (2). It comprises a detection and identification unit configured to detect and identify a target (4) in the image and define an imaged representation of the target (4), and a distance estimation unit configured to perform, on the image, multiple different projections of a three-dimensional model of the target (4) to obtain projected representations of the target (4) and to select the projected representation most similar to the imaged representation, the distance associated with the selected projected representation representing the estimated distance (D) between the position of the observer (2) and the target (4).
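The projection-selection step can be sketched as picking, among candidate renderings of the 3D model at different distances, the one most similar to the imaged representation. Mask IoU over pixel sets is an assumed similarity measure; the abstract does not specify one.

```python
def estimate_distance(imaged_mask, projections):
    """projections: list of (distance, projected_mask) from rendering the
    3D model at candidate distances; masks are sets of (row, col) pixels.
    The distance of the best-matching projection is the range estimate."""
    def iou(a, b):
        a, b = set(a), set(b)
        union = len(a | b)
        return len(a & b) / union if union else 0.0
    best_distance, _ = max(projections, key=lambda p: iou(imaged_mask, p[1]))
    return best_distance
```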
Identifying, tracking, and disrupting unmanned aerial vehicles
Systems, methods, and apparatus for identifying, tracking, and disrupting UAVs are described herein. Sensor data can be received from one or more portable countermeasure devices or sensors. The sensor data can relate to an object detected proximate to a particular airspace. The system can analyze the sensor data relating to the object to determine a location of the object and determine that the object is flying within the particular airspace based at least in part on location data. A portable countermeasure device can be identified that corresponds to the location of the object. The system can transmit information about the object to the identified portable countermeasure device. The portable countermeasure device can transmit additional data relating to the object to the system.
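The step of identifying the portable countermeasure device that corresponds to the object's location might, in the simplest reading, be a nearest-device lookup. The device names, 2D coordinates, and the squared-distance metric are illustrative assumptions.

```python
def nearest_device(object_pos, devices):
    """devices: dict id -> (x, y) position. Return the id of the portable
    countermeasure device closest to the tracked object."""
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(devices, key=lambda k: dist2(devices[k], object_pos))
```

The system would then transmit the object information to the returned device, which in turn reports additional sensor data back.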