Patent classifications
G06V20/10
SIMULTANEOUS ORIENTATION AND SCALE ESTIMATOR (SOSE)
A method and hardware-based system provide descriptor-based feature mapping during terrain relative navigation (TRN). A first reference image (a premade terrain map) and a second image are acquired, and features in both images are detected. A scale and an orientation of each detected feature are estimated based on an intensity centroid (IC) computed from the feature's image moments, an angle between the center of the feature and the IC, and an orientation stability measure that is in turn based on a radius. Signatures are computed for each detected feature using the estimated scale and orientation and then converted into feature descriptors. The descriptors are used to match features between the two images, and the matches are then used to perform TRN.
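The intensity-centroid orientation described above resembles ORB-style keypoint orientation. A minimal sketch of that moment-based angle computation (illustrative only, not the patented estimator; the patch-based formulation is an assumption) might look like:

```python
import numpy as np

def ic_orientation(patch):
    """Estimate a feature's orientation from its intensity centroid (IC):
    theta = atan2(m01, m10), where m01/m10 are first-order image moments
    taken about the patch center."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # Coordinates relative to the patch center
    xs -= (w - 1) / 2.0
    ys -= (h - 1) / 2.0
    m10 = np.sum(xs * patch)  # first-order moment in x
    m01 = np.sum(ys * patch)  # first-order moment in y
    # Angle between the patch center and the intensity centroid
    return np.arctan2(m01, m10)
```

A patch whose intensity increases left-to-right yields an angle near 0; transposing it (intensity increasing top-to-bottom) yields an angle near pi/2.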
SYSTEM AND METHOD FOR RARE OBJECT LOCALIZATION AND SEARCH IN OVERHEAD IMAGERY
A feature extractor and novel training objective are provided for content-based image retrieval. For example, a computer-implemented method includes applying a query image and a search image to a neural network of a feature extraction network of a computing device, the query image indicating an object to be searched for in the search image. The feature extraction network includes the neural network, a spatial feature neural network receiving a first output of the neural network pertaining to the search image, and an embedding network receiving a second output of the neural network pertaining to the query image. The method includes generating spatial search features from the spatial feature neural network, generating a query feature from the embedding network, applying the query feature to an artificial neural network (ANN) index, and determining an optimal matching result of an object in the search image based on an operation using the ANN index.
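The final matching step can be illustrated in simplified form. Here a brute-force cosine-similarity scan over the spatial search features stands in for the patent's ANN-index operation, and the shapes (a `(D,)` query feature scored against an `(H, W, D)` spatial feature map) are assumptions:

```python
import numpy as np

def best_match(query_feat, spatial_feats):
    """Return the spatial location in the search image whose feature best
    matches the query embedding, plus its cosine similarity."""
    q = query_feat / np.linalg.norm(query_feat)
    f = spatial_feats / np.linalg.norm(spatial_feats, axis=-1, keepdims=True)
    sim = f @ q                                   # cosine-similarity map, (H, W)
    idx = np.unravel_index(np.argmax(sim), sim.shape)
    return idx, float(sim[idx])
```

In practice the index lookup replaces this exhaustive scan, but the matching criterion it approximates is the same.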
A METHOD FOR TRAINING A NEURAL NETWORK TO DESCRIBE AN ENVIRONMENT ON THE BASIS OF AN AUDIO SIGNAL, AND THE CORRESPONDING NEURAL NETWORK
A neural network, a system using this neural network and a method for training a neural network to output a description of the environment in the vicinity of at least one sound acquisition device on the basis of an audio signal acquired by the sound acquisition device, the method including: obtaining audio and image training signals of a scene showing an environment with objects generating sounds, obtaining a target description of the environment seen on the image training signal, inputting the audio training signal to the neural network so that the neural network outputs a training description of the environment, and comparing the target description of the environment with the training description of the environment.
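The compare-and-update training described above can be sketched with a toy stand-in network. The single linear layer, mean-squared-error comparison, and gradient step below are illustrative assumptions, since the abstract does not specify the architecture or loss:

```python
import numpy as np

class LinearAudioNet:
    """Toy stand-in for the audio-to-description network: maps an audio
    feature vector to a description vector via one linear layer."""

    def __init__(self, in_dim, out_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(out_dim, in_dim))

    def forward(self, audio_feat):
        return self.W @ audio_feat

    def train_step(self, audio_feat, target_desc, lr=0.1):
        # Compare the training description with the target description
        # (derived from the paired image) and update by gradient descent.
        pred = self.forward(audio_feat)
        err = pred - target_desc
        self.W -= lr * np.outer(err, audio_feat)
        return float(np.mean(err ** 2))
```

Iterating `train_step` on paired (audio, target description) examples drives the comparison loss toward zero, which is the training loop the abstract outlines.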
CRICKET GAME INTELLIGENT BOT UMPIRE FOR AUTOMATED UMPIRING AND SCORING DECISIONS DURING CRICKET MATCH
The present disclosure is directed to a non-intrusive, integrated system comprising an umpire bot for automatically monitoring, umpiring, scoring, analytics, learning and coaching for players, while eliminating the need for human umpires and scorers. The automated umpire bot with an intelligent telescopic function monitors, cognitively recognizes and captures movements from all equipment, analyses them, moves up and down, and even avoids collision with a ball travelling towards it. The non-intrusive real-time system captures all the game moments, from player initiation, the toss of the coin and commencement of the game, through monitoring field positions, keeping scores, making umpiring decisions, tracking overs, valid/invalid deliveries, validating balls per over, wickets, catches, boundaries and sixes, to displaying scores and statistics throughout the game.
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM
An information processing device according to the present invention includes a memory and at least one processor coupled to the memory. The processor performs operations including: selecting a base image from a base data set, which is a set of images each including a target region that contains an object that is a target of machine learning and a background region that does not; generating a processing target image that is a duplicate of the selected base image; selecting a target region included in another image of the base data set; synthesizing an image of the selected target region with the processing target image; and generating a data set that is a set of processing target images in which a predetermined number of target regions have been synthesized.
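The select-duplicate-synthesize procedure resembles copy-paste data augmentation. A minimal sketch under that reading (random placement, rectangular patches, and the function names are assumptions, not the patented method) could be:

```python
import numpy as np

def synthesize_region(base_img, target_patch, bbox):
    """Paste a target region from another image onto a duplicate of the
    base image at position bbox = (y, x)."""
    out = base_img.copy()                 # duplicate of the base image
    y, x = bbox
    h, w = target_patch.shape[:2]
    out[y:y + h, x:x + w] = target_patch  # synthesize the target region
    return out

def build_augmented_dataset(base_set, regions, n_regions_per_image, rng=None):
    """Generate a data set of processing target images, each with a
    predetermined number of target regions synthesized into it."""
    if rng is None:
        rng = np.random.default_rng(0)
    out = []
    for base in base_set:
        img = base.copy()
        for _ in range(n_regions_per_image):
            patch = regions[rng.integers(len(regions))]
            h, w = patch.shape[:2]
            y = rng.integers(img.shape[0] - h + 1)
            x = rng.integers(img.shape[1] - w + 1)
            img = synthesize_region(img, patch, (y, x))
        out.append(img)
    return out
```

Each base image is duplicated before pasting, so the original base data set is left unmodified, matching the "processing target image that is a duplicate" step.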
Robotic Source Detection Device And Method
An autonomous robotic vehicle is capable of detecting, identifying, and locating the source of gas leaks such as methane. Because of the number of operating components within the vehicle, it may also be considered a robotic system. The robotic vehicle can be remotely operated or can move autonomously within a jobsite. The vehicle selectively deploys a source detection device that precisely locates the source of a leak. The vehicle relays data to stakeholders and remains powered, enabling operation over an extended period. Monitoring and control of the vehicle are enabled through a software interface viewable to a user on a mobile communications device or personal computer.
SENSOR TRANSFORMATION ATTENTION NETWORK (STAN) MODEL
Provided is a sensor transformation attention network (STAN) model including: sensors configured to collect input signals; attention modules configured to calculate attention scores of feature vectors corresponding to the input signals; a merge module configured to calculate attention values from the attention scores and to generate a merged transformation vector based on the attention values and the feature vectors; and a task-specific module configured to classify the merged transformation vector.
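The merge module's role can be sketched under one plausible reading: attention scores are normalized into attention values (the softmax normalization here is an assumption) and used to weight the per-sensor feature vectors into a single merged transformation vector:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array of scores."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def stan_merge(feature_vectors, attention_scores):
    """Merge per-sensor feature vectors into one transformation vector,
    weighting each sensor by its normalized attention value."""
    attention_values = softmax(np.asarray(attention_scores, dtype=float))
    feats = np.stack(feature_vectors)   # (num_sensors, D)
    return attention_values @ feats     # weighted sum, shape (D,)
```

With equal scores the merge reduces to a plain average; as one sensor's score dominates, the merged vector approaches that sensor's features, which is the intended effect of attention-based sensor selection.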