G06V10/809

Object identification in data relating to signals that are not human perceptible
11314975 · 2022-04-26 ·

Systems and methods for object identification are provided. In a method, primary data is received. The primary data is generated by a primary sensor that receives signals that are human perceptible and records a scene. Secondary data generated by a secondary sensor that simultaneously records the same scene is received. The secondary sensor receives signals that are not human perceptible. The primary data is processed to identify object signatures relating to objects present in the scene. The processed primary data is used to train a secondary data-based object identification model configured to identify, in the secondary data, object signatures relating to objects present in the scene. A method includes using the secondary data-based object identification model to process the secondary data to identify object signatures relating to objects present in the scene. A method includes augmenting the processed primary data with the processed secondary data.
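The cross-modal training and augmentation described above can be sketched as follows, assuming an RGB camera as the primary (human-perceptible) sensor and a thermal camera as the secondary one; all function and field names here are invented for illustration and are not from the patent.

```python
# Hypothetical sketch: detections from the primary (RGB) sensor become
# pseudo-labels for the simultaneously recorded secondary (thermal) frame,
# and secondary-only detections augment the primary results.

def make_pseudo_labels(primary_detections, frame_id):
    """Pair object signatures found in the primary data with the
    secondary frame recorded at the same time."""
    return [{"frame": frame_id, "bbox": d["bbox"], "label": d["label"]}
            for d in primary_detections]

def augment(primary_detections, secondary_detections):
    """Augment the processed primary data with objects the secondary
    sensor found but the primary sensor missed."""
    merged = list(primary_detections)
    for d in secondary_detections:
        if d not in primary_detections:
            merged.append(d)
    return merged

rgb = [{"bbox": (10, 10, 50, 50), "label": "person"}]
thermal = [{"bbox": (10, 10, 50, 50), "label": "person"},
           {"bbox": (80, 20, 120, 60), "label": "animal"}]  # thermal-only
labels = make_pseudo_labels(rgb, frame_id=0)   # training data for the secondary model
combined = augment(rgb, thermal)               # augmented detection set
```

In practice the pseudo-labels would feed a training loop for the secondary-data model; the list operations above stand in for that pipeline.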

Computer vision system and method

An image processing method for segmenting an image, the method comprising: receiving an image; processing said image with a common processing stage to produce a first feature map; inputting said first feature map to a parallel processing stage, said parallel processing stage comprising first and second parallel branches that receive the first feature map; and combining the output of the first and second branches to produce a semantic segmented image, wherein the common processing stage comprises a neural network, the neural network having at least one separable convolution module configured to perform separable convolution and downsample the image to produce the first feature map, and said first branch comprises a neural network comprising at least one separable convolution module configured to perform separable convolution.
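A toy numeric sketch of the claimed pipeline: a common stage applies a depthwise-separable convolution with stride-2 downsampling, then two parallel branches process the shared feature map and are combined. All shapes, kernels, and the choice of second branch are illustrative assumptions, not the patent's architecture.

```python
import numpy as np

def separable_conv(x, dw_kernel, pw_weights, stride=1):
    """Depthwise 3x3 convolution (one shared kernel per channel, for brevity)
    followed by a 1x1 pointwise channel mix; valid padding."""
    c, h, w = x.shape
    kh, kw = dw_kernel.shape
    oh = (h - kh) // stride + 1
    ow = (w - kw) // stride + 1
    dw = np.zeros((c, oh, ow))
    for ch in range(c):
        for i in range(oh):
            for j in range(ow):
                patch = x[ch, i*stride:i*stride+kh, j*stride:j*stride+kw]
                dw[ch, i, j] = (patch * dw_kernel).sum()
    # Pointwise stage: mix channels with a (c_out, c_in) matrix.
    return np.tensordot(pw_weights, dw, axes=([1], [0]))

rng = np.random.default_rng(0)
image = rng.standard_normal((3, 16, 16))
k = np.ones((3, 3)) / 9.0
pw = rng.standard_normal((4, 3))

shared = separable_conv(image, k, pw, stride=2)                    # common stage, downsampled
branch_a = separable_conv(shared, k, rng.standard_normal((4, 4)))  # first branch
branch_b = shared[:, 1:-1, 1:-1]                                   # second branch (assumed: cropped identity)
logits = branch_a + branch_b                                       # combine branch outputs
segmentation = logits.argmax(axis=0)                               # per-pixel class map
```

A real implementation would use a deep-learning framework's grouped convolutions; the loops here only make the separable structure explicit.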

Method for acquiring object information and apparatus for performing same
11314990 · 2022-04-26 ·

The present invention relates to a method for acquiring object information, the method comprising: obtaining an input image acquired by capturing the sea; obtaining a noise level of the input image; when the noise level indicates noise lower than a predetermined level, acquiring object information related to an obstacle included in the input image from the input image by using a first artificial neural network; and when the noise level indicates noise higher than the predetermined level, obtaining a noise-reduced image, in which environmental noise is reduced, from the input image by using a second artificial neural network, and acquiring object information related to an obstacle included in the sea from the noise-reduced image by using the first artificial neural network.
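The gating logic described above can be sketched minimally: route the image through a denoising step only when estimated noise exceeds a threshold. The noise estimator, both "networks", and the threshold below are stand-in assumptions for illustration.

```python
import numpy as np

def estimate_noise(img):
    # Crude proxy for a learned noise estimate: mean absolute row difference.
    return float(np.abs(np.diff(img, axis=0)).mean())

def detect_obstacles(img):
    # Stands in for the first artificial neural network (the detector).
    return [{"label": "buoy"}] if img.mean() > 0 else []

def denoise(img):
    # Stands in for the second artificial neural network (the denoiser);
    # here just a toy two-row moving average.
    return (img + np.roll(img, 1, axis=0)) / 2.0

NOISE_THRESHOLD = 0.5

def acquire_object_info(img):
    if estimate_noise(img) <= NOISE_THRESHOLD:
        return detect_obstacles(img)           # low noise: detect directly
    return detect_obstacles(denoise(img))      # high noise: denoise first

clean = np.ones((4, 4))
noisy = np.array([[2.0] * 4, [-2.0] * 4] * 2)
```

The point is only the control flow: one model path for clean input, two chained model paths for noisy input.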

ENHANCED OBJECT DETECTION FOR AUTONOMOUS VEHICLES BASED ON FIELD OF VIEW
20230245415 · 2023-08-03 ·

Systems and methods for enhanced object detection for autonomous vehicles based on field of view. An example method includes obtaining an image from an image sensor of one or more image sensors positioned about a vehicle. A field of view for the image is determined, with the field of view being associated with a vanishing line. A crop portion corresponding to the field of view is generated from the image, with the remaining portion of the image being downsampled. Information associated with objects detected in the image is output by a convolutional neural network, with detection based on performing a forward pass of the crop portion and the remaining portion through the convolutional neural network.
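The preprocessing described above can be sketched with assumed geometry: keep a full-resolution crop around the vanishing line (where distant objects appear small) and downsample the rest of the frame before the forward pass. The crop height and 2x downsample factor are illustrative choices.

```python
import numpy as np

def split_for_detection(image, vanish_row, crop_height=64):
    """Return a full-resolution crop centred on the vanishing line plus a
    2x-downsampled copy of the whole frame."""
    h, w = image.shape[:2]
    top = max(0, vanish_row - crop_height // 2)
    bottom = min(h, top + crop_height)
    crop = image[top:bottom]        # full-res band at the horizon
    rest = image[::2, ::2]          # cheap 2x downsample of the frame
    return crop, rest

frame = np.zeros((256, 512), dtype=np.uint8)
crop, rest = split_for_detection(frame, vanish_row=100)
# Both tensors would then be forward-passed through the detection CNN.
```

Processing the horizon band at native resolution preserves small, distant objects while the downsampled remainder keeps overall compute bounded.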

SYSTEMS AND METHODS FOR FEATURE EXTRACTION AND ARTIFICIAL DECISION EXPLAINABILITY

An automatic target recognizer system including: a database that stores target recognition data including multiple reference features associated with each of multiple reference targets; a pre-selector that selects a portion of the target recognition data based on a reference gating feature of the multiple reference features; a preprocessor that processes an image received from an image acquisition system which is associated with an acquired target and determines an acquired gating feature of the acquired target; a feature extractor and processor that discriminates the acquired gating feature with the reference gating feature and, if there is a match, extracts multiple segments of the image and detects the presence, absence, probability or likelihood of one of multiple features of each of the multiple reference targets; a classifier that generates a classification decision report based on a determined classification of the acquired target; and a user interface that displays the classification decision report.
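A toy sketch of the pre-selection step: gate the reference database down to targets whose gating feature matches the acquired target before running the fuller feature comparison. The gating feature ("length class"), field names, and scoring rule below are all invented for illustration.

```python
# Reference database: each target carries a gating feature plus finer features.
REFERENCE_TARGETS = [
    {"name": "target-A", "gating": "long", "features": {"wings": True}},
    {"name": "target-B", "gating": "short", "features": {"wings": False}},
    {"name": "target-C", "gating": "long", "features": {"wings": False}},
]

def preselect(acquired_gating):
    """Pre-selector: keep only references whose gating feature matches."""
    return [t for t in REFERENCE_TARGETS if t["gating"] == acquired_gating]

def classify(acquired_gating, acquired_features):
    """Score surviving candidates by feature agreement and report the best,
    exposing the candidate list for decision explainability."""
    candidates = preselect(acquired_gating)
    scores = {t["name"]: sum(acquired_features.get(k) == v
                             for k, v in t["features"].items())
              for t in candidates}
    best = max(scores, key=scores.get) if scores else None
    return {"decision": best, "candidates": [t["name"] for t in candidates]}

report = classify("long", {"wings": True})
```

Returning the surviving candidate list alongside the decision mirrors the explainability goal: the report shows which references were ever in contention.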

SYSTEMS AND METHODS FOR AI-ASSISTED SURGERY

Various embodiments of the invention provide systems and methods to assist or guide an arthroscopic surgery or other surgical procedure, e.g., surgery of the shoulder, knee, or hip. The method comprises steps of receiving an image from an interventional imaging device, identifying a feature in the image using an image recognition algorithm, overlaying the identified feature on a video feed on a display device, and making recommendations or suggestions to an operator based on the identified feature in the image.

METHOD AND APPARATUS FOR UPDATING WORKING MAP OF MOBILE ROBOT, AND STORAGE MEDIUM
20220117456 · 2022-04-21 ·

This application provides a method and an apparatus for updating a working map of a mobile robot, and a storage medium in the field of intelligent control technologies. The method includes: determining a plurality of detected environment maps based on object distribution information detected by the mobile robot in a moving process; merging the plurality of detected environment maps to obtain a merged environment map; and then performing weighting processing on the merged environment map and an environment layout map currently stored in the mobile robot, to obtain an updated environment layout map. The environment layout map stored in the mobile robot is updated by using the plurality of detected maps obtained by the mobile robot during working, so that the updated environment layout map can reflect a more detailed environment layout. This helps the mobile robot subsequently better execute a working task.
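The merge-then-weight update described above can be sketched assuming the maps are occupancy grids: average the per-run detected maps cell-wise, then blend the merged map into the stored layout with a weight favouring the stored map. The 0.7/0.3 split and the mean-based merge are assumptions for illustration.

```python
import numpy as np

def merge_detected_maps(detected_maps):
    """Cell-wise mean of occupancy values across the detected maps."""
    return np.mean(np.stack(detected_maps), axis=0)

def update_layout(stored_map, detected_maps, stored_weight=0.7):
    """Weighted blend of the stored layout map and the merged detections."""
    merged = merge_detected_maps(detected_maps)
    return stored_weight * stored_map + (1.0 - stored_weight) * merged

stored = np.zeros((4, 4))                              # current layout map
runs = [np.full((4, 4), 1.0), np.full((4, 4), 0.5)]    # maps from two runs
updated = update_layout(stored, runs)                  # each cell: 0.7*0 + 0.3*0.75
```

Weighting toward the stored map damps transient detections (a moved chair, a person) while repeated observations gradually reshape the layout.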

IDENTIFYING OBJECTS USING LiDAR
20220122363 · 2022-04-21 ·

Among other things, techniques are described for controlling, using a control circuit, motion of a vehicle based on objects identified using LiDAR. For example, respective classes of points of a point cloud are determined, and based on the determined respective classes of the points of the point cloud, objects in the vicinity of the vehicle are identified.
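The class-then-identify flow can be sketched with invented labels: each point carries a class, and same-class points are grouped into object candidates. The coarse grid grouping below is a stand-in for a real clustering step such as DBSCAN.

```python
def identify_objects(points, classes, cell=2.0):
    """Group same-class points by coarse 2D grid cell to form object
    candidates a control circuit could act on."""
    clusters = {}
    for (x, y, z), cls in zip(points, classes):
        key = (cls, int(x // cell), int(y // cell))
        clusters.setdefault(key, []).append((x, y, z))
    return [{"class": k[0], "n_points": len(v)} for k, v in clusters.items()]

# Three points: two classified "pedestrian" near the origin, one "vehicle".
pts = [(0.5, 0.5, 0.0), (0.6, 0.7, 0.1), (5.0, 5.0, 0.0)]
cls = ["pedestrian", "pedestrian", "vehicle"]
objects = identify_objects(pts, cls)
```

Classifying points first, then clustering within a class, keeps a pedestrian standing next to a parked car from merging into one object.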

USING FT-IR TO BUILD WINE CLASSIFICATION MODELS
20230243738 · 2023-08-03 ·

Some embodiments of the present disclosure relate to systems and methods including: generating, by infrared spectroscopy, spectra data identifying quantities and associated wavelengths of radiation absorption for each of a plurality of wine samples; converting the spectra data for each wine sample to a set of discretized data; transforming the discretized data into a visual image representation of each respective wine sample, the visual image representation of each wine sample being an optically recognizable representation of the corresponding converted set of discretized data; and storing a record including the visual image representation of each wine sample in a memory.
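The discretize-then-render steps can be sketched under assumed shapes: bin a continuous absorption spectrum into fixed wavelength bins, then render the binned values as a small grayscale image per sample. Bin count, image size, and the synthetic spectrum below are illustrative.

```python
import numpy as np

def discretize(wavelengths, absorbance, n_bins=64):
    """Average the absorbance within fixed wavelength bins."""
    edges = np.linspace(wavelengths.min(), wavelengths.max(), n_bins + 1)
    idx = np.clip(np.digitize(wavelengths, edges) - 1, 0, n_bins - 1)
    binned = np.zeros(n_bins)
    for b in range(n_bins):
        vals = absorbance[idx == b]
        binned[b] = vals.mean() if vals.size else 0.0
    return binned

def to_image(binned, side=8):
    """Scale binned values to 0-255 and reshape into a square grayscale image."""
    lo, hi = binned.min(), binned.max()
    scaled = (binned - lo) / (hi - lo + 1e-12) * 255.0
    return np.rint(scaled).reshape(side, side).astype(np.uint8)

wl = np.linspace(1000.0, 2500.0, 500)           # sampled wavelengths (assumed range)
spec = np.exp(-((wl - 1700.0) / 120.0) ** 2)    # synthetic absorption band
img = to_image(discretize(wl, spec))            # 8x8 image record for the sample
```

Each wine sample thus becomes a small image record, which is the form the stored records take before any downstream image-based classification.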

IMAGE RECOGNITION APPARATUS, IMAGE RECOGNITION METHOD, AND A LEARNING DATA SET GENERATION APPARATUS
20230245422 · 2023-08-03 ·

In an image recognition apparatus, a processor performs, based on an input image and using an image recognition model, a plurality of object detection processes to detect, as an object detection region, a region in the input image where a recognition target object is judged to be present. In the plurality of object detection processes, a plurality of mutually different image recognition models are used. The processor generates inference result data according to the degree of overlap among the plurality of object detection regions detected in the plurality of object detection processes.
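The overlap-based consensus can be sketched with an assumed box format (x1, y1, x2, y2): run several models and keep a detection only when enough models agree, judged by pairwise intersection-over-union of the detection regions. The threshold and agreement count are illustrative.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def consensus(detections_per_model, iou_thresh=0.5, min_models=2):
    """Keep a box only if at least min_models models (counting its own)
    detected a sufficiently overlapping region."""
    kept = []
    for i, boxes in enumerate(detections_per_model):
        for box in boxes:
            agree = sum(any(iou(box, other) >= iou_thresh for other in more)
                        for j, more in enumerate(detections_per_model) if j != i)
            if agree + 1 >= min_models and box not in kept:
                kept.append(box)
    return kept

model_a = [(0, 0, 10, 10), (50, 50, 60, 60)]   # second box seen only by model A
model_b = [(1, 0, 10, 10)]
result = consensus([model_a, model_b])
```

Overlapping survivors from different models would normally be fused by non-maximum suppression afterwards; they are kept separate here for brevity.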