G06V10/443

Multi-Object Tracking For Autonomous Vehicles

A method and system for multi-object tracking is set forth. Object data for the boundaries of a plurality of objects are received. Poisson multi-Bernoulli mixture filtering is performed on the object data to form a filtered set of object data, with identifiers and probabilities associated with the objects to reduce the set of object data. Ultimately, the filtered set of object data is used to control the operation of a vehicle.
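One way identifiers and probabilities can reduce an object data set, sketched below with hypothetical field names, is by pruning object hypotheses whose existence probability falls below a threshold — a standard reduction step inside PMBM-style filters. This is an illustrative assumption, not the patent's specific reduction rule.

```python
# Minimal sketch (field names are illustrative): prune low-probability
# object hypotheses, keeping identifiers for the survivors.

def prune_hypotheses(objects, existence_threshold=0.1):
    """Keep only hypotheses whose existence probability exceeds the
    threshold; each hypothesis carries an identifier and a boundary."""
    return [obj for obj in objects if obj["existence_prob"] > existence_threshold]

tracked = [
    {"id": 1, "boundary": (10, 10, 40, 60), "existence_prob": 0.95},
    {"id": 2, "boundary": (55, 20, 80, 50), "existence_prob": 0.04},
    {"id": 3, "boundary": (90, 15, 120, 70), "existence_prob": 0.62},
]

reduced = prune_hypotheses(tracked)
kept_ids = [obj["id"] for obj in reduced]
```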

SYSTEM AND METHOD FOR EXTRACTING PHYSICAL AND MOTION DATA FROM VIRTUAL ENVIRONMENTS

Embodiments described herein provide a system for analyzing a gameplay of a first video game. During operation, the system can obtain a stream of video frames associated with the gameplay. The system can then analyze the video frames to identify a set of features of the first video game. Here, a respective feature indicates the characteristics of a virtual object in a virtual environment of the first video game supported by a first game engine. Subsequently, the system can derive, based on the set of identified features, a set of parameters indicating one or more physical characteristics of one or more virtual objects in the virtual environment. The system can store the set of derived parameters in a file format readable by a second game engine different from the first game engine. This allows the second game engine to support a second video game that incorporates the physical characteristics.
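The key portability idea — storing derived parameters in a format a second engine can read — can be sketched with a neutral JSON serialization. The field names (`mass`, `restitution`, and so on) are assumptions for illustration; the abstract does not specify the file format.

```python
import json

# Hedged sketch: derived physical parameters serialized to a neutral
# JSON format that a different game engine could parse back in.
# All field names here are illustrative assumptions.

derived_parameters = {
    "source_engine": "engine_a",
    "virtual_objects": [
        {"name": "crate", "mass": 12.5, "restitution": 0.3,
         "initial_velocity": [0.0, -9.8, 0.0]},
    ],
}

serialized = json.dumps(derived_parameters, indent=2)   # engine-neutral file content
roundtrip = json.loads(serialized)                      # what a second engine reads
```

A second engine only needs a JSON parser to recover the physical characteristics, with no dependency on the first game engine's internal formats.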

METHOD FOR GENERATING 3D REFERENCE POINTS IN A MAP OF A SCENE

A method of complementing a map of a scene with 3D reference points, comprising four steps. In a first step, data is collected and recorded from at least one of an optical sensor, a GNSS, and an IMU. A second step includes initial pose generation by processing the collected sensor data to provide a track of vehicle poses. A pose is based on a specific data set, on at least one data set recorded before that data set, and on at least one data set recorded after that data set. A third step includes SLAM processing of the initial poses and the collected optical sensor data to generate keyframes with feature points. In a fourth step, 3D reference points are generated by fusion and optimization of the feature points, using future and past feature points together with the feature point at the point of processing. Because the second and fourth steps operate on recorded data, they provide significantly better results than SLAM or VIO methods known from the prior art: whereas a normal SLAM or VIO algorithm can only access data from the past, processing in these steps may also look at positions ahead by using the recorded data.
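The offline advantage described above can be illustrated with a centered moving average: because the track is recorded, each estimate may use samples both before and after it, which a causal SLAM/VIO filter cannot do. This is a simplified stand-in for the fusion and optimization step, not the method itself.

```python
# Sketch of the recorded-data advantage: a centered window uses past
# AND future samples, unlike a causal filter that sees only the past.

def smooth_track(positions, half_window=1):
    """Centered moving average over a recorded 1-D track."""
    smoothed = []
    n = len(positions)
    for i in range(n):
        lo = max(0, i - half_window)        # past samples
        hi = min(n, i + half_window + 1)    # future samples (recorded)
        window = positions[lo:hi]
        smoothed.append(sum(window) / len(window))
    return smoothed

track = [0.0, 1.0, 4.0, 9.0, 16.0]
smoothed = smooth_track(track)
```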

LOCALIZATION PROCESSING SERVICE
20220398775 · 2022-12-15 ·

Systems, methods, and computer-readable media for providing a localization processing service for enabling localization of a navigation network-restricted subsystem are provided.

IMAGE FEATURE MATCHING METHOD AND RELATED APPARATUS, DEVICE AND STORAGE MEDIUM

In an image feature matching method, at least two images to be matched are acquired; a feature representation of each image to be matched is obtained by performing feature extraction on the image, wherein the feature representation comprises a plurality of first local features; the first local features are transformed into first transformation features having a global receptive field over the images to be matched; and a first matching result of the at least two images is obtained by matching the first transformation features across the at least two images.
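Once features have been transformed, one common way to produce a matching result is mutual nearest-neighbour matching under cosine similarity. The sketch below assumes the transformation has already been applied and only illustrates the final matching stage.

```python
import numpy as np

# Hedged sketch: match two sets of (already transformed) feature
# vectors by mutual nearest neighbour under cosine similarity.

def mutual_nn_matches(feats_a, feats_b):
    """Return index pairs (i, j) where a_i and b_j are each other's
    nearest neighbour."""
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    sim = a @ b.T                       # pairwise cosine similarities
    nn_ab = sim.argmax(axis=1)          # best b for each a
    nn_ba = sim.argmax(axis=0)          # best a for each b
    return [(i, int(j)) for i, j in enumerate(nn_ab) if nn_ba[j] == i]

feats_a = np.array([[1.0, 0.0], [0.0, 1.0]])
feats_b = np.array([[0.1, 0.9], [0.9, 0.1]])
matches = mutual_nn_matches(feats_a, feats_b)
```

Requiring the match to hold in both directions suppresses one-sided, ambiguous correspondences.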

COMPUTER VISION METHOD FOR DETECTING DOCUMENT REGIONS THAT WILL BE EXCLUDED FROM AN EMBEDDING PROCESS AND COMPUTER PROGRAMS THEREOF

A method and computer programs for detecting document regions that will be excluded from a watermark embedding process are disclosed. The method comprises converting, by an adapter module, at least one page of a received document into a visual representation thereof, the visual representation keeping the position of the characters of the at least one page; receiving, by a text detector, the visual representation; processing, by the text detector, the visual representation using one or more artificial intelligence algorithms, and returning a list of invalid regions with their associated page positions as a result, wherein each invalid region of the list of invalid regions may have associated thereto a confidence score; and using, by a watermark embedding module or by a watermark extracting module, the list of invalid regions to provide a watermarked document or a message embedded in the document.
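The way a confidence-scored list of invalid regions feeds the embedding module can be sketched as a simple filter: regions flagged with sufficient confidence are excluded from embedding. The region representation and threshold below are illustrative assumptions.

```python
# Illustrative sketch (names assumed): the embedding module skips any
# region the text detector flagged as invalid with enough confidence.

def embeddable_regions(page_regions, invalid_regions, min_confidence=0.5):
    """Return regions of a page that remain eligible for watermark
    embedding, i.e. not confidently detected as invalid."""
    blocked = {r["region"] for r in invalid_regions
               if r.get("confidence", 1.0) >= min_confidence}
    return [r for r in page_regions if r not in blocked]

page_regions = ["header", "body", "table", "footer"]
invalid = [
    {"region": "table", "page": 1, "confidence": 0.92},
    {"region": "footer", "page": 1, "confidence": 0.30},  # below threshold
]

allowed = embeddable_regions(page_regions, invalid)
```

Note that the low-confidence detection ("footer") does not block embedding, which is why carrying a confidence score per invalid region is useful.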

IMAGE PROCESSING METHOD AND ELECTRONIC APPARATUS

The present disclosure provides methods, apparatuses, and computer-readable mediums for image processing. In some embodiments, a method of image processing includes acquiring, from a user, a first image. The method further includes removing, using an image de-filter network, a filter effect applied to the first image to generate a second image. The method further includes obtaining, based on the first image and the second image, an image filter corresponding to the filter effect. The method further includes rendering a third image using the obtained image filter to output a fourth image.
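The "remove the filter, recover the filter, re-apply it" flow can be illustrated with a deliberately crude filter model: a per-channel gain estimated from the filtered/unfiltered pair. The actual method uses an image de-filter network; this sketch only shows the estimate-then-render structure.

```python
import numpy as np

# Simplified sketch: model the filter effect as a per-channel gain,
# estimated from the first (filtered) and second (de-filtered) images,
# then re-applied to a third image to render a fourth. The real method
# recovers the second image with an image de-filter network.

def estimate_channel_gains(filtered_img, clean_img):
    """Crude filter model: ratio of per-channel means."""
    eps = 1e-8
    return filtered_img.mean(axis=(0, 1)) / (clean_img.mean(axis=(0, 1)) + eps)

def apply_filter(img, gains):
    return np.clip(img * gains, 0.0, 1.0)

clean = np.full((4, 4, 3), 0.5)                  # stand-in second image
filtered = clean * np.array([1.2, 1.0, 0.6])     # first image: warm tint
gains = estimate_channel_gains(filtered, clean)  # recovered image filter
third = np.full((4, 4, 3), 0.4)
fourth = apply_filter(third, gains)              # rendered output
```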

METHODS, SYSTEMS, ARTICLES OF MANUFACTURE, AND APPARATUS TO EXTRACT SHAPE FEATURES BASED ON A STRUCTURAL ANGLE TEMPLATE
20220391630 · 2022-12-08 ·

Methods, systems, articles of manufacture, and apparatus to extract shape features based on a structural angle template are disclosed. An example apparatus includes a template generator to generate a template based on an input image and calculate a template value based on values in the template; a bit slicer to calculate an OR bit slice and an AND bit slice based on the input image, combine the OR bit slice with the AND bit slice to generate a fused image, group a plurality of pixels of the fused image to generate a pixel window, each pixel of the pixel window including a pixel value, and calculate a window value based on the pixel values of the pixel window; and a comparator to compare the template value with the window value and store the pixel window in response to determining the window value satisfies a similarity threshold with the template value.
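The bit-slice fusion and windowed comparison can be sketched as follows. The slicing into bit planes, the OR/AND combination, and a windowed sum are shown; the window size and the similarity rule are illustrative assumptions, not the disclosed apparatus.

```python
import numpy as np

# Rough sketch of the bit-slice steps: slice an 8-bit image into bit
# planes, OR the planes and AND the planes, fuse the two results, then
# sum pixel values over a window for comparison against a template.

def bit_slices(img):
    """8-bit image -> list of binary bit planes (LSB first)."""
    return [(img >> b) & 1 for b in range(8)]

def fuse(img):
    planes = bit_slices(img)
    or_slice = np.bitwise_or.reduce(planes)
    and_slice = np.bitwise_and.reduce(planes)
    return or_slice | and_slice          # fused binary image

def window_value(fused, row, col, size=2):
    """Sum of pixel values in a size x size window of the fused image."""
    return int(fused[row:row + size, col:col + size].sum())

img = np.array([[255, 0], [128, 255]], dtype=np.uint8)
fused = fuse(img)
value = window_value(fused, 0, 0)
```

A comparator would then accept the window when `value` is within a similarity threshold of the template value.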

Method and system for machine learning based segmentation of contrast filled coronary artery vessels on medical images

A computer-implemented method for autonomous segmentation of contrast-filled coronary artery vessels, the method comprising the following steps: receiving (101) an x-ray angiography scan representing a maximum intensity projection of a region of anatomy that includes the coronary vessels on the imaging plane; preprocessing (102) the scan to output a preprocessed scan; and performing autonomous coronary vessel segmentation (103) by means of a trained convolutional neural network (CNN) that is trained to process the preprocessed scan data to output a mask denoting the coronary vessels.
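The three-step shape of the pipeline (receive scan, preprocess, segment) can be sketched with a simple intensity threshold standing in for the trained CNN. The threshold is purely illustrative; the method's segmentation is performed by the network, not by thresholding.

```python
import numpy as np

# Pipeline-shape sketch only: receive scan -> preprocess -> segment.
# A fixed intensity threshold stands in for the trained CNN that the
# method actually uses; function names are illustrative.

def preprocess(scan):
    """Normalize the angiography scan intensities to [0, 1]."""
    scan = scan.astype(np.float64)
    lo, hi = scan.min(), scan.max()
    return (scan - lo) / (hi - lo) if hi > lo else np.zeros_like(scan)

def segment_vessels(preprocessed, threshold=0.5):
    """CNN stand-in: binary mask of high-intensity structures."""
    return (preprocessed > threshold).astype(np.uint8)

scan = np.array([[10, 200], [30, 250]])   # toy 2x2 "projection"
mask = segment_vessels(preprocess(scan))  # mask denoting vessels
```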

Object Tracking Method and Device, Electronic Device, and Computer-Readable Storage Medium
20220383535 · 2022-12-01 ·

The present disclosure provides an object tracking method, an object tracking device, an electronic device and a computer-readable storage medium, and relates to the field of computer vision technology. The object tracking method includes: detecting an object in a current image, so as to obtain first information about an object detection box, the first information being used to indicate a first position and a first size; tracking the object through a Kalman filter, so as to obtain second information about an object tracking box in the current image, the second information being used to indicate a second position and a second size; performing fault-tolerant modification on a predicted error covariance matrix in the Kalman filter, so as to obtain a modified covariance matrix; calculating a Mahalanobis distance between the object detection box and the object tracking box in the current image in accordance with the first information, the second information and the modified covariance matrix; and performing a matching operation between the object detection box and the object tracking box in the current image in accordance with the Mahalanobis distance.
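The distance-and-matching step can be sketched as below. The fault-tolerant modification shown (adding a small diagonal term to keep the covariance well conditioned) is one simple assumption, since the abstract does not specify the exact modification; the state layout `(x, y, w, h)` for position and size is likewise illustrative.

```python
import numpy as np

# Sketch: Mahalanobis distance between a detection box and a
# Kalman-predicted tracking box, using a regularized covariance.
# The regularization is an assumed form of "fault-tolerant
# modification"; the patent's exact modification is not given here.

def modify_covariance(P, eps=1e-3):
    """Fault-tolerant modification: keep P invertible/well conditioned."""
    return P + eps * np.eye(P.shape[0])

def mahalanobis(detection, prediction, P):
    diff = detection - prediction
    return float(np.sqrt(diff @ np.linalg.inv(P) @ diff))

# State (x, y, w, h): first position, then first/second size.
detection = np.array([10.0, 20.0, 4.0, 6.0])   # from the detector
prediction = np.array([11.0, 19.0, 4.0, 6.0])  # from the Kalman filter
P = np.diag([2.0, 2.0, 1.0, 1.0])              # predicted error covariance

d = mahalanobis(detection, prediction, modify_covariance(P))
matched = d < 3.0   # accept the pairing when inside a distance gate
```

In practice the gate threshold is often taken from a chi-square quantile for the state dimension, and the distances feed an assignment step across all detection/track pairs.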