Patent classification: G06V10/806
Imaging Method for Non-Line-of-Sight Object and Electronic Device
Certain embodiments provide an imaging method for a non-line-of-sight object and an electronic device. In certain embodiments, the method includes: detecting a first input operation; and generating first image data in response to the first input operation. The first image data is imaging data of the non-line-of-sight object obtained by fusing second image data and third image data, and includes position information between the non-line-of-sight object and a line-of-sight object. The second image data is imaging data of the line-of-sight object captured by an optical camera. The third image data is imaging data of the non-line-of-sight object captured by an electromagnetic sensor.
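The fusion step above can be sketched as a simple overlay: imaging data from the electromagnetic sensor is blended into the optical image at an offset that encodes the position of the non-line-of-sight object relative to the line-of-sight scene. This is a minimal illustration, not the patented method; the function name, the blending weight, and the offset convention are all assumptions.

```python
import numpy as np

def fuse_nlos_image(optical, em, offset, alpha=0.5):
    """Blend electromagnetic-sensor imaging data `em` into the optical
    image at the given (row, col) offset with weight `alpha`.
    The offset stands in for the position information between the
    non-line-of-sight object and the line-of-sight scene."""
    fused = optical.astype(float).copy()
    r, c = offset
    h, w = em.shape
    region = fused[r:r + h, c:c + w]
    fused[r:r + h, c:c + w] = (1 - alpha) * region + alpha * em
    return fused
```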
OBJECT IDENTIFICATION
Object identification may be provided herein. A feature extractor may extract a first set of visual features, extract a second set of visual features, concatenate the first set of visual features, the second set of visual features, and a set of bounding box information, determine a number of object features and a global feature for a scene, and receive ego-vehicle feature information associated with an ego-vehicle. An object classifier may receive the number of object features, the global feature, and the ego-vehicle feature information, generate relational features describing the relationships among the number of objects from the scene, and classify each of the number of objects from the scene based on the number of object features, the relational features, the global feature, the ego-vehicle feature information, and an intention of the ego-vehicle.
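A minimal sketch of the feature-extractor stage described above, assuming one feature row per detected object: the two visual feature sets and the bounding-box information are concatenated per object, a global scene feature is mean-pooled over objects, and relational features are formed as pairwise differences. The abstract does not specify the pooling or relational operators, so both are illustrative choices.

```python
import numpy as np

def build_object_features(vis1, vis2, boxes):
    """Concatenate two per-object visual feature sets with bounding-box
    info; each input has one row per object."""
    return np.concatenate([vis1, vis2, boxes], axis=1)

def global_feature(obj_feats):
    # A simple global descriptor for the scene: mean-pool over objects.
    return obj_feats.mean(axis=0)

def relational_features(obj_feats):
    """Pairwise differences between object features, a simple stand-in
    for the learned relational features in the abstract."""
    n = obj_feats.shape[0]
    return np.array([obj_feats[i] - obj_feats[j]
                     for i in range(n) for j in range(n) if i != j])
```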
DATABASE MANAGEMENT SYSTEM AND METHOD FOR UPDATING A TRAINING DATASET OF AN ITEM IDENTIFICATION MODEL
A system for updating a training dataset of an item identification model determines that an item is not included in the training dataset. In response to determining that the item is not included in the training dataset, the system obtains an identifier of the item. The system detects a triggering event at a platform, where the triggering event corresponds to a user placing the item on the platform. The system captures images of the item. The system extracts a set of features associated with the item from the images. The system associates the item with the identifier and the set of features. The system adds a new entry to the training dataset, where the new entry represents the item labeled with the identifier and the set of features.
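The update flow above can be sketched as follows, with the training dataset held as a list of labeled records and a set of known identifiers standing in for the "is the item already included?" check; all names and the record layout are illustrative.

```python
def update_training_dataset(dataset, item_id, features, known_ids):
    """Add a new labeled entry for an item only if its identifier is not
    already in the training dataset. `dataset` is a list of dict records;
    `known_ids` tracks identifiers already present."""
    if item_id not in known_ids:
        dataset.append({"id": item_id, "features": features})
        known_ids.add(item_id)
    return dataset
```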
SYSTEM AND METHOD FOR AGGREGATING METADATA FOR ITEM IDENTIFICATION USING DIGITAL IMAGE PROCESSING
A system for identifying items based on aggregated metadata obtains images of an item. The system extracts a set of features from images of the item. The system identifies a first value of a first feature associated with a first image of the item. The system identifies a second value of the first feature associated with a second image of the item. The system aggregates the first value and the second value. The system associates the item to the aggregated first value and the second value, where the aggregated first value and the second value represent the first feature of the item. The system adds a new entry for each image of the item to a training dataset associated with an item identification model.
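The aggregation described above might look like the following, assuming a numeric feature observed in several images of the same item; the abstract does not specify the aggregation operator, so a mean is used here, and the record layout is illustrative.

```python
def aggregate_feature(values):
    """Average the per-image values of a single feature into one
    representative value (a mean stands in for the unspecified
    aggregation in the abstract)."""
    return sum(values) / len(values)

def build_item_record(item_id, per_image_values):
    """per_image_values maps feature name -> list of values, one per
    image. Associate the item with one aggregated value per feature."""
    return {"id": item_id,
            "features": {name: aggregate_feature(vals)
                         for name, vals in per_image_values.items()}}
```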
SYSTEM AND METHOD FOR CAPTURING IMAGES FOR TRAINING OF AN ITEM IDENTIFICATION MODEL
A system for capturing images for training an item identification model obtains an identifier of an item. The system detects a triggering event at a platform, where the triggering event corresponds to a user placing the item on the platform. The system causes the platform to rotate. The system causes at least one camera to capture an image of the item while the platform is rotating. The system extracts a set of features associated with the item from the image. The system associates the item with the identifier and the set of features. The system adds a new entry to a training dataset of the item identification model, where the new entry represents the item labeled with the identifier and the set of features.
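The rotate-and-capture loop above can be sketched as follows, with a hypothetical `capture(angle)` callback standing in for the camera and platform hardware; the number of stops and the callback interface are assumptions.

```python
def capture_training_images(num_angles, capture):
    """Step the platform through `num_angles` evenly spaced rotations and
    call `capture(angle_deg)` at each stop; returns (angle, image) pairs
    ready for feature extraction and dataset labeling."""
    step = 360 / num_angles
    return [(i * step, capture(i * step)) for i in range(num_angles)]
```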
METHOD FOR RE-RECOGNIZING OBJECT IMAGE BASED ON MULTI-FEATURE INFORMATION CAPTURE AND CORRELATION ANALYSIS
A method for re-recognizing an object image based on multi-feature information capture and correlation analysis is provided. The method weights an input feature map by using a convolutional layer with a spatial attention mechanism and a channel attention mechanism, causing channel and spatial information to be effectively combined, which not only focuses on important features and suppresses unnecessary features, but also improves the representation of the features. A multi-head attention mechanism is used to process features after an image is divided into blocks, capturing abundant feature information and determining correlations between features to improve the performance and efficiency of object image retrieval. The convolutional layer with the channel attention mechanism and the spatial attention mechanism is combined with a transformer having the multi-head attention mechanism to focus on globally important features and capture fine-grained features, thereby improving re-recognition performance.
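Simplified stand-ins for the channel and spatial attention described above: the learned convolutional attention is replaced here by softmax-weighted pooled channel responses and a normalized mean activation map, purely for illustration of how a feature map gets reweighted along each axis.

```python
import numpy as np

def channel_attention(fmap):
    """fmap: (C, H, W). Weight each channel by a softmax over its
    global-average-pooled response (a crude stand-in for the learned
    channel attention mechanism)."""
    pooled = fmap.mean(axis=(1, 2))
    w = np.exp(pooled) / np.exp(pooled).sum()
    return fmap * w[:, None, None]

def spatial_attention(fmap):
    """Weight each spatial location by its max-normalized mean
    activation across channels (a crude spatial attention)."""
    m = fmap.mean(axis=0)
    w = m / (m.max() + 1e-8)
    return fmap * w[None, :, :]
```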
Method and apparatus for vehicle damage assessment, electronic device, and computer storage medium
A method and apparatus for vehicle damage assessment, an electronic device, and a computer-readable storage medium are provided. The method may include: extracting, from an input image, a first feature characterizing a part of a vehicle and a second feature characterizing a damage type of the vehicle; integrating the first feature and the second feature to generate a third feature characterizing a corresponding relation between the part and the damage type; converting the third feature into a characteristic vector; and determining a damage recognition result based on the characteristic vector. According to the technical solution of the disclosure, users can rapidly and accurately learn about the damage condition of the vehicle by providing pictures or videos of the damaged vehicle, thus providing an objective basis for subsequent damage assessment, claim settlement, and repair.
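One common way to integrate a part feature and a damage-type feature into a third feature characterizing their corresponding relation is a bilinear (outer-product) fusion, flattened into the characteristic vector. The patent does not specify the integration operator, so this is only a sketch of one plausible choice.

```python
import numpy as np

def integrate_features(part_feat, damage_feat):
    """Fuse a vehicle-part feature and a damage-type feature into a
    relation-characterizing vector via an outer product (bilinear
    fusion), flattened into the characteristic vector."""
    return np.outer(part_feat, damage_feat).ravel()
```

A downstream classifier would then map this characteristic vector to the damage recognition result.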
Video recommendation method and device, computer device and storage medium
A video recommendation method is provided, including: inputting a video to a first feature extraction network, performing feature extraction on at least one consecutive video frame in the video, and outputting a video feature of the video; inputting user data of a user to a second feature extraction network, performing feature extraction on the discrete user data, and outputting a user feature of the user; performing feature fusion based on the video feature and the user feature, and obtaining a recommendation probability of recommending the video to the user; and determining, according to the recommendation probability, whether to recommend the video to the user.
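The fusion and decision steps above can be sketched with a dot-product fusion of the video and user features squashed to a probability by a sigmoid, standing in for the learned fusion network; the threshold value is an assumption.

```python
import math

def recommend_probability(video_feat, user_feat, threshold=0.5):
    """Fuse a video feature and a user feature by dot product, squash
    the score to a recommendation probability with a sigmoid, and decide
    whether to recommend the video."""
    score = sum(v * u for v, u in zip(video_feat, user_feat))
    p = 1 / (1 + math.exp(-score))
    return p, p >= threshold
```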
System and method for automated diagnosis of skin cancer types from dermoscopic images
Disclosed is a content-based image retrieval (CBIR) system and related methods that serve as a diagnostic aid for diagnosing whether a dermoscopic image correlates to a skin cancer type. Systems and methods according to aspects of the invention use as a reference a set of images of pathologically confirmed benign or malignant past cases from a collection of different classes that are of high similarity to the unknown new case in question, along with their diagnostic profiles. Systems and methods according to aspects of the invention predict what class of skin cancer is associated with a particular patient skin lesion, and may be employed as a diagnostic aid for general practitioners and dermatologists.
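The retrieval core of such a CBIR aid can be sketched as a nearest-neighbor search over feature vectors of pathologically confirmed past cases, returning the most similar cases together with their diagnostic labels; the Euclidean metric and the feature representation are assumptions, not the system's actual similarity measure.

```python
def retrieve_similar(query, reference, k=3):
    """reference: list of (feature_vector, diagnosis) for confirmed past
    cases. Return the k cases most similar to the query feature vector
    by Euclidean distance, with their diagnostic labels."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    ranked = sorted(reference, key=lambda case: dist(query, case[0]))
    return ranked[:k]
```

A class prediction for the unknown lesion could then be formed, e.g., by a majority vote over the retrieved diagnoses.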
Apparatus and method for compensating for error of vehicle sensor
An apparatus and method for compensating for an error of a vehicle sensor, enhancing performance in identifying the same object, are provided. The apparatus includes a rotation angle error calculator that calculates a rotation angle error between sensor object information and sensor fusion object information. A position error calculator calculates a longitudinal and lateral position error between the sensor object information and the sensor fusion object information. A sensor error compensator calculates a sensor error based on the calculated rotation angle error and position error. In compensating for the sensor error, the sensor error compensator corrects the sensor object information based on the rotation angle error, and compensates for the sensor error based on the longitudinal and lateral position error between the corrected sensor object information and the sensor fusion object information.
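The two-stage compensation can be sketched in 2D: estimate the rotation-angle error between a sensor-reported object position and the fused reference, rotate the sensor reading back by that angle, then report the residual longitudinal/lateral position error. Treating each object as a single (x, y) point is a simplifying assumption.

```python
import math

def compensate(sensor_xy, fusion_xy):
    """Return the rotation-angle error between a sensor object position
    and the sensor-fusion reference, plus the longitudinal/lateral
    position error remaining after rotating the sensor reading back."""
    ang_err = (math.atan2(sensor_xy[1], sensor_xy[0])
               - math.atan2(fusion_xy[1], fusion_xy[0]))
    c, s = math.cos(-ang_err), math.sin(-ang_err)
    corrected = (c * sensor_xy[0] - s * sensor_xy[1],
                 s * sensor_xy[0] + c * sensor_xy[1])
    return ang_err, (corrected[0] - fusion_xy[0],
                     corrected[1] - fusion_xy[1])
```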