Patent classifications
G06V10/759
System and Method of Visual Attribute Recognition
A system and method of automatic product attribute recognition receive training images with bounding boxes and attribute values for each of one or more products shown in those images, and train a first convolutional neural network (CNN) model on the training images to generate bounding boxes for, and identify, each product until the accuracy of the first CNN model exceeds a first predetermined threshold. The system and method further train a second CNN model for each product on images cropped to the detected bounding boxes until the second CNN model generates attribute values for the one or more attributes with an accuracy above a second predetermined threshold, and automatically recognize the one or more attributes for a new product image by presenting it to the first and second CNN models.
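The two-stage pipeline described above can be sketched as follows. This is an illustrative stand-in, not the patented implementation: `detect_products`, `crop`, and the per-product attribute models are hypothetical placeholders for the trained first and second CNN models.

```python
# Hypothetical sketch of the two-stage attribute-recognition pipeline.
# Stand-in functions replace the trained CNN models from the abstract.

def detect_products(image):
    """Stage 1 (first CNN stand-in): return (product_id, bounding_box)
    pairs for the input image. Boxes are (x1, y1, x2, y2)."""
    return [("shirt", (10, 10, 60, 90)), ("shoe", (70, 40, 110, 80))]

def crop(image, box):
    """Crop a row-major image (list of pixel rows) to a bounding box."""
    x1, y1, x2, y2 = box
    return [row[x1:x2] for row in image[y1:y2]]

# Stage 2 (second CNN stand-ins): one attribute model per product class.
ATTRIBUTE_MODELS = {
    "shirt": lambda cropped: {"color": "blue", "sleeve": "long"},
    "shoe": lambda cropped: {"color": "black", "style": "sneaker"},
}

def recognize_attributes(image):
    """Run detection, crop each product, and apply its attribute model."""
    results = {}
    for product_id, box in detect_products(image):
        cropped = crop(image, box)
        results[product_id] = ATTRIBUTE_MODELS[product_id](cropped)
    return results

image = [[0] * 120 for _ in range(100)]  # dummy 120x100 image
print(recognize_attributes(image))
```

In a real system, each stage would only be used for inference once its validation accuracy clears the corresponding predetermined threshold.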
Detection and mitigation of unsafe behaviors using computer vision
In some examples, a system can access video data collected from one or more image sensors, the video data showing a region of interest proximate to a machine. The system can execute an object detection model to detect that a person is within the region of interest proximate to the machine based on the video data. The system can detect a motion status of a component of the machine. The system can execute a pose estimation model on the video data to estimate a pose of the person with respect to the machine. The system can detect a safety rule violation based on the pose of the person with respect to the machine, and the motion status of the machine. The system can transmit a signal to a controller of the machine in response to detecting the safety rule violation.
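The rule-violation logic combines three signals: person-in-region detection, machine motion status, and pose. A minimal sketch, assuming a distance-based pose criterion and a list standing in for the machine controller (both assumptions; the patent does not fix the rule form):

```python
# Illustrative safety-rule check combining detection, motion status,
# and an estimated person-to-machine distance from pose estimation.

def violates_safety_rule(person_in_roi, machine_moving, pose_distance_m,
                         min_clearance_m=1.0):
    """A violation requires all three: a person inside the region of
    interest, a moving machine component, and a pose placing the person
    closer than the minimum clearance (assumed rule)."""
    return person_in_roi and machine_moving and pose_distance_m < min_clearance_m

def on_frame(detections, motion_status, pose_distance_m, controller):
    """Per-frame handler: signal the controller when a violation occurs."""
    if violates_safety_rule(bool(detections), motion_status == "moving",
                            pose_distance_m):
        controller.append("STOP")  # stand-in for the controller signal

controller_signals = []
on_frame(detections=["person"], motion_status="moving",
         pose_distance_m=0.4, controller=controller_signals)
print(controller_signals)
```

Because all three conditions must hold, a stationary machine or an empty region of interest produces no signal.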
Image processing method and apparatus, computer device, storage medium, and computer program product
An image processing method includes performing additional image feature extraction on a training source face image to obtain a source additional image feature, performing identity feature extraction on the training source face image to obtain a source identity feature, inputting a training template face image into an encoder in a to-be-trained face swapping model to obtain a face attribute feature, inputting the source additional image feature, the source identity feature, and the face attribute feature into a decoder in the face swapping model for decoding to obtain a decoded face image, obtaining a target model loss value based on an additional image difference between the decoded face image and a comparative face image, and adjusting the model parameters of the encoder and the decoder based on the target model loss value to obtain the trained face swapping model.
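The target model loss is driven by an "additional image difference" between the decoded face image and a comparative face image. A minimal sketch, assuming a mean-absolute-pixel-difference metric and a single weight (both assumptions; the patent does not fix the metric or loss composition):

```python
# Illustrative loss for the face-swapping model, using an assumed L1
# pixel metric over flattened images for the "additional image difference".

def l1_difference(decoded, comparative):
    """Mean absolute pixel difference between two flattened images."""
    assert len(decoded) == len(comparative)
    return sum(abs(d - c) for d, c in zip(decoded, comparative)) / len(decoded)

def target_model_loss(decoded, comparative, weight=1.0):
    """Target loss from the additional-image difference; the weight and
    any further terms (identity, reconstruction) are assumptions."""
    return weight * l1_difference(decoded, comparative)

print(target_model_loss([0.2, 0.8, 0.5], [0.2, 0.4, 0.7]))
```

In training, this scalar would be backpropagated to adjust the encoder and decoder parameters.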
Biological sample analysis device
An object of the present disclosure is to provide a technique that acquires an analysis target region and color information without the loss of extraction accuracy caused by erroneously extracting the color of a colored label, measures the solution volume of the specimen, and determines the specimen type. The biological specimen analysis device according to the present disclosure creates a developed view by cutting out partial regions from color images of a biological sample tube and connecting them along the circumferential direction of the tube, and extracts a detection target region from the developed view (see FIG. 6B).
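The developed-view construction can be sketched as follows. This is an illustrative toy, assuming images of the tube captured at several rotation angles and a central vertical strip cut from each (the strip choice and angle sampling are assumptions, not the patent's specification):

```python
# Illustrative "developed view": cut a vertical strip from each rotated
# image of the sample tube and connect the strips side by side along the
# circumferential direction. Images are row-major lists of pixel rows.

def center_strip(image, strip_width):
    """Cut a vertical strip of the given width from the image center,
    where distortion from the tube's curvature is smallest (assumption)."""
    width = len(image[0])
    start = (width - strip_width) // 2
    return [row[start:start + strip_width] for row in image]

def developed_view(rotated_images, strip_width):
    """Concatenate one strip per rotation angle into an unrolled view."""
    strips = [center_strip(img, strip_width) for img in rotated_images]
    return [sum((s[r] for s in strips), []) for r in range(len(strips[0]))]

# Dummy images at four rotation angles, pixel value = angle, 4 rows x 6 cols.
imgs = [[[a] * 6 for _ in range(4)] for a in (0, 90, 180, 270)]
view = developed_view(imgs, strip_width=2)
print(len(view), len(view[0]))  # 4 rows, 8 columns
```

Label colors and the detection target region would then be extracted from this unrolled view rather than from any single curved image.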
IMAGE RETRIEVAL METHOD, MODEL TRAINING METHOD, APPARATUS, AND STORAGE MEDIUM
This application relates to an image retrieval method, a model training method, an apparatus, and a storage medium. The method may include: segmenting a target region in a to-be-retrieved image through a first neural network, to obtain at least one first feature, where the first feature is an intermediate feature extracted in a process of segmenting the target region; processing the to-be-retrieved image and the at least one first feature through a second neural network, to obtain a feature corresponding to the target region; generating, based on the feature corresponding to the target region, target code corresponding to the to-be-retrieved image; and retrieving a target image from a preset image set based on the target code. In this way, through information exchange between the first neural network and the second neural network, utilization of a feature extracted by the first neural network is improved, effectively improving accuracy of similarity retrieval.
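The final retrieval step can be sketched as follows. This is an illustrative stand-in: the sign-threshold coding and Hamming-distance search are assumed conventions for hash-based retrieval, and the feature vectors here replace the outputs of the two neural networks:

```python
# Illustrative code-based retrieval: features become binary target codes
# by sign thresholding, and the preset image set is searched by Hamming
# distance. Real features would come from the second neural network.

def to_code(feature):
    """Binarize a feature vector into a target code (assumed scheme)."""
    return tuple(1 if f > 0 else 0 for f in feature)

def hamming(a, b):
    """Number of differing bits between two codes."""
    return sum(x != y for x, y in zip(a, b))

def retrieve(query_feature, image_set):
    """image_set maps image name -> feature; return the closest image."""
    query_code = to_code(query_feature)
    return min(image_set,
               key=lambda name: hamming(query_code, to_code(image_set[name])))

image_set = {"cat.png": [0.9, -0.2, 0.4], "dog.png": [-0.5, 0.7, -0.1]}
print(retrieve([0.8, -0.1, 0.3], image_set))
```

Binary codes make the search over the preset image set cheap, which is why the feature-to-code step matters for retrieval accuracy.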
METHOD FOR ARTIFACT REMOVAL OF MEDICAL IMAGE AND DSA SYSTEM
A method for artifact removal of a medical image and a DSA system are provided. The method includes: acquiring an image set of a collected position in a target portion, the image set including a mask image and a contrast image in one-to-one correspondence; performing global registration on the mask image and the contrast image of the collected position, and obtaining a first registration result; performing local registration on the image set after the global registration based on the first registration result, and obtaining a corresponding second registration result; and performing subtraction on the image set after the local registration based on the second registration result, and obtaining a subtraction image of the target portion.
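The register-then-subtract pipeline can be sketched as follows. This toy uses a whole-image row shift for global registration and omits the local refinement step; real DSA registration is far more involved (deformable, block-wise), so treat every function here as an assumed simplification:

```python
# Illustrative DSA subtraction: globally register the mask image to the
# contrast image (toy row-shift model), then subtract pixel-wise so that
# only the contrast agent (vessels) remains. Images are lists of rows.

def shift_rows(image, dy):
    """Globally register by shifting the image down dy rows (toy model;
    real global registration estimates a full transform)."""
    h = len(image)
    return [image[(r - dy) % h] for r in range(h)]

def subtract(contrast, mask):
    """Pixel-wise subtraction of the registered mask from the contrast."""
    return [[c - m for c, m in zip(cr, mr)] for cr, mr in zip(contrast, mask)]

mask = [[1, 1], [5, 5], [1, 1]]        # anatomy only, bright band in row 1
contrast = [[1, 1], [1, 1], [9, 9]]    # anatomy shifted down 1 row + vessel (+4)
registered = shift_rows(mask, 1)       # first (global) registration result
print(subtract(contrast, registered))  # anatomy cancels; vessel survives
```

The patent's local registration would refine this result block by block before the final subtraction, reducing residual motion artifacts that a single global transform cannot remove.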
FORGERY DETECTION OF FACE IMAGE
In implementations of the subject matter described herein, there is provided a method for forgery detection of a face image. After a face image is input, the method detects whether the image contains a blending boundary resulting from blending different images, and generates a corresponding grayscale image based on the result of the detection; the generated grayscale image reveals whether the input face image was formed by blending different images. If a visible boundary corresponding to the blending boundary exists in the generated grayscale image, the face image is a forged image; conversely, if no visible boundary exists, the face image is a real image.
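The final decision step can be sketched as follows. This is an illustrative simplification: modeling "a visible boundary exists" as any grayscale pixel exceeding a threshold is an assumption, not the patent's criterion, and the grayscale maps here stand in for the generated boundary images:

```python
# Illustrative forgery decision on the generated grayscale image: a face
# is flagged as forged when a visible blending boundary exists, modeled
# here as any pixel above a visibility threshold (assumed criterion).

def is_forged(grayscale, threshold=0.5):
    """True when the grayscale boundary map shows a visible boundary."""
    return any(pixel > threshold for row in grayscale for pixel in row)

real_map = [[0.0, 0.0], [0.1, 0.0]]    # no visible boundary -> real image
forged_map = [[0.0, 0.9], [0.0, 0.8]]  # bright boundary ridge -> forged
print(is_forged(real_map), is_forged(forged_map))
```

Because the detector targets the blending operation itself rather than any particular generation method, it can generalize to face forgeries it was not explicitly trained on.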
DATA CREATION SYSTEM, DATA CREATION METHOD, AND PROGRAM
A data creation system includes a first image acquirer, a second image acquirer, a segmenter, a range generator, and a creator. The first image acquirer acquires a first image representing a first object including a particular part. The second image acquirer acquires a second image representing a second object. The segmenter divides at least one of the first image or the second image into a plurality of regions. The range generator generates, based on a result of segmentation obtained by the segmenter, a single or plurality of range patterns. The creator superposes, in accordance with at least one range pattern belonging to the single or plurality of range patterns, the particular part on the second image to create a single or plurality of superposed images and output the single or plurality of superposed images as learning data.
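The creator's superposition step can be sketched as follows. This is an illustrative toy: the binary range pattern standing in for the segmenter/range-generator output, and the direct pixel copy, are assumptions about how superposition is realized:

```python
# Illustrative creator step: superpose a particular part onto the second
# image wherever a range pattern (a binary mask derived from the
# segmentation result) is set, producing one learning-data image.

def superpose(second_image, part, range_pattern):
    """Copy `part` pixels onto `second_image` where range_pattern is 1."""
    return [
        [p if m else s for s, p, m in zip(srow, prow, mrow)]
        for srow, prow, mrow in zip(second_image, part, range_pattern)
    ]

second = [[0, 0, 0], [0, 0, 0]]   # second image (background object)
part = [[7, 7, 7], [7, 7, 7]]     # the particular part (e.g. defect patch)
pattern = [[0, 1, 0], [1, 1, 0]]  # one range pattern from segmentation
print(superpose(second, part, pattern))
```

Generating several range patterns from one segmentation result lets the system emit many distinct superposed images as learning data from a single image pair.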
Autonomous vehicle sensor security, authentication and safety
A method includes obtaining, by a processing device, an impact analysis configuration related to an image sensor operation type for an autonomous vehicle (AV), receiving, by the processing device, image data from a sensing system including at least one image sensor of the AV, causing, by the processing device, fault detection to be performed based on the image data, causing, by the processing device, a fault notification to be generated using the impact analysis configuration, and sending, by the processing device to a data processing system of the AV, the fault notification to perform at least one action to address the fault notification. The fault notification includes a fault summary related to the image sensor operation type.
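The fault path can be sketched as follows. Everything here is a hypothetical stand-in: the all-zero-frame fault test, the configuration keys, and the notification fields are assumptions chosen only to show the obtain-detect-notify-act flow:

```python
# Illustrative fault path: detect a fault from image data, build a
# notification whose summary and action come from the impact-analysis
# configuration for the sensor operation type, and deliver it to the
# AV's data processing system (a list stand-in here).

def detect_fault(image_data):
    """Toy fault detector: an all-zero frame means a dead image sensor."""
    return all(v == 0 for v in image_data)

def make_notification(impact_config, operation_type):
    """Build a fault notification tied to the sensor operation type."""
    return {"summary": f"fault in {operation_type}",
            "action": impact_config[operation_type]}

# Hypothetical impact-analysis configuration for one operation type.
impact_config = {"exposure_control": "switch_to_backup_camera"}
received = []  # stand-in for the AV's data processing system
if detect_fault([0, 0, 0, 0]):
    received.append(make_notification(impact_config, "exposure_control"))
print(received)
```

Tying the notification to the operation type lets the data processing system pick an action proportionate to the fault's actual impact on driving.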