Patent classifications
G06V10/809
Operator Behavior Monitoring System
An operator behavior monitoring system, which includes: an operator behavior recognition system comprising hardware including a processor, a data storage facility and input/output interfaces, the system being configured to implement a set of convolutional neural networks including: an object detection group for detecting an object in an image and delineating the object from the image; a feature extraction group which extracts features of the object detected by the object detection group; a classifier group which assesses the features, classifies the features into events, and is operable to report the events to a remote server; a server operable to communicate with the operator behavior recognition system for receiving predefined events detected by the operator behavior recognition system; a database in communication with the server, operable to store and retrieve detected operator incidents; and a web frontend in communication with the server for interfacing with the server.
Systems and Methods Using Weighted-Ensemble Supervised-Learning for Automatic Detection of Ophthalmic Disease from Images
Disclosed herein are systems, methods, and devices for classifying ophthalmic images according to disease type, state, and stage. The disclosed invention details systems, methods, and devices that perform the aforementioned classification based on weighted-linkage of an ensemble of machine learning models. In some embodiments, each model is trained on a training dataset and tested on a test dataset. In other embodiments, the models are ranked based on classification performance, and model weights are assigned based on model rank. To classify an ophthalmic image, the image is presented to each model of the ensemble for classification, yielding a probabilistic classification score from each model. Using the model weights, a weighted average of the individual model-generated probabilistic scores is computed and used for the classification.
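The rank-and-weight scheme described in this abstract could be sketched as follows. The concrete rank-to-weight mapping (weight proportional to rank, normalized to sum to one) is an assumption for illustration; the abstract only says weights are assigned based on model rank:

```python
def rank_based_weights(accuracies):
    # Rank models by test accuracy (rank 1 = worst); weight is
    # proportional to rank, normalized so the weights sum to 1.
    sorted_idx = sorted(range(len(accuracies)), key=lambda i: accuracies[i])
    ranks = [0] * len(accuracies)
    for rank, i in enumerate(sorted_idx, start=1):
        ranks[i] = rank
    total = sum(ranks)
    return [r / total for r in ranks]

def weighted_average_scores(per_model_scores, weights):
    # per_model_scores: one list of per-class probabilities per model.
    n_classes = len(per_model_scores[0])
    return [sum(w * s[c] for w, s in zip(weights, per_model_scores))
            for c in range(n_classes)]

# Three hypothetical models; the class with the highest weighted-average
# probability is the ensemble's classification.
weights = rank_based_weights([0.80, 0.92, 0.85])
scores = [[0.6, 0.4], [0.2, 0.8], [0.5, 0.5]]
combined = weighted_average_scores(scores, weights)
predicted_class = max(range(len(combined)), key=combined.__getitem__)
```

Here the weakest model contributes only 1/6 of the final score, so the two stronger models dominate the decision.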
METHOD FOR TRAINING SMPL PARAMETER PREDICTION MODEL, COMPUTER DEVICE, AND STORAGE MEDIUM
A method for training an SMPL parameter prediction model, including: obtaining a sample picture; inputting the sample picture into a pose parameter prediction model to obtain a predicted pose parameter; inputting the sample picture into a shape parameter prediction model to obtain a predicted shape parameter; calculating model prediction losses according to the SMPL parameter prediction model and annotation information of the sample picture; and updating the pose parameter prediction model and the shape parameter prediction model according to the model prediction losses.
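The loss calculation in the final two steps could be sketched as below. Treating each loss as a mean-squared error against the annotated SMPL parameters, with adjustable weights per term, is an assumption; the abstract does not specify the loss formulation:

```python
def mse(predicted, target):
    # Mean-squared error over a flat parameter vector.
    return sum((p - t) ** 2 for p, t in zip(predicted, target)) / len(predicted)

def smpl_prediction_loss(pred_pose, pred_shape, gt_pose, gt_shape,
                         pose_weight=1.0, shape_weight=1.0):
    # Combined loss over SMPL pose (theta) and shape (beta) parameters;
    # both branches are updated from this single scalar.
    return (pose_weight * mse(pred_pose, gt_pose)
            + shape_weight * mse(pred_shape, gt_shape))

# Toy 2-dimensional parameters for illustration (real SMPL parameters are
# 72-dimensional pose and 10-dimensional shape vectors):
loss = smpl_prediction_loss([0.1, 0.0], [1.0, 1.0], [0.0, 0.0], [1.0, 1.0])
```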
Classifying Time Series Image Data
The present invention extends to methods, systems, and computer program products for classifying time series image data. Aspects of the invention include encoding motion information from video frames in an eccentricity map. An eccentricity map is essentially a static image that aggregates apparent motion of objects, surfaces, and edges, from a plurality of video frames. In general, eccentricity reflects how different a data point is from the past readings of the same set of variables. Neural networks can be trained to detect and classify actions in videos from eccentricity maps. Eccentricity maps can be provided to a neural network as input. Output from the neural network can indicate if detected motion in a video is or is not classified as an action, such as, for example, a hand gesture.
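The eccentricity notion described above (how different a data point is from past readings) has a well-known recursive formulation. A minimal per-pixel sketch, assuming that commonly cited recursive (TEDA-style) definition rather than the patent's exact formulation:

```python
def update_eccentricity(x, k, mean, var):
    # Recursive running-mean and running-variance updates, then the
    # eccentricity of sample x relative to the pixel's history.
    mean = (k - 1) / k * mean + x / k
    var = ((k - 1) / k * var + (x - mean) ** 2 / (k - 1)) if k > 1 else 0.0
    ecc = 1.0 / k if var == 0 else 1.0 / k + (mean - x) ** 2 / (k * var)
    return ecc, mean, var

# A static pixel's eccentricity decays toward the minimum value 1/k,
# so moving regions stand out in the aggregated eccentricity map.
mean = var = 0.0
ecc = None
for k, x in enumerate([5.0, 5.0, 5.0], start=1):
    ecc, mean, var = update_eccentricity(x, k, mean, var)
```

Running this update for every pixel over a window of frames yields a single static map that a neural network can classify in place of the raw video.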
Operator Behavior Recognition System
An operator behavior recognition system comprising hardware including at least one processor, a data storage facility in communication with the processor and input/output interfaces in communication with the processor, the hardware being configured to implement a set of convolutional neural networks (CNNs) including: an object detection group into which at least one image is received from an image source for detecting at least one object in the image and delineating the object from the image for further processing, at least one of the detected objects being a face of a person; a facial features extraction group into which the image of the person's face is received and from which facial features from the person's face are extracted; and a classifier group which assesses the facial features received from the facial features extraction group in combination with objects detected by the object detection group to classify predefined operator behaviors.
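The data flow between the three CNN groups could be wired as below. The callables stand in for trained networks, and the dictionary interface and the example "phone use" behavior are hypothetical, chosen only to show how detections and facial features feed the classifier:

```python
def recognize_operator_behavior(image, detect, extract_features, classify):
    objects = detect(image)                           # object detection group
    faces = [o for o in objects if o["label"] == "face"]
    features = [extract_features(f["crop"]) for f in faces]
    # Classifier group: facial features combined with all detected objects.
    return [classify(fv, objects) for fv in features]

# Toy stand-ins illustrating the flow:
detect = lambda img: [{"label": "face", "crop": "face_px"},
                      {"label": "phone", "crop": "phone_px"}]
extract = lambda crop: {"eyes": "closed"} if crop == "face_px" else {}
classify = lambda feats, objs: ("phone_use"
                                if any(o["label"] == "phone" for o in objs)
                                else "attentive")
behaviors = recognize_operator_behavior("frame", detect, extract, classify)
```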
Line detector for vehicle and method for detecting line for vehicle
A line detector apparatus and method on a vehicle for detecting a line on a road with a higher degree of accuracy. The vehicle includes front, right-side and left-side image capturing sensors mounted on the vehicle that respectively capture an image, including a road surface, at the front, right and left sides of the vehicle to generate front, right-side and left-side images. The line detector includes a processor that calculates a line on a road as a first line from the front image, the line on the road as a second line from the right-side image, and the line on the road as a third line from the left-side image. The processor selects one of multiple mutually-different algorithms based on the first to third lines, and calculates the line on the road based on the first to third lines by using the selected algorithm.
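The select-then-calculate step could be sketched as below. The two algorithms (average when all three estimates agree, median otherwise) and the representation of a line by a single slope value are illustrative assumptions; the patent does not disclose the concrete algorithms:

```python
from statistics import median

def fuse_lane_lines(first, second, third, agree_tol=0.2):
    # Each argument is a line estimate (here, a slope) from one camera.
    estimates = [first, second, third]
    spread = max(estimates) - min(estimates)
    if spread <= agree_tol:
        # Algorithm 1: the three cameras agree, so average the estimates.
        return sum(estimates) / 3
    # Algorithm 2: an outlier is present, so fall back to the median.
    return median(estimates)
```

Selecting between mutually different fusion rules lets a single bad view (e.g. glare on one side camera) degrade gracefully instead of corrupting the averaged line.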
IMAGE DETERMINATION DEVICE, IMAGE DETERMINATION METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM STORING PROGRAM
An image determination device includes feature extractors which output, on the basis of an image to be examined, each piece of feature data indicating a specific feature of the image; a first training part which trains a first determiner so as to output first output data indicating first label data associated with a first training image on the basis of first feature data output when the first training image is input to the feature extractors; a second training part which trains a second determiner so as to output second output data indicating second label data associated with a second training image on the basis of second feature data output when the second training image is input to the feature extractors; and an output part which outputs, on the basis of the first output data and the second output data, output data indicating an overall determination result about an image.
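The shared-extractor, two-determiner structure could be sketched as below. The combination rule (logical AND of the two determiner outputs) is an illustrative assumption; the claim only says the overall result is based on both output data:

```python
def overall_determination(image, feature_extractors, determiner1, determiner2,
                          combine=lambda a, b: a and b):
    # The same feature extractors feed both separately trained determiners;
    # only the combination of their outputs is reported.
    features = [extract(image) for extract in feature_extractors]
    return combine(determiner1(features), determiner2(features))

# Toy stand-ins: two defect determiners sharing two extracted features.
extractors = [lambda img: img.count("scratch"), lambda img: img.count("dent")]
has_scratch = lambda feats: feats[0] > 0     # first determiner
has_dent = lambda feats: feats[1] > 0        # second determiner
defective = overall_determination("scratch dent", extractors,
                                  has_scratch, has_dent)
```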
IMAGE PROCESSING METHOD AND APPARATUS USING NEURAL NETWORK
An image processing method and apparatus using a neural network are provided. The image processing method includes generating a plurality of augmented features by augmenting an input feature, and generating a prediction result based on the plurality of augmented features.
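The augment-then-predict idea could be sketched as below. Mean-pooling the per-copy predictions is an assumed reduction; the abstract only says the prediction result is generated based on the plurality of augmented features:

```python
def predict_with_augmented_features(feature, augmentations, head):
    # Generate several augmented copies of the input feature, run each
    # through the prediction head, and average the results element-wise.
    predictions = [head(aug(feature)) for aug in augmentations]
    return [sum(values) / len(predictions) for values in zip(*predictions)]

# Toy stand-ins: two feature-space augmentations and an identity head.
augmentations = [lambda f: [x + 1 for x in f], lambda f: [x - 1 for x in f]]
identity_head = lambda f: f
result = predict_with_augmented_features([1, 2], augmentations, identity_head)
```

Averaging over feature-space augmentations plays the same smoothing role as classic test-time image augmentation, but without re-running the backbone on multiple images.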
SYSTEM AND METHOD FOR TRAINING A MACHINE LEARNING MODEL
A system for training a machine learning model, the system comprising a data input unit configured to receive a training data set comprising a plurality of data points and a plurality of targets associated therewith, wherein a subset of the plurality of data points include a protected characteristic; and a training unit operable to update a current model configuration of the machine learning model. The training unit comprises a prediction unit configured to receive the training data set as input and output a plurality of predicted scores based on the current model configuration of the machine learning model; and an optimisation unit configured to receive the plurality of predicted scores and subsequently determine an updated model configuration of the machine learning model. The system further comprises a control unit configured to constrain operation of the training unit based at least in part on an estimated relationship between the plurality of predicted scores and the protected characteristic, such that the influence of the protected characteristic on a subsequent model configuration of the machine learning model is substantially mitigated.
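The control unit's constraint could be sketched as a penalty on the estimated relationship between predicted scores and the protected characteristic. Using Pearson correlation as the relationship estimator and an additive penalty on the training loss are assumptions; the claim leaves both open:

```python
def correlation(xs, ys):
    # Pearson correlation between predicted scores and a protected
    # characteristic (one possible relationship estimator).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    if vx == 0 or vy == 0:
        return 0.0
    return cov / (vx * vy) ** 0.5

def constrained_loss(base_loss, scores, protected, penalty=10.0):
    # Penalizing |correlation| steers optimisation toward configurations
    # where the protected characteristic's influence is mitigated.
    return base_loss + penalty * abs(correlation(scores, protected))
```

A strong score/characteristic correlation inflates the loss, so the optimisation unit is driven away from model configurations that rely on the protected characteristic.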
SYSTEM AND METHOD FOR IDENTIFYING ITEMS
The method for item recognition can include: optionally calibrating a sampling system, determining visual data using the sampling system, determining a point cloud, determining region masks based on the point cloud, generating a surface reconstruction for each item, generating image segments for each item based on the surface reconstruction, and determining a class identifier for each item using the respective image segments.
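The steps above can be orchestrated as below. Each step is a placeholder callable (hypothetical interfaces, not the patented implementations), shown only to make the per-item flow from point cloud to class identifier explicit:

```python
def identify_items(sampling_system, steps):
    visual = steps["capture"](sampling_system)        # visual data
    cloud = steps["point_cloud"](visual)              # 3D point cloud
    masks = steps["region_masks"](cloud)              # one region per item
    classes = []
    for mask in masks:
        surface = steps["reconstruct_surface"](cloud, mask)
        segments = steps["segment_image"](visual, surface)
        classes.append(steps["classify"](segments))   # class identifier
    return classes

# Toy stand-ins showing the data flow through the pipeline:
steps = {
    "capture": lambda s: "rgb",
    "point_cloud": lambda v: "cloud",
    "region_masks": lambda c: ["apple", "mug"],
    "reconstruct_surface": lambda c, m: m + "_surface",
    "segment_image": lambda v, s: s + "_segments",
    "classify": lambda seg: seg.split("_")[0],
}
labels = identify_items("scanner", steps)
```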