G06V10/7753

System and method for vehicle occlusion detection
10783381 · 2020-09-22

A system and method for vehicle occlusion detection is disclosed. A particular embodiment includes: receiving training image data from a training image data collection system; obtaining ground truth data corresponding to the training image data; performing a training phase to train a plurality of classifiers, a first classifier being trained for processing static images of the training image data, a second classifier being trained for processing image sequences of the training image data; receiving image data from an image data collection system associated with an autonomous vehicle; and performing an operational phase including performing feature extraction on the image data, determining a presence of an extracted feature instance in multiple image frames of the image data by tracing the extracted feature instance back to a previous plurality of N frames relative to a current frame, applying the first trained classifier to the extracted feature instance if the extracted feature instance cannot be determined to be present in multiple image frames of the image data, and applying the second trained classifier to the extracted feature instance if the extracted feature instance can be determined to be present in multiple image frames of the image data.
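
The classifier-dispatch step described above can be sketched as follows. This is a toy illustration, not the patented implementation: the frame representation, the `N` value, and the classifier stand-ins are all assumptions.

```python
# Sketch: a feature instance from the current frame is traced back through
# the previous N frames; if it appears in multiple frames, the sequence
# classifier is applied, otherwise the static-image classifier.

N = 5  # number of previous frames to trace back through (illustrative)

def classify_static(instance):
    # stand-in for the first trained classifier (static images)
    return ("static", instance["id"])

def classify_sequence(instance, history):
    # stand-in for the second trained classifier (image sequences)
    return ("sequence", instance["id"], len(history))

def dispatch(instance, frame_history):
    """Trace the instance through the last N frames and pick a classifier."""
    recent = frame_history[-N:]
    hits = [f for f in recent if instance["id"] in f]  # frames containing it
    if len(hits) > 1:                 # present in multiple image frames
        return classify_sequence(instance, hits)
    return classify_static(instance)  # cannot be traced across frames
```

Here each frame is modeled as a set of detected feature-instance ids, so presence across frames reduces to set membership.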

Information processing apparatus, information processing method, and storage medium for generating teacher information
10783402 · 2020-09-22

An information processing apparatus performs estimation processing on supervised data and stores the relationship between the teacher information and the estimation result. When unsupervised data is input, the information processing apparatus searches for supervised data whose estimation result has a high degree of similarity to that of the unsupervised data, and generates teacher information from the estimation result of the unsupervised data based on the relationship between the teacher information and the estimation result of the retrieved supervised data.
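
The similarity search and teacher-information transfer can be sketched as below. The similarity metric (negative squared distance) and the data layout are illustrative assumptions; the abstract does not specify them.

```python
# Sketch: given an estimation result for unsupervised data, find the
# supervised sample whose stored estimation result is most similar and
# transfer its teacher (label) information.

def similarity(a, b):
    # negative squared distance between estimation-result vectors
    return -sum((x - y) ** 2 for x, y in zip(a, b))

def generate_teacher(unsup_result, store):
    """store: list of (estimation_result, teacher_info) pairs for supervised data."""
    best_result, best_teacher = max(store, key=lambda e: similarity(unsup_result, e[0]))
    return best_teacher  # teacher info derived from the closest supervised sample
```

For example, an unsupervised estimation result close to a stored "cat" result inherits the "cat" teacher information.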

SYSTEM AND METHOD FOR LEARNING SENSORY MEDIA ASSOCIATION WITHOUT USING TEXT LABELS

A computer-implemented method of learning sensory media association includes receiving a first type of nontext input and a second type of nontext input; encoding and decoding the first type of nontext input using a first autoencoder having a first convolutional neural network, and the second type of nontext input using a second autoencoder having a second convolutional neural network; bridging first autoencoder representations and second autoencoder representations by a deep neural network that learns mappings between the first autoencoder representations associated with a first modality and the second autoencoder representations associated with a second modality; and based on the encoding, decoding, and the bridging, generating a first type of nontext output and a second type of nontext output based on the first type of nontext input or the second type of nontext input in either the first modality or the second modality.
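
The bridged-autoencoder idea can be reduced to a toy sketch: each modality has its own encoder/decoder pair, and a bridge maps modality-1 codes to modality-2 codes, so input in one modality can be decoded in the other. Real systems would use convolutional networks and a learned deep bridge; these linear stand-ins are assumptions for illustration only.

```python
# Toy encoders/decoders for two modalities, plus a bridge between their
# latent representations. A modality-1 input is translated into a
# modality-2 output without any text labels.

def enc1(x): return [v * 0.5 for v in x]      # modality-1 encoder
def dec1(z): return [v / 0.5 for v in z]      # modality-1 decoder
def enc2(x): return [v * 2.0 for v in x]      # modality-2 encoder
def dec2(z): return [v / 2.0 for v in z]      # modality-2 decoder
def bridge12(z): return [v * 4.0 for v in z]  # learned mapping: code1 -> code2

def cross_modal(x1):
    """Translate a modality-1 input into a modality-2 output via the bridge."""
    return dec2(bridge12(enc1(x1)))
```

Within a single modality, `dec1(enc1(x))` reconstructs the input, which is the autoencoder training objective the abstract relies on.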

POWER ELECTRONIC CIRCUIT FAULT DIAGNOSIS METHOD BASED ON OPTIMIZING DEEP BELIEF NETWORK

A fault diagnosis method for power electronic circuits based on optimizing a deep belief network, including the following steps. (1) Use an RT-LAB hardware-in-the-loop simulator to set up fault experiments and collect DC-link output voltage signals under different fault types. (2) Use empirical mode decomposition to extract the intrinsic mode function components of the output voltage signal and their envelope spectra, and calculate various statistical features to construct the original fault feature data set. (3) Using a feature selection method based on an extreme learning machine, remove redundant and interfering features to obtain a fault-sensitive feature data set. (4) Divide the fault-sensitive feature set into training samples and test samples, and preliminarily determine the structure of the deep belief network. (5) Use the crow search algorithm to optimize the deep belief network. (6) Obtain the fault diagnosis result.
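
Step (2) can be sketched as follows. The specific feature set (mean, RMS, peak) is an assumption; the abstract only says "various statistical features" are computed from the decomposed signal components.

```python
# Sketch: compute simple statistical features from each intrinsic mode
# function component of a decomposed voltage signal and concatenate them
# into an original fault feature vector.
import math

def stat_features(component):
    n = len(component)
    mean = sum(component) / n
    rms = math.sqrt(sum(v * v for v in component) / n)  # root mean square
    peak = max(abs(v) for v in component)
    return [mean, rms, peak]

def build_feature_vector(imfs):
    """imfs: intrinsic mode function components from empirical mode decomposition."""
    feats = []
    for c in imfs:
        feats.extend(stat_features(c))
    return feats
```

The resulting vectors would then go through feature selection (step 3) before training the deep belief network.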

LEARNING-BASED 3D MODEL CREATION APPARATUS AND METHOD

Disclosed herein are a learning-based three-dimensional (3D) model creation apparatus and method. A method for operating a learning-based 3D model creation apparatus includes generating multi-view feature images using supervised learning, creating a three-dimensional (3D) mesh model using a point cloud corresponding to the multi-view feature images and a feature image representing internal shape information, generating a texture map by projecting the 3D mesh model into three viewpoint images that are input, and creating a 3D model using the texture map.
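
The texture-map step involves projecting the 3D mesh into three viewpoint images; a minimal sketch of that projection is below. The orthographic projection and the axis choice per viewpoint are illustrative assumptions.

```python
# Sketch: orthographically project a 3D mesh vertex onto one of three
# viewpoint images to obtain its 2D texture coordinates.

VIEWS = {
    "front": lambda p: (p[0], p[1]),  # drop z
    "side":  lambda p: (p[2], p[1]),  # drop x
    "top":   lambda p: (p[0], p[2]),  # drop y
}

def project(vertex, view):
    """Return 2D texture coordinates of a vertex in the given viewpoint image."""
    return VIEWS[view](vertex)
```

Each mesh face would be textured from whichever viewpoint image sees it most directly; that visibility test is omitted here.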

Generator-to-classifier framework for object classification

A computer-implemented method, computer system, and computer program product are provided for data classification. The method includes receiving original data, wherein the original data includes at least one object in a first condition. The method also includes receiving generated data from a generator based on the original data, wherein the generated data includes the at least one object in a second condition, the generator trained by training data of the first condition and training data of the second condition. The method further includes determining a classification of the at least one object with a classifier based on the original data and the generated data, the classifier trained by labeled data of the first condition and additional training data of the second condition that is generated from the labeled data by the trained generator, wherein the labeled data includes the at least one object.
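
Inference in this generator-to-classifier framework can be sketched as below. The generator and classifier stand-ins, and the simple score rule, are illustrative assumptions, not the patented models.

```python
# Sketch: the trained generator maps the object from its first condition
# to the second condition, and the classifier consumes both versions.

def generator(x):
    # stand-in for the trained generator (renders the object in condition 2)
    return [v + 1.0 for v in x]

def classifier(x1, x2):
    # stand-in classifier taking original and generated data together
    score = sum(x1) + sum(x2)
    return "positive" if score > 0 else "negative"

def classify(original):
    generated = generator(original)  # the object in the second condition
    return classifier(original, generated)
```

The point of the framework is that the classifier always sees the object in both conditions, even when only one condition was observed.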

SYSTEMS, TECHNIQUES, AND INTERFACES FOR OBTAINING AND ANNOTATING TRAINING INSTANCES

A previously trained classification model associated with the machine learning system is configured to process an input to generate i) a first prediction that represents a characteristic associated with the input, and ii) a measure of accuracy associated with the first prediction. A retraining subsystem is configured to receive the input, the first prediction, and the measure of accuracy. The retraining subsystem processes the input to generate a second prediction representing the characteristic. A sufficiency of certainty of the first prediction is determined based on at least the input, the first prediction, the measure of accuracy, and the second prediction. Based at least on the determined sufficiency, the retraining subsystem causes the machine learning system to be retrained automatically, retrained using the input with active learning, or not retrained.
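
The three-way retraining decision can be sketched as follows. The confidence threshold and the agreement rule are illustrative assumptions; the abstract does not specify how sufficiency of certainty is computed.

```python
# Sketch: compare the primary model's prediction and accuracy measure
# with the retraining subsystem's own prediction, and choose one of
# three outcomes: no retraining, automatic retraining, or active learning.

CONF_THRESHOLD = 0.8  # illustrative sufficiency-of-certainty cutoff

def retraining_action(first_pred, accuracy, second_pred):
    """Return 'none', 'auto', or 'active_learning'."""
    if accuracy >= CONF_THRESHOLD and first_pred == second_pred:
        return "none"            # prediction is sufficiently certain
    if first_pred == second_pred:
        return "auto"            # predictions agree but confidence is low
    return "active_learning"     # disagreement: annotate the input via active learning
```

Disagreement between the two predictions is treated here as the signal that a human annotation (active learning) is worth its cost.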

Determining a Lighting Configuration Based on Context
20200229283 · 2020-07-16

During operation, a computer generates and provides, based at least in part on an initial lighting configuration in an environment and a layout of the environment, a simulation of the environment with the initial lighting configuration. Note that the initial lighting configuration includes one or more lights at predefined or predetermined locations in the environment and dynamic lighting states of the one or more lights, where the dynamic lighting state of a given light includes an intensity and a color of the given light. Moreover, based at least in part on the initial lighting configuration, the layout of the environment, and a determined context of the environment, the computer modifies the initial lighting configuration to obtain an updated lighting configuration. Next, based at least in part on the updated lighting configuration and the layout, the computer generates and selectively provides an updated simulation of the environment with the updated lighting configuration.
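
The context-driven modification step can be sketched as below. The context-to-adjustment table and the light-state schema are illustrative assumptions.

```python
# Sketch: modify each light's dynamic state (intensity, color) in the
# configuration according to the determined context of the environment.

ADJUSTMENTS = {
    "reading": {"intensity": 1.0, "color": "cool_white"},
    "movie":   {"intensity": 0.2, "color": "warm_white"},
    "default": {"intensity": 0.6, "color": "neutral"},
}

def update_configuration(config, context):
    """config: {light_id: {'intensity': float, 'color': str}} -> updated copy."""
    target = ADJUSTMENTS.get(context, ADJUSTMENTS["default"])
    return {light: dict(target) for light in config}
```

The updated configuration would then feed the second simulation pass described in the abstract.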

LEARNING A LIGHTING PREFERENCE BASED ON A REACTION TYPE

During operation, a computer provides, based at least in part on an initial lighting preference of an individual, instructions specifying initial lighting states of one or more lights in a lighting configuration in an environment, where an initial lighting state of a given light includes an intensity and a color of the given light. Then, the computer receives sensor data specifying a non-verbal physical response of the individual to the initial lighting states. Moreover, the computer determines, based at least in part on the non-verbal physical response, a type of reaction of the individual to the initial lighting states. Next, the computer selectively modifies, based at least in part on a lighting behavior history of the individual and the determined type of reaction, the initial lighting preference of the individual to obtain an updated lighting preference.
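
A toy version of the reaction-to-preference update is sketched below. The reaction rules (squinting as negative, smiling as positive) and the step size are assumptions; the abstract leaves both unspecified.

```python
# Sketch: classify a non-verbal response into a reaction type, then nudge
# the stored lighting preference toward or away from the current state.

STEP = 0.1  # illustrative update step size

def reaction_type(squint, smile):
    if squint:
        return "negative"
    return "positive" if smile else "neutral"

def update_preference(preference, current_intensity, reaction):
    if reaction == "positive":   # move preference toward the liked state
        return preference + STEP * (current_intensity - preference)
    if reaction == "negative":   # move preference away from the disliked state
        return preference - STEP * (current_intensity - preference)
    return preference            # neutral: leave preference unchanged
```

In a full system, the step size would itself depend on the individual's lighting behavior history.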

Dynamic Lighting States Based on Context
20200229289 · 2020-07-16

During operation, a computer obtains information specifying a lighting configuration of one or more lights in an environment, where the lighting configuration includes the one or more lights at predefined or predetermined locations in the environment. Then, the computer receives sensor data associated with the environment. Moreover, the computer analyzes the sensor data to determine a context associated with the environment. Then, based at least in part on the lighting configuration, a layout of the environment, and the determined context, the computer automatically determines the dynamic lighting states of the one or more lights, where a dynamic lighting state of a given light includes an intensity and a color of the given light. Next, the computer provides instructions corresponding to the dynamic lighting states to the one or more lights. Note that the dynamic lighting states may be based at least in part on a transferable profile of an individual.
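
The sensor-to-instruction flow can be sketched end to end as below. The sensor-to-context rule and the state table are illustrative assumptions.

```python
# Sketch of the described flow: sensor data -> determined context ->
# dynamic lighting states -> per-light instructions.

def infer_context(sensor):
    # e.g. motion in low ambient light suggests someone is active after dark
    if sensor["motion"] and sensor["ambient_lux"] < 50:
        return "evening_activity"
    return "idle"

STATES = {
    "evening_activity": {"intensity": 0.8, "color": "warm_white"},
    "idle":             {"intensity": 0.1, "color": "amber"},
}

def light_instructions(lights, sensor):
    """Return the dynamic lighting state instruction for each light."""
    state = STATES[infer_context(sensor)]
    return {light: dict(state) for light in lights}
```

A transferable profile could be layered on top by overriding entries of `STATES` per individual.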