G05B2219/33038

Generating a model for an object encountered by a robot

Methods and apparatus related to generating a model for an object encountered by a robot in its environment, where the object is one that the robot is unable to recognize utilizing existing models associated with the robot. The model is generated based on vision sensor data that captures the object from multiple vantages and that is captured by a vision sensor associated with the robot, such as a vision sensor coupled to the robot. The model may be provided for use by the robot in detecting the object and/or for use in estimating the pose of the object.
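
As a minimal sketch of fusing vision sensor data captured from multiple vantages into a single object model (the abstract does not specify a model representation; the 2-D point set and rigid camera poses used here are illustrative assumptions, not the patent's implementation):

```python
import math

def transform(point, pose):
    """Apply a 2-D rigid transform pose = (tx, ty, theta) to a point."""
    x, y = point
    tx, ty, th = pose
    return (tx + x * math.cos(th) - y * math.sin(th),
            ty + x * math.sin(th) + y * math.cos(th))

def build_model(views):
    """Fuse per-vantage observations into one model in the world frame.

    views: list of (camera_pose, points_in_camera_frame) pairs,
    one per vantage. Returns the union of all observed points,
    expressed in the world frame, as the object "model".
    """
    model = set()
    for pose, points in views:
        for p in points:
            wx, wy = transform(p, pose)
            # Round so the same surface point seen from two vantages
            # collapses to a single model point.
            model.add((round(wx, 6), round(wy, 6)))
    return model
```

The resulting point set could then serve both detection (matching new observations against the model) and pose estimation (solving for the transform that aligns them).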

Model Update Device, Method, and Program

An acquisition unit (11) acquires an explanatory variable to be input to a model (37) that outputs an objective variable for the explanatory variable. A specification unit (12) divides the explanatory-variable space into a plurality of areas and associates each area with the frequency at which explanatory variables included in that area are acquired by the acquisition unit (11). The specification unit (12) then specifies each area that contains an explanatory variable included in the learning data used to learn the model (37) and in which the frequency of explanatory variables acquired by the acquisition unit (11) is a predetermined value or less. An update unit (14) updates the model (37) in such a manner that learning data including an explanatory variable belonging to an area specified by the specification unit (12) is forgotten.
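
The update scheme above can be sketched as follows; the scalar explanatory variable, fixed-width areas, and the `areas_to_forget`/`forget` helper names are all illustrative assumptions rather than the patent's implementation:

```python
from collections import Counter

def areas_to_forget(acquired, training, bin_width=1.0, threshold=2):
    """Specify areas (fixed-width bins of the explanatory-variable space)
    that contain training data but whose acquisition frequency is at or
    below `threshold`. Returns the set of such bin indices."""
    area = lambda x: int(x // bin_width)        # divide the space into areas
    freq = Counter(area(x) for x in acquired)   # acquisition frequency per area
    return {area(x) for x in training if freq[area(x)] <= threshold}

def forget(training, bad_areas, bin_width=1.0):
    """Drop training samples whose explanatory variable falls in a
    specified area; the model would then be re-learned from the rest."""
    return [x for x in training if int(x // bin_width) not in bad_areas]
```

The intent is that regions of the input space no longer exercised in operation stop anchoring the model, letting it track the current data distribution.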

SYSTEM AND METHOD FOR INDUSTRIAL PROCESS CONTROL AND AUTOMATION SYSTEM OPERATOR EVALUATION AND TRAINING

A method includes obtaining at least one model associating areas of competency with job roles and job responsibilities of personnel, the at least one model also associating the areas of competency with curricula of training exercises and content. The method also includes obtaining a library of intervention assets associated with the areas of competency, the intervention assets comprising content for training personnel in at least one of the areas of competency. The method further includes evaluating a trainee to determine a competency gap analysis of the trainee, the competency gap analysis comprising a competency gap associated with job responsibilities of the trainee, the competency gap identifying at least one of the areas of competency in which the trainee requires training. In addition, the method includes providing web-based training to the trainee based on the competency gap, the training comprising at least one intervention asset and at least one intervention activity.
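
A minimal sketch of the competency gap analysis and intervention-asset selection, assuming a simple set-based representation of competency areas (all data shapes and names here are hypothetical):

```python
def competency_gap(required_by_role, role, demonstrated):
    """Areas of competency required by the trainee's job role that the
    trainee has not yet demonstrated."""
    return required_by_role[role] - demonstrated

def plan_training(gap, intervention_assets):
    """Select library assets whose covered areas intersect the gap."""
    return [asset for asset, areas in intervention_assets.items()
            if areas & gap]
```

Web-based training would then be assembled from the selected assets plus associated intervention activities.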

MACHINE LEARNING DEVICE, INDUSTRIAL MACHINE CELL, MANUFACTURING SYSTEM, AND MACHINE LEARNING METHOD FOR LEARNING TASK SHARING AMONG PLURALITY OF INDUSTRIAL MACHINES
20170243135 · 2017-08-24

A machine learning device, which performs a task using a plurality of industrial machines and learns task sharing for the plurality of industrial machines, includes a state variable observation unit which observes state variables of the plurality of industrial machines; and a learning unit which learns task sharing for the plurality of industrial machines, on the basis of the state variables observed by the state variable observation unit.
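
One way to read this abstract is as a reinforcement-learning-style task allocator; the epsilon-greedy value estimation below is an illustrative stand-in for the learning unit, not the patented method:

```python
import random

def learn_task_sharing(machines, observe_state, reward,
                       episodes=200, eps=0.1, alpha=0.2):
    """Learn which machine should take a task from observed state
    variables, by epsilon-greedy incremental value estimation."""
    q = {m: 0.0 for m in machines}  # estimated value of assigning to each machine
    rng = random.Random(0)          # seeded for reproducibility
    for _ in range(episodes):
        state = observe_state()     # state variable observation unit
        if rng.random() < eps:
            m = rng.choice(machines)            # explore
        else:
            m = max(machines, key=lambda k: q[k])  # exploit
        r = reward(m, state)        # e.g. negative cycle time or load
        q[m] += alpha * (r - q[m])  # learning unit update
    return max(machines, key=lambda k: q[k])
```

In a real cell the state variables would include each machine's load and queue, and the reward would reflect overall throughput of the shared task.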

CONDITION MONITORING OF AN ELECTRIC POWER CONVERTER

A computer-implemented method of providing a machine learning model for condition monitoring of an electric power converter is provided. The method includes: obtaining a first batch of input data that includes a number of samples of one or more operating parameters of the converter during at least one operating state of the converter; reducing the number of samples of the first batch by clustering the samples of the first batch into a first set of clusters (e.g., according to a first clustering algorithm based on a clustering feature tree, such as BIRCH) and determining at least one representative sample for each cluster; providing the representative samples for training the machine learning model; and/or training the machine learning model based on the representative samples.
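
A greedy leader-clustering sketch of the sample-reduction step (a simplified stand-in for a BIRCH-style clustering feature tree; the running-mean representative and scalar samples are assumptions):

```python
def reduce_by_clustering(samples, radius):
    """Each sample joins the first cluster whose representative lies
    within `radius`; otherwise it starts a new cluster. Returns one
    representative sample (the running mean) per cluster."""
    clusters = []  # list of (sum, count) per cluster
    for x in samples:
        for i, (s, n) in enumerate(clusters):
            if abs(x - s / n) <= radius:
                clusters[i] = (s + x, n + 1)  # absorb into this cluster
                break
        else:
            clusters.append((x, 1))           # open a new cluster
    return [s / n for s, n in clusters]
```

The representatives, rather than the full first batch, are then passed on to train the monitoring model, keeping training cost bounded as converter data accumulates.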

Learning and applying empirical knowledge of environments by robots
11042783 · 2021-06-22

Techniques described herein relate to generating a posteriori knowledge about where objects are typically located within environments to improve object location. In various implementations, output from vision sensor(s) of a robot may include visual frame(s) that capture at least a portion of an environment in which a robot operates/will operate. The visual frame(s) may be applied as input across a machine learning model to generate output that identifies potential location(s) of an object of interest. The robot's position/pose may be altered based on the output to relocate one or more of the vision sensors. One or more subsequent visual frames that capture at least a not-previously-captured portion of the environment may be applied as input across the machine learning model to generate subsequent output identifying the object of interest. The robot may perform task(s) that relate to the object of interest.
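
The capture-infer-reposition loop described above can be sketched as follows; the callback names and confidence threshold are hypothetical:

```python
def locate_object(capture_frame, model, move_robot,
                  max_views=5, threshold=0.5):
    """Apply each visual frame as input across a learned model; if the
    object of interest is not found confidently, alter the robot's pose
    to capture a not-previously-captured portion of the environment."""
    for _ in range(max_views):
        frame = capture_frame()         # vision sensor output
        score, location = model(frame)  # model identifies potential locations
        if score >= threshold:
            return location             # confident enough to act on
        move_robot(location)            # reposition toward the best guess
    return None
```

Once a confident location is returned, the robot can perform its task with respect to the object (grasping, navigating, and so on).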