B25J9/163

Artificial intelligence apparatus for cleaning in consideration of user's action and method for the same
11580385 · 2023-02-14 · ·

An AI robot for cleaning in consideration of a user's action includes a camera to acquire first image data of the user, a cleaning unit including a suction unit and a mopping unit, a driving unit configured to drive the AI robot, and a processor to determine the user's action using the first image data, determine a cleaning schedule in consideration of the user's action, and control the cleaning unit and the driving unit based on the determined cleaning schedule.
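The abstract's core idea, mapping a recognized user action to a cleaning schedule, can be sketched as a simple decision function. The action labels and schedule fields below are illustrative assumptions, not terms from the patent:

```python
def plan_cleaning(user_action):
    """Map a recognized user action to a cleaning schedule.

    Hypothetical sketch: the labels and rules are assumptions for
    illustration, not the patented schedule logic.
    """
    if user_action == "sleeping":
        # Defer noisy suction; quiet mopping only, away from the user.
        return {"suction": False, "mopping": True, "avoid_user": True}
    if user_action == "eating":
        # Crumbs are likely near the user; queue suction, keep distance.
        return {"suction": True, "mopping": False, "avoid_user": True}
    if user_action == "away":
        # Full clean while the home is empty.
        return {"suction": True, "mopping": True, "avoid_user": False}
    # Default: light cleaning that keeps distance from the user.
    return {"suction": True, "mopping": False, "avoid_user": True}
```

The processor would then drive the cleaning unit and driving unit from the returned schedule.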

Virtual teach and repeat mobile manipulation system

A method for controlling a robotic device is presented. The method includes positioning the robotic device within a task environment. The method also includes mapping descriptors of a task image of a scene in the task environment to a teaching image of a teaching environment. The method further includes defining a relative transform between the task image and the teaching image based on the mapping. Furthermore, the method includes updating parameters of a set of parameterized behaviors based on the relative transform to perform a task corresponding to the teaching image.
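The "relative transform between the task image and the teaching image" step can be illustrated with a standard 2-D rigid alignment (Kabsch/Procrustes) over matched descriptor locations. This is a generic stand-in for the patent's mapping step, with assumed function names:

```python
import numpy as np

def relative_transform(teach_pts, task_pts):
    """Estimate the rigid transform (R, t) mapping matched teaching-image
    descriptor locations onto task-image locations via the 2-D Kabsch
    algorithm. Illustrative, not the patented method."""
    P = np.asarray(teach_pts, float)
    Q = np.asarray(task_pts, float)
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

def update_behavior(waypoints, R, t):
    """Re-parameterize taught waypoints with the relative transform."""
    W = np.asarray(waypoints, float)
    return (R @ W.T).T + t
```

Updating the parameterized behaviors then amounts to pushing the taught waypoints through the recovered transform.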

TRAINING DATA SCREENING DEVICE, ROBOT SYSTEM, AND TRAINING DATA SCREENING METHOD

A training data screening device includes a data evaluation model, a data evaluator, a memory, and a training data screener. The data evaluation model is constructed by machine learning on at least a part of the collected data, or by machine learning on data different from the collected data. The data evaluator evaluates the input collected data using the data evaluation model. The memory stores the evaluated data, which is the collected data evaluated by the data evaluator. The training data screener screens, from the evaluated data stored in the memory, the training data for constructing the learning model, either according to an instruction of an operator to whom an evaluation result of the data evaluator is presented, or automatically based on the evaluation result.
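The automatic screening path can be sketched as scoring each collected sample with the evaluation model and keeping those above a threshold. The callable and threshold are illustrative assumptions:

```python
def screen_training_data(collected, evaluate, threshold=0.5):
    """Automatic-screening sketch: `evaluate` stands in for the learned
    data evaluation model; keep samples whose score meets a threshold.
    Names and the threshold rule are illustrative, not from the patent."""
    # The (sample, score) pairs correspond to the "evaluated data" the
    # memory stores before screening.
    evaluated = [(sample, evaluate(sample)) for sample in collected]
    return [sample for sample, score in evaluated if score >= threshold]
```

In the operator-driven path, the score list would instead be presented for manual selection.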

CONTROL DEVICE, CONTROL METHOD AND COMPUTER-READABLE STORAGE MEDIUM
20230043637 · 2023-02-09 · ·

A control device 1B includes a preprocessor 21B, a translator 22B and an intention detector 23B. The preprocessor 21B is configured to generate movement signals of a target human 10C subjected to assistance by processing a detection signal Sd outputted by a first sensor which senses the target human 10C. The translator 22B is configured to identify a gesture of the target human 10C by use of the movement signals, the gesture being expressed by a pose and/or movement of the target human 10C. The intention detector 23B is configured to detect an intention of the target human 10C based on a history of an event relating to the assistance and the identified gesture.
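The intention detector's combination of gesture and event history can be illustrated with a toy rule-based function. All labels are assumptions for illustration:

```python
def detect_intention(gesture, event_history):
    """Toy intention detector: combine an identified gesture with the
    history of assistance-related events. Gesture/event/intention labels
    are illustrative, not from the patent."""
    if gesture == "raised_hand":
        # The same gesture means different things depending on history:
        # right after an offer it reads as acceptance, otherwise a request.
        if event_history and event_history[-1] == "assistance_offered":
            return "accept_assistance"
        return "request_assistance"
    if gesture == "wave_off":
        return "decline_assistance"
    return "no_intention"
```

A learned classifier over (gesture, history) features would play the same role in practice.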

DEEP REINFORCEMENT LEARNING APPARATUS AND METHOD FOR PICK-AND-PLACE SYSTEM
20230040623 · 2023-02-09 · ·

Disclosed are a deep reinforcement learning apparatus and method for a pick-and-place system. According to the present disclosure, a simulation learning framework is configured to apply reinforcement learning to make pick-and-place decisions using a robot operating system (ROS) in a real-time environment, thereby generating stable path motion that meets various hardware and real-time constraints.
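The reinforcement-learning decision step can be illustrated with a tiny tabular Q-learning loop over discrete pick-and-place choices. This is a generic stand-in, not the ROS-based framework from the disclosure:

```python
import random

def q_learning_pick_place(episodes, reward_fn, actions, alpha=0.5,
                          epsilon=0.1, seed=0):
    """Minimal tabular Q-learning for single-step pick-and-place
    decisions (which grasp/placement to choose). Names and the
    single-step setup are illustrative assumptions."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in actions}
    for _ in range(episodes):
        # Epsilon-greedy: explore occasionally, otherwise exploit.
        a = rng.choice(actions) if rng.random() < epsilon else max(q, key=q.get)
        r = reward_fn(a)
        # Single-step episode: no successor state, so the target is r itself.
        q[a] += alpha * (r - q[a])
    return q
```

The deep variant would replace the table `q` with a neural network over sensed states.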

FUSION OF SPATIAL AND TEMPORAL CONTEXT FOR LOCATION DETERMINATION FOR VISUALIZATION SYSTEMS

A computer-implemented method for generating a control signal by locating at least one instrument through a combination of machine learning systems, on the basis of at least two digital images, is described. The method includes determining parameter values of a movement context by using the at least two digital images, and determining an influence parameter value which controls the influence of one of the digital images and of the movement-context parameter values on the input data used within a first trained machine learning system, which has a first learning model, for generating the control signal.
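The role of the influence parameter, weighting how much the current image versus the movement context drives the result, can be sketched as a simple convex blend of two location estimates. This is an illustrative reduction of the idea, not the patented architecture:

```python
import numpy as np

def fuse_location(loc_spatial, loc_temporal, influence):
    """Blend a location estimate from the current image (spatial) with
    one predicted from the movement context (temporal), weighted by an
    influence parameter in [0, 1]. Illustrative sketch only."""
    w = float(np.clip(influence, 0.0, 1.0))
    return w * np.asarray(loc_spatial, float) + (1.0 - w) * np.asarray(loc_temporal, float)
```

In the described system the influence value would gate the inputs of the first learning model rather than a final average, but the weighting intuition is the same.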

Generating a robot control policy from demonstrations collected via kinesthetic teaching of a robot
11554485 · 2023-01-17 · ·

Generating a robot control policy that regulates both motion control and interaction with an environment and/or includes a learned potential function and/or dissipative field. Some implementations relate to resampling temporally distributed data points to generate spatially distributed data points, and generating the control policy using the spatially distributed data points. Some implementations additionally or alternatively relate to automatically determining a potential gradient for data points, and generating the control policy using the automatically determined potential gradient. Some implementations additionally or alternatively relate to determining and assigning a prior weight to each of the data points of multiple groups, and generating the control policy using the weights. Some implementations additionally or alternatively relate to defining and using non-uniform smoothness parameters at each data point, defining and using parameters for stiffness and/or damping at each data point, and/or obviating the need to utilize virtual data points in generating the control policy.
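The "resampling temporally distributed data points to generate spatially distributed data points" step is a well-known preprocessing move for kinesthetic demonstrations: points recorded at a fixed rate cluster where the robot moved slowly, so they are re-spaced evenly along the path's arc length. A minimal sketch, assuming 2-D demonstration points:

```python
import numpy as np

def resample_by_arclength(points, spacing):
    """Convert temporally sampled demonstration points (dense where the
    robot moved slowly) into points spaced evenly along the path.
    Illustrative of the temporal-to-spatial resampling idea."""
    P = np.asarray(points, float)
    seg = np.linalg.norm(np.diff(P, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])       # arc length at each sample
    targets = np.arange(0.0, s[-1] + 1e-9, spacing)   # evenly spaced arc lengths
    cols = [np.interp(targets, s, P[:, k]) for k in range(P.shape[1])]
    return np.stack(cols, axis=1)
```

The policy-learning stages described in the abstract would then operate on the evenly spaced points.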

Self-learning industrial robotic system
11554482 · 2023-01-17 · ·

Example implementations described herein are directed to a simulation environment for a real-world system involving one or more robots and one or more sensors. Scenarios are loaded into a simulation environment having one or more virtual robots corresponding to the one or more robots, and one or more virtual sensors corresponding to the one or more sensors, to train a control strategy model through reinforcement learning, which is subsequently deployed to the real-world environment. In cases of failure in the real-world environment, the failures are provided to the simulation environment to generate an updated control strategy model for the real-world environment.
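The train-deploy-feedback loop described above can be sketched as a simple retraining cycle. The callables are hypothetical stand-ins for the simulator training step and the real-world deployment:

```python
def train_with_failure_feedback(train_in_sim, deploy, max_rounds=5):
    """Sketch of the sim-to-real loop: train a control strategy in
    simulation, deploy it, and feed real-world failures back into the
    simulator to retrain. `train_in_sim` and `deploy` are illustrative
    stand-ins, not APIs from the patent."""
    scenarios = []
    model = None
    for _ in range(max_rounds):
        model = train_in_sim(scenarios)
        failures = deploy(model)      # failures observed on the real system
        if not failures:
            return model, scenarios
        scenarios.extend(failures)    # load failure cases back into simulation
    return model, scenarios
```

Each round enlarges the scenario set with the observed failure cases, so the retrained policy covers them.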

HANDHELD DEVICE FOR TRAINING AT LEAST ONE MOVEMENT AND AT LEAST ONE ACTIVITY OF A MACHINE, SYSTEM AND METHOD

Disclosed herein is a handheld device for training at least one movement and at least one activity of a machine. The handheld device may include a handle, an input unit configured to input activation information for activating the training of the machine, an output unit configured to output the activation information for activating the training of the machine to a device external to the handheld device, and a coupling structure for releasably coupling an interchangeable attachment configured according to the at least one activity.

System and Method for Online Optimization of Sensor Fusion Model

A system and method for collecting data regarding operation of a robot using, at least in part, responses from a first operation model to an input of sensed data from a plurality of sensors. The collected data can be used to optimize the first operation model to generate a second operation model. While the first operation model is being optimized, a trained data-driven model that utilizes an end-to-end learning approach can be generated based, at least in part, on the collected data. Both the second operation model and the trained data-driven model can be evaluated, and, based on such evaluation, a determination can be made as to whether the trained data-driven model is reliable. Moreover, based on a comparison of the models, one of the second operation model and the trained data-driven model can be selected for validation, and if validated, used in the operation of the robot.
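The evaluate-then-select step can be sketched as scoring both candidate models on held-out cases, gating the data-driven model behind a reliability threshold, and picking the better one. The threshold and names are illustrative assumptions:

```python
def select_model(model_a, model_b, eval_cases, reliability_min=0.8):
    """Sketch of model selection: `model_a` stands in for the optimized
    operation model, `model_b` for the data-driven model. The data-driven
    model is treated as reliable only if its accuracy meets a minimum;
    the better candidate is then selected. Illustrative only."""
    def accuracy(m):
        return sum(m(x) == y for x, y in eval_cases) / len(eval_cases)
    acc_a, acc_b = accuracy(model_a), accuracy(model_b)
    if acc_b >= reliability_min and acc_b >= acc_a:
        return "data_driven", acc_b
    return "optimized", acc_a
```

The selected model would then proceed to the validation stage described in the abstract before being used on the robot.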