B25J9/1671

ROBOT SIMULATION DEVICE
20230032334 · 2023-02-02

There is provided a robot simulation device which can facilitate the setting of force control parameters. The robot simulation device, which simulates a force control operation performed while bringing a tool part mounted on a robot manipulator into contact with a target workpiece, includes a memory which stores a motion program and a force control parameter (a set parameter related to the force control operation), and a force control simulation execution part which executes a simulation of the force control operation based on the motion program and the force control parameter. The force control simulation execution part has a virtual force generation part configured to generate, based on position information of the tool part obtained from results of the simulation, a virtual force received by the tool part from the target workpiece while the tool part is in contact with the target workpiece, and executes the simulation of the force control operation based on the virtual force and a target force set as the force control parameter.
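
The virtual-force idea can be sketched as follows: the simulated workpiece pushes back on the tool in proportion to penetration depth (a simple spring contact model), and a basic force controller nudges the tool toward the commanded target force. All names, gains, and the 1-D geometry are illustrative assumptions, not the patent's implementation.

```python
STIFFNESS = 2000.0   # N/m, assumed contact stiffness of the workpiece
FORCE_GAIN = 1e-4    # m/N, assumed proportional force-control gain

def virtual_force(tool_z: float, surface_z: float) -> float:
    """Force the tool receives from the workpiece (zero when not in contact)."""
    penetration = surface_z - tool_z     # > 0 when the tool is below the surface
    return STIFFNESS * max(penetration, 0.0)

def simulate_force_control(target_force: float, steps: int = 200) -> float:
    """Run the force-control simulation and return the final contact force."""
    tool_z, surface_z = 0.001, 0.0       # start 1 mm above the workpiece surface
    for _ in range(steps):
        f = virtual_force(tool_z, surface_z)
        # Move the tool to reduce the force error (admittance-style update).
        tool_z -= FORCE_GAIN * (target_force - f)
    return virtual_force(tool_z, surface_z)

final = simulate_force_control(target_force=5.0)   # converges toward 5 N
```

With these gains the update contracts the force error by a constant factor each step, so the simulated contact force settles at the target force without a physical robot in the loop.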

Robotic system with dynamic pack adjustment mechanism and methods of operating same
11491654 · 2022-11-08

A system and method for operating a robotic system to place objects into containers that have support walls is disclosed. The robotic system may detect an unexpected condition associated with a container during or before a real-time operation. Accordingly, the robotic system may dynamically adjust an existing packing plan based on detecting the unexpected condition.
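
As a hedged illustration of the dynamic adjustment idea, the sketch below reassigns placements whose target slots become unusable after an unexpected container condition is detected. The (object, slot) plan representation and all names are assumptions, not the patent's data model.

```python
def adjust_packing_plan(plan, blocked_slots):
    """Reassign placements whose target slot became unusable.

    plan: list of (object_id, slot) tuples; blocked_slots: set of slots
    affected by the detected condition (e.g. a deformed support wall).
    Unaffected placements keep their original slots.
    """
    used = {slot for _, slot in plan if slot not in blocked_slots}
    spare = (s for s in range(len(plan) + len(blocked_slots))
             if s not in used and s not in blocked_slots)
    return [(obj, slot) if slot not in blocked_slots else (obj, next(spare))
            for obj, slot in plan]

plan = [("obj_a", 0), ("obj_b", 1), ("obj_c", 2)]
adjusted = adjust_packing_plan(plan, blocked_slots={1})  # slot 1 blocked
```

Only the placement targeting the blocked slot is re-planned; the rest of the existing plan is preserved, mirroring the "adjust an existing packing plan" behavior rather than re-packing from scratch.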

SETTINGS SUPPORT DEVICE, SETTINGS SUPPORT METHOD, AND PROGRAM
20220347850 · 2022-11-03

A technique allows efficient registration of an accurate gripping position of a robot hand with an auxiliary view appearing on a screen in accordance with the robot hand. A user selects a hand type to be used in gripping a gripping target and designates an auxiliary view to be rendered in accordance with the hand. In response to a two-finger hand being selected (step S11), a plane (step S13), a cylinder (step S14), or a rectangular prism (step S15) is rendered based on the view designated by the user (step S12). In response to a suction hand being selected, a plane is rendered (step S16).
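
The selection flow (steps S11 to S16 in the abstract) reduces to a small dispatch, sketched below. The hand-type and view identifiers are illustrative assumptions; the actual device renders 3D geometry rather than returning a string.

```python
def auxiliary_view(hand_type: str, designated_view: str = "plane") -> str:
    """Pick the auxiliary view to render for the selected hand type."""
    if hand_type == "two_finger":                             # step S11
        allowed = {"plane", "cylinder", "rectangular_prism"}  # steps S13-S15
        if designated_view not in allowed:                    # user choice, S12
            raise ValueError(f"unsupported view: {designated_view}")
        return designated_view
    if hand_type == "suction":                                # step S16
        return "plane"
    raise ValueError(f"unknown hand type: {hand_type}")
```

For a two-finger hand the user's designation selects among the three views; a suction hand always gets a plane, matching the branch structure described above.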

OPERATING AN APPLICATION OF A ROBOT SYSTEM
20230093024 · 2023-03-23

A method for operating an application of a robot system includes selecting a first robot system situation module from a situation module library that comprises a plurality of predefined application-independent robot system situation modules for the robot system, each of which modules maps at least one input signal onto at least one output signal; linking the first robot system situation module to at least one additional selected robot system situation module from the situation module library, and/or to at least one application-class-specific application class situation module that is predefined for a class of a plurality of applications and maps at least one input signal onto at least one output signal, and/or to at least one application-specific application situation module that maps at least one input signal onto at least one output signal, to form a first application situation module that maps the input signals of its linked situation modules onto at least one output signal; and operating the application on the basis of the first application situation module.
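
The core abstraction, a situation module that maps input signals onto output signals, and the linking of modules into an application situation module can be sketched as composed functions. The signal names and the composition rule (dict merge plus chaining) are assumptions for illustration only.

```python
from typing import Callable, Dict

Signals = Dict[str, float]
SituationModule = Callable[[Signals], Signals]   # maps inputs to outputs

def link(*modules: SituationModule) -> SituationModule:
    """Form an application situation module from linked situation modules."""
    def application_module(inputs: Signals) -> Signals:
        signals = dict(inputs)
        for module in modules:
            signals.update(module(signals))   # each module adds its outputs
        return signals
    return application_module

# Two toy application-independent situation modules:
def gripper_ready(s: Signals) -> Signals:
    return {"gripper_ok": 1.0 if s.get("pressure", 0.0) > 0.5 else 0.0}

def part_present(s: Signals) -> Signals:
    return {"start": s.get("gripper_ok", 0.0) * s.get("sensor", 0.0)}

app = link(gripper_ready, part_present)
out = app({"pressure": 0.8, "sensor": 1.0})
```

Because each module only sees and emits named signals, application-independent, application-class-specific, and application-specific modules can be linked interchangeably, which is the reuse the method is after.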

METHOD AND SYSTEM FOR FACILITATING REMOTE PRESENTATION OR INTERACTION

A facilitation system for facilitating remote presentation of a physical world includes a first object and an operating environment of the first object. The facilitation system includes a processing system configured to obtain an image frame depicting the physical world, identify a depiction of the first object in the image frame, and obtain a first spatial registration registering an object model with the first object in the physical world. The object model is of the first object. The processing system is further configured to obtain an updated object model corresponding to the object model updated with a current state of the first object, and generate a hybrid frame using the image frame, the first spatial registration, and the updated object model. The hybrid frame includes the image frame with the depiction of the first object replaced by a depiction of the updated object model.
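
The hybrid-frame step can be sketched as pixel replacement: pixels covered by the first object's depiction (a boolean mask standing in for the identification and spatial-registration steps) are overwritten with a rendering of the updated object model. The mask and render inputs below are assumptions for illustration.

```python
import numpy as np

def make_hybrid_frame(image: np.ndarray,
                      object_mask: np.ndarray,
                      rendered_model: np.ndarray) -> np.ndarray:
    """Return the frame with the object's depiction replaced by the render."""
    hybrid = image.copy()
    hybrid[object_mask] = rendered_model[object_mask]
    return hybrid

frame = np.zeros((4, 4), dtype=np.uint8)        # toy grayscale image frame
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                           # where the object was depicted
render = np.full((4, 4), 255, dtype=np.uint8)   # toy updated-model rendering
hybrid = make_hybrid_frame(frame, mask, render)
```

Everything outside the mask stays the captured image, so the viewer sees the real operating environment with only the first object swapped for its current-state model.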

Learning agent categories using agent trajectory clustering

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for selecting actions for an agent in an environment. In one aspect, a method comprises: receiving an agent trajectory that characterizes interaction of an agent with an environment to perform one or more initial tasks in the environment; processing the agent trajectory to generate a classification output that comprises a respective classification score for each agent category in a set of possible agent categories, wherein each possible agent category is associated with a respective task selection policy; classifying the agent as being included in a corresponding agent category based on the classification scores; selecting tasks to be performed by the agent in the environment based on the task selection policy of the corresponding agent category; and transmitting, to the agent, data defining the selected tasks to be performed by the agent in the environment.
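
The pipeline above, score each category from a trajectory, pick the best category, then apply that category's task selection policy, can be sketched as below. The single trajectory feature (mean step length) and the fixed centroids are toy assumptions standing in for a learned classifier.

```python
CATEGORIES = {
    "cautious": {"centroid": 0.1, "policy": lambda: ["inspect", "short_move"]},
    "fast":     {"centroid": 1.0, "policy": lambda: ["long_haul"]},
}

def classify(trajectory):
    """Return (best category, per-category scores) for a 1-D trajectory."""
    steps = [abs(b - a) for a, b in zip(trajectory, trajectory[1:])]
    mean_step = sum(steps) / len(steps)
    # Score each agent category: closer to its centroid means a higher score.
    scores = {name: -abs(mean_step - cat["centroid"])
              for name, cat in CATEGORIES.items()}
    return max(scores, key=scores.get), scores

def select_tasks(trajectory):
    """Select tasks using the policy of the agent's classified category."""
    category, _ = classify(trajectory)
    return CATEGORIES[category]["policy"]()

tasks = select_tasks([0.0, 0.1, 0.2, 0.3])   # small steps -> "cautious"
```

The classification scores are produced for every possible category, and only the winning category's policy is consulted, matching the abstract's ordering of steps.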

OFF-LINE SIMULATION SYSTEM
20230090193 · 2023-03-23

Provided is an off-line simulation system that enables efficient vision correction training. The system is provided with a head-mounted display capable of displaying an image in a virtual space, and a teaching device communicably connected to the head-mounted display. The teaching device has: a vision correction unit that performs vision correction on a predetermined movement on the basis of a captured image captured by a camera after the position of a workpiece has been moved; and a correction confirmation unit that confirms that the vision correction has been appropriately applied to the predetermined movement.

Metal Additive Manufacturing for Value Chain Networks

An information technology system for supporting additive manufacturing and value chain workflows includes a cloud-based metal additive manufacturing management platform including an artificial intelligence system configured to learn on a training set of outcomes, parameters, and data collected from one or more additive manufacturing nodes to optimize additive manufacturing and value chain processes and workflows. The information technology system includes a distributed ledger system configured to store data related to the manufacturing nodes.

CARRYING OUT AN APPLICATION USING AT LEAST ONE ROBOT
20220339787 · 2022-10-27

A method for carrying out an application using at least one robot includes repeatedly ascertaining a stochastic value of at least one robot parameter and/or at least one environmental model parameter and carrying out a simulation of the application on the basis of the ascertained stochastic value; training at least one control agent and/or at least one classification agent on the simulations by machine learning; and carrying out the application using the robot. The method may further include configuring a controller of the robot, by means of which the application is carried out wholly or in part, based on the trained control agent, and/or classifying the application using the trained classification agent.
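
The loop described above, repeatedly drawing stochastic robot and environment parameters, simulating the application, and training an agent on the results, resembles domain randomization and can be sketched as follows. The parameter ranges, the toy simulator, and the "agent" (a running success estimate) are all illustrative assumptions.

```python
import random

def sample_parameters(rng: random.Random) -> dict:
    """Ascertain stochastic robot/environment-model parameter values."""
    return {"friction": rng.uniform(0.2, 0.8),
            "payload_kg": rng.gauss(1.0, 0.1)}

def simulate(params: dict) -> bool:
    """Toy simulation: the application succeeds if friction is sufficient."""
    return params["friction"] > 0.4

def train_classification_agent(n_runs: int = 1000, seed: int = 0) -> float:
    """Run many randomized simulations; return the learned success estimate."""
    rng = random.Random(seed)
    outcomes = [simulate(sample_parameters(rng)) for _ in range(n_runs)]
    return sum(outcomes) / n_runs   # stand-in for a trained classifier

rate = train_classification_agent()   # ~2/3 under the uniform friction draw
```

In the method itself, the simulation outcomes would train a control agent (to configure the robot's controller) or a classification agent (to classify the application) rather than a simple frequency estimate.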

Method, apparatus and system for robotic programming
11607808 · 2023-03-21

A method, apparatus, and system are disclosed for robotic programming. In at least one embodiment, the method includes receiving, from a controller of a robot, movement parameters reflecting movement of the robot as manipulated by a user; moving a first data model of the robot according to the movement parameters; calculating, upon the first data model touching a second data model of a virtual object, parameters of a first force to be fed back to the user, so that the user feels the robot touching a physical object corresponding to the virtual object; and sending the parameters of the first force to the controller of the robot to drive the robot to feed back the first force to the user.
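
The touch-calculation step can be sketched with a spring-like reaction force: when the robot's data model penetrates the virtual object's data model, a force pushing the robot model out is computed and would be sent back for haptic feedback. The spherical object geometry and the stiffness value are illustrative assumptions.

```python
def touch_force(robot_pos, obj_center, obj_radius, stiffness=500.0):
    """Return (fx, fy, fz) pushing the robot model out of the virtual object."""
    dx = [r - c for r, c in zip(robot_pos, obj_center)]
    dist = sum(d * d for d in dx) ** 0.5
    depth = obj_radius - dist            # > 0 when the models are touching
    if depth <= 0 or dist == 0:
        return (0.0, 0.0, 0.0)           # no touch: no force fed back
    scale = stiffness * depth / dist     # spring force along the contact normal
    return tuple(scale * d for d in dx)

# Robot model 5 cm from the center of a 10 cm-radius virtual sphere:
force = touch_force((0.05, 0.0, 0.0), (0.0, 0.0, 0.0), 0.1)
```

The returned force parameters are what would be sent to the robot's controller so the physical robot renders the touch sensation to the user's hand.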