B25J9/163

SYSTEMS AND METHODS FOR AUTOMATED FRAMING CONSTRUCTION
20230226695 · 2023-07-20

Techniques of automated framing for use in the construction of building structures are described. Examples of such structures include walls, wall panels, roofs, and the like. In one scenario, a robotic automated framing system assists with construction of a building structure. The robotic automated framing system can analyze an architectural plan and determine a project based, at least in part, on the architectural plan. The robotic automated framing system can also schedule a robot to perform the project and cause the robot to perform at least some of the project.

ROBOT TELEOPERATION CONTROL DEVICE, ROBOT TELEOPERATION CONTROL METHOD, AND STORAGE MEDIUM

A robot teleoperation control device includes a first acquisition unit that acquires operator state information representing a state of an operator who operates a robot; an intention estimation unit that estimates, on the basis of the operator state information, the operator's intention to cause the robot to perform a motion; a second acquisition unit that acquires at least one of geometric information and dynamic information of an object; an operation method determination unit that determines a method of operating the object based on the estimated motion intention of the operator; and a control amount determination unit that determines, from the information acquired by the second acquisition unit and the information determined by the operation method determination unit, a method of operating the robot and a force during operation, and reflects the result in a control instruction.

Object handling control device, object handling device, object handling method, and computer program product

An object handling control device includes one or more processors configured to acquire at least object information and status information representing an initial position and a destination of an object; set, when a grasper grasping the object moves from the initial position to the destination, a first region, a second region, and a third region in accordance with the object information and the status information; and calculate a moving route along which the object is moved from the initial position to the destination with reference to the first region, the second region, and the third region.
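The abstract leaves the three regions abstract; one plausible reading is that they partition the workspace by how freely the grasped object may pass through. A minimal sketch under that assumption, with region 3 treated as off-limits and a breadth-first search standing in for the route calculation (the grid, labels, and function name are illustrative, not from the patent):

```python
from collections import deque

def plan_route(regions, start, goal):
    """Breadth-first route over a grid of region labels, treating
    region 3 as blocked and regions 1 and 2 as traversable."""
    rows, cols = len(regions), len(regions[0])
    prev = {start: None}
    q = deque([start])
    while q:
        cell = q.popleft()
        if cell == goal:                      # reconstruct route back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and regions[nr][nc] != 3 and (nr, nc) not in prev:
                prev[(nr, nc)] = (r, c)
                q.append((nr, nc))
    return None                               # no admissible route

# Illustrative region map: initial position top-left, destination bottom-right.
grid = [
    [1, 1, 3],
    [2, 1, 3],
    [2, 1, 1],
]
route = plan_route(grid, (0, 0), (2, 2))
```

A cost-aware planner (e.g. Dijkstra with a penalty for region 2) would be the natural refinement once the regions carry graded rather than binary meaning.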

Substrate transport device and substrate transporting method

A substrate transport device includes an arm, an end effector coupled to the arm, a driver configured to lift the arm so that the end effector receives a substrate, and a controller configured to control an output of the driver to set a lifting speed of the arm. A difference in height between the end effector and the arm is a position difference. A period from when the end effector contacts the substrate until the end effector completes reception of the substrate is a transition period. The controller sets, as the upper limit value of the lifting speed for the transition period, an upper limit value that decreases the amplitude of the acceleration or the jerk of the position difference during the transition period as compared to before the transition period.
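The core control idea reduces to applying a lower speed cap while the end effector is taking over the substrate's weight. A minimal sketch of that clamp; the numeric limits are illustrative assumptions, not values from the patent:

```python
def commanded_lift_speed(requested, in_transition,
                         v_max=0.08, v_max_transition=0.02):
    """Clamp the arm's lifting speed. During the transition period (end
    effector in contact with the substrate but not yet fully supporting
    it), a lower upper limit is applied to suppress the acceleration and
    jerk of the end-effector-to-arm position difference."""
    limit = v_max_transition if in_transition else v_max
    return min(requested, limit)
```

In practice the transition period would be detected from a contact sensor or from the measured position difference, and the cap would be chosen from the dynamics of the arm's compliance.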

Annotation device
11559888 · 2023-01-24

An annotation device includes an image-capturing device, a robot, a control unit, a designation unit, a coordinate processing unit, and a storage unit. The control unit controls the robot so as to acquire a learning image of a plurality of objects, each having a different positional relationship with the image-capturing device. Furthermore, the coordinate processing unit converts a position of the object in a robot coordinate system into a position of the object in an image coordinate system at the time of image capturing, or into a position of the object in a sensor coordinate system, and the storage unit stores the converted position together with the learning image.
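The robot-to-image conversion at the heart of this device is a standard coordinate transform. A minimal sketch assuming a calibrated pinhole camera; the extrinsic matrix `T_cam_robot`, the intrinsic matrix `K`, and all numeric values are hypothetical:

```python
import numpy as np

def robot_to_image(p_robot, T_cam_robot, K):
    """Convert an object position from the robot coordinate system to
    pixel coordinates in the image coordinate system at capture time."""
    p_cam = (T_cam_robot @ np.append(p_robot, 1.0))[:3]  # robot -> camera frame
    uvw = K @ p_cam                                      # project with intrinsics
    return uvw[:2] / uvw[2]                              # perspective divide

# Assumed calibration: camera frame coinciding with the robot frame, and a
# pinhole intrinsic matrix with 500 px focal length, principal point (320, 240).
T = np.eye(4)
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
uv = robot_to_image(np.array([0.1, 0.0, 2.0]), T, K)
```

Storing `uv` alongside the captured image yields exactly the (image position, learning image) pairs the abstract describes.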

Robot control for avoiding singular configurations
11559893 · 2023-01-24

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for avoiding singular configurations of a robot. A singular configuration of the robot is obtained. A location of an end effector of the robot when the robot is in the singular configuration is determined. For each of a plurality of voxels in a workcell, a distance from the voxel to the location of the end effector when the robot is in the singular configuration is computed. A negative potential gradient of the computed distance is computed. Control rules are generated, wherein the control rules, when followed by the robot, offset the trajectory of the robot according to the negative potential gradient.
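The per-voxel distance and gradient computation can be sketched with NumPy. This reading takes the potential to be the negative of the distance, so the negative potential gradient is a unit vector pointing away from the singular end-effector location; the grid, singular location, and function name are illustrative assumptions:

```python
import numpy as np

def distance_and_offset_field(voxel_centers, singular_effector_pos):
    """For each workcell voxel, compute the distance to the end-effector
    location at the singular configuration, and the direction of the
    negative potential gradient (with the potential taken as the negative
    of that distance, the offset points away from the singularity)."""
    diff = voxel_centers - singular_effector_pos          # (N, 3)
    dist = np.linalg.norm(diff, axis=1)                   # (N,)
    offset_dir = diff / np.maximum(dist, 1e-9)[:, None]   # unit vectors
    return dist, offset_dir

# Hypothetical 2x2x2 voxel grid around an assumed singular end-effector
# location at the grid's center.
xs = np.linspace(0.0, 1.0, 2)
voxels = np.array(np.meshgrid(xs, xs, xs)).reshape(3, -1).T
dist, offset_dir = distance_and_offset_field(voxels, np.array([0.5, 0.5, 0.5]))
```

Control rules built on this field would add a small displacement along `offset_dir`, weighted by proximity, whenever the planned trajectory passes through a voxel near the singular location.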

Robot system and method of controlling robot system

A robot system includes a manipulating force detector configured to detect a manipulating force given to an operation end by an operator, a reaction-force detector configured to detect a reaction force given to a work end or a workpiece held by the work end, a system controller configured to generate an operating command of a master arm and generate an operating command of a slave arm based on the manipulating force and the reaction force, a master-side control part configured to control the master arm, and a slave-side control part configured to control the slave arm. The system controller has an exaggerated expresser configured to exaggeratedly present an operating feel to the operator who operates the operation end in a reaction-force sudden change state that is a state in which the reaction force changes rapidly with time.
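The exaggerated presentation can be sketched as a rate-dependent gain on the force fed back to the master arm. A minimal sketch; the gain values and threshold are illustrative, not from the patent:

```python
def presented_reaction_force(f_now, f_prev, dt,
                             gain=1.0, exaggeration=3.0, rate_threshold=50.0):
    """Force fed back to the master arm. When the reaction force changes
    rapidly with time (the sudden-change state), an exaggerated gain is
    applied so the operator clearly feels the transition; otherwise the
    force is passed through at the normal gain."""
    rate = abs(f_now - f_prev) / dt      # rate of change of reaction force [N/s]
    k = exaggeration if rate > rate_threshold else gain
    return k * f_now
```

A real implementation would filter the rate estimate and ramp the gain smoothly to avoid injecting its own discontinuity into the haptic channel.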

Operation prediction system and operation prediction method
11701772 · 2023-07-18

The operation prediction system includes a plurality of learned imitation models and a model selecting unit. The learned imitation models are constructed by machine learning of operation history data; the operation history data are classified into several groups by an automatic classification algorithm, and the operation history data of each group are learned by the imitation model corresponding to that group. The operation history data include data indicating a surrounding environment and data indicating an operation of an operator in that surrounding environment. The model selecting unit selects one imitation model from the plurality of imitation models based on a result of classifying data indicating a given surrounding environment with the automatic classification algorithm. The system inputs the data indicating the surrounding environment to the selected imitation model to predict an operation of the operator with respect to that surrounding environment.
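The classify-then-select pipeline can be illustrated end to end with stand-ins for both stages: nearest-centroid assignment in place of the automatic classification algorithm, and a per-group mean action in place of each learned imitation model. All data and names here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical operation-history data from two surrounding-environment
# regimes, with the operator's recorded action for each sample.
env_a = rng.normal(0.0, 0.1, size=(50, 2)); actions_a = np.full(50, -1.0)
env_b = rng.normal(5.0, 0.1, size=(50, 2)); actions_b = np.full(50, +1.0)

# Stand-in for the automatic classification algorithm: nearest-centroid
# assignment over the groups found in the history data.
centroids = np.stack([env_a.mean(axis=0), env_b.mean(axis=0)])

def classify(env):
    return int(np.argmin(np.linalg.norm(centroids - env, axis=1)))

# One "imitation model" per group; each is reduced here to the mean
# operator action for its group (a placeholder for a learned model).
models = [actions_a.mean(), actions_b.mean()]

def predict_operation(env):
    """Classify the given surrounding environment, select the imitation
    model of the matching group, and predict the operator's action."""
    return models[classify(np.asarray(env, dtype=float))]
```

Swapping in a real clustering algorithm and real per-group regressors preserves the same interface: classify the environment, pick the matching model, predict.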

Viewpoint invariant visual servoing of robot end effector using recurrent neural network

Training and/or using a recurrent neural network model for visual servoing of an end effector of a robot. In visual servoing, the model can be utilized to generate, at each of a plurality of time steps, an action prediction that represents a prediction of how the end effector should be moved to cause the end effector to move toward a target object. The model can be viewpoint invariant in that it can be utilized across a variety of robots having vision components at a variety of viewpoints and/or can be utilized for a single robot even when a viewpoint, of a vision component of the robot, is drastically altered. Moreover, the model can be trained based on a large quantity of simulated data that is based on simulator(s) performing simulated episode(s) in view of the model. One or more portions of the model can be further trained based on a relatively smaller quantity of real training data.
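The per-time-step structure of such a model can be sketched as a single recurrent update that folds image and target features into a hidden state and reads an action prediction out of it. The dimensions and the untrained random weights below are placeholders; in the described system the weights would come from large-scale simulated episodes refined with a small amount of real data:

```python
import numpy as np

rng = np.random.default_rng(0)
H, F = 16, 8                       # hidden size, feature size (assumed)

# Untrained stand-in weights for the recurrent, input, and output maps.
Wh = rng.normal(scale=0.1, size=(H, H))
Wx = rng.normal(scale=0.1, size=(H, 2 * F))
Wo = rng.normal(scale=0.1, size=(3, H))    # 3-D end-effector motion

def visual_servo_step(h, image_feat, target_feat):
    """One recurrent step: fold the current image features and the
    target-object features into the hidden state, then emit an action
    prediction (an end-effector displacement) from that state."""
    x = np.concatenate([image_feat, target_feat])
    h = np.tanh(Wh @ h + Wx @ x)
    return h, Wo @ h

h = np.zeros(H)
for _ in range(4):                 # a few servoing time steps
    h, action = visual_servo_step(h, rng.normal(size=F), rng.normal(size=F))
```

Viewpoint invariance comes from the training distribution rather than this structure: conditioning only on image-space features, over episodes with widely varied camera placements, forces the recurrence to absorb the viewpoint rather than assume it.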

Robotic interactions for observable signs of intent

Described herein are assistant robots that anticipate the needs of one or more people (or animals). The assistant robots may combine recognition of a current activity with knowledge of a person's routines and with contextual information, and on that basis can provide, or offer to provide, appropriate robotic assistance. The assistant robots can learn users' habits or be provided with knowledge regarding the humans in their environment, and develop a schedule and contextual understanding of each person's behavior and needs. The assistant robots may interact, understand, and communicate with people before, during, or after providing assistance. A robot can combine gesture, clothing, emotional aspect, time, pose recognition, action recognition, and other observational data to understand a person's medical condition, current activity, and future intended activities and intents.