Patent classification: B25J9/163
Method and apparatus for generating delivery data models for aerial package delivery
An approach is provided for generating delivery data models for aerial package delivery. The approach involves determining at least one delivery surface data object to represent one or more delivery surfaces of at least one delivery location, wherein the one or more delivery surfaces represents at least one surface upon which to deliver at least one package. The approach further involves causing, at least in part, a creation of at least one complete delivery data model based, at least in part, on the at least one delivery surface data object to represent the at least one delivery location. The approach further involves causing, at least in part, an encoding of at least one geographic address in the at least one complete delivery data model to cause, at least in part, an association of the at least one complete delivery data model with at least one geographic location.
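Read as a data-modelling exercise, the abstract describes a model assembled from delivery-surface objects with a geographic address encoded into it to tie the model to a location. A minimal sketch of such a structure follows; all class, field, and method names here are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class DeliverySurface:
    """One surface (e.g. a porch or a balcony) suitable for receiving a package."""
    surface_id: str
    polygon: list          # (x, y, z) vertices outlining the surface
    max_package_kg: float  # weight limit the surface can safely receive

@dataclass
class DeliveryDataModel:
    """Complete model of a delivery location, associated with an address."""
    location_id: str
    surfaces: list = field(default_factory=list)
    geographic_address: str = ""

    def encode_address(self, address: str) -> None:
        # Associating the model with a geographic location amounts to
        # storing the address alongside the surface geometry.
        self.geographic_address = address

model = DeliveryDataModel("loc-1")
model.surfaces.append(
    DeliverySurface("porch", [(0, 0, 0), (2, 0, 0), (2, 2, 0), (0, 2, 0)], 10.0)
)
model.encode_address("1600 Example Ave")
```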
Operating multiple testing robots based on robot instructions and/or environmental parameters received in a request
Methods and apparatus related to receiving a request that includes robot instructions and/or environmental parameters, operating each of a plurality of robots based on the robot instructions and/or in an environment configured based on the environmental parameters, and storing data generated by the robots during the operating. In some implementations, at least part of the stored data that is generated by the robots is provided in response to the request and/or additional data that is generated based on the stored data is provided in response to the request.
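The request/run/store loop this abstract describes can be sketched as below; the request schema, the dictionary keys, and the function names are assumptions for illustration, not the claimed implementation:

```python
def run_test_request(request, robots):
    """Operate each robot per the request and store the generated data."""
    stored = []
    env = request.get("environmental_parameters", {})
    for robot in robots:
        # Each robot runs the same instructions in an environment
        # configured from the request's environmental parameters.
        data = robot(request.get("robot_instructions", []), env)
        stored.append(data)
    # Part of the stored data is provided in response to the request.
    return {"stored": stored, "response": stored[:1]}

# A stand-in "robot" that just echoes what it was asked to do.
def fake_robot(instructions, env):
    return {"ran": instructions, "env": env}

result = run_test_request(
    {"robot_instructions": ["move", "grasp"],
     "environmental_parameters": {"light": "dim"}},
    [fake_robot, fake_robot],
)
```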
Learning device, learning method, learning model, detection device and grasping system
An estimation device includes a memory and at least one processor. The at least one processor is configured to acquire information regarding a target object and to estimate information regarding a location and a posture at which a gripper is able to grasp the target object. The estimation is based on an output of a neural network model that takes the information regarding the target object as input. The estimated posture information is capable of expressing rotation angles around a plurality of axes.
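The input/output contract of such an estimator can be sketched with a trivial stand-in for the network; the feature names and the roll/pitch/yaw encoding are assumptions chosen only to show a posture expressed as rotations about multiple axes:

```python
import math

def estimate_grasp(object_info):
    """Stand-in for the neural network model: maps object information to a
    gripper location and a posture expressed as rotations about 3 axes."""
    cx, cy, cz = object_info["centroid"]
    # Roll/pitch/yaw lets the output express rotation angles around
    # a plurality of axes, as the abstract requires.
    posture = {"roll": 0.0,
               "pitch": math.pi / 2,
               "yaw": object_info["long_axis_yaw"]}
    # Grasp slightly above the centroid of the target object.
    return {"location": (cx, cy, cz + 0.05), "posture": posture}

pose = estimate_grasp({"centroid": (0.1, 0.2, 0.0), "long_axis_yaw": 0.3})
```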
Artificial intelligence server and method for providing information to user
An artificial intelligence server for providing information to a user includes a communication unit configured to communicate with a plurality of artificial intelligence apparatuses deployed in a service area, and a processor. The processor is configured to: receive at least one of speech data of the user or terminal usage information of the user from at least one of the plurality of artificial intelligence apparatuses; generate intention information of the user based on at least one of the received speech data or the received terminal usage information; generate status information of the user using the plurality of artificial intelligence apparatuses; determine an information providing device among the plurality of artificial intelligence apparatuses based on the generated status information; generate output information to be outputted from the determined information providing device; and transmit a control signal for outputting the generated output information to the determined information providing device.
Robot and method for controlling thereof
A robot and a controlling method thereof are provided. The robot includes a memory configured to store at least one instruction; and at least one processor configured to execute the at least one instruction to: based on detecting a user interaction, acquire information on a behavior tree corresponding to the user interaction, and perform an action corresponding to the user interaction based on the information on the behavior tree, wherein the behavior tree includes a node for controlling a dialogue flow between the robot and a user.
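A behavior tree with a dialogue-controlling node, as this abstract describes, can be sketched in a few lines; the node classes and the "greeting" interaction are hypothetical examples, not the patent's own design:

```python
class Node:
    def tick(self, ctx): ...

class Sequence(Node):
    """Runs children in order; fails as soon as one child fails."""
    def __init__(self, *children):
        self.children = children
    def tick(self, ctx):
        return all(child.tick(ctx) for child in self.children)

class DialogueNode(Node):
    """Node controlling the dialogue flow between the robot and a user."""
    def __init__(self, prompt):
        self.prompt = prompt
    def tick(self, ctx):
        ctx["utterances"].append(self.prompt)
        return True

class ActionNode(Node):
    """Node performing a physical action."""
    def __init__(self, action):
        self.action = action
    def tick(self, ctx):
        ctx["actions"].append(self.action)
        return True

# Tree looked up for a detected "greeting" interaction: speak, then act.
tree = Sequence(DialogueNode("Hello! How can I help?"), ActionNode("wave"))
ctx = {"utterances": [], "actions": []}
tree.tick(ctx)
```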
Engineering drawing review using robotic process automation
A computing device is configured to receive information representing an electronic drawing. The information is processed to locate and access the electronic drawing. A template configuration file corresponding to a category of the electronic drawing is accessed. Using the template configuration file, the computing device generates an item specification that includes information representing items to be reviewed for compliance with at least one rule. For each item in the item specification, it is determined whether the item is present in the electronic drawing and complies with at least one rule identified in the template configuration file. Information representing that determination is generated and transmitted to at least one computing device.
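The template-driven check loop can be sketched as follows; the template structure, the rule lambdas, and the item names are invented for illustration, not drawn from the patent:

```python
def review_drawing(drawing_items, template):
    """Check each item from the template's item specification against
    the drawing, reporting presence and rule compliance per item."""
    report = {}
    for item, rule in template["items"].items():
        present = item in drawing_items
        # An item complies only if it is present AND its content
        # satisfies the rule from the template configuration file.
        complies = present and rule(drawing_items[item])
        report[item] = {"present": present, "complies": complies}
    return report

# Hypothetical template for one drawing category, with one rule per item.
template = {"items": {
    "title_block": lambda v: bool(v.get("author")),
    "scale": lambda v: v.get("ratio") in ("1:1", "1:2"),
}}
drawing = {"title_block": {"author": "J. Doe"}}  # no "scale" item
report = review_drawing(drawing, template)
```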
Data-driven robot control
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for data-driven robotic control. One of the methods includes maintaining robot experience data; obtaining annotation data; training, on the annotation data, a reward model; generating task-specific training data for the particular task, comprising, for each experience in a second subset of the experiences in the robot experience data: processing the observation in the experience using the trained reward model to generate a reward prediction, and associating the reward prediction with the experience; and training a policy neural network on the task-specific training data for the particular task, wherein the policy neural network is configured to receive a network input comprising an observation and to generate a policy output that defines a control policy for a robot performing the particular task.
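The data flow in this abstract (score each experience's observation with the trained reward model, attach the prediction, then feed the result to policy training) can be sketched directly; the reward model here is a trivial stand-in, and all names are illustrative assumptions:

```python
def build_task_data(experiences, reward_model):
    """Label each experience with a reward predicted for the particular task."""
    return [
        {"observation": exp["observation"],
         "action": exp["action"],
         # The trained reward model scores the observation; the
         # prediction is associated with the experience.
         "reward": reward_model(exp["observation"])}
        for exp in experiences
    ]

# Stand-in reward model: high reward when the gripper is near the goal.
reward_model = lambda obs: 1.0 if obs["dist_to_goal"] < 0.1 else 0.0

experience_data = [
    {"observation": {"dist_to_goal": 0.05}, "action": "close_gripper"},
    {"observation": {"dist_to_goal": 0.80}, "action": "move_left"},
]
task_data = build_task_data(experience_data, reward_model)
# task_data is the task-specific training set for the policy network.
```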
Robot programming system
A robot programming system according to an aspect of the present disclosure includes: a robot program storage section; a press program storage section; a template program setting section that causes the robot program storage section to store, as an initial version of a robot program, a template program that instructs a robot how to move basically; a model placing section that places three-dimensional models of a workpiece, the robot, and a press machine in a virtual space; a robot movement processing section that causes the three-dimensional model of the robot to move; a press movement processing section that causes the three-dimensional model of the press machine to move; an interference detection section that detects interference between the three-dimensional models; and a robot program modification section that modifies a robot program stored in the robot program storage section to prevent interference detected by the interference detection section.
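The detect-interference-then-modify loop at the heart of this abstract can be sketched with axis-aligned boxes standing in for the three-dimensional models; the overlap test, the "raise the approach point" fix, and every name below are simplifying assumptions, not the patented method:

```python
def boxes_overlap(a, b):
    """Axis-aligned overlap test standing in for 3-D interference detection."""
    return all(a[i][0] < b[i][1] and b[i][0] < a[i][1] for i in range(3))

def modify_program(program, robot_box, press_box):
    """Raise the robot's approach height until no interference is detected,
    mimicking modification of the stored robot program."""
    (x, y, z) = program["approach_point"]
    while boxes_overlap(robot_box(z), press_box):
        z += 0.1  # nudge the template program's motion away from the press
    return {**program, "approach_point": (x, y, round(z, 1))}

press = ((0, 1), (0, 1), (0, 0.5))                           # press volume
robot_at = lambda z: ((0.2, 0.4), (0.2, 0.4), (z, z + 0.2))  # gripper volume
fixed = modify_program({"approach_point": (0.3, 0.3, 0.0)}, robot_at, press)
```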
Object manipulation apparatus, handling method, and program product
An object manipulation apparatus according to an embodiment of the present disclosure includes a memory and a hardware processor coupled to the memory. The hardware processor is configured to: calculate, based on an image containing one or more objects to be grasped, an evaluation value of a first behavior manner of grasping the one or more objects; generate information representing a second behavior manner based on the image and a plurality of evaluation values of the first behavior manner; and control actuation of grasping the one or more objects in accordance with the generated information.
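One way to read this is: score each candidate first-manner grasp, then derive a second behavior manner from the full set of evaluation values. The sketch below picks the best-scored grasp plus a pre-grasp approach; that mapping, and all names, are assumptions, since the abstract leaves the derivation open:

```python
def plan_grasp(candidates, evaluate):
    """Score each first-manner grasp candidate, then derive a second
    behavior manner from the plurality of evaluation values."""
    scores = {name: evaluate(g) for name, g in candidates.items()}
    # Here the second behavior manner is simply the best-scored grasp
    # preceded by a matching pre-grasp approach motion.
    best = max(scores, key=scores.get)
    return {"approach": "pre_" + best, "grasp": best, "scores": scores}

plan = plan_grasp(
    {"top": {"clearance": 0.8}, "side": {"clearance": 0.3}},
    lambda g: g["clearance"],  # stand-in evaluation of a grasp manner
)
```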
System for checking instrument state of a surgical robotic arm
A surgical robotic system includes: a surgical console having a display and a user input device configured to generate a user input; a surgical robotic arm having a surgical instrument configured to treat tissue and actuatable in response to the user input; and a video camera configured to capture video data that is displayed on the display. The system also includes a control tower coupled to the surgical console and the surgical robotic arm. The control tower is configured to: process the user input to control the surgical instrument and record the user input as input data; train a machine learning system using the input data and the video data; and execute the machine learning system to determine a probability of failure of the surgical instrument.