Patent classifications
B25J9/1697
Method and system for detecting and picking up objects
A method includes steps of: capturing an image of a container; recognizing at least one object in the container based on the image; determining at least one first coordinate set corresponding to the at least one object; determining at least one second coordinate set that corresponds to target one(s) of the at least one first coordinate set and that relates to a fixed picking device of a robotic arm; adjusting position(s) of unfixed picking device(s) of the robotic arm if necessary; and controlling the robotic arm to pick up one(s) of the at least one object that correspond(s) to the at least one second coordinate set with the fixed picking device and/or at least one unfixed picking device.
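The claimed steps can be sketched as a small pipeline. This is a minimal illustration, not the patented implementation: the detector, the `Detection` type, and the picker-offset transform are all invented stand-ins, and object recognition here simply reads pre-annotated coordinates from the input.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    # A "first coordinate set": object position in the camera/container frame.
    x: float
    y: float
    z: float

def recognize_objects(image):
    """Stand-in detector: a real system would run a vision model here."""
    return [Detection(*coords) for coords in image["annotations"]]

def to_picker_frame(det, picker_offset):
    """Derive a "second coordinate set" relative to the fixed picking device,
    assuming (for illustration) a pure translational offset."""
    ox, oy, oz = picker_offset
    return (det.x - ox, det.y - oy, det.z - oz)

def plan_picks(image, picker_offset):
    firsts = recognize_objects(image)                   # first coordinate sets
    return [to_picker_frame(d, picker_offset) for d in firsts]  # second sets

image = {"annotations": [(0.4, 0.2, 0.1), (0.6, 0.3, 0.1)]}
targets = plan_picks(image, picker_offset=(0.5, 0.0, 0.0))
```

A real system would then send each target in `targets` to the arm controller, choosing the fixed or an unfixed picking device per object.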
Robot hand and robot system
The robot hand holds a wire harness having a long harness main body and a connector connected to an end of the harness main body. The robot hand includes a fixed holding portion which holds the harness main body in a vicinity of the end thereof, a pressing portion movable relative to the fixed holding portion in a longitudinal direction of the harness main body held by the fixed holding portion, and a driving unit which moves the pressing portion in a direction away from the fixed holding portion such that the pressing portion presses the connector outwardly in the longitudinal direction of the harness main body.
Robot and method for recognizing wake-up word thereof
Provided is a robot including a microphone configured to acquire a sound signal corresponding to a sound generated near the robot, a camera, an output interface including at least one of a display configured to output a wake-up screen or a speaker configured to output a wake-up sound when the robot wakes up, and a processor configured to recognize whether the acquired sound includes a voice of a person, activate the camera when the sound includes a voice of a person, recognize whether a person is present in an image acquired by the activated camera, set a wake-up word recognition sensitivity based on a recognition result as to whether a person is present, and recognize whether a wake-up word is included in voice data of a user acquired through the microphone based on the set wake-up word recognition sensitivity.
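The sensitivity logic described above can be sketched as a threshold rule. The threshold values below are invented for illustration; the abstract only states that sensitivity depends on whether a person is recognized in the camera image.

```python
def wake_word_threshold(voice_detected: bool, person_in_image: bool) -> float:
    """Return the minimum confidence a wake-word score must reach.
    Lower threshold = higher recognition sensitivity (assumed values)."""
    if not voice_detected:
        return 1.0   # no voice nearby: effectively never wake
    # More sensitive when the camera confirms a person is present.
    return 0.5 if person_in_image else 0.8

def is_wake_word(score: float, voice_detected: bool, person_in_image: bool) -> bool:
    return score >= wake_word_threshold(voice_detected, person_in_image)
```

For example, a marginal score of 0.6 would wake the robot only when a person was seen by the camera.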
Robot system, method for controlling robot, robot controller, and non-transitory computer-readable storage medium
A robot system includes a robot, a vision sensor, a target position generation circuit, an estimation circuit, and a control circuit. The robot includes an end effector and is configured to work via the end effector on a workpiece which is disposed at a relative position and which is relatively movable with respect to the end effector. The vision sensor is configured to take an image of the workpiece. The target position generation circuit is configured to, based on the image, generate a target position of the end effector at every generation interval. The estimation circuit is configured to, at least based on relative position information related to the relative position, estimate a change amount in the relative position at every estimation interval. The control circuit is configured to control the robot to move the end effector based on the target position and the change amount.
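One illustrative control step combining the two quantities in this abstract: the vision-generated target position and the estimated change amount in the workpiece's relative position. The feed-forward combination and the numeric values are assumptions for the sketch, not the patented control law.

```python
def estimate_change(prev_rel_pos, rel_pos):
    """Change amount in the relative position over one estimation interval."""
    return tuple(b - a for a, b in zip(prev_rel_pos, rel_pos))

def command_position(target, change):
    """Feed the estimated workpiece motion forward onto the target position,
    so the end effector tracks a workpiece that moves between image frames."""
    return tuple(t + c for t, c in zip(target, change))

change = estimate_change((0.0, 0.0, 0.0), (0.01, 0.0, 0.0))
cmd = command_position((0.5, 0.2, 0.3), change)
```

In the claimed system the target would be regenerated at every generation interval and the change re-estimated at every (possibly faster) estimation interval.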
Systems for determining location using robots with deformable sensors
Systems and methods for determining a location of a robot are provided. A method includes receiving, by a processor, a signal from a deformable sensor including data with respect to a deformation region in a deformable membrane of the deformable sensor resulting from contact with a first object. The data associated with contact with the first object is compared, by the processor, to information associated with a plurality of objects stored in a database. The first object is identified, by the processor, as a first identified object of the plurality of objects stored in the database. The first identified object is an object of the plurality of objects stored in the database that is most similar to the first object. The location of the robot is determined, by the processor, based on a location of the first identified object.
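A toy version of the matching step: compare a deformation signature against a database of known objects and return the stored location of the most similar one. The signature vectors, the database layout, and the Euclidean distance metric are all invented for illustration; the abstract does not specify how similarity is computed.

```python
def l2(a, b):
    """Euclidean distance between two equal-length signature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def locate_robot(signature, database):
    """database maps object name -> (stored signature, stored location).
    Returns the most similar object and its location (the robot's estimate)."""
    best = min(database, key=lambda k: l2(signature, database[k][0]))
    return best, database[best][1]

db = {
    "mug":   ((0.9, 0.1, 0.3), (2.0, 1.0)),
    "table": ((0.2, 0.8, 0.5), (5.0, 4.0)),
}
name, loc = locate_robot((0.85, 0.15, 0.25), db)
```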
Robot, measurement fixture, and tool-tip-position determining method
A robot including an arm, a tool attached to the arm, a measurement fixture to be attached to a tip portion of the tool detachably, and a controller that recognizes a reference coordinate system used to control the arm, and that controls the arm. The controller stores data indicating a positional relationship between a tip of the tool and the measurement fixture or data to be used to calculate the positional relationship, and the controller calculates positional coordinates of the tip of the tool in the reference coordinate system based on position data of the measurement fixture and the positional relationship, where the position data is detected by using acquired image data from a visual sensor whose position is associated with the reference coordinate system.
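The tip-position calculation can be sketched as follows, under a simplifying assumption: the stored tip-to-fixture positional relationship is treated as a pure translation in the reference coordinate system, whereas a full implementation would also carry the fixture's orientation (a rotation).

```python
def tip_position(fixture_pos, tip_to_fixture_offset):
    """Recover the tool tip's coordinates in the reference frame, given the
    fixture position detected via the visual sensor and the stored offset
    from tip to fixture (illustrative translation-only model)."""
    return tuple(f - o for f, o in zip(fixture_pos, tip_to_fixture_offset))

# Fixture detected 5 cm above the tip along z (assumed example values).
tip = tip_position((1.0, 0.5, 0.2), (0.0, 0.0, 0.05))
```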
Control device and master slave system
Provided is a control device including a control unit that calculates a first positional relationship between an eye of an observer observing an object displayed on a display unit and a first point in a master-side three-dimensional coordinate system, and controls an imaging unit that images the object so that a second positional relationship between the imaging unit and a second point corresponding to the first point in a slave-side three-dimensional coordinate system corresponds to the first positional relationship.
Artificial intelligence apparatus for cleaning in consideration of user's action and method for the same
An AI robot for cleaning in consideration of a user's action includes a camera to acquire first image data of the user, a cleaning unit including a suction unit and a mopping unit, a driving unit configured to drive the AI robot, and a processor to determine the user's action using the first image data, determine a cleaning schedule in consideration of the user's action, and control the cleaning unit and the driving unit based on the determined cleaning schedule.
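An invented scheduling rule illustrating "cleaning in consideration of the user's action": defer noisy suction while the user is doing something noise-sensitive and mop distant rooms instead. The action labels and the rule itself are hypothetical; the abstract leaves the schedule policy unspecified.

```python
# Actions during which loud suction would disturb the user (assumed set).
QUIET_ACTIONS = {"sleeping", "on_call", "watching_tv"}

def cleaning_schedule(user_action: str) -> list:
    """Map the recognized user action to an ordered list of cleaning tasks."""
    if user_action in QUIET_ACTIONS:
        return ["mop_far_rooms", "suction_deferred"]
    return ["suction_all_rooms", "mop_all_rooms"]
```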
Systems and methods for automated association of product information with electronic shelf labels
Systems and methods that employ an autonomous robotic vehicle (ARV) alone or in combination with a remote computing device during the installation of electronic shelf labels (ESLs) in a facility are discussed. The ARV may detect pre-existing product information from paper labels located on modular units prior to their removal and then detect the location of electronic shelf labels (ESLs) after installation. Pre-existing product information gleaned from the paper labels is associated with the corresponding ESLs. The ARV may also determine compliance or non-compliance of modular units to which an ESL is affixed with a planogram of the facility.
Virtual teach and repeat mobile manipulation system
A method for controlling a robotic device is presented. The method includes positioning the robotic device within a task environment. The method also includes mapping descriptors of a task image of a scene in the task environment to a teaching image of a teaching environment. The method further includes defining a relative transform between the task image and the teaching image based on the mapping. Furthermore, the method includes updating parameters of a set of parameterized behaviors based on the relative transform to perform a task corresponding to the teaching image.