Patent classification: B25J19/023
SYSTEMS AND METHODS FOR DETECTING WASTE RECEPTACLES USING CONVOLUTIONAL NEURAL NETWORKS
Systems and methods for detecting a waste receptacle, the system including a camera for capturing an image, a convolutional neural network, and a processor. The convolutional neural network can be trained to identify target waste receptacles. The processor can be mounted on a waste-collection vehicle and in communication with the camera and the convolutional neural network. The processor can be configured for using the convolutional neural network to generate an object candidate based on the image; using the convolutional neural network to determine whether the object candidate corresponds to a target waste receptacle; and selecting an action based on whether the object candidate is acceptable.
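The decision flow claimed above (generate candidates, score them with a CNN, select an action) can be sketched as follows. This is an illustrative sketch, not the patented implementation; the function names, action labels, and the 0.8 acceptance threshold are all assumptions, and the scoring function stands in for the trained CNN classifier.

```python
# Hypothetical sketch: a detector proposes object candidates from a camera
# image, a classifier scores each against the target waste-receptacle class,
# and an action is selected based on whether the best candidate is acceptable.

def select_action(candidates, score_fn, threshold=0.8):
    """Return (action, candidate) for the best-scoring candidate."""
    best, best_score = None, 0.0
    for cand in candidates:
        score = score_fn(cand)  # stand-in for CNN classifier confidence
        if score > best_score:
            best, best_score = cand, score
    if best is not None and best_score >= threshold:
        return ("engage_lifting_arm", best)   # acceptable: proceed to collect
    return ("continue_driving", None)         # no target receptacle detected

# Toy usage with a dictionary standing in for the scoring function:
action = select_action(["bin_a", "rock"], {"bin_a": 0.93, "rock": 0.2}.get)
# action == ("engage_lifting_arm", "bin_a")
```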
ROBOTIC SYSTEMS WITH GRIPPING MECHANISMS, AND RELATED SYSTEMS AND METHODS
Robotic systems with variable gripping mechanisms, and related systems and methods, are disclosed herein. In some embodiments, the robotic system includes a robotic arm and an object-gripping assembly coupled to the robotic arm. The object-gripping assembly can include a main body coupled to the robotic arm through an external connector on an upper surface of the main body and a vacuum-operated gripping component coupled to a lower surface of the main body. The object-gripping assembly can also include a variable-width gripping component coupled to the main body. The variable-width gripping component is movable between a fully folded state, a plurality of extended states, and a clamping state to grip a variety of target objects of varying shapes, sizes, weights, and orientations.
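The variable-width gripping component's movement between a fully folded state, several extended states, and a clamping state can be modeled as a small state machine. This is a minimal sketch under stated assumptions: the state names, the set of extension widths, and the transition rules are illustrative, not taken from the patent.

```python
# Illustrative state machine for the variable-width gripping component:
# folded -> extended (at one of several widths) -> clamping.

FOLDED, EXTENDED, CLAMPING = "folded", "extended", "clamping"

class VariableWidthGripper:
    def __init__(self, widths=(50, 100, 150)):  # assumed extension widths (mm)
        self.widths = widths
        self.state = FOLDED
        self.width = 0

    def extend(self, width):
        if width not in self.widths:
            raise ValueError(f"unsupported width {width}")
        self.state, self.width = EXTENDED, width

    def clamp(self):
        # Clamping only makes sense from an extended state.
        if self.state != EXTENDED:
            raise RuntimeError("must extend before clamping")
        self.state = CLAMPING

    def fold(self):
        self.state, self.width = FOLDED, 0
```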
SYSTEM AND METHOD FOR ROBOTIC OBJECT PLACEMENT
A computing system including a processing circuit in communication with a robot and a camera having a field of view. The processing circuit obtains image information based on the objects in the field of view and a loading environment, which includes loading areas, an object queue, and a buffer zone. The computing system is configured to use the obtained image information in motion planning operations for the retrieval and placement of objects from the object queue into the loading environment. Pallets provided within the loading environment (i.e., within the loading areas) are dedicated to receiving objects having corresponding object type identifiers. The computing system further uses the image information to determine the fill status of pallets existing within the loading environment, and whether new pallets need to be brought into the loading environment and/or swapped out with existing pallets to account for future planning and placement operations.
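The fill-status logic described above can be sketched as follows: each pallet is dedicated to one object type identifier, and when a queued object has no pallet with remaining capacity, the object is routed to the buffer zone and a pallet swap is requested. This is an illustrative sketch, not the patented implementation; the data layout and function name are assumptions.

```python
# Hypothetical sketch of fill-status checking and swap requests.
# pallets maps an object type identifier to its dedicated pallet's state.

def plan_placement(object_queue, pallets):
    """pallets: dict type_id -> {"count": int, "capacity": int}.
    Returns (placements, swap_requests)."""
    placements, swaps = [], []
    for obj_type in object_queue:
        pallet = pallets.get(obj_type)
        if pallet and pallet["count"] < pallet["capacity"]:
            pallet["count"] += 1                 # pallet can accept the object
            placements.append((obj_type, "place"))
        else:
            # Pallet is full or missing: hold the object in the buffer zone
            # and request that a fresh pallet be brought in / swapped.
            swaps.append(obj_type)
            placements.append((obj_type, "buffer"))
    return placements, swaps
```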
AUTONOMOUSLY NAVIGATING ROBOT CAPABLE OF CONVERSING AND SCANNING BODY TEMPERATURE TO HELP SCREEN FOR COVID-19 AND OPERATION SYSTEM THEREOF
This application relates to an autonomously navigating robot. In one aspect, the robot includes an end effector configured to measure a person's body temperature and, when the body temperature exceeds a standard fever temperature, activate a chatbot to check symptoms of Covid-19. The robot may also include a manipulator configured to align the end effector with the person's forehead. The robot may further include a mobile robot configured to detect the person and move the end effector and the manipulator to a position where the person is located by performing autonomous navigation.
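The screening sequence in the abstract (measure body temperature, then activate a chatbot symptom check if it exceeds a standard fever temperature) reduces to a simple conditional. A hedged sketch follows; the 37.5 °C cutoff and the action labels are assumptions, not values from the patent.

```python
# Illustrative sketch of the temperature-screening decision. The fever
# threshold is an assumed, commonly cited cutoff, not the patent's value.

FEVER_THRESHOLD_C = 37.5

def screen_person(measured_temp_c):
    """Decide the next step after the end effector measures forehead temperature."""
    if measured_temp_c > FEVER_THRESHOLD_C:
        return "activate_chatbot"  # proceed to Covid-19 symptom questions
    return "pass"
```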
Visual annotations in robot control interfaces
Methods, apparatus, systems, and computer-readable media are provided for visually annotating rendered multi-dimensional representations of robot environments. In various implementations, an entity may be identified that is present with a telepresence robot in an environment. A measure of potential interest of a user in the entity may be calculated based on a record of one or more interactions between the user and one or more computing devices. In some implementations, the one or more interactions may be for purposes other than directly operating the telepresence robot. In various implementations, a multi-dimensional representation of the environment may be rendered as part of a graphical user interface operable by the user to control the telepresence robot. In various implementations, a visual annotation may be selectively rendered within the multi-dimensional representation of the environment in association with the entity based on the measure of potential interest.
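The selective rendering described above (annotate an entity only when a measure of potential interest, computed from the user's interaction records, is high enough) can be sketched like this. The scoring rule (count interaction records mentioning the entity) and the threshold are illustrative assumptions standing in for the claimed measure.

```python
# Hypothetical sketch: compute an interest measure per entity from the
# user's interaction records, then annotate only high-interest entities.

def interest_score(interactions, entity):
    """Stand-in interest measure: count interaction records (e.g. messages,
    calendar entries) that mention the entity."""
    return sum(1 for record in interactions if entity in record)

def entities_to_annotate(entities, interactions, min_score=2):
    """Entities whose interest measure meets the (assumed) rendering threshold."""
    return [e for e in entities if interest_score(interactions, e) >= min_score]
```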
Robot hand and robot system
The robot hand holds a wire harness having a long harness main body and a connector connected to an end of the harness main body. The robot hand includes a fixed holding portion which holds the harness main body in a vicinity of the end thereof, a pressing portion movable relative to the fixed holding portion in a longitudinal direction of the harness main body held by the fixed holding portion, and a driving unit which moves the pressing portion in a direction away from the fixed holding portion such that the pressing portion presses the connector outwardly in the longitudinal direction of the harness main body.
Robot and method for recognizing wake-up word thereof
Provided is a robot including a microphone configured to acquire a sound signal corresponding to a sound generated near the robot, a camera, an output interface including at least one of a display configured to output a wake-up screen or a speaker configured to output a wake-up sound when the robot wakes up, and a processor configured to recognize whether the acquired sound includes a voice of a person, activate the camera when the sound includes a voice of a person, recognize whether a person is present in an image acquired by the activated camera, set a wake-up word recognition sensitivity based on a recognition result as to whether a person is present, and recognize whether a wake-up word is included in voice data of a user acquired through the microphone based on the set wake-up word recognition sensitivity.
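The sensitivity adjustment described above can be sketched as follows: when the camera confirms a person is present, the wake-up-word recognition threshold is relaxed so the robot wakes more readily; otherwise it is tightened to avoid false wakes. The numeric thresholds and the direction of adjustment are illustrative assumptions.

```python
# Illustrative sketch of person-presence-dependent wake-word sensitivity.
# Lower threshold = more sensitive; the 0.5/0.8 values are assumed.

def wake_word_threshold(person_visible):
    return 0.5 if person_visible else 0.8

def should_wake(wake_word_confidence, person_visible):
    """Decide whether the wake-word recognizer's confidence clears the
    presence-dependent threshold."""
    return wake_word_confidence >= wake_word_threshold(person_visible)
```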
Systems and methods for privacy management in an autonomous mobile robot
A method of operating a mobile cleaning robot can include receiving a privacy mode setting from a user interface, where the privacy mode setting is based on a user selection between at least two different privacy mode settings for determining whether to operate the mobile cleaning robot in an image-capture-restricted mode. An image stream of an image capture device of the mobile cleaning robot is permitted in an absence of a user-selection of a more restrictive one of the privacy settings. At least a portion of the image stream is restricted or disabled based at least in part on a user-selection of a more restrictive one of the privacy settings.
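The gating behavior described above (the image stream flows unless the user selected a more restrictive privacy setting, under which at least a portion of the stream is restricted or disabled) can be sketched like this. The setting names and the choice to drop every other frame under the restricted setting are assumptions for illustration, not the product's actual behavior.

```python
# Hypothetical sketch of privacy-mode gating on the robot's image stream.

PRIVACY_MODES = ("standard", "restricted", "disabled")

def filter_image_stream(frames, mode):
    if mode not in PRIVACY_MODES:
        raise ValueError(f"unknown privacy mode {mode!r}")
    if mode == "disabled":
        return []                  # image capture fully disabled
    if mode == "restricted":
        return frames[::2]         # restrict a portion of the stream (assumed rule)
    return list(frames)            # default setting: stream permitted
```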
SYSTEM AND/OR METHOD FOR ROBOTIC FOODSTUFF ASSEMBLY
The foodstuff assembly system can include: a robot arm, a frame, a set of foodstuff bins, a sensor suite, a set of food utensils, and a computing system. The system can optionally include a container management system and a human machine interface (HMI). However, the foodstuff assembly system can additionally or alternatively include any other suitable set of components. The system functions to enable picking of foodstuff from a set of foodstuff bins and placement into a container (such as a bowl, tray, or other foodstuff receptacle). Additionally or alternatively, the system can function to facilitate transferal of bulk material (e.g., bulk foodstuff) into containers, such as containers moving along a conveyor line.
GENERATION OF IMAGE FOR ROBOT OPERATION
A robot control system includes circuitry configured to: generate a command to a robot; receive a frame image in which a capture position changes according to a motion of the robot based on the command; extract a partial region from the frame image according to the command; superimpose a delay mark on the partial region to generate an operation image; and display the operation image on a display device, so as to represent a delay of the motion of the robot with respect to the command.
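The operation-image generation described above has two steps: extract a partial region from the frame image according to the command, then superimpose a delay mark representing the robot's lag behind the command. A minimal sketch follows; the frame representation (a 2D list of pixels), the region format, and the textual delay mark are all assumptions for illustration.

```python
# Illustrative sketch of operation-image generation: crop a partial region
# from the frame per the command, then attach a delay indicator.

def make_operation_image(frame, region, delay_ms):
    """frame: 2D list of pixel values; region: (row0, row1, col0, col1),
    half-open bounds. Returns (partial_region, delay_mark)."""
    r0, r1, c0, c1 = region
    partial = [row[c0:c1] for row in frame[r0:r1]]  # extract partial region
    delay_mark = f"delay:{delay_ms}ms"              # mark superimposed for display
    return partial, delay_mark
```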