B25J19/021

Robotic Touch Perception
20220347851 · 2022-11-03

An apparatus such as a robot capable of performing goal-oriented tasks may include one or more touch sensors to receive touch-perception feedback on the location of objects and structures within an environment. A fusion engine may be configured to combine touch-perception data with other types of sensor data, such as data received from an image or distance sensor. The apparatus may combine distance sensor data with touch sensor data using inference models such as Bayesian inference. The touch sensor may be mounted on an adjustable arm of the robot. The apparatus may use the data it receives from both a touch sensor and a distance sensor to build a map of its environment and perform goal-oriented tasks such as cleaning or moving objects.
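The Bayesian fusion named in the abstract can be illustrated with the standard product of two Gaussian estimates: a noisy distance-sensor reading and a more precise touch reading of the same range are combined into one posterior. A minimal sketch (the function name and the variance values are illustrative assumptions, not from the patent):

```python
def fuse_gaussian(mu_a, var_a, mu_b, var_b):
    """Fuse two independent Gaussian estimates of the same quantity
    (e.g. the distance to an object) into one posterior estimate.
    The lower-variance (more trusted) measurement dominates."""
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    mu = var * (mu_a / var_a + mu_b / var_b)
    return mu, var

# Distance sensor says 1.00 m but is noisy; touch sensor says 0.95 m
# and is far more precise, so the fused estimate sits near 0.95 m.
mu, var = fuse_gaussian(1.00, 0.04, 0.95, 0.0025)
```

The fused variance is always smaller than either input variance, which is why combining the two modalities improves the environment map even when the touch contact is brief.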

PALLETIZING BOXES

A method for palletizing by a robot includes positioning an object at an initial position adjacent to a target object location, tilting the object at an angle relative to a ground plane, shifting the object in a first direction from the initial position toward a first alignment position, shifting the object in a second direction from the first alignment position toward a second alignment position, and releasing the object from the robot to pivot the object toward the target object location.
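The claimed method is an ordered sequence of poses, which can be sketched as a waypoint generator. Everything below (the function name, the 2D coordinates, the tilt angle) is a hypothetical illustration of the step ordering, not the patent's implementation:

```python
import numpy as np

def palletize_waypoints(initial_xy, tilt_deg, shift1, shift2):
    """Ordered (x, y, tilt) poses for the claimed sequence: position,
    tilt relative to the ground plane, shift toward a first alignment
    position, shift toward a second, then release to pivot the object
    toward the target location."""
    p0 = np.asarray(initial_xy, float)
    poses = [(*p0, 0.0)]           # initial position adjacent to target
    poses.append((*p0, tilt_deg))  # tilt at an angle to the ground plane
    p1 = p0 + np.asarray(shift1, float)
    poses.append((*p1, tilt_deg))  # first alignment position
    p2 = p1 + np.asarray(shift2, float)
    poses.append((*p2, tilt_deg))  # second alignment position
    poses.append((*p2, 0.0))       # release: object pivots flat
    return poses

wps = palletize_waypoints((0.0, 0.0), 5.0, (0.10, 0.0), (0.0, 0.05))
```

The two orthogonal shifts press the tilted box against already-placed neighbours on two sides before release, which is how the method achieves alignment without precise absolute positioning.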

BRICK/BLOCK LAYING MACHINE INCORPORATED IN A VEHICLE
20220058300 · 2022-02-24 ·

A self-contained truck-mounted brick laying machine can include a frame that can support packs or pallets of bricks placed on a platform. A transfer robot can pick up and move the brick(s). A carousel can be coaxial with a tower. The carousel can transfer the brick(s) via the tower to an articulated and/or telescoping boom. The bricks can be moved along the boom by, e.g., linearly moving shuttles, to reach a brick laying and adhesive applying head. The brick laying and adhesive applying head can mount to an element of the stick, about an axis which is disposed horizontally. The poise of the brick laying and adhesive applying head about the axis can be adjusted and can be set in use so that the base of a clevis of the robotic arm mounts about a horizontal axis, and the tracker component is disposed uppermost on the brick laying and adhesive applying head. The brick laying and adhesive applying head can apply adhesive to the brick and can have a robot that lays the brick. Vision, laser scanning, and tracking systems can be provided to allow measurement of as-built slabs and bricks, monitoring and adjustment of the process, and monitoring of safety zones. The first, or any, course of bricks can have the bricks pre-machined by the router module so that the top of the course is level once laid.

ROBOTIC KITCHEN ASSISTANT FOR FRYING INCLUDING AGITATOR ASSEMBLY FOR SHAKING UTENSIL
20220055225 · 2022-02-24 ·

A robotic kitchen assistant for frying includes a robotic arm, a fryer basket, and a robotic arm adapter assembly allowing the robotic arm to pick up and manipulate the fryer basket. The robotic arm adapter includes opposing gripping members to engage the fryer basket. A utensil adapter assembly is mounted to the handle of the fryer basket, and the opposing gripping members are actuated to capture a three-dimensional (3D) feature of the utensil adapter assembly. The robotic arm adapter assembly can include an agitator mechanism to shake the fryer basket or another utensil as desired. Related methods are also described.

MANUAL WORK STATION AND CONTROL UNIT FOR CONTROLLING THE SEQUENCING OF A MANUAL WORK STATION

A manual work station, in particular a manual work station for manufacturing and/or a manual work station for packaging, comprising a work area accessible to a worker, the manual work station having at least one robotic arm, the manual work station having a safety device, which is designed in such a way that the robotic arm cooperates in a contact-free manner with the worker in the work area. The invention furthermore relates to a control unit for controlling the sequencing of a manual work station.

Chemical analyzer

A medical apparatus for analyzing fluid samples includes an outer casing, a slide loading mechanism disposed within the outer casing for loading fluid analysis slides, a slide ejecting mechanism disposed within the outer casing for ejecting fluid analysis slides, an evaporation cap opening mechanism disposed within the outer casing for opening an evaporation cap, an evaporation cap closing mechanism disposed within the outer casing for closing an evaporation cap, a drawer locking mechanism disposed within the outer casing for locking a drawer associated with the outer casing, a camera disposed within the outer casing, and a robot disposed within the outer casing. The robot is movable in three dimensions and has means for conducting three or more of the following operations: slide loading; slide ejecting; evaporation cap opening; evaporation cap closing; drawer locking; and camera manipulation.

Fusing Multiple Depth Sensing Modalities

A method includes receiving a first depth map that includes a plurality of first pixel depths and a second depth map that includes a plurality of second pixel depths. The first depth map corresponds to a reference depth scale and the second depth map corresponds to a relative depth scale. The method includes aligning a region of the second pixel depths with a corresponding region of the first pixel depths. The method includes transforming the aligned region of the second pixel depths such that transformed second edge pixel depths of the aligned region are coextensive with first edge pixel depths surrounding the corresponding region of the first pixel depths. The method includes generating a third depth map. The third depth map includes a first region corresponding to the first pixel depths and a second region corresponding to the transformed and aligned region of the second pixel depths.
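One common way to make a relative-scale region coextensive with the reference depths at its boundary is a least-squares scale-and-offset fit on the edge pixels, then applying that transform to the region's interior. This sketch assumes a simple affine (scale s, offset t) mapping, which the abstract does not specify; the names and toy values are illustrative:

```python
import numpy as np

def fit_relative_to_reference(rel_edge, ref_edge):
    """Least-squares scale s and offset t so that s*rel + t ~= ref on
    the edge pixels surrounding the region being fused."""
    A = np.stack([rel_edge, np.ones_like(rel_edge)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, ref_edge, rcond=None)
    return s, t

# Edge pixels of the aligned region in both maps (toy values, metres).
ref_edge = np.array([1.0, 1.2, 1.4, 1.6])   # reference depth scale
rel_edge = np.array([0.0, 0.1, 0.2, 0.3])   # relative depth scale
s, t = fit_relative_to_reference(rel_edge, ref_edge)

# Transform the interior of the region with the fitted parameters,
# then paste it into the third (fused) depth map.
region = s * np.array([0.05, 0.15, 0.25]) + t
```

Because the fit is anchored on the shared boundary, the transformed region joins the reference depths without a seam, which is the stated goal of the transformation step.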

Automatic calibration for a robot optical sensor

Systems and methods are provided for automatic intrinsic and extrinsic calibration for a robot optical sensor. An implementation includes a first optical sensor; a robot arm; a calibration chart; one or more processors; and a memory storing instructions that cause the one or more processors to perform operations that include: determining a set of poses for calibrating the first optical sensor; generating, based at least on the set of poses, pose data comprising three-dimensional (3D) position and orientation data; moving, based at least on the pose data, the robot arm into a plurality of poses; at each pose of the plurality of poses, capturing a set of images of the calibration chart with the first optical sensor and recording a pose; calculating intrinsic calibration parameters, based at least on the set of captured images; and calculating extrinsic calibration parameters, based at least on the set of captured images.
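The intrinsic-calibration step can be illustrated at its simplest: with chart corners at known camera-frame positions and their observed pixel projections, a pinhole model recovers the focal length by least squares. This is a deliberately reduced sketch (single focal length, known principal point, no distortion); real calibration pipelines solve for the full intrinsic matrix and lens distortion:

```python
import numpy as np

def estimate_focal(points_cam, pixels, cx, cy):
    """Least-squares focal length f under an ideal pinhole model
    u = f*X/Z + cx, v = f*Y/Z + cy, given camera-frame chart corners
    (X, Y, Z) and their observed pixel coordinates (u, v)."""
    X, Y, Z = points_cam.T
    rays = np.concatenate([X / Z, Y / Z])
    offs = np.concatenate([pixels[:, 0] - cx, pixels[:, 1] - cy])
    return float(np.dot(rays, offs) / np.dot(rays, rays))

# Synthetic chart corners and their exact projections under a
# hypothetical 600 px focal length (principal point at 320, 240).
pts = np.array([[0.10, 0.05, 1.0], [-0.20, 0.10, 2.0], [0.30, -0.15, 1.5]])
f_true, cx, cy = 600.0, 320.0, 240.0
pix = np.stack([f_true * pts[:, 0] / pts[:, 2] + cx,
                f_true * pts[:, 1] / pts[:, 2] + cy], axis=1)
f_est = estimate_focal(pts, pix, cx, cy)
```

Capturing the chart from many arm poses, as the claim describes, is what makes the full problem well conditioned: varied viewpoints decorrelate the intrinsic parameters from the sensor-to-arm (extrinsic) transform.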

System and method for piece-picking or put-away with a mobile manipulation robot

A method and system for piece-picking or piece put-away within a logistics facility. The system includes a central server and at least one mobile manipulation robot. The central server is configured to communicate with the robots to send and receive piece-picking data which includes a unique identification for each piece to be picked, a location within the logistics facility of the pieces to be picked, and a route for the robot to take within the logistics facility. The robots can then autonomously navigate and position themselves within the logistics facility by recognition of landmarks by at least one of a plurality of sensors. The sensors also provide signals related to detection, identification, and location of a piece to be picked or put-away, and processors on the robots analyze the sensor information to generate movements of a unique articulated arm and end effector on the robot to pick or put-away the piece.
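The route the server sends could be produced by any planner; a greedy nearest-neighbour ordering over pick locations is a minimal illustration of the idea (the function name, piece IDs, and coordinates are all hypothetical):

```python
import math

def plan_route(robot_xy, picks):
    """Greedy nearest-neighbour ordering of pick locations.
    picks maps a unique piece ID to its (x, y) floor coordinates;
    the robot repeatedly visits the closest remaining piece."""
    remaining = dict(picks)
    pos, route = robot_xy, []
    while remaining:
        pid = min(remaining, key=lambda k: math.dist(pos, remaining[k]))
        route.append(pid)
        pos = remaining.pop(pid)
    return route

route = plan_route((0, 0), {"A": (5, 5), "B": (1, 0), "C": (2, 2)})
```

Greedy ordering is not optimal in general, but it is cheap to compute centrally and pairs naturally with the robots' own landmark-based navigation between successive pick locations.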

Systems and methods for automated cloud-based analytics for surveillance systems with unmanned aerial devices
09747502 · 2017-08-29

Systems and methods for cloud-based surveillance for a target surveillance area are disclosed. At least two mobile input capture devices (ICDs) are communicatively connected to a cloud-based analytics platform via a data communication device. At least one user device can access the cloud-based analytics platform. The cloud-based analytics platform automatically analyzes received 2-Dimensional (2D) video and/or image inputs for generating 3-Dimensional (3D) surveillance data and providing a 3D display for the target surveillance area. In one embodiment, the at least two mobile ICDs are Unmanned Aerial Vehicles (UAVs).