Patent classifications
G05D1/2435
Systems for setting and programming zoning for use by autonomous modular robots
A modular robot is provided. The modular robot includes a sweeper module having a container for collecting debris from a surface of a location. The sweeper module is coupled to one or more brushes for contacting the surface and moving the debris into the container. Also included is a robot module having wheels and configured to couple to the sweeper module. The robot module is enabled for autonomous movement and corresponding movement of the sweeper module over the surface. A controller is integrated with the robot module and interfaces with the sweeper module. The controller is configured to execute instructions for assigning at least two zones at the location and assigning a work function to be performed using the sweeper module at each of the at least two zones. The controller is further configured to program the robot module to activate the sweeper module in each of the at least two zones, with the assigned work function set for performance at each zone. The work function can be to sweep, scrub, polish, or mow, and different work functions can be performed over different zones of the location. Remote access is provided to view real-time operation of the modular robot and to program zones and other control parameters of the modular robot.
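As a rough illustration of the zoning and work-function assignment the controller performs, here is a minimal Python sketch; the `Zone`, `WorkFunction`, and `Controller` names are hypothetical, not from the patent:

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class WorkFunction(Enum):
    SWEEP = auto()
    SCRUB = auto()
    POLISH = auto()
    MOW = auto()


@dataclass
class Zone:
    name: str
    # Polygon vertices (x, y) in the location's map frame.
    boundary: list


@dataclass
class Controller:
    """Assigns a work function per zone and drives the sweeper module."""
    assignments: dict = field(default_factory=dict)

    def assign(self, zone: Zone, function: WorkFunction) -> None:
        self.assignments[zone.name] = function

    def run(self, zones: list) -> None:
        for zone in zones:
            function = self.assignments[zone.name]
            # In a real robot this would activate the sweeper module and
            # command autonomous coverage of the zone.
            print(f"Performing {function.name} in zone '{zone.name}'")


kitchen = Zone("kitchen", [(0, 0), (5, 0), (5, 4), (0, 4)])
lobby = Zone("lobby", [(5, 0), (12, 0), (12, 8), (5, 8)])

controller = Controller()
controller.assign(kitchen, WorkFunction.SCRUB)
controller.assign(lobby, WorkFunction.SWEEP)
controller.run([kitchen, lobby])
```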
3-D image system for vehicle control
A control system uses visual odometry (VO) data to identify a position of a vehicle while it moves along a path next to a row and to detect the vehicle reaching an end of the row. The control system can also use the VO data to turn the vehicle around from a first position at the end of the row to a second position at a start of another row. The control system may detect an end of row based on 3-D image data, VO data, and GNSS data. The control system may also adjust the VO data so the end of row detected from the VO data corresponds with the end of row location identified with the GNSS data.
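A rough sketch of that VO/GNSS adjustment, assuming planar 2-D positions; the distance threshold and additive offset correction are illustrative, not the patent's method:

```python
import numpy as np


def detect_end_of_row(position: np.ndarray, row_end: np.ndarray,
                      threshold_m: float = 0.5) -> bool:
    """End of row is declared when the position estimate comes within
    threshold_m of the known row-end location."""
    return float(np.linalg.norm(position - row_end)) < threshold_m


# By the time the vehicle reaches the end of the row, VO has drifted.
vo_end = np.array([98.7, 0.4])     # VO position estimate
gnss_end = np.array([100.0, 0.0])  # GNSS fix at the end of the row

# Offset that maps the VO end-of-row detection onto the GNSS one;
# applying it to subsequent VO estimates keeps the two frames aligned.
offset = gnss_end - vo_end

print(detect_end_of_row(vo_end, gnss_end))           # False: raw VO misses it
print(detect_end_of_row(vo_end + offset, gnss_end))  # True after adjustment
```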
Robot generating map based on multi sensors and artificial intelligence and moving based on map
Disclosed herein is a robot that generates a map based on multiple sensors and artificial intelligence and moves based on the map. The robot according to an embodiment includes a controller generating a pose graph that includes a LiDAR branch including one or more LiDAR frames, a visual branch including one or more visual frames, and a backbone including two or more frame nodes registered with any one or more of the LiDAR frames or the visual frames, the controller also generating odometry information while the robot is moving between the frame nodes.
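The pose-graph structure described above can be sketched as plain data types; a minimal, hypothetical rendering (the class names and odometry tuple layout are invented for illustration):

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class LidarFrame:
    scan_id: int


@dataclass
class VisualFrame:
    image_id: int


@dataclass
class FrameNode:
    """Backbone node, registered with a LiDAR and/or a visual frame."""
    node_id: int
    lidar: Optional[LidarFrame] = None
    visual: Optional[VisualFrame] = None


@dataclass
class PoseGraph:
    nodes: list = field(default_factory=list)
    # Odometry edges between consecutive frame nodes: (from_id, to_id, (dx, dy, dtheta)).
    odometry: list = field(default_factory=list)

    def add_node(self, node: FrameNode, motion=(1.0, 0.0, 0.0)) -> None:
        if self.nodes:
            # Odometry accumulated while the robot moved between the nodes.
            self.odometry.append((self.nodes[-1].node_id, node.node_id, motion))
        self.nodes.append(node)


graph = PoseGraph()
graph.add_node(FrameNode(0, lidar=LidarFrame(0), visual=VisualFrame(0)))
graph.add_node(FrameNode(1, lidar=LidarFrame(1)))    # LiDAR-only node
graph.add_node(FrameNode(2, visual=VisualFrame(1)))  # visual-only node
print(len(graph.nodes), "nodes,", len(graph.odometry), "odometry edges")
```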
Self-supervised attention learning for depth and motion estimation
A system includes: a depth module including an encoder and a decoder and configured to: receive a first image from a first time from a camera; and based on the first image, generate a depth map including depths between the camera and objects in the first image; a pose module configured to: generate a first pose of the camera based on the first image; generate a second pose of the camera for a second time based on a second image; and generate a third pose of the camera for a third time based on a third image; and a motion module configured to: determine a first motion of the camera between the second and first times based on the first and second poses; and determine a second motion of the camera between the second and third times based on the second and third poses.
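The motion module's pose-to-motion step can be illustrated with rigid 4x4 transforms; a minimal sketch assuming camera-to-world pose matrices (the helper names and translation values are invented):

```python
import numpy as np


def relative_motion(pose_a: np.ndarray, pose_b: np.ndarray) -> np.ndarray:
    """Camera motion from pose_a to pose_b, each a 4x4 camera-to-world
    transform: T_ab = inv(T_a) @ T_b."""
    return np.linalg.inv(pose_a) @ pose_b


def pose(tx: float) -> np.ndarray:
    """Pose translated tx meters along x."""
    t = np.eye(4)
    t[0, 3] = tx
    return t


# Poses at times t1, t2, t3 from the (hypothetical) pose module.
pose_1, pose_2, pose_3 = pose(0.0), pose(1.0), pose(2.5)

motion_21 = relative_motion(pose_2, pose_1)  # motion between t2 and t1
motion_23 = relative_motion(pose_2, pose_3)  # motion between t2 and t3
print(motion_21[0, 3], motion_23[0, 3])      # -1.0 and 1.5 meters along x
```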
Object detection via comparison of synchronized pulsed illumination and camera imaging
An image processing system may comprise a global shutter camera, an illumination emitter, and a processing system comprising at least one processor and memory. The processing system may be configured to control the image processing system to: control the illumination emitter to illuminate a scene; control the global shutter camera to capture a sequence of images of the scene, wherein the captured sequence of images includes images that are captured without illumination of the scene by the illumination emitter and images that are captured while the scene is illuminated by the illumination emitter; and determine presence of an object in the scene based on comparison of the images captured without illumination of the scene and images captured with illumination of the scene.
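A minimal sketch of the lit/unlit comparison, assuming 8-bit grayscale frames; the brightness threshold and the 1 % area criterion are illustrative, not the patent's values:

```python
import numpy as np


def detect_object(lit: np.ndarray, unlit: np.ndarray,
                  threshold: int = 40) -> bool:
    """Pixels that brighten under emitter illumination but not in the
    unlit exposure reflect the pulse, so a sizeable bright region in the
    difference image suggests a nearby object."""
    diff = lit.astype(np.int16) - unlit.astype(np.int16)
    bright = diff > threshold
    return bright.mean() > 0.01  # >1 % of pixels responded to the pulse


rng = np.random.default_rng(0)
unlit = rng.integers(0, 30, size=(480, 640), dtype=np.uint8)  # ambient only
lit = unlit.copy()
lit[200:300, 300:400] += 90  # region reflecting the illumination pulse
print(detect_object(lit, unlit))  # True
```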
Disinfection robot and controlling method thereof
Disclosed herein is a disinfection robot. The disinfection robot includes a body provided with an outlet, a fan provided inside the body, a fan motor configured to rotate the fan, a wheel provided under the body, a wheel motor configured to rotate the wheel, a three-dimensional camera having a forward field of view of the body and configured to capture a three-dimensional image, and a processor configured to control the fan motor to rotate the fan to discharge air through the outlet and control the wheel motor to rotate the wheel to move the body based on the three-dimensional image.
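One plausible reading of the processor's control loop, reduced to a single tick; the speeds, distances, and RPM values are invented for illustration:

```python
def control_step(nearest_depth_m: float, fan_on: bool = True):
    """One control tick: keep the fan spinning to discharge air through
    the outlet, and slow or stop the wheels when the 3-D camera sees an
    obstacle in the forward field of view.

    nearest_depth_m: nearest depth in the three-dimensional image.
    Returns (fan_rpm, wheel_speed_mps).
    """
    fan_rpm = 3000 if fan_on else 0
    if nearest_depth_m < 0.5:    # obstacle close: stop the wheels
        wheel_speed = 0.0
    elif nearest_depth_m < 1.5:  # obstacle near: slow down
        wheel_speed = 0.2
    else:
        wheel_speed = 0.6
    return fan_rpm, wheel_speed


print(control_step(3.0))  # (3000, 0.6): path clear, keep disinfecting
print(control_step(0.4))  # (3000, 0.0): obstacle ahead, hold position
```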
Information processing apparatus, moving body control system, control method, and non-transitory computer-readable medium
The information processing apparatus includes an extraction unit configured to extract, from a plurality of first photographed images photographed by a first photographing apparatus for creating a three-dimensional image of a predetermined region, at least one first photographed image similar to a second photographed image photographed by a second photographing apparatus mounted on a moving apparatus moving in the predetermined region; an estimation unit configured to estimate a first photographing orientation of the first photographing apparatus when photographing the extracted first photographed image, and a second photographing orientation of the second photographing apparatus at the time when the second photographed image is photographed; and a control unit configured to control an action of the moving apparatus based on the first photographing orientation and the second photographing orientation.
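A toy sketch of that extract-then-estimate-then-control pipeline, assuming precomputed global image descriptors and yaw-only orientations; every name and value here is illustrative:

```python
import numpy as np


def most_similar(query_desc: np.ndarray, reference_descs: list) -> int:
    """Index of the reference (first) image most similar to the query
    (second) image, by L2 distance between global descriptors."""
    dists = [float(np.linalg.norm(query_desc - d)) for d in reference_descs]
    return int(np.argmin(dists))


# Hypothetical descriptors for the pre-captured first images (used to
# build the 3-D model) and for the robot's current second image.
reference_descs = [np.array([0.9, 0.1]), np.array([0.2, 0.8]),
                   np.array([0.5, 0.5])]
reference_yaws_deg = [0.0, 90.0, 45.0]  # first photographing orientations
query_desc = np.array([0.25, 0.75])
query_yaw_deg = 80.0                    # estimated second orientation

idx = most_similar(query_desc, reference_descs)
heading_correction = reference_yaws_deg[idx] - query_yaw_deg
print(idx, heading_correction)          # 1, 10.0-degree turn command
```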
Vehicle navigation based on aligned image and LIDAR information
Systems and methods are provided for navigating an autonomous vehicle. In one implementation, a navigational system for a host vehicle may include at least one processor programmed to: receive a stream of images captured by a camera onboard the host vehicle, wherein the captured images are representative of an environment surrounding the host vehicle; and receive an output of a LIDAR onboard the host vehicle, wherein the output of the LIDAR is representative of a plurality of laser reflections from at least a portion of the environment surrounding the host vehicle. The at least one processor may also be configured to determine at least one indicator of relative alignment between the output of the LIDAR and at least one image captured by the camera; attribute LIDAR reflection information to one or more objects identified in the at least one image based on the at least one indicator of the relative alignment between the output of the LIDAR and the at least one image captured by the camera; and use the attributed LIDAR reflection information and the one or more objects identified in the at least one image to determine at least one navigational characteristic associated with the host vehicle.
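A minimal sketch of attributing LiDAR reflections to image objects once relative alignment is known, assuming a pinhole intrinsic matrix K and a LiDAR-to-camera transform (identity here for simplicity); all values are invented:

```python
import numpy as np


def project_points(points_lidar: np.ndarray, T_cam_lidar: np.ndarray,
                   K: np.ndarray) -> np.ndarray:
    """Project 3-D LiDAR points into pixel coordinates."""
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]


def attribute_reflections(uv, boxes):
    """Map each projected LiDAR reflection to the image bounding box
    (x0, y0, x1, y1) it falls inside, if any."""
    hits = {}
    for i, (u, v) in enumerate(uv):
        for b, (x0, y0, x1, y1) in enumerate(boxes):
            if x0 <= u <= x1 and y0 <= v <= y1:
                hits.setdefault(b, []).append(i)
    return hits


K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
T = np.eye(4)                            # assumed LiDAR-to-camera alignment
points = np.array([[0.0, 0.0, 10.0], [2.0, 0.0, 10.0]])
boxes = [(300, 220, 340, 260)]           # object detected in the image
uv = project_points(points, T, K)
print(attribute_reflections(uv, boxes))  # {0: [0]}: first point hits box 0
```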
Predicting and responding to cut-in vehicles and altruistic responses
A vehicle navigation system may comprise a memory including instructions and circuitry configured by the instructions to identify a target vehicle in an environment of a vehicle that includes the vehicle navigation system. The circuitry may receive image data of the target vehicle from an image capture device of the vehicle; identify, based on analysis of the image data, a situational characteristic of the target vehicle including an indication that the target vehicle is traveling behind an additional vehicle traveling slower than the target vehicle; and change a navigational state of the vehicle to allow an action of the target vehicle. The vehicle may be configured to cause the change in the navigational state based on a determination that the situational characteristic indicates that the target vehicle would benefit from the change in the navigational state.
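A toy sketch of the altruistic yielding decision, with invented fields standing in for the situational characteristics extracted from the image data:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class TargetVehicle:
    speed_mps: float
    lead_speed_mps: Optional[float]  # speed of the vehicle ahead of the target, if any
    drifting_toward_our_lane: bool


def should_yield(target: TargetVehicle) -> bool:
    """Altruistic response: open a gap when the target is stuck behind a
    slower vehicle and appears likely to cut in."""
    boxed_in = (target.lead_speed_mps is not None
                and target.lead_speed_mps < target.speed_mps)
    return boxed_in and target.drifting_toward_our_lane


target = TargetVehicle(speed_mps=25.0, lead_speed_mps=18.0,
                       drifting_toward_our_lane=True)
if should_yield(target):
    # Change the navigational state, e.g. ease off to enlarge the gap.
    print("Yielding: reducing speed to allow the target vehicle to cut in")
```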
Mobile robot system and method for generating map data using straight lines extracted from visual images
A mobile robot is configured to navigate on a sidewalk and deliver a delivery to a predetermined location. The robot has a body and an enclosed space within the body for storing the delivery during transit. At least two cameras are mounted on the robot body and are adapted to take visual images of an operating area. A processing component is adapted to extract straight lines from the visual images taken by the cameras and generate map data based at least partially on the images. A communication component is adapted to send and receive image and/or map data. A mapping system includes at least two such mobile robots, with the communication component of each robot adapted to send and receive image data and/or map data to the other robot. A method involves operating such a mobile robot in an area of interest in which deliveries are to be made.
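A minimal sketch of the straight-line extraction step using OpenCV's Canny edge detector and probabilistic Hough transform; the patent does not specify a detector, so treat this as one plausible implementation with illustrative parameters:

```python
import cv2
import numpy as np


def extract_lines(image_bgr: np.ndarray) -> np.ndarray:
    """Extract straight-line segments from a camera image; the segments
    (building edges, curbs, etc.) become landmarks in the map data."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=80, minLineLength=40, maxLineGap=5)
    return np.empty((0, 4), dtype=np.int32) if lines is None else lines[:, 0]


# Synthetic frame with one strong straight edge.
frame = np.zeros((240, 320, 3), dtype=np.uint8)
cv2.line(frame, (20, 200), (300, 40), (255, 255, 255), 3)
segments = extract_lines(frame)
print(len(segments), "line segment(s) extracted")
```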