Patent classifications
G05D1/0253
Method and device for determining the geographic position and orientation of a vehicle
In a method for determining the geographic position and orientation of a vehicle, an image of the vehicle's surroundings is recorded by at least one camera of the vehicle, wherein the recorded image at least partially comprises regions of the vehicle's surroundings on the ground level. Classification information is generated for the individual pixels of the recorded image and indicates an assignment to one of several given object classes, wherein based on this assignment, a semantic segmentation of the image is performed. Ground texture transitions based on the semantic segmentation of the image are detected. The detected ground texture transitions are projected onto the ground level of the vehicle's surroundings. The deviation between the ground texture transitions projected onto the ground level of the vehicle's surroundings and ground texture transitions in a global reference map is minimized. The current position and orientation of the vehicle in space is output based on the minimized deviation.
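The final step of the abstract, minimizing the deviation between projected ground texture transitions and those in a global reference map, could be sketched as a least-squares alignment. The sketch below assumes (hypothetically) that point correspondences are already known and that only a 2D translation on the ground plane is estimated; a real system would also estimate rotation and handle unknown correspondences (e.g., with ICP).

```python
def align_translation(projected, reference):
    """Least-squares 2D translation minimizing the deviation between
    matched ground-texture transition points.

    projected, reference: equal-length lists of (x, y) ground-plane points.
    Returns the (dx, dy) shift that best maps projected onto reference.
    (Simplified sketch: correspondences known, translation only.)
    """
    n = len(projected)
    dx = sum(r[0] - p[0] for p, r in zip(projected, reference)) / n
    dy = sum(r[1] - p[1] for p, r in zip(projected, reference)) / n
    return dx, dy
```

Applying the resulting shift to the vehicle's pose estimate yields the corrected position; the residual after alignment indicates the confidence of the fix.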
OBSTRUCTION AVOIDANCE
An obstruction avoidance system may include an agronomy vehicle, at least one sensor configured to output signals serving as a basis for a three-dimensional (3D) point cloud and to output signals corresponding to a two-dimensional (2D) image, and instructions to direct a processor to capture a particular 2D image and to obtain signals serving as a basis for a particular 3D point cloud; identify an obstruction candidate in the particular 2D image; correlate the obstruction candidate to a portion of the particular 3D point cloud to determine a value for a parameter of the obstruction candidate; classify the obstruction candidate as an obstruction based upon the parameter; and output control signals to alter a state of the agronomy vehicle in response to the obstruction candidate being classified as an obstruction.
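The classification step above could be sketched as follows, under the illustrative assumption that the parameter derived from the correlated point-cloud portion is the candidate's height, and that a fixed threshold (here 0.3 m, an arbitrary choice) separates obstructions from ground clutter.

```python
def classify_obstruction(candidate_points, height_threshold=0.3):
    """Classify a 2D obstruction candidate using its correlated 3D points.

    candidate_points: list of (x, y, z) points from the 3D point cloud
    that correlate with the candidate's region in the 2D image.
    Returns True if the candidate's height exceeds the threshold.
    (Height-based parameter and threshold are illustrative assumptions.)
    """
    zs = [z for _, _, z in candidate_points]
    return (max(zs) - min(zs)) > height_threshold
```

A positive classification would then trigger the control signals that alter the vehicle's state, e.g., braking or steering around the obstruction.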
MOVING TARGET FOLLOWING METHOD, ROBOT AND COMPUTER-READABLE STORAGE MEDIUM
A moving target following method, which is executed by one or more processors of a robot that includes a camera and a sensor electrically coupled to the one or more processors, includes: performing a body detection on a body of a target based on images acquired by the camera to obtain a body detection result; performing a leg detection on legs of the target based on data acquired by the sensor to obtain a leg detection result; and fusing the body detection result and the leg detection result to obtain a fusion result, and controlling the robot to follow the target based on the fusion result.
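The fusion step could be sketched as a confidence-weighted combination of the two detections. The representation below, each result as a ((x, y) position, confidence) pair combined by weighted averaging, is an assumption for illustration; the patent does not specify the fusion scheme.

```python
def fuse_detections(body_result, leg_result):
    """Fuse body and leg detection results by confidence weighting.

    Each result is ((x, y), confidence). Returns the fused (x, y)
    target position. (Representation and weighting are assumptions.)
    """
    (bx, by), bw = body_result
    (lx, ly), lw = leg_result
    total = bw + lw
    return ((bx * bw + lx * lw) / total, (by * bw + ly * lw) / total)
```

The fused position would then feed the robot's follow controller, with the higher-confidence modality dominating when the two detectors disagree.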
Map creation method of mobile robot and mobile robot
The present disclosure discloses a map creation method of a mobile robot, the mobile robot working indoors, comprising the following steps: S1: obtaining Euler angles of a current point relative to a reference point according to a ceiling image taken from the current point and the reference point; S2: determining whether the roll angle of the Euler angles is lower than a set value, if so, saving the map data of the current point, otherwise, not saving the map data of the current point; S3: returning to step S1 after the mobile robot moves a predetermined distance or for a predetermined time; S4: repeating steps S1 through S3 until the map creation in the working area is complete. The present disclosure also discloses a mobile robot using the above method.
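Step S2, the decision of whether to save map data at the current point, reduces to a threshold test on the roll angle. A minimal sketch, assuming angles in degrees and an arbitrary illustrative limit of 5 degrees (the patent leaves the set value unspecified):

```python
def should_save_map_data(euler_angles, roll_limit=5.0):
    """Step S2 sketch: save map data only when the roll angle of the
    Euler angles (roll, pitch, yaw) is below a set value.
    (Degree units and the 5-degree limit are illustrative assumptions.)
    """
    roll, _pitch, _yaw = euler_angles
    return abs(roll) < roll_limit
```

The rationale would be that a large roll indicates the robot is tilted (e.g., climbing an obstacle), so the ceiling image, and hence the derived map data, is unreliable at that point.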
Robot generating map and configuring correlation of nodes based on multi sensors and artificial intelligence, and moving based on map, and method of generating map
Disclosed herein are a robot that generates a map and configures a correlation of nodes, based on multiple sensors and artificial intelligence, and that moves based on the map, and a method of generating a map. The robot according to an embodiment generates a pose graph comprised of a LiDAR branch, a visual branch, and a backbone; the LiDAR branch includes one or more LiDAR frames, the visual branch includes one or more visual frames, and the backbone includes two or more frame nodes registered with any one or more of the LiDAR frames or the visual frames; and the robot generates a correlation between nodes in the pose graph.
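The described pose graph structure, a backbone of frame nodes each registered with LiDAR and/or visual frames held in separate branches, could be sketched as below. The class and method names are assumptions for illustration; the actual graph would also carry pose estimates and the correlations (edges) between nodes.

```python
class PoseGraph:
    """Minimal sketch of the described pose graph: LiDAR branch,
    visual branch, and a backbone of frame nodes registered with
    frames from either branch. (Names are illustrative assumptions.)
    """

    def __init__(self):
        self.lidar_branch = []   # LiDAR frames
        self.visual_branch = []  # visual frames
        self.backbone = []       # frame nodes: {"lidar": idx|None, "visual": idx|None}

    def add_frame_node(self, lidar_frame=None, visual_frame=None):
        """Register a backbone frame node with any of the given frames."""
        node = {"lidar": None, "visual": None}
        if lidar_frame is not None:
            self.lidar_branch.append(lidar_frame)
            node["lidar"] = len(self.lidar_branch) - 1
        if visual_frame is not None:
            self.visual_branch.append(visual_frame)
            node["visual"] = len(self.visual_branch) - 1
        self.backbone.append(node)
        return len(self.backbone) - 1
```

Keeping the two sensor branches separate but anchored to a shared backbone lets the robot localize with whichever modality is available at a given node.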
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AUTONOMOUS TRAVELING ROBOT DEVICE, AND STORAGE MEDIUM
An information processing device that can show an area in which reliable autonomous traveling is possible acquires history information of a position/orientation of a moving body estimated based on a captured image from a camera mounted on the moving body, acquires obstacle arrangement information indicating an arrangement of obstacles in a space where the moving body moves, acquires autonomous traveling possibility information indicating an area in which a setting for causing the moving body to autonomously travel is possible based on the history information, and generates a map image showing the arrangement of obstacles and the area in which autonomous traveling is possible based on the obstacle arrangement information and the autonomous traveling possibility information.
MEASURING METHOD, MEASURING APPARATUS, AND MEASURING SYSTEM
A target-object-encompassing region representing a region causing any change between first image information and second image information is determined based on the first image information precluding a target object in an imageable area and the second image information including the target object in the imageable area. A two-dimensional size of the target object included in the target-object-encompassing region is determined based on the target-object-encompassing region and the height information.
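The first determination above, finding the region that changes between the image without the target object and the image with it, could be sketched as the bounding box of differing pixels. The sketch treats images as 2D lists and ignores sensor noise (a real system would threshold the difference); the 2D size then follows from the box extents.

```python
def target_region(first, second):
    """Bounding box (r0, c0, r1, c1) of pixels that differ between the
    first image (target absent) and second image (target present).
    (Noise-free pixel comparison is a simplifying assumption.)
    """
    diff = [(r, c) for r, row in enumerate(first)
            for c, v in enumerate(row) if second[r][c] != v]
    rows = [r for r, _ in diff]
    cols = [c for _, c in diff]
    return (min(rows), min(cols), max(rows), max(cols))

def two_d_size(region):
    """Two-dimensional size (height, width) of a bounding-box region."""
    r0, c0, r1, c1 = region
    return (r1 - r0 + 1, c1 - c0 + 1)
```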
Navigation at alternating merge zones
A navigation system for a host vehicle may include a processing device including circuitry and a memory storing instructions that, when executed by the circuitry, cause the processing device to receive images acquired by a camera representative of an environment of the host vehicle, and analyze the images to identify a double merge scenario including a first flow of traffic and a second flow of traffic in a same direction that merge to form a merged flow of traffic in a merged lane. The instructions, when executed by the circuitry, may further cause the processing device to cause a navigational change in the host vehicle based on a trajectory of a first target vehicle in the first flow of traffic and a trajectory of a second target vehicle in the second flow of traffic.
Using generated markings for vehicle control and object avoidance
A work machine has a backup camera that captures images of an area of a worksite behind the work machine. A controller identifies pre-defined markings in the worksite and localizes the pre-defined markings to the work machine, based on the images. A control signal generator generates control signals to automatically control the work machine based upon the localized markings.
SYSTEMS AND METHODS FOR VEHICLE POSITION CALIBRATION USING RACK LEG IDENTIFICATION AND MAST SWAY COMPENSATION
A materials handling vehicle includes a camera, odometry module, processor, and drive mechanism. The camera captures images of an identifier for a racking system aisle and a rack leg portion in the aisle. The processor uses the identifier to generate information indicative of an initial rack leg position and rack leg spacing in the aisle, generate an initial vehicle position using the initial rack leg position, generate a vehicle odometry-based position using odometry data and the initial vehicle position, detect a subsequent rack leg using a captured image, correlate the detected subsequent rack leg with an expected vehicle position using rack leg spacing, generate an odometry error signal based on a difference between the positions, and update the vehicle odometry-based position using the odometry error signal and/or generated mast sway compensation to use for end of aisle protection and/or in/out of aisle localization.
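The odometry error signal described above, the difference between the odometry-based position and the position expected from the detected rack leg, could be sketched for a single aisle axis as follows. The 1D model and the function names are assumptions; the patent's system would also fold in mast sway compensation.

```python
def odometry_error(odom_x, leg_index, first_leg_x, leg_spacing):
    """Error between the vehicle's odometry-based position along the
    aisle and the expected position of the detected rack leg.
    (1D aisle-axis model is a simplifying assumption.)
    """
    expected = first_leg_x + leg_index * leg_spacing
    return odom_x - expected

def corrected_position(odom_x, error):
    """Update the odometry-based position using the error signal."""
    return odom_x - error
```

Each rack leg detection thus acts as a landmark fix that resets accumulated odometry drift, which is what makes the position usable for end-of-aisle protection.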