Patent classifications
G05D1/2465
THREE-DIMENSIONAL SIMULATION METHOD AND THREE-DIMENSIONAL SIMULATION APPARATUS
Disclosed are a three-dimensional (3D) simulation method and a 3D simulation apparatus. The 3D simulation method disclosed herein includes a step (S20) of separating point cloud data including a road, an obstacle, and a cargo transportation route into road data and obstacle data, a step (S40) of converting the road data into road mesh data and converting the obstacle data into obstacle mesh data, a step (S60) of constructing a virtual environment by merging the road mesh data with the obstacle mesh data, a step (S80) of loading a specific cargo transportation route into the virtual environment, a step (S100) of loading a 3D transport truck and a 3D cargo at a predetermined point of the cargo transportation route, and a step (S120) of performing a route survey simulation while virtually driving the 3D transport truck and the 3D cargo along the cargo transportation route.
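The abstract does not disclose how step (S20) separates road points from obstacle points. A minimal sketch, assuming a hypothetical height-threshold criterion over a known ground plane, might look like:

```python
import numpy as np

def separate_point_cloud(points, ground_z=0.0, road_tolerance=0.15):
    """Split an (N, 3) point cloud into road and obstacle points.

    Hypothetical criterion: points within `road_tolerance` of the ground
    plane height are treated as road; everything else is an obstacle.
    The patent abstract does not specify the actual criterion.
    """
    heights = points[:, 2] - ground_z
    road_mask = np.abs(heights) <= road_tolerance
    return points[road_mask], points[~road_mask]

cloud = np.array([
    [0.0, 0.0, 0.02],   # road surface
    [1.0, 0.0, -0.05],  # road surface
    [1.0, 1.0, 1.80],   # obstacle (e.g. a roadside sign)
])
road, obstacles = separate_point_cloud(cloud)
```

The road and obstacle subsets would then be meshed independently (steps S40, S60), e.g. by surface reconstruction over each subset.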
Self-localizing system operative in an unknown environment
A system configured to operate in an unknown, possibly texture-less environment with possibly self-similar surfaces, comprising a plurality of platforms configured to operate as mobile platforms, where each of these platforms comprises an optical depth sensor, and one platform operates as a static platform and comprises at least one optical projector. Upon operating the system, the static platform projects a pattern onto the environment, wherein each of the mobile platforms detects the pattern or a part thereof by its respective optical depth sensor while moving, and wherein information obtained by the optical depth sensors is used to determine moving instructions for mobile platforms within that environment. Optionally, the system operates so that every time period another mobile platform from among the plurality of platforms takes the role of operating as the static platform, while the preceding platform returns to operate as a mobile platform.
System and method for autonomous inspection for asset maintenance and management
A method for performing an autonomous inspection. The method comprises traversing, by an autonomous sensor apparatus, a path through a site having three-dimensional objects located therein. The method comprises obtaining, by a plurality of sensors on-board the autonomous sensor apparatus, one or more data sets throughout the path. Each of the one or more data sets is associated with an attribute of one or more three-dimensional objects. The method comprises generating, by the first, second, or third processor, a working model from a collocated data set, and comparing, by the first, second, or third processor, the working model with one or more pre-existing models to determine the presence and/or absence of anomalies. The presence and/or absence of anomalies are communicated as human-readable instructions.
ENHANCED OBJECT DETECTION FOR AUTONOMOUS VEHICLES BASED ON FIELD OF VIEW
Systems and methods for enhanced object detection for autonomous vehicles based on field of view. An example method includes obtaining an image from an image sensor of one or more image sensors positioned about a vehicle. A field of view for the image is determined, with the field of view being associated with a vanishing line. A crop portion corresponding to the field of view is generated from the image, with a remaining portion of the image being downsampled. Information associated with detected objects depicted in the image is outputted based on a convolutional neural network, with objects being detected by performing a forward pass through the convolutional neural network on the crop portion and the remaining portion.
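The crop-plus-downsample preprocessing can be sketched as follows; the crop height and downsampling factor here are illustrative assumptions, not values from the patent:

```python
import numpy as np

def crop_and_downsample(image, vanishing_row, crop_height=4, factor=2):
    """Extract a full-resolution crop centred on the vanishing line and
    downsample the rest of the image (hypothetical parameters; the
    abstract does not fix the crop size or downsampling factor)."""
    h = image.shape[0]
    top = max(0, vanishing_row - crop_height // 2)
    bottom = min(h, top + crop_height)
    crop = image[top:bottom]
    # Naive stride-based downsampling of the remaining rows/columns.
    remaining = np.concatenate([image[:top], image[bottom:]], axis=0)
    downsampled = remaining[::factor, ::factor]
    return crop, downsampled

img = np.arange(16 * 16).reshape(16, 16)
crop, rest = crop_and_downsample(img, vanishing_row=8)
```

Both outputs would then be passed through the convolutional neural network; keeping full resolution near the vanishing line preserves distant, small objects while the cheaper downsampled portion covers the rest of the scene.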
Apparatus, system, and method of providing hazard detection and control for a mobile robot
An apparatus, system and method capable of providing an autonomous mobile robot hazard detection and control system. The apparatus, system and method may include: a robot having a robot body; a plurality of sensors physically associated with the robot body, and capable of detecting a hazardous condition in an operational environment; and at least one processing system at least partially physically associated with the robot body and communicatively connected to the plurality of sensors. The at least one processing system may include non-transitory computing code which, when executed by a processor of the at least one processing system, causes to occur the steps of: mapping a navigation path for the robot to traverse; detecting the hazardous condition along the navigation path based on output from the plurality of sensors; and instructing at least one action by the robot other than following the navigation path, wherein the at least one action at least partially addresses the hazardous condition.
Efficient map matching method for autonomous driving and apparatus thereof
A map matching method for autonomous driving includes extracting a first statistical map from 3D points contained in 3D map data; extracting a second statistical map from 3D points of the surroundings, which are obtained by a detection sensor simultaneously with or after the previous extraction of the statistical map; dividing the second statistical map into a vertical-object part and a horizontal-object part; and performing map matching using the horizontal-object part and/or the vertical-object part and the first statistical map.
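The abstract does not define the statistical map or the vertical/horizontal split. One plausible sketch, assuming per-cell height statistics and a hypothetical spread threshold, is:

```python
import numpy as np

def statistical_map(points, cell=1.0):
    """Bin 3D points into a 2D grid and keep per-cell height statistics
    (mean, standard deviation). A sketch of the 'statistical map' idea;
    the statistics actually used are not specified in the abstract."""
    stats = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        stats.setdefault(key, []).append(z)
    return {k: (float(np.mean(v)), float(np.std(v))) for k, v in stats.items()}

def split_vertical_horizontal(stat_map, std_threshold=0.5):
    """Cells with a large height spread are treated as vertical objects
    (walls, poles); flat cells as horizontal objects (road, ground)."""
    vertical = {k: v for k, v in stat_map.items() if v[1] > std_threshold}
    horizontal = {k: v for k, v in stat_map.items() if v[1] <= std_threshold}
    return vertical, horizontal

pts = [(0.2, 0.3, 0.0), (0.4, 0.1, 0.05),   # flat ground cell
       (2.5, 2.5, 0.0), (2.6, 2.4, 2.0)]    # cell containing a pole
sm = statistical_map(pts)
vert, horiz = split_vertical_horizontal(sm)
```

Matching the compact per-cell statistics instead of raw 3D points is what makes this kind of map matching cheap enough for online localization.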
Transport robot and its control method
A transportation robot 1 is configured to transport an item in a warehouse and includes: a sensor 50 configured to detect a three-dimensional shape of an object; and a controller 60 controlling a transportation operation of the transportation robot 1, the controller 60 comparing master data, which indicates a three-dimensional shape of a structure 200 located at a local spot in the warehouse, with detection data indicating a three-dimensional shape of the structure 200 detected by the sensor 50, to identify the structure 200.
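The comparison between master data and detection data is not detailed in the abstract. A minimal sketch, assuming a hypothetical nearest-neighbour distance score over registered point sets, could be:

```python
import math

def mean_nearest_distance(detected, master):
    """Average distance from each detected point to its nearest master
    point -- a crude shape-similarity score (an assumption; the abstract
    does not disclose the comparison metric)."""
    total = 0.0
    for p in detected:
        total += min(math.dist(p, m) for m in master)
    return total / len(detected)

def identify_structure(detected, master_shapes, max_error=0.3):
    """Return the name of the best-matching master shape, or None if no
    shape matches closely enough."""
    best_name, best_err = None, float("inf")
    for name, master in master_shapes.items():
        err = mean_nearest_distance(detected, master)
        if err < best_err:
            best_name, best_err = name, err
    return best_name if best_err <= max_error else None

masters = {
    "shelf": [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 0.0, 2.0)],
    "pillar": [(0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, 2.0)],
}
scan = [(0.05, 0.0, 0.0), (0.95, 0.0, 0.0), (1.0, 0.05, 2.0)]
```

A real system would first register the scan against the master frame (e.g. with ICP) before scoring; this sketch assumes the frames already coincide.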
METHOD FOR MANAGING AN AGRICULTURAL VEHICLE DURING HARVESTING PROCESS IN A PLANTATION OF FRUIT TREES, SUCH AS AN ORANGE ORCHARD
A method for managing an agricultural vehicle during a harvesting process in a plantation of fruit trees, such as an orange orchard, the vehicle being shaped as a portal capable of moving over a crop row and provided with a pair of rotors (R) arranged to work simultaneously on both opposite sides of each plant of the crop row. The method includes a process of recognizing the trunks of the trees of a plantation to be worked by means of a first 3D sensor (3DS) system interfaced with processing means (CPU) of an agricultural vehicle (V), wherein the first 3D sensor (3DS) system includes a pair of 3D sensors arranged in a low portion of the agricultural vehicle at opposite sides of the vehicle, oriented so as to converge at a common point (P) approximately on a vehicle center line axis (VC) in front of the vehicle. The process includes fitting pseudo-ellipsoids to a horizontal slice of a merged point cloud generated by the two 3D sensors, in order to identify trunks of the crop row.
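As a simplified stand-in for the pseudo-ellipsoid fitting, a trunk cross-section in a horizontal slice can be approximated by a least-squares circle fit (the Kasa algebraic fit); the circle is an assumption here, since the patent fits more general pseudo-ellipsoids:

```python
import numpy as np

def fit_circle(xy):
    """Least-squares (Kasa) circle fit to 2D points.

    Solves x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) as a
    linear system in (2*cx, 2*cy, r^2 - cx^2 - cy^2)."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x**2 + y**2
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = c[0] / 2, c[1] / 2
    r = np.sqrt(c[2] + cx**2 + cy**2)
    return cx, cy, r

# Synthetic trunk slice: points on a circle of radius 0.2 centred at (1, 3).
theta = np.linspace(0, 2 * np.pi, 20, endpoint=False)
slice_pts = np.column_stack([1 + 0.2 * np.cos(theta),
                             3 + 0.2 * np.sin(theta)])
cx, cy, r = fit_circle(slice_pts)
```

The fitted centre gives a trunk position along the crop row; fits whose radius falls outside a plausible trunk range could be rejected as foliage or noise.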
METHOD AND SYSTEM FOR AUTONOMOUS EXPLORATION AND SCANNING
A computer-implemented method for autonomously exploring, by a mobile robot, one or more objects of interest, the mobile robot comprising a computing unit and a laser scanner module for scanning surfaces of the one or more objects of interest, the laser scanner module having a field of view. The method comprises defining a 3D exploration map, wherein the one or more objects of interest are situated in the exploration map, partitioning the exploration map into a multitude of 3D exploration blocks, and autonomously exploring the exploration map by means of the mobile robot, wherein the exploration comprises generating, by the laser scanner module, scan data related to a point cloud while the mobile robot is travelling along an exploration path.
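The partitioning step can be sketched as dividing an axis-aligned map volume into a regular grid of blocks; cubic, uniformly sized blocks are an assumption, as the abstract does not constrain the block shape:

```python
import math

def partition_exploration_map(bounds, block_size):
    """Partition an axis-aligned 3D exploration map into cubic blocks.

    `bounds` is ((xmin, ymin, zmin), (xmax, ymax, zmax)); the list of
    block origin corners is returned. Block shape and size are
    assumptions for this sketch."""
    (x0, y0, z0), (x1, y1, z1) = bounds
    nx = math.ceil((x1 - x0) / block_size)
    ny = math.ceil((y1 - y0) / block_size)
    nz = math.ceil((z1 - z0) / block_size)
    return [(x0 + i * block_size, y0 + j * block_size, z0 + k * block_size)
            for i in range(nx) for j in range(ny) for k in range(nz)]

blocks = partition_exploration_map(((0, 0, 0), (10, 10, 5)), block_size=5)
```

Per-block bookkeeping (e.g. scanned vs. unscanned) then lets the planner pick the next exploration path toward blocks not yet covered by the scanner's field of view.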
System and method for dimensioning target objects
A method comprising obtaining, from a sensor, depth data representing a target object; selecting a model to fit to the depth data; for each data point in the depth data: defining a ray from a location of the sensor to the data point; and determining an error based on a distance from the data point to the model along the ray; when the depth data does not meet a similarity threshold for the model based on the determined errors, selecting a new model and repeating the error determination for the depth data based on the new model; when the depth data meets the similarity threshold for the model, selecting the model as representing the target object; and outputting the selected model representing the target object.
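The ray-based error and the try-models-until-one-fits loop can be sketched with a deliberately simple model family, planes at a fixed depth z = d with the sensor at the origin; the patent's candidate models and similarity test are not limited to this:

```python
import numpy as np

def ray_errors_to_plane(points, plane_z):
    """For each point, cast a ray from the sensor (origin) through the
    point and measure the distance along that ray from the point to the
    plane z = plane_z. A simplified model family for illustration."""
    errors = []
    for p in points:
        if p[2] == 0:
            errors.append(np.inf)      # ray parallel to the plane
            continue
        t = plane_z / p[2]             # ray t * p hits the plane at this t
        errors.append(abs(t - 1.0) * np.linalg.norm(p))
    return np.array(errors)

def select_model(points, candidate_depths, inlier_tol=0.05, similarity=0.9):
    """Try candidate models until one explains enough of the depth data;
    the inlier tolerance and similarity fraction are assumed values."""
    for d in candidate_depths:
        errs = ray_errors_to_plane(points, d)
        if np.mean(errs < inlier_tol) >= similarity:
            return d
    return None

pts = np.array([[0.1, 0.2, 2.0], [-0.3, 0.1, 2.0], [0.0, 0.0, 2.01]])
best = select_model(pts, candidate_depths=[1.0, 2.0])
```

Measuring the error along the viewing ray, rather than perpendicular to the model surface, matches how a depth sensor actually observes the target: each reading is a range along a ray, so the residual is expressed in the sensor's own measurement direction.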