Patent classifications
G05D1/024
Method and system for performing dynamic LIDAR scanning
A light detection and ranging (LIDAR) controller is disclosed. The LIDAR controller may determine, based on a position of an implement, a scan area of the LIDAR sensor, wherein the scan area has an increased point density relative to another area of the field of view, of the LIDAR sensor, that includes the implement. The LIDAR controller may cause the LIDAR sensor to capture, with the increased point density, LIDAR data associated with the scan area. The LIDAR controller may process the LIDAR data to determine whether an object of interest is in an environment of a machine, on which the LIDAR sensor is mounted, that is associated with the scan area. The LIDAR controller may perform an action based on the environment of the machine.
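As an illustration of the idea in this abstract, the sketch below (all names, angles, and density values are assumed for the example, not taken from the patent) derives a higher-density azimuth window around an implement's position within the sensor's field of view:

```python
# Illustrative sketch: a scan window centred on the implement gets a boosted
# point density; the rest of the field of view keeps the base density.

def scan_window(implement_azimuth_deg, fov_deg=360.0, window_deg=60.0):
    """Return (start, end) azimuths of a scan window centred on the implement."""
    half = window_deg / 2.0
    start = (implement_azimuth_deg - half) % fov_deg
    end = (implement_azimuth_deg + half) % fov_deg
    return start, end

def point_density(azimuth_deg, window, base_density=1.0, boost=4.0):
    """Points per degree: boosted inside the implement window, base elsewhere."""
    start, end = window
    if start <= end:
        inside = start <= azimuth_deg <= end
    else:  # window wraps around 0 degrees
        inside = azimuth_deg >= start or azimuth_deg <= end
    return boost if inside else base_density
```

For example, with the implement at 90°, `scan_window(90.0)` yields the window (60°, 120°), inside which `point_density` returns the boosted value.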
Cleaning robot
A cleaning robot includes a top cover, a bottom cover provided below the top cover, traveling parts provided in the bottom cover, a suction module provided in the bottom cover to suck in foreign materials on the ground, a recessed part formed to be recessed inward between the top cover and the bottom cover, and a first sensor located in the recessed part.
Cleaning robot
A cleaning robot includes a top cover, a bottom cover formed below the top cover and configured to move by external force, a fixed body provided in the bottom cover, a first opening formed in an upper portion of the bottom cover and a first sensor connected to the fixed body and externally exposed between the top cover and the bottom cover through the first opening.
Control of autonomous vehicle based on environmental object classification determined using phase coherent LIDAR data
Determining classification(s) for object(s) in an environment of autonomous vehicle, and controlling the vehicle based on the determined classification(s). For example, autonomous steering, acceleration, and/or deceleration of the vehicle can be controlled based on determined pose(s) and/or classification(s) for objects in the environment. The control can be based on the pose(s) and/or classification(s) directly, and/or based on movement parameter(s), for the object(s), determined based on the pose(s) and/or classification(s). In many implementations, pose(s) and/or classification(s) of environmental object(s) are determined based on data from a phase coherent Light Detection and Ranging (LIDAR) component of the vehicle, such as a phase coherent LIDAR monopulse component and/or a frequency-modulated continuous wave (FMCW) LIDAR component.
LiDAR system for vehicle and operating method thereof
Disclosed is a light detection and ranging (LiDAR) system for a vehicle, which includes: a laser generator generating an optical signal having an address signal and a pulse signal; and a plurality of LiDAR sensors connected to an optical fiber bus, in which each of the plurality of LiDAR sensors determines whether the pulse signal of the optical signal is received according to the address signal of the optical signal.
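The address-gated reception described in this abstract can be sketched as follows (a minimal model with assumed message fields and integer addresses; the patent does not specify this format): every sensor on the shared bus sees the signal, but only the sensor whose address matches the address field accepts the pulse.

```python
# Hypothetical sketch of address-gated pulse reception on a shared bus.

class BusLidarSensor:
    def __init__(self, address):
        self.address = address
        self.received = []

    def on_signal(self, address, pulse):
        """Accept the pulse only if the address field targets this sensor."""
        if address == self.address:
            self.received.append(pulse)
            return True
        return False

def broadcast(sensors, address, pulse):
    """All sensors see the bus signal; only the addressed one stores the pulse."""
    return [s.on_signal(address, pulse) for s in sensors]
```

With three sensors addressed 0–2, broadcasting a pulse addressed to 1 is accepted only by the second sensor.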
Method and apparatus for performing grid-based localization of a mobile body
A method of localizing a mobile body (MB) in a known environment includes the following steps: a) defining an occupancy grid (G) modeling the environment; b) defining a set of position grids (Π), each position grid being associated with a heading of the mobile body; c) receiving a time series of measurements (z₁, z₂, …) from a distance sensor carried by the mobile body; and d) upon receiving a measurement of the time series, updating the pose probabilities of the position grids as a function of present values of the occupancy probabilities and of the received measurement; wherein step d) is carried out by applying an inverse sensor model to the received measurement, while considering the distance sensor co-located with a detected obstacle, and by applying Bayesian fusion to update the pose probabilities of the position grids.
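The Bayesian fusion step can be illustrated with a deliberately simplified 1-D example (this is an assumed toy model, not the patented method): the robot occupies a cell on a line, the occupancy grid is known, a forward-facing range sensor measures the distance to the nearest occupied cell, and each measurement multiplies the prior pose belief by a Gaussian measurement likelihood before renormalizing.

```python
# Toy 1-D pose-grid update: posterior ∝ prior × p(z | pose), normalized.

import math

def expected_range(occ, pose):
    """Distance from `pose` to the nearest occupied cell ahead (inf if none)."""
    for d in range(1, len(occ) - pose):
        if occ[pose + d]:
            return d
    return math.inf

def bayes_update(belief, occ, z, sigma=0.5):
    """Posterior pose belief after range measurement z (Gaussian sensor model)."""
    post = []
    for pose, prior in enumerate(belief):
        r = expected_range(occ, pose)
        like = math.exp(-0.5 * ((z - r) / sigma) ** 2) if math.isfinite(r) else 1e-9
        post.append(prior * like)
    total = sum(post)
    return [p / total for p in post]
```

With a wall in the last cell of a five-cell corridor and a uniform prior over the first four cells, a range reading of 2 concentrates the belief on the pose two cells from the wall.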
Pathfinding Using Centerline Heuristics for an Autonomous Mobile Robot
To load and unload a trailer, an autonomous mobile robot determines its own location and the location of objects within the trailer relative to the trailer itself, rather than relative to a warehouse. The robot navigates within the trailer and manipulates objects within it from the trailer's reference frame. Additionally, the robot uses a centerline heuristic to compute a path for itself within the trailer. The centerline heuristic evaluates nodes within the trailer based on how far they are from the centerline, assigning a higher cost to nodes that are farther away. Thus, when the robot computes a path, the path is more likely to stay near the centerline of the trailer rather than drift toward the sides.
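A centerline-penalized path search of this kind can be sketched as below (a minimal assumption-laden example: a unit grid for the trailer floor, a single centre column, a linear distance penalty, and plain Dijkstra rather than whatever search the patent uses):

```python
# Cheapest-path search where each node costs more the farther its column
# is from the trailer's centreline, so cheap paths hug the centreline.

import heapq

def centerline_cost(col, centre, weight=1.0):
    """Extra cost for a node `col` columns away from the trailer centreline."""
    return weight * abs(col - centre)

def find_path(width, length, start, goal, weight=1.0):
    """Dijkstra over a width x length grid with centreline-penalised node costs."""
    centre = (width - 1) / 2
    frontier = [(0.0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < length and 0 <= nc < width and (nr, nc) not in seen:
                step = 1.0 + centerline_cost(nc, centre, weight)
                heapq.heappush(frontier, (cost + step, (nr, nc), path + [(nr, nc)]))
    return None
```

In a 5-column, 9-row grid, a path from one side corner to the same-side far corner detours through the centre column, because the per-step centreline penalty outweighs the extra lateral moves.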
Plurality of autonomous mobile robots and controlling method for the same
A plurality of autonomous mobile robots includes a first mobile robot and a second mobile robot. The first mobile robot is provided with a transmitting optical sensor for outputting laser light, and a first module for transmitting and receiving an Ultra-Wideband (UWB) signal. The second mobile robot is provided with a receiving optical sensor for receiving the laser light and a plurality of second modules for transmitting and receiving the UWB signal. A control unit of the second mobile robot determines a relative position of the first mobile robot based on the received UWB signal and a determination of whether the laser light is received by the optical sensor.
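One way the second robot could combine plural UWB ranges with the laser-received flag is sketched below (the geometry is assumed for illustration: two UWB modules at (-d, 0) and (+d, 0) in the second robot's body frame, each measuring a range to the first robot; the patent does not specify this computation). Two-circle intersection yields two mirror-image candidates, and whether the laser light was received by the front-facing optical sensor selects the side.

```python
# Hedged sketch: relative position from two UWB ranges, disambiguated by
# whether the laser was received (assumed to imply the front half-plane).

import math

def relative_position(r1, r2, d, laser_received):
    """Position of the first robot in the second robot's body frame.

    r1, r2: ranges measured by modules at (-d, 0) and (+d, 0) respectively.
    """
    x = (r1 ** 2 - r2 ** 2) / (4.0 * d)
    y_sq = r1 ** 2 - (x + d) ** 2
    if y_sq < 0:
        raise ValueError("ranges inconsistent with module spacing")
    y = math.sqrt(y_sq)
    return (x, y if laser_received else -y)
```

For instance, with modules 2 units apart and ranges consistent with a robot at (1, 2), the laser flag picks (1, 2) over the mirror candidate (1, -2).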
Route determination method
In an environment in which a plurality of second pedestrians moves along predetermined movement patterns, a plurality of movement routes Rw taken when a first pedestrian moves toward a destination point is recognized. Data in which an environmental image, indicating the visual environment in front of a virtual robot as the virtual robot moves along each of the movement routes, is combined with a moving direction command indicating a moving direction of the virtual robot is generated as learning data. In the environmental image, colors corresponding to time-series displacement behaviors of a moving object image region are applied to at least a portion of the moving object image region indicating a pedestrian (moving object) present around the robot. Model parameters of a CNN (action model) are learned using the learning data, and a moving velocity command for a robot is determined using the learned CNN.
Control and Navigation Device for an Autonomously Moving System and Autonomously Moving System
The invention relates to a control and navigation device for an autonomously moving system, comprising: a sensor device configured to acquire sensor data and comprising, for this purpose, a LiDAR sensor installation configured for 360-degree acquisition, a fisheye camera installation configured for 360-degree acquisition, and a radar sensor installation configured for 360-degree acquisition; a data processing installation with an AI-based software application, configured to determine control signals for navigating the autonomously moving system by processing the sensor data; and a data communication interface, connected to the data processing installation and configured to provide the control signals for transmission to a controller of the autonomously moving system. The sensor device, the data processing installation, and the data communication interface are arranged on an assembly component configured to attach them together, in a detachable manner, as a common module to the autonomously moving system. Furthermore, an autonomously moving system is provided.