G05D1/0246

Lane marking

Systems and methods for the detection and analysis of road markings and other road objects are described. A method for detecting road markings comprises identifying image data including lane markings associated with a road segment, defining a plurality of subsections for the road segment, identifying boundary recognition observations for the lane markings from the image data corresponding to at least one of the plurality of subsections, calculating one or more clusters for the boundary recognition observations according to color or intensity, and outputting, in response to the one or more clusters, a lane marking indicator indicating the color or the intensity for the at least one of the plurality of subsections.
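The abstract does not name a clustering algorithm, so the sketch below uses a tiny 1-D k-means (k=2) as a stand-in for "calculating one or more clusters" over boundary-observation intensities, and derives a per-subsection indicator from the brighter cluster. All function names and the threshold are illustrative assumptions.

```python
# Cluster lane-boundary observations by intensity and report an indicator
# per road-segment subsection. kmeans_1d is a minimal stand-in for the
# unspecified clustering step in the abstract.
from statistics import mean

def kmeans_1d(values, k=2, iters=20):
    """Tiny 1-D k-means: returns (centroids, labels)."""
    centroids = [min(values), max(values)][:k]
    labels = [0] * len(values)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: abs(v - centroids[c]))
                  for v in values]
        for c in range(k):
            members = [v for v, l in zip(values, labels) if l == c]
            if members:
                centroids[c] = mean(members)
    return centroids, labels

def lane_marking_indicator(intensities, bright_threshold=128):
    """Label the subsection by its brightest cluster centroid."""
    centroids, _ = kmeans_1d(intensities)
    return "marking" if max(centroids) >= bright_threshold else "no-marking"
```

A subsection whose observations split into a dark road cluster and a bright paint cluster (e.g. intensities near 30 and near 200) would be labeled "marking".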

Automatic wall climbing type radar photoelectric robot system for non-destructive inspection and diagnosis of damages of bridge and tunnel structure

An automatic wall-climbing radar photoelectric robot system for non-destructive inspection and diagnosis of damage to bridge and tunnel structures, mainly including a control terminal, a wall-climbing robot, and a server. The wall-climbing robot generates reverse thrust with its rotor systems and moves flexibly along the rough surface of a bridge or tunnel structure using omnidirectional wheels; during inspection by the wall-climbing robot, bridges and tunnels do not need to be closed and traffic is not affected. By arranging a plurality of UWB base stations and charging and data-receiving devices on the bridge and tunnel structure, and by means of UWB localization, laser SLAM, and IMU navigation technologies, bridges and tunnels can be divided into different working regions, a plurality of wall-climbing robots can work at the same time, automatic path planning and automatic obstacle avoidance are realized, and unattended regular automatic patrolling can be achieved.

Systems and methods for updating an electronic map

Systems and methods for updating an electronic map of a facility are disclosed. The electronic map includes a set of map nodes, each having stored image data associated with a position within the facility. The method includes collecting image data at a current position of a self-driving material-transport vehicle; searching the electronic map for at least one of a map node associated with the current position and one or more neighboring map nodes within a neighbor threshold of the current position; and comparing the collected image data with the stored image data of the at least one map node and the one or more neighboring map nodes to determine a dissimilarity level. The electronic map may be updated based at least on the collected image data and the dissimilarity level. The collected image data represents one or more features observable from the current position.
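The neighbor search, dissimilarity comparison, and conditional update can be sketched as below. The node layout, the normalized-L1 dissimilarity metric, and both thresholds are hypothetical choices; the abstract does not fix a particular distance metric or update policy.

```python
# Sketch of the map-update decision: find nodes near the current position,
# compare their stored features with the collected ones, and update or add
# a node when everything nearby is too dissimilar.
import math

def dissimilarity(feat_a, feat_b):
    """Normalized L1 distance between two feature vectors (assumed metric)."""
    return sum(abs(x - y) for x, y in zip(feat_a, feat_b)) / len(feat_a)

def maybe_update_map(nodes, current_pos, collected_feat,
                     neighbor_threshold=2.0, dissim_threshold=0.3):
    """nodes: list of dicts with 'pos' (x, y) and 'features'."""
    neighbors = [n for n in nodes
                 if math.dist(n["pos"], current_pos) <= neighbor_threshold]
    if not neighbors:
        nodes.append({"pos": current_pos, "features": collected_feat})
        return "added"
    level = min(dissimilarity(n["features"], collected_feat) for n in neighbors)
    if level > dissim_threshold:
        nearest = min(neighbors, key=lambda n: math.dist(n["pos"], current_pos))
        nearest["features"] = collected_feat   # refresh the stale node
        return "updated"
    return "unchanged"
```

Keeping the check against *all* neighbors within the threshold (rather than only the nearest node) avoids spuriously updating the map when the vehicle is merely between two well-mapped positions.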

Artificial intelligence robot and method of controlling the same
11557387 · 2023-01-17

An artificial intelligence (AI) robot includes a body defining an exterior appearance and containing a medicine to be discharged according to a medication schedule, a support, an image capture unit for capturing images within a traveling zone to create image information, and a controller for discharging the medicine to a user according to the medication schedule, reading image data of the user to determine whether the user has taken the medicine, and reading image data and biometric data of the user after the medicine is taken to determine whether there is an abnormality in the user. The AI robot identifies a user and discharges the medicine matched with that user, so as to prevent errors. The AI robot detects the user's reaction after medicine-taking through a sensor, learns the reaction through deep learning and similar techniques, determines whether an emergency situation exists, and responds according to the result of the determination.

Localization using dynamic landmarks

A method, system and computer program product for determining a map position of an ego-vehicle are disclosed. The method includes acquiring map data comprising a road geometry, initializing at least one dynamic landmark by measuring a position and velocity, relative to the ego-vehicle, of a surrounding vehicle, and determining a first map position of the surrounding vehicle based on this measurement and the geographical position of the ego-vehicle. Further, the method includes predicting a second map position of the surrounding vehicle, and measuring a location, relative to the ego-vehicle, of the surrounding vehicle when it is estimated to be at the second map position, whereby the geographical position of the ego-vehicle can be computed and updated.
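The initialize/predict/update cycle above can be sketched in a simplified 2-D form. The constant-velocity prediction model, the flat-world coordinates, and the assumption that the measured relative velocity has already been converted into the map frame (using the ego-vehicle's own motion) are all illustrative simplifications.

```python
# Sketch of localization against a dynamic landmark (a surrounding vehicle),
# assuming constant velocity for that vehicle between measurements.

def init_landmark(ego_map_pos, rel_pos, map_frame_vel):
    """First map position of the surrounding vehicle: ego position plus the
    measured relative position; its velocity is stored in the map frame."""
    pos = (ego_map_pos[0] + rel_pos[0], ego_map_pos[1] + rel_pos[1])
    return {"pos": pos, "vel": map_frame_vel}

def predict_landmark(landmark, dt):
    """Predicted second map position of the surrounding vehicle after dt s."""
    x, y = landmark["pos"]
    vx, vy = landmark["vel"]
    return (x + vx * dt, y + vy * dt)

def update_ego(predicted_landmark_pos, measured_rel_pos):
    """Updated ego map position: predicted landmark position minus the newly
    measured relative location of the surrounding vehicle."""
    return (predicted_landmark_pos[0] - measured_rel_pos[0],
            predicted_landmark_pos[1] - measured_rel_pos[1])
```

For example, a landmark initialized 5 m ahead of an ego-vehicle at map position (10, 0), moving at 2 m/s, is predicted at (17, 0) one second later; measuring it 4 m ahead at that time places the ego-vehicle at (13, 0).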

CONTROL METHOD FOR LIGHT SOURCES OF VISION MACHINE, AND VISION MACHINE

A control method for light sources of a vision machine, and the vision machine. The control method includes the following steps: activating at least one first light source among n light sources to sense spatial information of an object in a field of view; and selectively activating the n light sources according to the spatial information of the sensed object, wherein the n light sources are distributed on a periphery of a front mirror surface of a lens of the vision machine, and n is a natural number greater than or equal to 2. Embodiments of the present disclosure enlarge the field of view of the vision machine, provide illumination matched to environmental requirements, reduce the interference signal caused by reflection of a single light source, expand the sensing range of the vision machine, and improve its sensing ability.
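The abstract does not say how the selective activation maps sensed spatial information onto the n peripheral sources; one illustrative policy, sketched below, activates the source nearest the object's bearing plus its immediate neighbors, and falls back to all sources when nothing is sensed. The bearing-based selection rule and the uniform angular layout are assumptions.

```python
# Sketch: step 1 probes the scene with a single source; step 2 selectively
# activates sources near the sensed object's bearing (sources assumed to sit
# at angles 2*pi*i/n around the lens periphery).
import math

def select_sources(bearing, n, spread=1):
    """Return indices of sources to activate for an object at `bearing`
    radians; None means no object was sensed, so light everything."""
    if bearing is None:
        return list(range(n))
    nearest = round(bearing / (2 * math.pi / n)) % n
    return sorted({(nearest + d) % n for d in range(-spread, spread + 1)})
```

With n = 8 sources, an object sensed straight ahead (bearing 0) activates sources 7, 0, and 1, concentrating illumination where it is needed while avoiding the single-source reflection the abstract mentions.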

Terrain trafficability assessment for autonomous or semi-autonomous rover or vehicle

A rover or a semi-autonomous or autonomous vehicle may use an image classifier to determine a terrain class for regions of an image of the terrain ahead of the rover or vehicle. The same regions are used to estimate the slope of the terrain for the different regions. The terrain class and slope are then used to predict the amount of slip the rover will experience when traversing the terrain of each region. A heuristic mapping for the terrain class may be applied to the predicted slip amount to determine a hazard level for the rover or vehicle traversing the terrain.
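The class-plus-slope slip prediction and the per-class heuristic hazard mapping can be sketched as below. The linear slip curves and the hazard cut-offs are invented placeholders; a real system would fit them from traverse data for each terrain class.

```python
# Sketch of the pipeline: terrain class + slope -> predicted slip -> hazard.
SLIP_MODEL = {                 # per-class linear slip vs. slope (degrees)
    "bedrock": (0.00, 0.005),  # (base slip, additional slip per degree)
    "sand":    (0.10, 0.030),
    "gravel":  (0.05, 0.015),
}

HAZARD_CUTOFFS = {             # per-class heuristic slip -> hazard mapping
    "bedrock": (0.30, 0.60),   # (caution threshold, danger threshold)
    "sand":    (0.20, 0.40),
    "gravel":  (0.25, 0.50),
}

def predict_slip(terrain_class, slope_deg):
    """Predicted slip fraction in [0, 1] for a region."""
    base, per_deg = SLIP_MODEL[terrain_class]
    return min(1.0, base + per_deg * max(0.0, slope_deg))

def hazard_level(terrain_class, slope_deg):
    """Apply the per-class heuristic mapping to the predicted slip."""
    slip = predict_slip(terrain_class, slope_deg)
    caution, danger = HAZARD_CUTOFFS[terrain_class]
    if slip >= danger:
        return "danger"
    return "caution" if slip >= caution else "safe"
```

Making the cut-offs class-specific captures the idea in the abstract that the heuristic mapping depends on the terrain class, not on slip alone: 15° of slope is benign on bedrock but dangerous on sand.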

DETECTING UNTRAVERSABLE ENVIRONMENT AND PREVENTING DAMAGE BY A VEHICLE

A vehicle moves through an environment (e.g., a farming, construction, mining, or forestry environment) and performs one or more actions in the environment. Portions of the environment may include moisture, such as puddles or mud patches. A control system associated with the vehicle may include a traversability model or a moisture model to help the vehicle operate in the environment with the moisture. In particular, the control system may employ the traversability model to reduce the likelihood of the vehicle attempting to traverse an untraversable portion of the environment, and the control system may employ the moisture model to reduce the likelihood of the vehicle performing an action that will damage a portion of the environment.

System and method for movement detection
11593950 · 2023-02-28

Systems and methods for movement detection are provided. In one example embodiment, a computer-implemented method includes obtaining image data and range data representing a scene external to an autonomous vehicle, the image data including at least a first image and a second image that depict the scene. The method includes identifying a set of corresponding image features from the image data, the set of corresponding image features including a first feature in the first image having a correspondence with a second feature in the second image. The method includes determining a respective distance for each of the first feature and the second feature based at least in part on the range data. The method includes determining a velocity associated with a portion of a scene represented by the set of corresponding image features based at least in part on the respective distance for the first feature and the second feature.
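The final step, recovering a velocity for a portion of the scene from a matched feature pair and its range-derived distances, can be sketched in a simplified 2-D form. Back-projecting each feature from (bearing, range) and the assumption that feature correspondences are already available are illustrative; the abstract does not specify the matching method or geometry.

```python
# Sketch: a matched image feature, given range data in frames dt seconds
# apart, yields a displacement and hence a velocity for that scene portion.
import math

def feature_point(bearing, range_m):
    """Back-project an image feature (bearing rad, range m) to a 2-D point."""
    return (range_m * math.cos(bearing), range_m * math.sin(bearing))

def scene_velocity(match, dt):
    """match: ((bearing1, range1), (bearing2, range2)) for the same feature
    observed in the first and second image, dt seconds apart."""
    p1 = feature_point(*match[0])
    p2 = feature_point(*match[1])
    return ((p2[0] - p1[0]) / dt, (p2[1] - p1[1]) / dt)
```

A feature seen dead ahead at 10 m and, half a second later, at 12 m corresponds to a scene portion receding at 4 m/s along the viewing ray.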

Object detection in vehicles using cross-modality sensors

A system includes first and second sensors and a controller. The first sensor is of a first type and is configured to sense objects around a vehicle and to capture first data about the objects in a frame. The second sensor is of a second type and is configured to sense the objects around the vehicle and to capture second data about the objects in the frame. The controller is configured to down-sample the first and second data to generate down-sampled first and second data having a lower resolution than the first and second data. The controller is configured to identify a first set of the objects by processing the down-sampled first and second data having the lower resolution. The controller is configured to identify a second set of the objects by selectively processing the first and second data from the frame.
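The coarse-to-fine flow above, down-sample both modalities for a cheap first pass, then selectively revisit flagged regions at full resolution, can be sketched as below. The 2-D intensity grids standing in for camera and radar frames, the agreement rule, and both thresholds are illustrative; the abstract does not specify the detectors.

```python
# Sketch of cross-modality coarse-to-fine detection: low-res fusion first,
# full-resolution processing only for regions the first pass flagged.

def downsample(grid, factor=2):
    """Keep every `factor`-th row and column of a 2-D list."""
    return [row[::factor] for row in grid[::factor]]

def coarse_detect(cam_lo, radar_lo, threshold=1.0):
    """First set of objects: low-res cells where both modalities respond."""
    return [(r, c)
            for r, (crow, rrow) in enumerate(zip(cam_lo, radar_lo))
            for c, (cv, rv) in enumerate(zip(crow, rrow))
            if cv >= threshold and rv >= threshold]

def refine(cam_hi, radar_hi, coarse_hits, factor=2, threshold=1.5):
    """Second set: re-check only the flagged regions at full resolution."""
    refined = []
    for (r, c) in coarse_hits:
        for dr in range(factor):
            for dc in range(factor):
                rr, cc = r * factor + dr, c * factor + dc
                if cam_hi[rr][cc] >= threshold and radar_hi[rr][cc] >= threshold:
                    refined.append((rr, cc))
    return refined
```

The design point is that the expensive full-resolution processing touches only the cells inside coarse hits, which is what "selectively processing the first and second data from the frame" buys over processing every cell at full resolution.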