
INTELLIGENT RESCUE METHOD, RESCUE DEVICE, AND VEHICLE
20230130609 · 2023-04-27 ·

An intelligent rescue method applied to a vehicle-mounted device and an airborne rescue device, enabling semi-automatic warning and rescue of a broken-down, crashed, drowned, or stranded vehicle through communication between the vehicle-mounted device and the rescue device. The vehicle-mounted device determines, by sensors, a type of emergency of a vehicle, performs a first assistance action, and sends the rescue device a distress signal corresponding to the type of the emergency. In response to the distress signal, the rescue device takes off from an initial position of the vehicle and flies to a target position. Once the rescue device reaches the target position, the rescue device performs a second assistance action.
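The dispatch logic in this abstract could be sketched as a lookup from emergency type to an on-vehicle action and a distress signal. All names here are hypothetical illustrations, not the patented implementation:

```python
# Hypothetical mapping from detected emergency type to the first
# assistance action and the distress signal sent to the rescue device.
EMERGENCY_ACTIONS = {
    "breakdown": ("activate_hazard_lights", "SIG_BREAKDOWN"),
    "crash":     ("unlock_doors",           "SIG_CRASH"),
    "drowned":   ("open_windows",           "SIG_DROWNED"),
    "stranded":  ("activate_beacon",        "SIG_STRANDED"),
}

def handle_emergency(emergency_type: str) -> str:
    """Perform the first assistance action; return the distress signal to send."""
    action, signal = EMERGENCY_ACTIONS[emergency_type]
    print(f"performing first assistance action: {action}")
    return signal
```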

Crowd sourcing data for autonomous vehicle navigation

Systems and methods of processing crowdsourced navigation information for use in autonomous vehicle navigation are disclosed. A method may include processing, by a mapping server, crowdsourced navigation information from a plurality of vehicles obtained by sensors coupled to the plurality of vehicles, wherein the navigation information describes road lanes of a road segment; collecting data about landmarks identified proximate to the road segment, the landmarks including a traffic sign; generating, by the mapping server, an autonomous vehicle map for the road segment, wherein the autonomous vehicle map includes a spline corresponding to a lane in the road segment and the landmarks identified proximate to the road segment; and distributing, by the mapping server, the autonomous vehicle map to an autonomous vehicle for use in autonomous navigation over the road segment.
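A minimal sketch of the aggregation step, assuming each vehicle trace is a list of (along-road distance, lateral offset) samples: observations are binned by longitudinal position and averaged to form a polyline approximating the lane spline. Names and the binning scheme are assumptions for illustration:

```python
from collections import defaultdict

def aggregate_lane_centerline(traces, bin_size=5.0):
    """Average lateral offsets from many crowdsourced vehicle traces,
    binned by along-road distance, into one lane centerline polyline."""
    bins = defaultdict(list)
    for trace in traces:
        for s, offset in trace:          # (along-road distance, lateral offset)
            bins[round(s / bin_size)].append(offset)
    # One averaged point per bin, sorted along the road.
    return sorted((k * bin_size, sum(v) / len(v)) for k, v in bins.items())
```

A real mapping server would fit a smooth spline through these averaged points; the binning shown here is only the noise-reduction step.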

Object recognition device and vehicle control system

A vehicle includes a LIDAR. An object recognition device: sets a space having a height from an absolute position of a road surface as a noise space; classifies a LIDAR point cloud into a first point cloud included in the noise space and a second point cloud outside the noise space; extracts a fallen object candidate being a candidate for a fallen object on the road surface based on the first point cloud; extracts a tracking target candidate being a candidate for a tracking target based on the second point cloud; determines whether horizontal positions of the fallen object candidate and the tracking target candidate are consistent with each other; integrates the fallen object candidate and the tracking target candidate whose horizontal positions are consistent with each other, to be the tracking target; and recognizes the fallen object candidate not integrated into the tracking target as the fallen object.
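The classification and integration steps above can be sketched as follows, assuming a point cloud of (x, y, z) tuples and a fixed-height noise space above the road surface (the height value and tolerances are illustrative assumptions):

```python
def classify_points(points, road_z, noise_height=0.3):
    """Split a LIDAR point cloud into a first cloud inside the near-road
    noise space and a second cloud above it."""
    first, second = [], []
    for x, y, z in points:
        (first if road_z <= z < road_z + noise_height else second).append((x, y, z))
    return first, second

def positions_consistent(fallen_xy, target_xy, tol=0.5):
    """True when a fallen-object candidate and a tracking-target candidate
    share a horizontal position, so the two are integrated into one target."""
    return (abs(fallen_xy[0] - target_xy[0]) <= tol
            and abs(fallen_xy[1] - target_xy[1]) <= tol)
```

Fallen-object candidates whose horizontal position matches no tracking-target candidate would then be recognized as fallen objects.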

Method and apparatus for dynamic multi-segment path and speed profile shaping

The present application relates to determining a location of an object in response to a sensor output, generating a first vehicle path in response to the location of the object and a map data, determining an undrivable area within the first vehicle path, generating a waypoint outside of the undrivable area, generating a second vehicle path from a first point on the first vehicle path to the waypoint and a third vehicle path from the waypoint to a second point on the first vehicle path such that the second vehicle path and the third vehicle path are outside of the undrivable area, generating a control signal in response to the second vehicle path and the third vehicle path, and controlling a vehicle in response to the control signal such that the vehicle follows the second vehicle path and the third vehicle path.
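The two detour segments can be sketched as linearly interpolated point sequences between the first point, the waypoint, and the second point; the interpolation scheme is an illustrative assumption (an actual planner would also shape a speed profile along the segments):

```python
def detour_paths(p1, waypoint, p2, n=5):
    """Generate the second path (p1 -> waypoint) and the third path
    (waypoint -> p2) as sequences of n+1 interpolated (x, y) points."""
    def segment(a, b):
        return [(a[0] + (b[0] - a[0]) * t / n,
                 a[1] + (b[1] - a[1]) * t / n) for t in range(n + 1)]
    return segment(p1, waypoint), segment(waypoint, p2)
```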

Environment perception device and method of mobile vehicle

The disclosure provides an environment perception device and method of a mobile vehicle. The environment perception device includes a camera module, a LiDAR module, a database and a processing circuit. The camera module photographs a field near the mobile vehicle to generate a three-dimensional (3D) image frame. The LiDAR module scans the field to generate a 3D scanned frame. The processing circuit fuses the 3D image frame and the 3D scanned frame to generate 3D object information. The processing circuit compares the 3D object information with a 3D map in the database to determine whether an object is a static object. The processing circuit performs an analysis and calculation on the 3D object information to obtain movement characteristics of the object when the object is not the static object, and skips the analysis and calculation on the 3D object information when the object is the static object.
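The static-object skip described above could be sketched as a position match against the 3D map, so that only non-static objects proceed to movement analysis. The data layout and tolerance are assumptions for illustration:

```python
def dynamic_objects(objects, static_map, tol=0.5):
    """Return only the objects not matching a static-map entry; these are
    the ones that receive movement-characteristics analysis."""
    dynamic = []
    for obj in objects:
        is_static = any(
            abs(obj["pos"][0] - mx) <= tol and abs(obj["pos"][1] - my) <= tol
            for mx, my in static_map)
        if not is_static:
            dynamic.append(obj)   # static objects are skipped entirely
    return dynamic
```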

Adaptive object tracking algorithm for autonomous machine applications

In various examples, lane location criteria and object class criteria may be used to determine a set of objects in an environment to track. For example, lane information, freespace information, and/or object detection information may be used to filter out or discard non-essential objects (e.g., objects that are not in an ego-lane or adjacent lanes) from objects detected using an object detection algorithm. Further, objects corresponding to non-essential object classes may be filtered out to generate a final filtered set of objects to be tracked that may be of a lower quantity than the actual number of detected objects. As a result, object tracking may only be executed on the final filtered set of objects, thereby decreasing compute requirements and runtime of the system without sacrificing object tracking accuracy and reliability with respect to more pertinent objects.
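The filtering criteria can be sketched as a pair of set-membership tests over lane assignment and object class; the specific lane and class names are hypothetical:

```python
ESSENTIAL_LANES = {"ego", "left_adjacent", "right_adjacent"}
NON_ESSENTIAL_CLASSES = {"vegetation", "sign", "building"}

def filter_tracks(detections):
    """Keep only detections in the ego or adjacent lanes whose class is
    essential; only this final filtered set is passed to the tracker."""
    return [d for d in detections
            if d["lane"] in ESSENTIAL_LANES
            and d["class"] not in NON_ESSENTIAL_CLASSES]
```

The tracker then runs on a set that is typically much smaller than the raw detection set, which is the source of the compute savings the abstract describes.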

Automated guided vehicle navigation device and method thereof

An AGV navigation device is provided, which includes an RGB-D camera, a plurality of sensors and a processor. When an AGV moves along a target route having a plurality of paths, the RGB-D camera captures the depth and color image data of each path. The sensors (including an IMU and a rotary encoder) record the acceleration, the moving speed, the direction, the rotation angle and the moving distance of the AGV moving along each path. The processor generates training data according to the depth image data, the color image data, the accelerations, the moving speeds, the directions, the moving distances and the rotation angles, and inputs the training data into a machine learning model for deep learning in order to generate a training result. Therefore, the AGV navigation device can realize automatic navigation for AGVs without any positioning technology, thereby reducing the cost of automatic navigation technologies.
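The training-data assembly step could be sketched as pairing each recorded frame's camera and inertial readings with the odometry targets. The field names and the input/label split are assumptions for illustration, not the patented format:

```python
def build_training_samples(frames):
    """Pair camera frames and IMU/encoder readings (inputs) with the
    rotation angle and moving distance (labels) for each recorded path."""
    samples = []
    for f in frames:
        x = (f["depth"], f["color"], f["accel"], f["speed"], f["heading"])
        y = (f["rotation"], f["distance"])
        samples.append((x, y))
    return samples
```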

Advanced driver assistance system, vehicle having the same and method for controlling the vehicle

A control method of a vehicle may include generating navigation information based on destination information and current location information; determining whether a lane to be driven is a merge lane based on the generated navigation information, map information, and the current location information; recognizing a lane in an image acquired by an imaging device; recognizing a first lane being driven in based on location information of the recognized lane; dividing a certain area including the merge lane into an entry section, a merge section, and a stabilization section when the first lane converges with a second lane; generating a driving route by performing curve fitting for route points in the entry section and the merge section; and controlling autonomous driving based on the generated driving route.
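The curve-fitting step over route points can be illustrated with a closed-form least-squares fit; a straight-line fit is shown here for brevity, whereas a real merge route would use a higher-order curve or spline:

```python
def fit_route_line(points):
    """Least-squares fit y = a*x + b over route points collected in the
    entry and merge sections."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] ** 2 for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b
```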

IMAGE SELECTION METHOD, SELF-PROPELLED APPARATUS, AND COMPUTER STORAGE MEDIUM
20230122704 · 2023-04-20 ·

An image selection method, a self-propelled device, and a computer readable storage medium. The method comprises: while travelling, a self-propelled device first acquires images of the surrounding environment by means of an image acquisition apparatus (101); then, when an acquired image contains an identifiable obstacle, the image is given a score according to a scoring rule, the score indicating the imaging quality of the identifiable obstacle in the image (102); and finally, after receiving a command requesting to view an image of the identifiable obstacle, the image that contains the identifiable obstacle and that has the highest score is selected as the image to be displayed (103).
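The selection step (103) reduces to picking the highest-scoring stored image that contains the identifiable obstacle. A minimal sketch, with the record layout assumed for illustration:

```python
def best_image(images):
    """Among stored images containing an identifiable obstacle, return the
    one with the highest imaging-quality score."""
    candidates = [im for im in images if im["obstacle_id"] is not None]
    return max(candidates, key=lambda im: im["score"])
```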

TEMPORAL INFORMATION PREDICTION IN AUTONOMOUS MACHINE APPLICATIONS

In various examples, a sequential deep neural network (DNN) may be trained using ground truth data generated by correlating (e.g., by cross-sensor fusion) sensor data with image data representative of a sequence of images. In deployment, the sequential DNN may leverage the sensor correlation to compute various predictions using image data alone. The predictions may include velocities, in world space, of objects in the field of view of an ego-vehicle, current and future locations of the objects in image space, and/or a time-to-collision (TTC) between the objects and the ego-vehicle. These predictions may be used as part of a perception system for understanding and reacting to the current physical environment of the ego-vehicle.
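Of the predicted quantities, TTC has the simplest closed form: the remaining distance divided by the closing speed, undefined when the object is not closing. A minimal sketch under that constant-velocity assumption (the DNN in the abstract predicts this quantity directly from images):

```python
def time_to_collision(distance_m, closing_speed_mps):
    """TTC between an object and the ego-vehicle under a constant-velocity
    assumption; None when the gap is not closing."""
    if closing_speed_mps <= 0:
        return None
    return distance_m / closing_speed_mps
```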