Patent classifications
B60W60/0016
Lane detection and tracking techniques for imaging systems
A method for tracking a lane on a road is presented. The method comprises receiving, by one or more processors from an imaging system, a set of pixels associated with lane markings. The method further includes generating, by the one or more processors, a predicted spline comprising (i) a first spline and (ii) a predicted extension of the first spline in a direction in which the imaging system is moving. The first spline describes a boundary of a lane and is generated based on the set of pixels. The predicted extension of the first spline is generated based at least in part on a curvature of at least a portion of the first spline.
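As a loose illustration (not the patented method), the curvature-aware extension can be sketched by fitting a quadratic to the detected lane-marking pixels and extrapolating it forward in the direction of travel, so the extension inherits the fitted curve's curvature; the function name, the degree-2 fit, and the coordinate convention are assumptions of this sketch.

```python
import numpy as np

def predict_lane_extension(pixels, extend_dist=10.0, step=1.0):
    """Fit a quadratic to lane-marking pixels (rows of (y, x), y = forward
    distance) and extrapolate it forward; the quadratic coefficient carries
    the curvature of the observed portion into the predicted extension."""
    pts = np.asarray(pixels, dtype=float)
    y, x = pts[:, 0], pts[:, 1]
    coeffs = np.polyfit(y, x, deg=2)       # x(y) = a*y^2 + b*y + c
    y_new = np.arange(y.max() + step, y.max() + extend_dist + step, step)
    x_new = np.polyval(coeffs, y_new)
    return np.column_stack([y_new, x_new])

# Lane pixels lying on x = 0.01*y^2 (a gentle curve to one side).
obs = [(yy, 0.01 * yy**2) for yy in range(0, 21)]
ext = predict_lane_extension(obs, extend_dist=5.0)
```

Because the fit is exact on these synthetic points, the extension continues the same curve beyond the last observed pixel.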
Non-solid object monitoring
An autonomous navigation system may navigate through an environment in which one or more non-solid objects, including gaseous and/or liquid objects, are located. Non-solid objects may be determined, using sensor data, to present an obstacle or interference based on determined chemical composition, size, position, velocity, concentration, etc. of the objects.
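A minimal sketch of the described decision, under assumed attributes and thresholds (the class, the composition labels, and the limits are all hypothetical): a non-solid object is flagged as an obstacle or interference based on its chemical composition, size, and concentration.

```python
from dataclasses import dataclass

@dataclass
class NonSolidObject:
    composition: str       # e.g. "smoke", "steam", "fog" (assumed labels)
    concentration: float   # hypothetical units (e.g. mg/m^3)
    size_m: float          # approximate plume diameter in meters

def presents_obstacle(obj, conc_limit=50.0, size_limit=2.0):
    """Flag a non-solid object when it is chemically hazardous, or dense
    and large enough to occlude the vehicle's sensors."""
    hazardous = obj.composition == "smoke" and obj.concentration > conc_limit
    occluding = (obj.composition in ("smoke", "fog")
                 and obj.size_m > size_limit
                 and obj.concentration > conc_limit)
    return hazardous or occluding
```

In practice position and velocity would also feed the decision, as the abstract notes; they are omitted here to keep the sketch short.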
Methods and systems for joint pose and shape estimation of objects from sensor data
Methods and systems for jointly estimating a pose and a shape of an object perceived by an autonomous vehicle are described. The system includes data and program code collectively defining a neural network which has been trained to jointly estimate a pose and a shape of a plurality of objects from incomplete point cloud data. The neural network includes a trained shared encoder neural network, a trained pose decoder neural network, and a trained shape decoder neural network. The method includes receiving an incomplete point cloud representation of an object, inputting the point cloud data into the trained shared encoder, and outputting a code representative of the point cloud data. The method also includes generating an estimated pose and shape of the object based on the code. The pose includes at least a heading or a translation, and the shape includes a denser point cloud representation of the object.
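The shared-encoder/two-decoder structure can be sketched as follows. This is only an architectural outline with random weights standing in for trained parameters; the layer sizes, the max-pooling for permutation invariance, and the 7-dimensional pose (quaternion plus translation) are assumptions, not details from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random-weight MLP layers; stands in for trained parameters."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    for W, b in layers:
        x = np.maximum(x @ W + b, 0.0)  # ReLU
    return x

# Shared encoder: maps an incomplete point cloud (N x 3) to per-point features.
encoder = mlp([3, 64, 128])
pose_decoder  = mlp([128, 64, 7])         # heading quaternion (4) + translation (3)
shape_decoder = mlp([128, 256, 512 * 3])  # denser point cloud (512 points)

partial_cloud = rng.standard_normal((100, 3))        # incomplete observation
code = forward(encoder, partial_cloud).max(axis=0)   # permutation-invariant pool
pose  = forward(pose_decoder, code)                  # estimated pose vector
shape = forward(shape_decoder, code).reshape(512, 3) # denser reconstruction
```

Both decoders consume the same code, which is what makes the pose and shape estimates "joint": the encoder must embed information useful for both tasks.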
Method and system for controlling autonomous vehicles to affect occupant view
A system and method for controlling an autonomous vehicle to affect a view seen by an occupant of the autonomous vehicle is described. In one embodiment, a method for controlling an autonomous vehicle to affect a view seen by an occupant of the autonomous vehicle includes determining a navigation route, determining content associated with the navigation route, monitoring current conditions of the autonomous vehicle and the occupant, determining, based on the current conditions, whether to change a position of the vehicle to affect the view seen by the occupant, and when the current conditions permit, moving the autonomous vehicle to affect the view seen by the occupant.
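The decision flow of the method (monitor conditions, then reposition only when they permit) can be sketched as below; the condition keys, thresholds, and nearest-viewpoint choice are hypothetical stand-ins for the claimed logic.

```python
def position_for_view(planned_position, viewpoints, conditions):
    """Decide whether to reposition the vehicle to improve the occupant's
    view; keep the planned position when conditions do not permit it."""
    safe_to_move = (conditions.get("speed_mps", 0.0) < 15.0
                    and not conditions.get("traffic_dense", True))
    if not safe_to_move or not conditions.get("occupant_attentive", False):
        return planned_position
    # Move toward the nearest viewpoint associated with the route's content.
    return min(viewpoints,
               key=lambda p: (p[0] - planned_position[0]) ** 2
                           + (p[1] - planned_position[1]) ** 2)
```

The key property the abstract describes is the guard: repositioning happens only "when the current conditions permit", which here is the early return.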
Mobile body and management system
An automated driving vehicle (200) includes a communication device (220), a biometric information acquirer (240), and an automated driving controller (250). The communication device (220) is configured to transmit biometric information acquired by the biometric information acquirer (240) to an external device and to receive a response signal including attribute information for the transmitted biometric information. The automated driving controller (250) is configured to execute automated driving according to route information formed on the basis of the attribute information included in the received response signal.
Information processing method, server, and intelligent mobile robot
An information processing method, a server, and an intelligent mobile robot are provided, so that when the intelligent mobile robot encounters a danger, it exchanges information with the server in order to escape from the scene to a safe place. The method includes: receiving, by the server, a danger alarm sent by the intelligent mobile robot, where the danger alarm indicates that the intelligent mobile robot has detected a dangerous event; determining, by the server, a safe position for the intelligent mobile robot, where the safe position is a position at which the dangerous event is not currently occurring; and sending, by the server, an escape instruction to the intelligent mobile robot, where the escape instruction includes the safe position.
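The server-side exchange can be sketched with two message types and one handler; the message fields, the candidate-position list, and the nearest-safe-position policy are illustrative assumptions, since the abstract only requires that the returned position be free of the dangerous event.

```python
from dataclasses import dataclass

@dataclass
class DangerAlarm:
    robot_id: str
    event: str
    position: tuple  # (x, y) where the danger was detected

@dataclass
class EscapeInstruction:
    safe_position: tuple

def handle_danger_alarm(alarm, candidates, current_hazards):
    """Server side: pick the nearest candidate position where no dangerous
    event is currently occurring, and reply with an escape instruction."""
    safe = [c for c in candidates if c not in current_hazards]
    safe.sort(key=lambda c: (c[0] - alarm.position[0]) ** 2
                          + (c[1] - alarm.position[1]) ** 2)
    return EscapeInstruction(safe_position=safe[0])
```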
Driving assistance system and driving assistance method
A driving assistance system and a driving assistance method are provided. The driving assistance system includes a physiological information sensing system, an external physical symptom detection system, and a processing device. The physiological information sensing system is configured to sense physiological information of a driver. The external physical symptom detection system is configured to detect an external physical symptom of the driver. The processing device is coupled to the physiological information sensing system and the external physical symptom detection system. When the physiological information of the driver and the external physical symptom of the driver are abnormal, the processing device initiates an emergency procedure.
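The triggering condition (emergency only when both the physiological readings and the externally observed symptom are abnormal) can be sketched as below; the specific vital signs and thresholds are hypothetical, not taken from the patent.

```python
def initiate_emergency(heart_rate_bpm, spo2_pct, symptom_detected):
    """Trigger the emergency procedure only when both the physiological
    information and the external physical symptom are abnormal."""
    physio_abnormal = (heart_rate_bpm < 40 or heart_rate_bpm > 140
                       or spo2_pct < 90.0)
    return physio_abnormal and symptom_detected
```

Requiring both signals is a sensor-fusion choice: it avoids triggering on a single noisy reading from either subsystem.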
Multi-view deep neural network for LiDAR perception
A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
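The two views the stages operate on can be sketched as projections of the same LiDAR point cloud: a perspective range image for the first stage and a top-down (bird's-eye-view) grid for the second. The image sizes, extents, and elevation limits below are arbitrary assumptions; the DNN stages themselves are omitted.

```python
import numpy as np

def to_range_image(points, h=16, w=360):
    """Project points into a perspective (range-image) view: columns index
    azimuth, rows index elevation, values hold range."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    az = np.degrees(np.arctan2(y, x)) % 360.0
    el = np.degrees(np.arctan2(z, np.hypot(x, y)))
    col = (az / 360.0 * w).astype(int) % w
    row = np.clip(((el + 25.0) / 50.0 * h).astype(int), 0, h - 1)
    img = np.zeros((h, w), dtype=np.float32)
    img[row, col] = np.linalg.norm(points, axis=1)
    return img

def to_bev_grid(points, grid=64, extent=50.0):
    """Rasterize points into a top-down occupancy grid (second-stage view)."""
    ix = ((points[:, 0] + extent) / (2 * extent) * grid).astype(int)
    iy = ((points[:, 1] + extent) / (2 * extent) * grid).astype(int)
    keep = (ix >= 0) & (ix < grid) & (iy >= 0) & (iy < grid)
    bev = np.zeros((grid, grid), dtype=np.float32)
    bev[iy[keep], ix[keep]] = 1.0
    return bev
```

Chaining stages across views lets the first stage exploit the dense, ordered structure of the range image while the second reasons about metric geometry in the top-down grid.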
Thermal Model for Preventing Component Overheating
A computer-implemented method may include receiving, by a computing system on an autonomous vehicle, local weather data and routing data of the autonomous vehicle, the autonomous vehicle including a sensor platform mounted on it. The method may include predicting, by the computing system using a thermal model and based on the local weather data and the routing data, a thermal loading and a temperature of a component(s) of the sensor platform at one or more future times during an operation of the autonomous vehicle. The method may include sending an overheating warning to a controller of the autonomous vehicle based on a determination that the thermal loading or the temperature of the component(s) exceeds a threshold. The method may include reporting a degraded state of the autonomous vehicle so that the vehicle can park prior to a predicted overheating of the component(s).
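A minimal sketch of such a thermal model, assuming a standard first-order lumped-capacitance formulation (C·dT/dt = P − (T − T_amb)/R); the thermal resistance, capacitance, and threshold values are hypothetical, and the patented model is not necessarily of this form.

```python
def predict_temperatures(t0, ambient, load_w, dt=1.0, r_th=0.5, c_th=200.0):
    """First-order lumped thermal model: C * dT/dt = P - (T - T_amb) / R.
    r_th (K/W) and c_th (J/K) are hypothetical component parameters."""
    temps, t = [], t0
    for amb, p in zip(ambient, load_w):
        t += dt * (p - (t - amb) / r_th) / c_th
        temps.append(t)
    return temps

def overheating_warning(temps, threshold=85.0):
    """True when any predicted temperature exceeds the threshold."""
    return any(t > threshold for t in temps)

# 1000 s of constant 30 degC ambient and 100 W sensor-platform load:
# the component settles toward 30 + 100 * 0.5 = 80 degC.
temps = predict_temperatures(30.0, [30.0] * 1000, [100.0] * 1000)
```

Feeding forecast ambient temperatures (weather data) and route-dependent load profiles (routing data) into such a model is what allows the warning to be issued at a future time rather than when the component is already hot.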