Patent classifications
G06V20/588
Identification of Proxy Calibration Targets for a Fleet of Vehicles
Example embodiments relate to identification of proxy calibration targets for a fleet of vehicles. An example method includes collecting, using a sensor coupled to a vehicle, data about one or more objects within an environment of the vehicle. The sensor has been calibrated using a ground-truth calibration target. The method also includes identifying, based on the collected data, at least one candidate object, from among the one or more objects, to be used as a proxy calibration target for other sensors coupled to vehicles within a fleet of vehicles. Further, the method includes providing, by the vehicle, data about the candidate object for use by one or more vehicles within the fleet of vehicles.
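A minimal sketch of how such a candidate-selection step could look, assuming invented object attributes (flatness, reflectivity, size) and thresholds; the abstract does not specify these criteria:

```python
# Illustrative sketch (not the patented method): a vehicle with a
# ground-truth-calibrated sensor flags detected objects as candidate proxy
# calibration targets for the rest of a fleet. All field names and
# thresholds are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    object_id: str
    position: tuple          # (x, y, z) in a shared map frame, metres
    flatness_score: float    # 0..1, how planar the surface appears
    reflectivity: float      # 0..1, mean sensor return intensity
    size_m: float            # largest dimension in metres

def select_proxy_targets(objects, min_flatness=0.9, min_reflectivity=0.5, min_size_m=0.5):
    """Keep objects stable and distinctive enough to serve as proxy targets."""
    return [
        o for o in objects
        if o.flatness_score >= min_flatness
        and o.reflectivity >= min_reflectivity
        and o.size_m >= min_size_m
    ]

def publish_to_fleet(candidates):
    """Stand-in for sharing candidate-target data with other fleet vehicles."""
    return [{"id": o.object_id, "position": o.position} for o in candidates]

if __name__ == "__main__":
    scene = [
        DetectedObject("sign-17", (12.0, 3.5, 2.1), 0.96, 0.8, 0.9),
        DetectedObject("bush-02", (8.0, -1.0, 0.5), 0.2, 0.1, 1.2),
    ]
    print(publish_to_fleet(select_proxy_targets(scene)))   # only "sign-17" survives
```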
PATH PERCEPTION DIVERSITY AND REDUNDANCY IN AUTONOMOUS MACHINE APPLICATIONS
In various examples, a path perception ensemble is used to produce a more accurate and reliable understanding of a driving surface and/or a path therethrough. For example, an analysis of a plurality of path perception inputs provides testability and reliability for accurate and redundant lane mapping and/or path planning in real-time or near real-time. By incorporating a plurality of separate path perception computations, a means of metricizing path perception correctness, quality, and reliability is provided by analyzing whether, and by how much, the individual path perception signals agree or disagree. Because the individual path perception inputs fail in nearly independent ways, a system failure becomes statistically less likely. In addition, with diversity and redundancy in path perception, comfortable lane keeping on high-curvature roads, under severe road conditions, and/or at complex intersections, as well as autonomous negotiation of turns at intersections, may be enabled.
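A hedged sketch of one way agreement among several path perception inputs could be metricized; the path representation, the pairwise lateral-offset metric, and the fusion by averaging are illustrative assumptions, not the disclosed method:

```python
# Each path hypothesis is a list of (x, y) centerline points sampled at the
# same stations. The disagreement metric is the mean pairwise offset between
# hypotheses; a large value could lower confidence in the fused result.
import math
from itertools import combinations

def mean_offset(path_a, path_b):
    """Average point-to-point distance between two equally sampled paths."""
    n = min(len(path_a), len(path_b))
    return sum(math.dist(a, b) for a, b in zip(path_a, path_b)) / n

def ensemble_agreement(paths):
    """Return (fused_path, disagreement); lower disagreement means higher confidence."""
    n = min(len(p) for p in paths)
    fused = [
        (sum(p[i][0] for p in paths) / len(paths),
         sum(p[i][1] for p in paths) / len(paths))
        for i in range(n)
    ]
    pairs = list(combinations(paths, 2))
    disagreement = sum(mean_offset(a, b) for a, b in pairs) / len(pairs)
    return fused, disagreement

# Three hypothetical inputs (e.g. lane-detection, free-space, and map-based).
camera    = [(0, 0.0), (10, 0.2), (20, 0.5)]
freespace = [(0, 0.1), (10, 0.25), (20, 0.6)]
map_based = [(0, 0.0), (10, 0.9), (20, 2.0)]   # disagrees: lowers ensemble confidence
fused, disagreement = ensemble_agreement([camera, freespace, map_based])
print(round(disagreement, 2))
```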
SYSTEMS AND METHODS FOR PREDICTING BLIND SPOT INCURSIONS
Systems and methods are provided for predicting blind spot incursions for a host vehicle. In one implementation, a navigation system for a host vehicle may comprise a processor. The processor may be programmed to receive, from an image capture device located on a rear of the host vehicle, at least one image representative of an environment of the host vehicle. The processor may be programmed to analyze the at least one image to identify an object in the environment of the host vehicle and to determine kinematic information associated with the object. The processor may further be programmed to predict, based on the kinematic information, that the object will travel in a region outside of a field of view of the image capture device and perform a control action based on the prediction.
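As an illustration only, a constant-velocity extrapolation is one simple way to predict from kinematic information whether a tracked object will enter a region outside the rear camera's field of view; the blind-spot zone boundaries and time horizon below are invented for the example:

```python
# Hedged illustration, not the claimed implementation: extrapolate a tracked
# object's position in the host frame and test whether it enters an assumed
# blind-spot zone alongside the host vehicle within the prediction horizon.

def predict_blind_spot_incursion(position, velocity, horizon_s=2.0, step_s=0.1):
    """position/velocity are (x, y) in the host frame: x forward, y left, metres."""
    # Assumed blind-spot zone: alongside the host, outside the rear camera FOV.
    zone_x = (-1.0, 4.0)   # metres, rear bumper to front door
    zone_y = (1.0, 3.5)    # metres, left of the host
    t = 0.0
    while t <= horizon_s:
        x = position[0] + velocity[0] * t
        y = position[1] + velocity[1] * t
        if zone_x[0] <= x <= zone_x[1] and zone_y[0] <= y <= zone_y[1]:
            return True, t   # predicted incursion and approximate time to it
        t += step_s
    return False, None

# Example: a vehicle 8 m behind in the adjacent lane, closing at 4 m/s.
print(predict_blind_spot_incursion(position=(-8.0, 2.0), velocity=(4.0, 0.0)))
```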
Method for Operating a Vehicle and Device for Carrying Out the Method
A method for operating a vehicle includes carrying out a lane-keeping control of the vehicle along a course of a lane travelled in by the vehicle. When lane markings are detected, the course of the lane is determined on the basis of the detected lane markings. When lane markings are not detected, the course of the lane is determined in a map-based manner on the basis of data from a digital map, where a rough localization of the vehicle in the digital map and a fine localization of the vehicle in the digital map are performed.
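A simplified, hypothetical sketch of this fallback logic, with a stubbed digital map and stubbed rough/fine localization steps standing in for GNSS and landmark matching:

```python
# Use the camera-detected lane course when markings are found; otherwise
# localize the vehicle roughly, then finely, in a digital map and take the
# lane course from the map. The map structure and stubs are invented.

ASSUMED_MAP = {
    # road_id -> list of (x, y) lane centerline points
    "road_42": [(0.0, 0.0), (10.0, 0.1), (20.0, 0.4)],
}

def rough_localize(gnss_fix):
    """Coarse localization: pick the mapped road nearest the GNSS fix (stubbed)."""
    return "road_42"

def fine_localize(road_id, perceived_landmarks):
    """Fine localization: refine the along-road position (stubbed to index 0)."""
    return road_id, 0

def lane_course(detected_markings, gnss_fix, perceived_landmarks):
    if detected_markings:                     # markings detected: camera-based course
        return detected_markings
    road_id = rough_localize(gnss_fix)        # rough localization in the map
    road_id, start = fine_localize(road_id, perceived_landmarks)  # fine localization
    return ASSUMED_MAP[road_id][start:]       # map-based course

print(lane_course(detected_markings=None, gnss_fix=(48.1, 11.5), perceived_landmarks=[]))
```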
OBJECT DETECTION AND TRACKING
A method and a computing device for object detection and tracking from a video input are described. The method and the computing device may be used to, for example, track objects of interest, such as lane markings, in traffic. A plurality of frames corresponding to a video may be analyzed in a spatiotemporal domain by a neural network. The neural network may be trained using data synthesized in the spatiotemporal domain.
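One common way to analyze a plurality of frames in a spatiotemporal domain is to stack consecutive frames into a volume and apply 3D convolutions; the abstract does not disclose an architecture, so the PyTorch model below is purely an illustrative assumption:

```python
# Minimal spatiotemporal classifier over a clip of T frames (not the
# disclosed network): 3D convolutions mix information across time as well
# as space, e.g. to separate lane markings from background.
import torch
import torch.nn as nn

class SpatioTemporalDetector(nn.Module):
    def __init__(self, in_channels=3, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),  # conv over (T, H, W)
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(16, num_classes)   # e.g. lane marking vs. background

    def forward(self, clip):                      # clip: (B, C, T, H, W)
        x = self.features(clip).flatten(1)
        return self.head(x)

# Example: a batch of one 8-frame RGB clip at 64x64 resolution.
model = SpatioTemporalDetector()
scores = model(torch.randn(1, 3, 8, 64, 64))
print(scores.shape)   # torch.Size([1, 2])
```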
AUTOMOTIVE SENSOR INTEGRATION MODULE
An automotive sensor integration module includes a plurality of sensors that differ in at least one of a sensing period or an output data format, and a signal processing unit. The signal processing unit simultaneously outputs, as sensing data, pieces of detection data respectively output from the plurality of sensors on the basis of the sensing period of any one of the plurality of sensors, determines, on the basis of the pieces of detection data, whether each region of an outer cover corresponding to the location of each of the plurality of sensors is contaminated, and outputs the determination result as contamination data.
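A hypothetical sketch of such synchronization and contamination logic: detection data arriving at different rates is latched and emitted together on a reference sensor's period, and a cover region is flagged as contaminated when its sensor's data quality drops; the quality measure and threshold are assumptions:

```python
# Latch the most recent payload from each sensor and emit them together on
# the reference sensor's tick; flag a sensor's cover region as contaminated
# when its (assumed) valid-return ratio falls below a threshold.

latest = {}          # sensor name -> most recent detection payload

def on_sensor_data(name, payload):
    latest[name] = payload

def on_reference_tick():
    """Called once per reference sensor period: emit synchronized sensing data."""
    sensing_data = dict(latest)
    contamination = {
        name: payload.get("valid_return_ratio", 1.0) < 0.3   # assumed threshold
        for name, payload in sensing_data.items()
    }
    return sensing_data, contamination

on_sensor_data("front_lidar", {"valid_return_ratio": 0.95, "objects": 4})
on_sensor_data("front_camera", {"valid_return_ratio": 0.10, "objects": 0})
print(on_reference_tick()[1])   # {'front_lidar': False, 'front_camera': True}
```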
METHOD AND SYSTEM FOR SMART ROAD DEPARTURE WARNING AND NAVIGATION ASSIST IN INCLEMENT WEATHER WITH LOW VISIBILITY
A method of operating a vehicle includes determining whether the vehicle is operating in a road segment with a low visibility condition that causes a loss of sensor data input to a vehicle controller that operates an assist feature; activating one or more adaptive alerts based on a road departure risk of the vehicle and on driver use of the assist feature in the upcoming road segment, wherein the road departure risk is determined by calculating a road departure risk index that compares an estimated vehicle path based on the vehicle state data with a probabilistic vehicle path for the upcoming road segment; predicting whether the vehicle will operate within an acceptable path in the upcoming road segment; and tracking the vehicle in the upcoming road segment based on vehicle navigation data to provide at least one adaptive alert based on a prediction of the road departure risk.
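A hedged sketch of one way such a road departure risk index could be formed, comparing the path estimated from vehicle state data against a probabilistic path (mean and spread) for the upcoming segment; the index definition, numbers, and alert threshold are illustrative assumptions:

```python
# Compare an estimated path with the expected (probabilistic) path and
# normalize the lateral deviation by the expected spread, giving a
# z-score-like risk index. All values here are invented for the example.

def road_departure_risk_index(estimated_path, expected_path, expected_sigma):
    """Mean lateral deviation normalized by the expected spread per sample."""
    deviations = [
        abs(e[1] - m[1]) / s
        for e, m, s in zip(estimated_path, expected_path, expected_sigma)
    ]
    return sum(deviations) / len(deviations)

estimated = [(0, 0.0), (10, 0.6), (20, 1.4)]   # (station_m, lateral_m) from vehicle state
expected  = [(0, 0.0), (10, 0.1), (20, 0.2)]   # probabilistic path mean
sigma     = [0.3, 0.3, 0.4]                     # probabilistic path spread

risk = road_departure_risk_index(estimated, expected, sigma)
print(round(risk, 2), "-> alert" if risk > 1.5 else "-> ok")   # assumed alert threshold
```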
DUAL SENSOR READOUT CHANNEL TO ALLOW FOR FREQUENCY DETECTION
The present disclosure relates to navigation and to systems and methods for using a dual sensor readout channel to allow for frequency detection. In one implementation, at least one processing device may receive a plurality of images acquired by a camera onboard a host vehicle, wherein the plurality of images are received via a first channel and via a second channel, the first channel being associated with a first frame capture rate and the second channel being associated with a second frame capture rate different from the first frame capture rate. The processing device may use images received via the first channel to detect flickering and non-flickering light sources in an environment of the host vehicle, and may provide, based on images received via the second channel, images for display on one or more human-viewable displays.
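For illustration only, the higher-frame-rate channel can sample a light source often enough to expose PWM or AC flicker as frame-to-frame intensity variation, while the lower-rate channel is reserved for display; the classification rule and threshold below are assumptions, not the disclosed logic:

```python
# Classify a light source as flickering or steady from per-frame brightness
# samples taken on the fast readout channel. The relative-swing threshold is
# an assumption for the example.

def classify_light_source(intensity_samples, flicker_threshold=0.2):
    """intensity_samples: per-frame brightness of one light source, 0..1."""
    mean = sum(intensity_samples) / len(intensity_samples)
    swing = max(intensity_samples) - min(intensity_samples)
    return "flickering" if swing / max(mean, 1e-6) > flicker_threshold else "steady"

print(classify_light_source([0.9, 0.1, 0.8, 0.15, 0.85]))   # LED traffic signal -> flickering
print(classify_light_source([0.82, 0.80, 0.81, 0.83]))      # incandescent lamp -> steady
```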
LANE EXTRACTION METHOD USING PROJECTION TRANSFORMATION OF THREE-DIMENSIONAL POINT CLOUD MAP
A lane extraction method uses projection transformation of a three-dimensional (3D) point cloud map. The amount of computation required to extract lane coordinates is reduced by performing deep learning and lane extraction in a two-dimensional (2D) domain, so lane information is obtained in real time. In addition, black-and-white brightness, which is the most important information for lane extraction from an image, is substituted by the reflection intensity of a light detection and ranging (LiDAR) sensor, so that a deep learning model capable of accurately extracting lanes is provided. Reliability and competitiveness are therefore enhanced in the fields of autonomous driving, road recognition, lane recognition, and HD road maps for autonomous driving, as well as in similar or related fields, and more particularly in road recognition and autonomous driving using LiDAR.
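A minimal sketch of the projection step described above, under assumed grid parameters: 3D LiDAR points are flattened into a 2D bird's-eye-view image whose pixel values are reflection intensity, so that a 2D deep learning model (not shown) can extract lane markings much as it would from image brightness:

```python
# Project LiDAR points onto a bird's-eye-view intensity grid. Resolution,
# extent, and the max-intensity accumulation rule are illustrative choices.
import numpy as np

def bev_intensity_image(points, intensities, resolution=0.1, extent=(0.0, 40.0, -10.0, 10.0)):
    """points: (N, 3) xyz in metres; intensities: (N,) in 0..1; returns a 2D image."""
    x_min, x_max, y_min, y_max = extent
    h = int((x_max - x_min) / resolution)
    w = int((y_max - y_min) / resolution)
    image = np.zeros((h, w), dtype=np.float32)
    rows = ((points[:, 0] - x_min) / resolution).astype(int)
    cols = ((points[:, 1] - y_min) / resolution).astype(int)
    keep = (rows >= 0) & (rows < h) & (cols >= 0) & (cols < w)
    image[rows[keep], cols[keep]] = np.maximum(image[rows[keep], cols[keep]],
                                               intensities[keep])
    return image

pts = np.array([[5.0, 0.0, 0.1], [5.0, 3.5, 0.1], [60.0, 0.0, 0.1]])
inten = np.array([0.9, 0.2, 0.8])
print(bev_intensity_image(pts, inten).shape)   # (400, 200); out-of-range point is dropped
```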
VEHICLE DRIVE ASSIST APPARATUS
A vehicle drive assist apparatus to be applied to a vehicle executes at least emergency braking control for avoiding collision of the vehicle with a recognized object. The vehicle drive assist apparatus includes a surrounding environment recognition device and a traveling control unit. The surrounding environment recognition device includes a recognizer that recognizes a surrounding environment around the vehicle, and a feature information acquirer that acquires feature information of a target object in the recognized surrounding environment. The traveling control unit centrally controls the entire vehicle. The traveling control unit includes a determiner that determines, based on the acquired feature information, whether the target object has a possibility of hindering traveling of the vehicle. The traveling control unit continues normal traveling control for the vehicle when the target object is determined to have no possibility of hindering the traveling of the vehicle.
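As a hedged illustration (not the disclosed determiner), a simple rule over acquired feature information can decide whether a recognized target object could hinder traveling, so that emergency braking is reserved for objects in the vehicle's path that are too large to drive over; the thresholds are assumptions:

```python
# Decide, from assumed feature information, whether a target object may
# hinder traveling: it must lie within the vehicle's path and be taller
# than an assumed traversable height.

def may_hinder_traveling(feature, lane_half_width_m=1.8, max_traversable_height_m=0.15):
    in_path = abs(feature["lateral_offset_m"]) <= lane_half_width_m
    too_tall = feature["height_m"] > max_traversable_height_m
    return in_path and too_tall

cardboard_sheet = {"lateral_offset_m": 0.3, "height_m": 0.1}
stalled_car     = {"lateral_offset_m": 0.0, "height_m": 1.4}
print(may_hinder_traveling(cardboard_sheet))  # False -> continue normal traveling control
print(may_hinder_traveling(stalled_car))      # True  -> emergency braking may be executed
```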