Patent classification: B60W2420/403
Enhanced infrastructure
A system includes a stationary infrastructure element including a camera mounted to the infrastructure element and an infrastructure server. The infrastructure server includes a processor and a memory, the memory storing instructions executable by the processor to receive a request from a movable vehicle, the request identifying a data anomaly including at least one of (1) a sensor of the vehicle collecting data below a confidence threshold or (2) a geographic location outside a geographic database of the vehicle, to actuate the camera to collect image data of one of the vehicle or the geographic location, to identify geo-coordinates of the vehicle or the geographic location based on identified pixels in the image data including the vehicle or the geographic location, and to provide the geo-coordinates to the vehicle to address the data anomaly.
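The pixel-to-geo-coordinate step described above can be sketched as a planar homography lookup: a fixed infrastructure camera sees a (roughly) planar road surface, so a 3x3 matrix maps image pixels to ground coordinates. A minimal sketch, assuming the homography `H` has already been obtained by calibrating the camera against surveyed ground points; the function name and matrix values are illustrative, not from the patent:

```python
import numpy as np

def pixel_to_geo(u, v, H):
    """Map an image pixel (u, v) to ground-plane geo-coordinates via a
    planar homography H (3x3), obtained offline by calibrating the fixed
    infrastructure camera against surveyed ground points."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]   # de-homogenize

# Illustrative calibration: a pure scaling homography (placeholder values).
H = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])
```

The infrastructure server would run this on the pixels it identified as containing the vehicle or the requested geographic location, then return the resulting coordinates to the vehicle.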
Non-solid object monitoring
An autonomous navigation system may navigate through an environment in which one or more non-solid objects, including gaseous and/or liquid objects, are located. Non-solid objects may be determined, using sensor data, to present an obstacle or interference based on determined chemical composition, size, position, velocity, concentration, etc. of the objects.
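The obstacle/interference decision over a detected gaseous or liquid object can be sketched as a rule over the estimated properties listed above. The property names, thresholds, and the hazardous-composition list below are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass

# Illustrative list of compositions treated as hazardous regardless of size.
HAZARDOUS = {"chlorine", "ammonia"}

@dataclass
class NonSolidObject:
    composition: str        # estimated chemical composition
    concentration_ppm: float
    diameter_m: float       # estimated size

def is_interference(obj, conc_limit_ppm=50.0, min_diameter_m=1.0):
    """Flag a gaseous/liquid object as an obstacle when its composition is
    hazardous, or when it is both dense and large enough to matter."""
    if obj.composition in HAZARDOUS:
        return True
    return obj.concentration_ppm > conc_limit_ppm and obj.diameter_m > min_diameter_m
```

A real system would derive these properties from sensor data (e.g. spectroscopy, lidar returns) and also weigh position and velocity relative to the planned path.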
MULTI-VIEW DEEP NEURAL NETWORK FOR LIDAR PERCEPTION
A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
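The two-stage flow above (perspective-view segmentation, projection to a top-down view, then bounding-box extraction) can be illustrated with stand-in functions in place of the actual DNN stages. Everything here is a toy sketch: the "networks" are simple rules, and the grid size and thresholds are arbitrary assumptions:

```python
import numpy as np

def stage1_labels(ranges):
    # Toy stand-in for the first-stage (perspective-view) segmentation DNN:
    # mark lidar returns closer than 10 m as "object" (1), else background (0).
    return (np.asarray(ranges) < 10.0).astype(int)

def to_bev(points_xy, labels, cell=1.0, size=20):
    # Scatter per-point labels into a top-down (bird's-eye-view) grid,
    # which a second-stage DNN would consume.
    grid = np.zeros((size, size), dtype=int)
    for (x, y), lab in zip(points_xy, labels):
        i, j = int(x // cell), int(y // cell)
        if 0 <= i < size and 0 <= j < size:
            grid[i, j] = max(grid[i, j], lab)
    return grid

def bev_bounding_box(grid):
    # Extract a 2D axis-aligned bounding box from occupied object cells,
    # standing in for the instance-geometry regression of the second stage.
    idx = np.argwhere(grid == 1)
    if idx.size == 0:
        return None
    (i0, j0), (i1, j1) = idx.min(axis=0), idx.max(axis=0)
    return (int(i0), int(j0), int(i1), int(j1))
```

In the actual technique both stages are learned networks and the outputs include class labels per instance; this sketch only shows how the views chain together.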
Method for detecting at least one object present on a motor vehicle, control device, and motor vehicle
The invention relates to a method for detecting at least one object present on a motor vehicle, wherein a control device captures at least one camera image by means of a camera whose detection region is directed to an outer region of the motor vehicle and, partially or entirely, to an outer surface of the motor vehicle. The invention proposes that, for a clearance test, an image analysis device of the control device checks whether at least one predetermined intrinsic structure of the motor vehicle is imaged in the at least one camera image; in the event that at least one intrinsic structure cannot be found by the image analysis device, a blocking signal is generated which indicates that an object is present on the motor vehicle.
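The clearance test can be sketched as a template comparison: the image region where a known intrinsic structure of the vehicle (e.g. a body edge) should appear is compared against a stored template, and a blocking signal is raised when the structure is not recognizable. The matching rule and threshold below are illustrative assumptions; a real implementation would use robust matching (e.g. normalized cross-correlation):

```python
import numpy as np

def clearance_test(image, template, threshold=0.9):
    """Return a blocking signal (True: object present on the vehicle) when
    the known intrinsic structure, given as `template`, cannot be found in
    the expected region of the camera image."""
    h, w = template.shape
    region = image[:h, :w]               # expected location of the structure
    match = np.mean(region == template)  # naive pixel-agreement score
    return bool(match < threshold)
```

If something (snow, cargo, a tarp) covers the intrinsic structure, the match score drops and the blocking signal is generated.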
Method for Controlling Vehicle and Vehicle Control Device
A method for controlling a vehicle, including: based on map information, including the installation position of a traffic light and the lane controlled by the traffic light, and on the range of the angle of view of a camera mounted on the own vehicle, calculating an imaging-enabled area on the lane in which an image of the traffic light can be captured by the camera; determining whether or not the own vehicle is positioned in the imaging-enabled area; and, when the own vehicle is positioned in the imaging-enabled area, controlling the own vehicle in such a way that the traffic light is not shielded from the camera's angle of view by a preceding vehicle.
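The two geometric checks involved can be sketched with elementary trigonometry: whether the traffic light falls inside the camera's upward field of view at a given distance, and whether a preceding vehicle of a given height intersects the camera-to-light sight line. All heights, angles, and function names below are illustrative assumptions, not values from the patent:

```python
import math

def in_imaging_area(dist_m, light_h=5.0, cam_h=1.4, fov_up_deg=25.0):
    """True when the traffic light (height light_h) lies inside the camera's
    upward field-of-view half-angle at horizontal distance dist_m."""
    elev = math.degrees(math.atan2(light_h - cam_h, dist_m))
    return elev <= fov_up_deg

def light_shielded(dist_light_m, dist_lead_m, lead_h=3.5,
                   cam_h=1.4, light_h=5.0):
    """True when a preceding vehicle of height lead_h, at distance
    dist_lead_m, blocks the straight sight line from camera to light."""
    # Height of the camera->light sight line at the lead vehicle's distance
    # (similar triangles along the line of sight).
    sight_h = cam_h + (light_h - cam_h) * dist_lead_m / dist_light_m
    return lead_h >= sight_h
```

The control step would then adjust following distance (or lane position) until `light_shielded` is False while the own vehicle remains inside the imaging-enabled area.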
METHOD AND SYSTEM FOR VARYING AN IMAGE PROCESSING FREQUENCY OF AN HD HEADLIGHT
A method is provided for varying an image processing frequency (4, 6) of an HD headlight. The method includes causing the HD headlight to reproduce an output video signal (2) provided by a headlight controller; providing an input video signal (1) to the headlight controller at an input image frequency; using the headlight controller to calculate the output video signal (2) in accordance with at least one image processing function within the predefined image processing frequency (4, 6); determining whether the computing time the headlight controller requires to perform the at least one image processing function exceeds the calculation time available between two successive output video signals (2) at the predefined image processing frequency (4, 6); and varying the image processing frequency (4, 6) depending on the driving situation. A system also is provided to carry out the method.
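The core decision can be sketched as a simple budget check: the available calculation time per frame is the reciprocal of the processing frequency, and the frequency is lowered when the measured compute time overruns that budget. The halving/doubling policy and the "highway" driving-situation flag are illustrative assumptions:

```python
def adjust_frequency(freq_hz, compute_time_s, highway=False):
    """Lower the HD-headlight image processing frequency when the controller
    cannot finish its image processing in the per-frame budget; raise it
    again when an easy driving situation leaves ample headroom."""
    budget = 1.0 / freq_hz            # time available between output frames
    if compute_time_s > budget:
        return freq_hz / 2            # overloaded: halve the frequency
    if highway and compute_time_s < budget / 4:
        return freq_hz * 2            # large headroom: restore resolution in time
    return freq_hz
```

Running this once per frame (or over a smoothed compute-time average, to avoid oscillation) keeps the output video signal deadline-safe under varying load.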
Control apparatus for vehicle
Provided is a control apparatus for a vehicle, the control apparatus being configured to determine a road surface μ state of a road in front of the vehicle based on an image of a front region of the vehicle, and to change the magnitude of the amount of reduction in braking force per unit time in accordance with the determined road surface μ state in braking force cancel control executed after hill-hold control.
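The μ-dependent release behaviour can be sketched as a per-timestep ramp whose slope depends on the estimated friction state: on a low-μ (slippery) surface the brake is released more gradually so the vehicle does not slip when hill-hold ends. The rate values and state names are illustrative assumptions, not from the patent:

```python
def release_braking(force_n, mu_state, dt=0.1):
    """One step of braking-force cancel control after hill-hold: reduce the
    braking force at a rate chosen by the camera-estimated road surface
    μ state ('high' = grippy, 'low' = slippery)."""
    rate_n_per_s = 20.0 if mu_state == "high" else 5.0  # illustrative rates
    return max(0.0, force_n - rate_n_per_s * dt)
```

Iterating this until the force reaches zero yields a fast release on dry asphalt and a gentle release on ice or snow.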
Operation management device for automatic running vehicle and automatic running vehicle
An operating situation obtaining unit obtains the operating situation information of a plurality of operating vehicles along a predetermined route. A delayed vehicle extraction unit extracts a delayed vehicle that is delayed in actual operation relative to the operation schedule from among the plurality of operating vehicles, based on the operating situation information of the respective operating vehicles. An overtaking instruction unit outputs an overtaking instruction to overtake the delayed vehicle to a following vehicle that immediately follows the delayed vehicle.
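The three units described above can be sketched as plain functions over per-vehicle schedule data: extract vehicles whose actual progress lags the schedule beyond a tolerance, then instruct each delayed vehicle's immediate follower to overtake. The record fields and tolerance are illustrative assumptions:

```python
def extract_delayed(vehicles, tolerance_s=60.0):
    """Delayed-vehicle extraction: vehicles whose actual elapsed time exceeds
    the scheduled time by more than the tolerance."""
    return [v["id"] for v in vehicles
            if v["actual_s"] - v["scheduled_s"] > tolerance_s]

def overtaking_instructions(ordered_ids, delayed_ids):
    """Overtaking instruction unit: for each delayed vehicle, instruct the
    vehicle immediately following it on the route to overtake.
    Returns (follower_id, delayed_id) pairs."""
    instructions = []
    for i, vid in enumerate(ordered_ids):
        if vid in delayed_ids and i + 1 < len(ordered_ids):
            instructions.append((ordered_ids[i + 1], vid))
    return instructions
```

Here `ordered_ids` lists the vehicles in route order, front to back, so index `i + 1` is the immediate follower.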
Road surface detection apparatus, image display apparatus using road surface detection apparatus, obstacle detection apparatus using road surface detection apparatus, road surface detection method, image display method using road surface detection method, and obstacle detection method using road surface detection method
A histogram is calculated from a road surface image of a region around a vehicle and separated into a histogram that represents the in-sunlight road surface, including a first peak value, and a histogram that represents the shadowed road surface, including a second peak value. Outputting these separated histograms further enhances the accuracy of road surface detection around the vehicle as compared with conventional art.
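The separation into a shadow histogram and a sunlit histogram can be sketched with an Otsu-style threshold that splits the brightness histogram at the point of maximum between-class variance, i.e. roughly at the valley between the two peaks. This is a generic two-class split standing in for whatever separation the patent actually uses:

```python
import numpy as np

def split_histogram(hist):
    """Split a brightness histogram into (shadow, sunlit) parts at the
    threshold maximizing between-class variance (Otsu's criterion)."""
    hist = np.asarray(hist, dtype=float)
    levels = np.arange(len(hist))
    best_t, best_var = 0, -1.0
    for t in range(1, len(hist)):
        w0, w1 = hist[:t].sum(), hist[t:].sum()   # class weights
        if w0 == 0 or w1 == 0:
            continue
        m0 = (levels[:t] * hist[:t]).sum() / w0   # class means
        m1 = (levels[t:] * hist[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2            # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return hist[:best_t], hist[best_t:]           # shadow part, sunlit part
```

Each returned part then carries one of the two peak values (shadow peak and in-sunlight peak), which downstream road-surface detection can treat separately.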