Patent classifications
G06T2207/30261
UAV HARDWARE ARCHITECTURE
A UAV includes an application processing circuit configured to process primary image data obtained by a primary imaging sensor carried by a gimbal; a real-time sensing circuit configured to process, in a real-time manner, secondary image data obtained by a secondary imaging sensor not carried by a gimbal; and a flight control circuit. The flight control circuit is configured to communicate, through a first communication channel, with the application processing circuit to receive the processed primary image data and use the processed primary image data to control the UAV to perform a first function; and communicate, through a second communication channel, with the real-time sensing circuit to receive the processed secondary image data and use the processed secondary image data to control the UAV to perform a second function. The second function is different from the first function. The second communication channel is independent from the first communication channel.
DRIVER ASSISTANCE APPARATUS AND DRIVER ASSISTANCE METHOD
Provided is a driver assistance apparatus, including: a camera mounted on a vehicle and configured to have a field of view facing a front of the vehicle and acquire image data; and a controller including a processor configured to process the image data, wherein the controller is configured to calculate a target heading angle and a target lateral position of the vehicle based on a predetermined yaw rate pattern, and when a driver operates a steering device that steers the vehicle to avoid a collision of the vehicle, control the steering device to assist in steering for avoiding the collision of the vehicle, based on at least one of the image data, the target heading angle or the target lateral position.
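The target heading angle and target lateral position derived from a predetermined yaw rate pattern can be illustrated with simple planar kinematics: heading is the time integral of yaw rate, and lateral offset is the integral of speed times the sine of heading. This is a minimal sketch, not the patented method; the function name, constant-speed assumption, and discrete-time integration are all assumptions for illustration.

```python
import math

def target_path_from_yaw_rate(yaw_rates, speed, dt):
    """Integrate a predetermined yaw-rate pattern (rad/s) into a
    target heading angle (rad) and target lateral position (m).

    Assumes constant vehicle speed (m/s) and a fixed time step dt (s);
    a production controller would use richer vehicle dynamics.
    """
    heading = 0.0
    lateral = 0.0
    for rate in yaw_rates:
        heading += rate * dt                      # heading = integral of yaw rate
        lateral += speed * math.sin(heading) * dt # lateral drift at that heading
    return heading, lateral

# Example: a brief constant-rate swerve to the left at 15 m/s.
heading, lateral = target_path_from_yaw_rate([0.2] * 10, speed=15.0, dt=0.1)
```

With a constant 0.2 rad/s pattern over 1 s, the target heading angle comes out to 0.2 rad, and the lateral position is a small positive (leftward) offset.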
METHOD AND DEVICE FOR FUSION OF IMAGES
A method of fusing images includes obtaining an optical image of a first scene with a first camera. A thermal image of the first scene is obtained with a second camera. The optical image is fused with the thermal image to generate a fused image.
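The fusion of a registered optical/thermal image pair can be sketched as a weighted blend; the abstract does not specify the fusion rule, so the alpha-blending below and the function name are assumptions for illustration only.

```python
import numpy as np

def fuse_images(optical, thermal, alpha=0.6):
    """Fuse a registered optical/thermal image pair by weighted blending.

    Both inputs are float arrays scaled to [0, 1] with the same shape;
    registration and resampling are assumed to have been done already.
    alpha weights the optical image, (1 - alpha) the thermal image.
    """
    optical = np.asarray(optical, dtype=float)
    thermal = np.asarray(thermal, dtype=float)
    if optical.shape != thermal.shape:
        raise ValueError("images must be registered to the same shape")
    return np.clip(alpha * optical + (1.0 - alpha) * thermal, 0.0, 1.0)

# Example: uniform gray optical image blended with a hot thermal image.
fused = fuse_images(np.full((4, 4), 0.5), np.full((4, 4), 1.0), alpha=0.6)
```

Real systems often fuse in a transformed domain (e.g., pyramids or learned features) rather than per-pixel, but the blend captures the basic idea.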
GENERATING A GROUND PLANE FOR OBSTRUCTION DETECTION
A control system receives image data of a portion of a field, where the image data comprises a plurality of pixels each representing a three-dimensional (3D) coordinate in the field. The control system applies an obstruction identification model to the image data to identify an obstruction in the field. The obstruction identification model determines a seed segment for the field portion and determines a detection ground plane by extrapolating the seed segment to the 3D coordinates of the plurality of pixels. The obstruction identification model identifies, as the obstruction, a set of pixels of the plurality of pixels that have 3D coordinates above the detection ground plane. The control system executes an action for the farming machine to avoid the identified obstruction in the field.
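The seed-segment-to-ground-plane step can be sketched as a least-squares plane fit: fit z = a·x + b·y + c to the seed pixels, extrapolate that plane to every pixel's 3D coordinate, and flag pixels sufficiently above it. The function name, the height margin, and the plane parameterization are illustrative assumptions, not the patented model.

```python
import numpy as np

def detect_obstruction_pixels(points, seed_mask, height_margin=0.1):
    """Flag pixels lying above a ground plane fit to a seed segment.

    points: (N, 3) array of per-pixel 3D coordinates (x, y, z-up).
    seed_mask: boolean (N,) mask selecting the seed segment.
    Returns a boolean mask marking pixels more than `height_margin`
    metres above the extrapolated detection ground plane.
    """
    seed = points[seed_mask]
    # Least-squares fit of z = a*x + b*y + c over the seed pixels.
    A = np.c_[seed[:, 0], seed[:, 1], np.ones(len(seed))]
    (a, b, c), *_ = np.linalg.lstsq(A, seed[:, 2], rcond=None)
    # Extrapolate the plane to every pixel and compare heights.
    plane_z = a * points[:, 0] + b * points[:, 1] + c
    return points[:, 2] > plane_z + height_margin

# Example: a flat 3x3 ground patch as the seed, plus one raised point.
pts = np.array([(x, y, 0.0) for x in range(3) for y in range(3)]
               + [(1.0, 1.0, 0.5)])
seed = np.array([True] * 9 + [False])
mask = detect_obstruction_pixels(pts, seed)  # flags only the raised point
```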
ENSEMBLE LEARNING FOR CROSS-RANGE 3D OBJECT DETECTION IN DRIVER ASSIST AND AUTONOMOUS DRIVING SYSTEMS
A cross-range 3D object detection method and system are operable to train a 3D object detection model with N sub-groups of a point cloud corresponding to N detection distance ranges, forming N 3D object detection models that together form an ensemble 3D object detection model. Training the 3D object detection model with the N sub-groups of the point cloud corresponding to the N detection distance ranges includes training the 3D object detection model progressively from distant to near, and, each time the 3D object detection model converges, saving the resulting weights and adding a corresponding network to the ensemble 3D object detection model.
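The progressive distant-to-near training loop can be sketched as: partition the point cloud into N range bins, train on the farthest bin first, and append each converged model to the ensemble. The real system trains a deep 3D detection network; here a caller-supplied `fit_fn` stands in for "train until convergence", and all names are hypothetical.

```python
import numpy as np

def train_cross_range_ensemble(points, labels, range_edges, fit_fn):
    """Train one detector per distance range, from distant to near.

    points: (N, 3) array; range = Euclidean distance from the sensor.
    range_edges: increasing edges, e.g. [0, 10, 20, 30] -> 3 range bins.
    fit_fn: callable(points, labels) -> trained model; a stand-in for
            training a 3D detection network to convergence.
    Returns the ensemble as a list, farthest-range model first.
    """
    dists = np.linalg.norm(points, axis=1)
    bins = list(zip(range_edges[:-1], range_edges[1:]))
    ensemble = []
    for lo, hi in reversed(bins):          # progressively from distant to near
        mask = (dists >= lo) & (dists < hi)
        model = fit_fn(points[mask], labels[mask])  # "converged" weights
        ensemble.append(model)                      # grow the ensemble
    return ensemble

# Toy example: one point per bin; the "model" is just the mean label.
pts = np.array([[5.0, 0, 0], [15.0, 0, 0], [25.0, 0, 0]])
lbls = np.array([1.0, 2.0, 3.0])
ensemble = train_cross_range_ensemble(pts, lbls, [0, 10, 20, 30],
                                      lambda p, l: float(l.mean()))
```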
COLLISION AVOIDANCE USING AN OBJECT CONTOUR
Techniques for collision avoidance using an object contour are discussed. A trajectory associated with a vehicle may be received. Sensor data can be received from a sensor associated with the vehicle. A bounding contour may be determined and associated with an object represented in the sensor data. Based on the trajectory, a simulated position of the vehicle can be determined. Additionally, a predicted position of the bounding contour can be determined. Based on the simulated position of the vehicle and the predicted position of the bounding contour, a distance between the vehicle and the object may be determined. An action can be performed based on the distance between the vehicle and the object.
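The distance computation between a simulated vehicle position and a predicted bounding contour reduces, in 2D, to a point-to-polygon distance: the minimum over the contour's edges of the point-to-segment distance. The sketch below assumes the vehicle is a point and the contour a closed polygon; names are illustrative.

```python
import math

def point_segment_distance(p, a, b):
    """Shortest distance from point p to segment ab (2D tuples)."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:           # degenerate segment: a == b
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamped to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def min_distance_to_contour(vehicle_pos, contour):
    """Minimum distance from a simulated vehicle position to a
    predicted bounding contour given as a closed polygon vertex list."""
    n = len(contour)
    return min(point_segment_distance(vehicle_pos, contour[i],
                                      contour[(i + 1) % n])
               for i in range(n))

# Example: vehicle at the origin, object contour 2 m ahead.
d = min_distance_to_contour((0.0, 0.0),
                            [(2, -1), (2, 1), (4, 1), (4, -1)])
```

An action (brake, steer, replan) would then be selected by comparing `d` against a safety threshold at each simulated timestep along the trajectory.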
Vehicle and method for avoiding a collision of a vehicle with one or more obstacles
A vehicle (100) may include one or more image sensors (110) configured to provide sensor image data (112d) representing a sensor image of a vicinity of the vehicle (100), and one or more processors (120) configured to determine one or more obstacles (132) from the sensor image data (112d), to determine a distance from ground for each of the one or more obstacles (132) based on its corresponding image object (114), and to trigger a safety operation when the distance from ground is equal to or less than a safety height associated with the vehicle (100). A method for avoiding a collision of a vehicle with one or more obstacles is also provided.
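The trigger condition itself is a simple clearance test: an obstacle whose distance from ground is at or below the vehicle's safety height (e.g., an overhanging barrier the vehicle cannot pass under) triggers the safety operation. A minimal sketch, with hypothetical names and the per-obstacle clearances assumed to be already estimated from the image objects:

```python
def obstacles_triggering_safety(clearances, safety_height):
    """Return ids of obstacles whose estimated distance from ground
    (metres, derived upstream from each obstacle's image object) is
    equal to or less than the vehicle's safety height, for which a
    safety operation (warning, braking) should be triggered.
    """
    return [oid for oid, clearance in clearances.items()
            if clearance <= safety_height]

# Example: a low barrier triggers; a high sign does not.
triggered = obstacles_triggering_safety({"barrier": 1.8, "sign": 3.0},
                                        safety_height=2.1)
```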
Machine-trained network for misalignment-insensitive depth perception
Some embodiments of the invention provide a novel method for training a multi-layer node network to reliably determine depth based on a plurality of input sources (e.g., cameras, microphones, etc.) that may be arranged with deviations from an ideal alignment or placement. Some embodiments train the multi-layer network using a set of inputs generated with random misalignments incorporated into the training set. In some embodiments, the training set includes (i) a synthetically generated training set based on a three-dimensional ground truth model as it would be sensed by a sensor array from different positions and with different deviations from ideal alignment and placement, and/or (ii) a training set generated by a set of actual sensor arrays augmented with an additional sensor (e.g., additional camera or time of flight measurement device such as lidar) to collect ground truth data.
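The key training trick is augmenting the training set with random deviations from the ideal sensor alignment. A minimal sketch of that augmentation step: jitter each sensor's ideal pose by bounded translation and tilt offsets, producing pose variants from which training inputs would then be rendered against the 3D ground-truth model. Function name, pose format, and bounds are all assumptions.

```python
import random

def perturbed_poses(ideal_poses, n_variants, max_shift=0.01, max_tilt=0.5,
                    seed=0):
    """Generate training-set sensor poses with random misalignments.

    ideal_poses: list of (x, y, z, yaw_deg) per sensor in the array.
    max_shift: translation jitter bound (m); max_tilt: yaw jitter (deg).
    Returns n_variants randomly perturbed copies of the sensor array,
    each to be used for rendering one synthetic training sample.
    """
    rng = random.Random(seed)  # seeded for reproducible augmentation
    variants = []
    for _ in range(n_variants):
        variant = []
        for (x, y, z, yaw) in ideal_poses:
            variant.append((x + rng.uniform(-max_shift, max_shift),
                            y + rng.uniform(-max_shift, max_shift),
                            z + rng.uniform(-max_shift, max_shift),
                            yaw + rng.uniform(-max_tilt, max_tilt)))
        variants.append(variant)
    return variants

# Example: a two-camera rig with a 0.1 m baseline, five jittered copies.
ideal = [(0.0, 0.0, 1.0, 0.0), (0.1, 0.0, 1.0, 0.0)]
variants = perturbed_poses(ideal, n_variants=5)
```

Training the depth network on such variants, rather than only on the ideal geometry, is what makes the learned depth estimate insensitive to real-world mounting errors.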
Concept update and vehicle to vehicle communication
A method for a concept update, the method may include detecting that a certain signature of an object causes a false detection; the certain signature belongs to a concept structure that comprises multiple signatures; wherein the false detection comprises determining that the object is represented by the concept structure while the object is of a certain type that is not related to the concept structure; searching for an error inducing part of the certain signature that induced the false detection; and removing from the concept structure the error inducing part to provide an updated concept structure.
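If a concept structure is modeled as a set of signatures and each signature as a set of parts, the update step amounts to locating the error-inducing part and removing it from the faulty signature. The representation below (frozensets of parts) is an assumption for illustration; the abstract does not specify the data structure.

```python
def update_concept(concept, faulty_signature, error_part):
    """Remove the error-inducing part from the signature that caused
    a false detection, returning an updated concept structure.

    concept: set of signatures, each signature a frozenset of parts.
    """
    if faulty_signature not in concept:
        raise KeyError("signature not found in concept structure")
    updated = set(concept)
    updated.discard(faulty_signature)
    updated.add(faulty_signature - {error_part})  # drop the bad part
    return updated

# Example: part "b" of signature {a, b, c} was found to induce the
# false detection and is removed.
concept = {frozenset({"a", "b", "c"}), frozenset({"d", "e"})}
updated = update_concept(concept, frozenset({"a", "b", "c"}), "b")
```

The updated structure could then be shared with other vehicles (the vehicle-to-vehicle communication of the title) so their detectors stop making the same false detection.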
Pothole detection system
Example implementations described herein are directed to depression detection on roadways (e.g., potholes, horizontal panel lines of a roadway, etc.) using a vision sensor, to realize improved safety for advanced driver assistance systems (ADAS) and autonomous driving (AD). Example implementations described herein detect candidate depressions in the roadway in real time and adjust the control of the vehicle system according to the detected depressions.
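Candidate-depression detection can be sketched on a 1D road-height profile (as might be recovered from a vision/depth sensor along the driving direction): estimate the local road level, then flag contiguous runs that dip below it by more than a threshold. The median baseline, threshold value, and function name are illustrative assumptions, not the patented pipeline.

```python
def candidate_depressions(road_heights, drop_threshold=0.05):
    """Flag candidate depressions in a longitudinal road-height profile.

    road_heights: heights (m) sampled along the road from a vision
    sensor. Returns (start, end) index pairs for contiguous runs that
    dip more than `drop_threshold` below the median road level.
    """
    baseline = sorted(road_heights)[len(road_heights) // 2]  # median road level
    runs, start = [], None
    for i, h in enumerate(road_heights):
        if h < baseline - drop_threshold:
            if start is None:
                start = i            # entering a depression
        elif start is not None:
            runs.append((start, i))  # leaving a depression
            start = None
    if start is not None:
        runs.append((start, len(road_heights)))
    return runs

# Example: a 10 cm pothole spanning two samples of an otherwise flat road.
depressions = candidate_depressions([0.0, 0.0, 0.0, -0.10, -0.12, 0.0, 0.0])
```

Each flagged run would then feed the vehicle controller, e.g. to slow down or steer around the detected depression.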