G06V10/803

Vehicle control system, vehicle control method, and vehicle control program
11247682 · 2022-02-15 · ·

A vehicle control system includes: a recognizer configured to recognize one or more surrounding vehicles present in a second lane different from a first lane in which a subject vehicle is present; an identifier configured to derive, for a surrounding vehicle recognized by the recognizer, an index value according to the probability of that vehicle cutting in ahead of the subject vehicle, and to identify a surrounding vehicle whose derived index value is equal to or larger than a threshold as a cutting-in vehicle; and a running controller configured to decelerate the subject vehicle in accordance with the presence of the cutting-in vehicle identified by the identifier, and to determine the degree of change in deceleration of the subject vehicle on the basis of the index value of the cutting-in vehicle that is the target when the subject vehicle is decelerated.
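The threshold-and-scaling logic in this abstract can be sketched as follows. All names, the threshold value, and the linear scaling of deceleration with the index value are illustrative assumptions, not details taken from the patent:

```python
# Assumed threshold on the cut-in index value (the patent does not give one).
CUT_IN_THRESHOLD = 0.6

def identify_cut_in_vehicles(index_values, threshold=CUT_IN_THRESHOLD):
    """Return ids of surrounding vehicles whose index value meets the threshold."""
    return [vid for vid, idx in index_values.items() if idx >= threshold]

def deceleration_for(index_value, max_decel=3.0):
    """Scale deceleration (m/s^2) with how far the index exceeds the threshold.

    A linear ramp from 0 at the threshold to max_decel at index 1.0 is an
    assumption; the abstract only says the degree of deceleration change is
    determined from the index value.
    """
    excess = max(0.0, index_value - CUT_IN_THRESHOLD)
    return min(max_decel, max_decel * excess / (1.0 - CUT_IN_THRESHOLD))
```

A higher index value thus produces a stronger deceleration command, which matches the claim that the degree of change in deceleration is based on the cut-in index.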

VEHICLE GUIDANCE SYSTEM

A vehicle guidance system facilitates maneuvering of an autonomous vehicle with respect to an object in a scene. The vehicle includes a steering apparatus having a range of angular positions and a multitude of actuators for controlling the dynamics of the vehicle. The system includes a steering angle sensor, a camera device, and a video processing module. The sensor is configured to monitor the angular position of the steering apparatus. The device is configured to capture an original image of a scene containing the object. The module is configured to receive and process the original image from the device, detect the object in the original image, receive and process the angular position from the sensor, generate a dynamic trajectory based on at least the angular position, orient the dynamic trajectory with regard to the object, and operate at least one of the actuators to guide the vehicle along the dynamic trajectory.

Cloud-based image improvement
09811910 · 2017-11-07 · ·

Approaches are described for managing the processing of image or video data captured by a portable computing device. The device provides a set of images to a remote server executing “in the cloud”. The set of images can include a reference image and at least one other image captured before or after the reference image. Upon receiving the set of images, the remote server can process them to determine a similarity between the reference image and each of the other images. Each image having a similarity value above a similarity value threshold can then be aligned with the reference image, and the pixel values at corresponding locations in each of the images can be combined to create a processed image. The processed image can be provided to the computing device from the remote server, where the user can decide to accept or discard it.
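The similarity-gated stacking step can be sketched as below. The similarity metric (normalized correlation) and the pixel-combination rule (a per-pixel mean) are assumptions for illustration; the abstract specifies neither, and the alignment step is omitted here, with images treated as already registered:

```python
import numpy as np

def combine_similar(reference, others, sim_threshold=0.9):
    """Combine the reference with images similar enough to it.

    Keeps only images whose normalized correlation with the reference
    meets sim_threshold, then averages pixel values at corresponding
    locations (assumed combination rule).
    """
    def similarity(a, b):
        a0, b0 = a - a.mean(), b - b.mean()
        denom = np.linalg.norm(a0) * np.linalg.norm(b0)
        return float((a0 * b0).sum() / denom) if denom else 0.0

    kept = [img for img in others if similarity(reference, img) >= sim_threshold]
    stack = np.stack([reference, *kept], axis=0)
    return stack.mean(axis=0)
```

Averaging registered frames is a common way to reduce noise, which is consistent with the "image improvement" goal, but the actual combination used by the patent could differ.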

Instance segmentation using sensor data having different dimensionalities
11250240 · 2022-02-15 · ·

Described herein are systems, methods, and non-transitory computer-readable media for using 3D point cloud data, such as that captured by a LiDAR, as ground truth data for training an instance segmentation deep learning model. 3D point cloud data captured by a LiDAR can be projected onto a 2D image captured by a camera and provided as input to a 2D instance segmentation model. 2D sparse instance segmentation masks may be generated from the 2D image with the projected 3D data points. These 2D sparse masks can be used to propagate loss during training of the model. Using the 2D image data with the projected 3D data points, together with the 2D sparse instance segmentation masks, for training obviates the need to generate and use actual instance segmentation data, thereby providing an improved technique for training an instance segmentation model.
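The projection of LiDAR points onto the camera image can be sketched with a pinhole camera model. The patent does not prescribe a camera model; the intrinsics (`fx`, `fy`, `cx`, `cy`) and the assumption that points are already in the camera frame are illustrative:

```python
import numpy as np

def project_points(points_cam, fx, fy, cx, cy, width, height):
    """Project 3D points (N, 3) in the camera frame onto the 2D image plane.

    Returns integer pixel coordinates for points in front of the camera
    that land inside the image bounds, using a simple pinhole model.
    """
    pts = np.asarray(points_cam, dtype=float)
    z = pts[:, 2]
    front = z > 0                      # keep only points in front of the camera
    u = fx * pts[front, 0] / z[front] + cx
    v = fy * pts[front, 1] / z[front] + cy
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    return np.stack([u[inside], v[inside]], axis=1).astype(int)
```

The resulting sparse pixel locations are exactly where ground-truth labels exist, which is why the masks derived from them are described as sparse.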

OUTSIDE ENVIRONMENT RECOGNITION DEVICE
20220237899 · 2022-07-28 · ·

A recognition processor recognizes an external environment of a mobile object based on image data acquired by an imaging unit that captures images of the external environment of the mobile object, and outputs a recognition result based on both the external environment recognized from the image data and the detection result from a detection unit that detects the external environment of the mobile object. An abnormality detector detects an abnormality of a data processing system, which includes the imaging unit, the recognition processor, and the detection unit, based on consistency between the external environment recognized by the recognition processor from the image data and the external environment detected by the detection unit.
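The consistency-based abnormality check can be sketched minimally as follows. The representation of detections as (x, y) positions, the distance tolerance, and the mutual-matching criterion are all illustrative assumptions; the patent does not specify the consistency metric:

```python
def detect_abnormality(camera_detections, sensor_detections, tol=1.0):
    """Flag an abnormality when camera- and sensor-based results disagree.

    Each detection is an (x, y) position. The system is considered
    consistent when every camera detection has a sensor detection within
    `tol` of it, and vice versa (assumed criterion).
    """
    def matched(a, others):
        return any(abs(a[0] - b[0]) <= tol and abs(a[1] - b[1]) <= tol
                   for b in others)

    consistent = (all(matched(c, sensor_detections) for c in camera_detections)
                  and all(matched(s, camera_detections) for s in sensor_detections))
    return not consistent
```

Requiring the match in both directions catches both missed detections and spurious ones, either of which could indicate a fault somewhere in the imaging, recognition, or detection path.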

ELECTRONIC DEVICE FOR VEHICLE AND METHOD OF OPERATING ELECTRONIC DEVICE FOR VEHICLE

Disclosed is an electronic device for a vehicle, including a processor that receives first image data from a first camera, receives second image data from a second camera, receives first sensing data from a first lidar, generates a depth image based on the first image data and the second image data, and fuses the first sensing data with each of the divided regions in the depth image.

GESTURE RECOGNITION SYSTEM EMPLOYING THERMAL SENSOR AND IMAGE SENSOR
20210406519 · 2021-12-30 ·

There is provided a recognition system adaptable to a portable device or a wearable device. The recognition system senses body heat using a thermal sensor, and performs functions such as living-body recognition, image denoising, and body-temperature prompting according to the detected results.

METHOD AND SYSTEM FOR DETECTING AND ANALYZING OBJECTS

A method for detecting objects and labeling the objects with distances in an image includes the steps of: obtaining a thermal image from a thermal camera, an RGB image from an RGB camera, and radar information from an mmWave radar; adjusting the thermal image based on the RGB image to generate an adjusted thermal image, and generating a first fused image based on the RGB image and the adjusted thermal image; generating a second fused image based on the first fused image and the radar information; detecting objects in the images, and generating, based on the first fused image, a third fused image including bounding boxes marking the objects; and determining motion parameters of the objects.

REARVIEW CAMERA FIELD OF VIEW WITH ALTERNATIVE TAILGATE POSITIONS
20220227296 · 2022-07-21 ·

Systems and methods are provided for a rear-viewing camera system for a vehicle. A first camera is mounted on the vehicle with a field of view that is at least partially obstructed by a tailgate of the vehicle and/or a load carried by the vehicle. A second camera is mounted on the vehicle with a field of view that includes an unobstructed view of an imaging area that is obstructed in the field of view of the first camera. A tailgate position sensor is configured to output a signal indicative of a current position of the tailgate. Based on the determined tailgate position, an electronic controller generates an output image in which the tailgate and/or the load appear at least partially transparent by replacing image data that is obstructed in the image captured by the first camera with image data captured by the second camera.
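The pixel-replacement step can be sketched as below. The obstruction mask is assumed to have been derived elsewhere from the tailgate position, and the two views are assumed to be already warped into a common frame; the `alpha` blend for a "partially transparent" ghost is an illustrative reading of the abstract:

```python
import numpy as np

def composite_view(first_img, second_img, obstruction_mask, alpha=0.0):
    """Replace pixels obstructed in the first camera's image with pixels
    from the second camera.

    obstruction_mask is True where the tailgate/load blocks the view;
    alpha > 0 keeps a faint ghost of the obstruction for context.
    """
    out = first_img.astype(float).copy()
    out[obstruction_mask] = (alpha * first_img[obstruction_mask]
                             + (1.0 - alpha) * second_img[obstruction_mask])
    return out
```

With `alpha = 0` the tailgate is fully transparent; intermediate values give the partially transparent appearance the abstract describes.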

SYSTEMS AND METHODS FOR EFFECTING MAP LAYER UPDATES BASED ON COLLECTED SENSOR DATA

Examples disclosed herein may involve a computing system configured to (i) maintain a map that is representative of a real-world environment, the map including a plurality of layers that are each encoded with a different type of map data, (ii) obtain sensor data indicative of a given area of the real-world environment, (iii) based on an evaluation of the obtained sensor data and map data corresponding to the given area, detect that a change has occurred in the given area, (iv) based on the collected sensor data, derive information about the detected change including at least a type of the change and a location of the change, (v) based on the derived information about the detected change, determine that one or more layers of the map are impacted by the detected change, and (vi) effect an update to the one or more layers of the map based on the derived information about the change.
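Steps (v) and (vi) — mapping a change type to the impacted layers and writing the update into each — can be sketched as follows. The layer names, change types, and routing table are illustrative assumptions; the abstract only says that each layer encodes a different type of map data:

```python
# Illustrative routing of change types to impacted map layers.
LAYERS_FOR_CHANGE = {
    "lane_geometry": ["geometry", "routing"],
    "traffic_sign": ["semantics"],
    "construction": ["geometry", "semantics", "routing"],
}

def layers_impacted(change_type):
    """Step (v): determine which layers a detected change affects."""
    return LAYERS_FOR_CHANGE.get(change_type, [])

def apply_update(map_layers, change_type, location, payload):
    """Step (vi): write the derived change into every impacted layer,
    keyed by the change's location."""
    for layer in layers_impacted(change_type):
        map_layers.setdefault(layer, {})[location] = payload
    return map_layers
```

Keeping the type-to-layer routing in one table makes it cheap to add new change types without touching the update logic, which fits the layered-map design the abstract describes.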