
Three-dimensional image processing device and three-dimensional image processing method for object recognition from a vehicle
11004218 · 2021-05-11

A three-dimensional image processing device includes: an input unit configured to acquire a first taken image and a second taken image from a first imaging unit and a second imaging unit, respectively; and a stereo processing unit configured to execute stereo processing and output a range image for a common part where the imaging region of the first taken image and the imaging region of the second taken image overlap each other. The imaging direction of the first imaging unit and the imaging direction of the second imaging unit are set toward a horizontal direction, and both side parts of the imaging region of the first imaging unit and both side parts of the imaging region of the second imaging unit are set as common parts.
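The stereo step described above can be sketched as a disparity-to-range conversion over the common (overlapping) part of the two taken images. This is a minimal illustration; the focal length and baseline values are assumptions, not parameters from the patent.

```python
import numpy as np

# Hypothetical camera parameters (assumptions for illustration only).
FOCAL_PX = 700.0    # focal length in pixels
BASELINE_M = 0.3    # distance between the two imaging units in meters

def disparity_to_range(disparity: np.ndarray) -> np.ndarray:
    """Convert a disparity map (pixels) computed over the common part of
    the first and second taken images into a range image (meters).
    Zero disparity (no match) maps to infinite range."""
    rng = np.full(disparity.shape, np.inf)
    valid = disparity > 0
    rng[valid] = FOCAL_PX * BASELINE_M / disparity[valid]
    return rng

d = np.array([[0.0, 7.0], [14.0, 21.0]])
print(disparity_to_range(d))
```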

Vehicle traveling environment detecting apparatus and vehicle traveling controlling system

A vehicle traveling environment detecting apparatus includes first to third stereo cameras, first to third image processors, and an image controller. The first stereo camera includes first and second cameras. The second stereo camera includes the first camera and a third camera. The third stereo camera includes the second camera and a fourth camera. The first to third image processors are configured to perform stereo image processing on first to third outside images and thereby determine first to third image processing information including first to third distance information, respectively. The first to third outside images are configured to be obtained through imaging of an environment outside the vehicle by the first to third stereo cameras, respectively. The image controller is configured to perform integration of the first image processing information, the second image processing information, and the third image processing information and thereby recognize a traveling environment of the vehicle.
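The camera-sharing arrangement and the image controller's integration step can be sketched as follows. The specific pairings and the merge rule (nearest distance per detected object) are illustrative assumptions based on the abstract, not the patent's actual method.

```python
# Three stereo cameras formed from four physical cameras, with the
# first and second cameras each shared by two pairs (as in the abstract).
STEREO_PAIRS = {
    "stereo1": ("cam1", "cam2"),  # first stereo camera
    "stereo2": ("cam1", "cam3"),  # second stereo camera shares cam1
    "stereo3": ("cam2", "cam4"),  # third stereo camera shares cam2
}

def integrate(info: dict) -> dict:
    """Image-controller step: merge per-pair distance information into a
    single traveling-environment view, keeping the nearest distance
    reported for each detected object."""
    merged = {}
    for pair_info in info.values():
        for obj, dist in pair_info.items():
            merged[obj] = min(dist, merged.get(obj, float("inf")))
    return merged

print(integrate({
    "stereo1": {"car": 12.0},
    "stereo2": {"car": 11.5, "pedestrian": 6.0},
    "stereo3": {"pedestrian": 6.2},
}))
```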

Wide view registered image and depth information acquisition
10979633 · 2021-04-13

A camera system produces omnidirectional RGBD (red-green-blue-depth) data, similar to a LiDAR but with additional registered RGB data. The system uses multiple cameras, fisheye lenses, and computer vision procedures to compute a depth map. The system produces 360-degree RGB and depth from a single viewpoint for both RGB and depth, without requiring stitching. RGB and depth registration may be obtained without extra computation, and the result presents zero parallax misalignment.
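The single-viewpoint property means registration reduces to stacking the depth channel onto the RGB panorama, with no warping step and no parallax correction. A minimal sketch (array shapes and the equirectangular layout are assumptions for illustration):

```python
import numpy as np

def register_rgbd(rgb: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Because RGB and depth share one viewpoint, registration is a plain
    channel concatenation: no warping, zero parallax misalignment.
    rgb: (H, W, 3) panorama; depth: (H, W) depth map over the same pixels."""
    assert rgb.shape[:2] == depth.shape
    return np.concatenate([rgb, depth[..., None]], axis=-1)

rgb = np.zeros((2, 4, 3))   # tiny 360-degree panorama (illustrative size)
depth = np.ones((2, 4))
print(register_rgbd(rgb, depth).shape)
```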

Watercraft thermal monitoring systems and methods

A watercraft may include a safety system having an imaging component and a control component. The control component may modify the operation of the watercraft based on images from the imaging component. The imaging component may include a thermal imaging component and a non-thermal imaging component. The watercraft may include more than one imaging component disposed around the periphery of the watercraft to monitor a volume surrounding the watercraft for objects in the water such as debris, a person, and/or dock structures. Operating the watercraft based on the images may include operating propulsion and/or steering systems of the watercraft based on a detected object. The control component may operate the propulsion and/or steering systems to disable a propeller when a swimmer is detected, to avoid detected debris, and/or to perform or assist in performing docking maneuvers. The imaging components may include compact thermal imaging modules mounted on or within the hull of the watercraft.
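The control component's behavior on detected objects can be sketched as a simple decision step. Field names, object categories, and the command structure below are hypothetical, following the abstract's description of disabling the propeller for swimmers and steering around debris.

```python
# Hypothetical control sketch for the watercraft safety system.
def control_step(detections: list) -> dict:
    """Map detected objects from the imaging components to propulsion
    and steering commands."""
    cmd = {"propeller_enabled": True, "steer_avoid": None}
    for det in detections:
        if det["kind"] == "person":
            cmd["propeller_enabled"] = False          # swimmer detected
        elif det["kind"] == "debris":
            cmd["steer_avoid"] = det["bearing_deg"]   # steer to avoid debris
    return cmd

print(control_step([{"kind": "person", "bearing_deg": 10.0}]))
```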

SYSTEM AND METHOD FOR PROVIDING ADAPTIVE TRUST CALIBRATION IN DRIVING AUTOMATION
20210078608 · 2021-03-18

A system and method for providing adaptive trust calibration in driving automation include receiving image data of a vehicle and vehicle automation data associated with automated driving of the vehicle. The system and method also include analyzing the image data and the vehicle automation data to determine an eye gaze direction of a driver of the vehicle and the driver's reliance upon automation of the vehicle, and processing a Markov decision process model based on the eye gaze direction and the driver reliance to model the effects of human trust and workload on observable variables, thereby determining a control policy that provides an optimal level of automation transparency. The system and method further include controlling automation transparency of at least one driving function of the vehicle based on the control policy.
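The output of such a control policy can be sketched as a mapping from an estimated trust/workload state to a transparency level. The states, thresholds, and levels below are illustrative assumptions, not the patent's actual Markov decision process model.

```python
# Illustrative sketch of a transparency control policy: low estimated
# trust or high workload calls for more transparency from the automation.
TRANSPARENCY_LEVELS = ("low", "medium", "high")

def control_policy(trust: float, workload: float) -> str:
    """Pick an automation-transparency level from estimated driver trust
    and workload, both in [0, 1] (thresholds are assumptions)."""
    if trust < 0.4 or workload > 0.7:
        return "high"
    if trust < 0.7:
        return "medium"
    return "low"

print(control_policy(trust=0.3, workload=0.5))
```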

Camera device

The present invention provides a camera device capable of quickly processing image data while suppressing bus traffic. An image memory 50 is connected to a memory bus 70 and stores a right source image 220 and a left source image 210. A memory access management unit 40 is connected to the memory bus 70 and to an internal bus 80, reads the right source image 220 and the left source image 210 from the image memory 50 via the memory bus 70, and outputs the read images to the internal bus 80. Processing unit A 30, processing unit B 31, and processing unit C 32 are connected to the internal bus 80 and process the image data output to the internal bus 80.
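The bus arrangement can be sketched as a single manager that reads each source image from memory once and fans it out to all processing units, which is what keeps memory-bus traffic low. Class and method names are hypothetical.

```python
# Sketch: one memory-access manager reads over the memory bus once,
# then distributes the images on the internal bus to the processing units.
class MemoryAccessManager:
    def __init__(self, image_memory: dict):
        self.image_memory = image_memory  # stands in for the memory bus side
        self.units = []                   # processing units on the internal bus

    def attach(self, unit):
        self.units.append(unit)

    def broadcast(self):
        left = self.image_memory["left"]      # single read per source image
        right = self.image_memory["right"]
        return [unit(left, right) for unit in self.units]

mgr = MemoryAccessManager({"left": [1, 2], "right": [3, 4]})
mgr.attach(lambda l, r: ("unit_A", len(l)))
mgr.attach(lambda l, r: ("unit_B", len(r)))
print(mgr.broadcast())
```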

SYSTEM AND PROCESS FOR VIEWING IN BLIND SPOTS
20210213881 · 2021-07-15

There is disclosed a viewing system coupled to a motor vehicle having a frame with a roof, at least one support, and a body, the at least one support supporting the roof over the body. The system can comprise at least one camera and at least one screen coupled to the support. Each camera is coupled to the at least one support, and the at least one screen is in communication with the cameras and displays the images they capture. This device can provide an additional view into the blind spots of the vehicle.

Information processing device, information processing system, program, and information processing method

An information processing device has an obtaining unit, a controller, and a giving unit. The obtaining unit obtains vehicle information. The controller creates first warning information when the passability of a road in the vehicle information indicates non-passable. After creation of the first warning information, the controller creates recovery information when the passability of the road at the same position indicates passable. The giving unit outputs the recovery information.
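The controller's warning/recovery sequencing can be sketched as a small state machine keyed by road position: a recovery message is only issued for a position that previously triggered a warning. Names and message fields are hypothetical.

```python
# Sketch of the controller's warning/recovery logic from the abstract.
class PassabilityController:
    def __init__(self):
        self.warned = set()   # positions with an outstanding warning

    def update(self, position, passable: bool):
        if not passable:
            self.warned.add(position)
            return {"type": "warning", "position": position}
        if position in self.warned:      # passable again after a warning
            self.warned.discard(position)
            return {"type": "recovery", "position": position}
        return None                      # passable and never warned

ctl = PassabilityController()
print(ctl.update("pos_A", passable=False))   # first warning information
print(ctl.update("pos_A", passable=True))    # recovery information
```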

SURROUND VIEW
20210027522 · 2021-01-28

A system on a chip (SoC) includes a digital signal processor (DSP) and a graphics processing unit (GPU) coupled to the DSP. The DSP is configured to receive a stream of received depth measurements and generate a virtual bowl surface based on the stream of received depth measurements. The DSP is also configured to generate a bowl to physical camera mapping based on the virtual bowl surface. The GPU is configured to receive a first texture and receive a second texture. The GPU is also configured to perform physical camera to virtual camera transformation on the first texture and on the second texture, based on the bowl to physical camera mapping, to generate an output image.
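The DSP's first step, generating a virtual bowl surface from streamed depth measurements, can be sketched as binning depths by azimuth and pulling the bowl wall in to the nearest measured obstacle. Bin count, default radius, and the sample format are assumptions for illustration.

```python
import math

# Sketch of the DSP step: a virtual bowl radius per azimuth bin, derived
# from a stream of (azimuth, distance) depth measurements.
N_BINS = 8
DEFAULT_RADIUS = 10.0   # bowl wall with no nearby obstacle (assumption)

def virtual_bowl(depth_stream):
    """depth_stream: iterable of (azimuth_rad, distance_m) samples.
    Returns one bowl-wall radius per azimuth bin."""
    radii = [DEFAULT_RADIUS] * N_BINS
    for azimuth, dist in depth_stream:
        b = int((azimuth % (2 * math.pi)) / (2 * math.pi) * N_BINS)
        radii[b] = min(radii[b], dist)   # wall pulled in to nearest obstacle
    return radii

print(virtual_bowl([(0.1, 3.0), (0.2, 2.5), (3.2, 7.0)]))
```

The bowl-to-physical-camera mapping would then project each bowl point into the physical camera images, which the GPU uses when texturing the output view.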

Position detection apparatus and position detection method

A parallax image, in which a parallax value is associated with each pixel, is acquired from an image captured by a stereo camera. Each parallax value is voted into a two-dimensional matrix, and lines are extracted in a coordinate system whose X-axis indicates the parallax and whose Y-axis indicates the pixel position in the vertical direction. When a first line and a second line are detected, the X-coordinate of the start point of the second line is larger than the X-coordinate of the end point of the first line, and the difference between the slope of the first line and the slope of the second line falls within a permissible range, a determination section determines that a low position is present at a position farther from the movable body than the position in real space corresponding to the Y-coordinate of the start point coordinates.
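The determination condition on the two extracted lines can be sketched directly. The line representation, example coordinates, and slope tolerance below are illustrative assumptions; only the condition itself (second line's start X exceeds first line's end X, slopes within a permissible range) follows the abstract.

```python
# Sketch of the determination step in (parallax X, image-row Y) space.
SLOPE_TOLERANCE = 0.1   # permissible slope difference (assumption)

def low_position_present(line1, line2) -> bool:
    """Each line: ((x_start, y_start), (x_end, y_end), slope).
    True when the second line starts at a larger parallax than the first
    line's end and the slopes nearly match, indicating a low position
    farther from the movable body."""
    (_, _), (x1_end, _), s1 = line1
    (x2_start, _), (_, _), s2 = line2
    return x2_start > x1_end and abs(s1 - s2) <= SLOPE_TOLERANCE

road = ((40, 300), (10, 180), -0.25)   # first line (nearer road surface)
dip  = ((15, 160), (5, 120), -0.28)    # second line beyond the first
print(low_position_present(road, dip))
```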