G06V10/145

METHOD FOR ILLUMINATING VEHICLE SURROUNDINGS, AND MOTOR VEHICLE

A method for illuminating vehicle surroundings of a motor vehicle that comprises an illumination device and a detection device, wherein the illumination device is set up to illuminate at least part of a solid angle region of the vehicle surroundings with different illumination patterns, in particular with visible light, wherein the illumination patterns each predefine illumination intensities for different solid angle subregions of the solid angle region, comprising the steps of: illuminating the vehicle surroundings with a first of the illumination patterns by means of the illumination device, detecting a light pattern that results from the illumination of the vehicle surroundings with the first illumination pattern by means of the detection device, selecting a second of the illumination patterns on the basis of the detected light pattern, and illuminating the vehicle surroundings with the second illumination pattern by means of the illumination device.
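The pattern-selection loop described above can be sketched in a few lines. This is a hypothetical illustration, not the patented method itself: the patterns are modeled as 2D intensity arrays over the solid angle subregions, and the second pattern is chosen by dimming subregions where the detected light pattern indicates strong reflection (for example a retroreflector or an oncoming vehicle); the threshold and dimming values are invented for the sketch.

```python
import numpy as np

def select_next_pattern(first_pattern, detected_light, glare_threshold=0.8):
    """Select the second illumination pattern from the detected light pattern.

    first_pattern and detected_light are 2D arrays indexed by solid angle
    subregion; subregions whose detected intensity exceeds the threshold
    are dimmed in the returned pattern.
    """
    second_pattern = first_pattern.copy()
    second_pattern[detected_light > glare_threshold] = 0.1  # dim glare spots
    return second_pattern

first = np.ones((4, 8))        # first pattern: uniform illumination
detected = np.zeros((4, 8))
detected[1, 3] = 0.95          # strong reflection in one subregion
second = select_next_pattern(first, detected)
```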

OBJECT RECOGNITION DEVICE

An object recognition device (10) includes a holding body (100), a light source (200), and an optical sensor (300). The holding body (100) extends in one direction. The light source (200) is attached to the holding body (100) along the one direction. The light source (200) applies light toward at least a part of a space (S) located on a side of the holding body (100) with respect to the one direction of the holding body (100), a front of the space (S), and a back of the space (S). The optical sensor (300) is attached to the holding body (100). At least a part of a visual field of the optical sensor (300) faces at least a part of the space (S), the front of the space (S), and the back of the space (S).

System Adapted to Detect Road Condition in a Vehicle and a Method Thereof

A system adapted to detect a road condition in a vehicle, and a method thereof, use geometrical laser projections and an image processing system. The system includes a laser source, an imaging unit and at least one processing unit. The laser source is adapted to project geometrical laser projections on the road. The imaging unit is adapted to capture images of the geometrical projections. The processing unit is configured to calculate a surface reflectance for the projected geometrical projection. Further, it is configured to compute geometrical parameters of the projections at regular time intervals based on the captured images. It determines a road condition based on the surface reflectance and the geometrical parameters.
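The decision step can be illustrated with a minimal classifier combining the two cues the abstract names: surface reflectance and the geometry of the projection. All thresholds and the use of line widths as the geometrical parameter are assumptions for the sketch, not values from the patent.

```python
def classify_road(reflectance, line_widths, dry_reflectance=0.3,
                  width_tolerance=0.1):
    """Classify a road condition from surface reflectance and the measured
    widths of a projected laser line (hypothetical thresholds)."""
    expected = sum(line_widths) / len(line_widths)
    deformation = max(abs(w - expected) for w in line_widths)
    if reflectance > 2 * dry_reflectance:
        return "wet"      # specular reflection raises measured reflectance
    if deformation > width_tolerance * expected:
        return "uneven"   # projected geometry deviates -> pothole or bump
    return "dry"
```

For example, a uniform line with high reflectance would be labeled "wet", while a locally widened line at normal reflectance would be labeled "uneven".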

Associating three-dimensional coordinates with two-dimensional feature points
11580662 · 2023-02-14

An example method includes causing a light projecting system of a distance sensor to project a three-dimensional pattern of light onto an object, wherein the three-dimensional pattern of light comprises a plurality of points of light that collectively forms the pattern, causing a light receiving system of the distance sensor to acquire an image of the three-dimensional pattern of light projected onto the object, causing the light receiving system to acquire a two-dimensional image of the object, detecting a feature point in the two-dimensional image of the object, identifying an interpolation area for the feature point, and computing three-dimensional coordinates for the feature point by interpolating using three-dimensional coordinates of two points of the plurality of points that are within the interpolation area.
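The final step, computing 3D coordinates for a 2D feature point from two pattern points in its interpolation area, can be sketched as a distance-weighted linear interpolation. The weighting scheme is an assumption for illustration; the abstract only states that interpolation between the two points' 3D coordinates is used.

```python
import math

def interpolate_3d(feature_2d, p1, p2):
    """Compute 3D coordinates for a 2D feature point by interpolating
    between two projected pattern points inside its interpolation area.

    p1 and p2 are ((u, v), (x, y, z)) pairs: the 2D image position of a
    pattern dot plus the 3D coordinates measured for that dot.
    """
    (u1, v1), xyz1 = p1
    (u2, v2), xyz2 = p2
    d1 = math.hypot(feature_2d[0] - u1, feature_2d[1] - v1)
    d2 = math.hypot(feature_2d[0] - u2, feature_2d[1] - v2)
    t = d1 / (d1 + d2)  # 0 at p1, 1 at p2
    return tuple(a + t * (b - a) for a, b in zip(xyz1, xyz2))

# feature midway between two dots -> 3D coordinates midway as well
mid = interpolate_3d((5, 0), ((0, 0), (0.0, 0.0, 1.0)),
                     ((10, 0), (0.0, 0.0, 2.0)))
```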

Calibration method for fingerprint sensor and display device using the same

Provided herein are a calibration method for a fingerprint sensor and a display device using the calibration method, where, in the calibration method for a fingerprint sensor, the fingerprint sensor includes a substrate, a light-blocking layer located on a first surface of the substrate and having openings formed in a light-blocking mask, a light-emitting element layer located on the light-blocking layer and having a plurality of light-emitting elements, and a sensor layer located on a second surface of the substrate and having a plurality of photosensors; and the calibration method includes generating calibration data through white calibration and dark calibration, and applying offsets to the plurality of photosensors using the calibration data.
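The white/dark calibration step corresponds to a standard flat-field correction, which can be sketched as follows. The patent abstract only states that offsets derived from the two calibrations are applied to the photosensors; the specific normalization below is a common technique assumed for illustration.

```python
def calibrate(raw, white_frame, dark_frame):
    """Flat-field correction for a row of photosensor readings using a
    white calibration frame and a dark calibration frame."""
    corrected = []
    for r, w, d in zip(raw, white_frame, dark_frame):
        span = (w - d) or 1               # avoid division by zero
        corrected.append((r - d) / span)  # 0.0 = dark level, 1.0 = white level
    return corrected

# readings halfway between dark and white map to 0.5 regardless of the
# per-sensor offset captured in the dark frame
out = calibrate([150, 160], [200, 210], [100, 110])
```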

Calibration method for fingerprint sensor and display device using the same

Provided herein are a calibration method for a fingerprint sensor and a display device using the calibration method, where, in the calibration method for a fingerprint sensor, the fingerprint sensor includes a substrate, a light-blocking layer located on a first surface of the substrate and having openings formed in a light-blocking mask, a light-emitting element layer located on the light-blocking layer and having a plurality of light-emitting elements, and a sensor layer located on a second surface of the substrate and having a plurality of photosensors; and the calibration method includes generating calibration data through white calibration and dark calibration, and applying offsets to the plurality of photosensors using the calibration data.

Depth image acquiring apparatus, control method, and depth image acquiring system

It is intended to enhance the performance of acquiring a depth image. A depth image acquiring apparatus includes a light emitting diode, a TOF sensor, and a filter. The light emitting diode irradiates modulated light toward a detection area, which is the area in which a depth image is to be acquired for distance detection. The TOF sensor receives the incident light produced when the light irradiated from the light emitting diode is reflected by an object lying in the detection area, thereby outputting a signal used to produce the depth image. Of the light made incident toward the TOF sensor, the filter passes more light having a wavelength within a predetermined pass bandwidth than light having a wavelength outside that pass bandwidth. At least one of the light emitting diode, the TOF sensor, or the arrangement of the filter is controlled in accordance with a temperature of the light emitting diode or the TOF sensor. The present technique can be applied, for example, to a system for acquiring a depth image by using a TOF system.
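The motivation for the temperature-dependent control can be illustrated numerically: an LED's peak emission wavelength drifts with temperature, and a drifted peak can leave the filter's passband. The ~0.3 nm/°C drift rate and the 850 nm / ±10 nm filter figures below are typical values for IR LEDs and bandpass filters assumed for the sketch, not values from the patent.

```python
def led_peak_wavelength(base_nm, temp_c, ref_temp_c=25.0, drift_nm_per_c=0.3):
    """Estimate LED peak wavelength at a given temperature, assuming a
    linear drift from the reference temperature."""
    return base_nm + drift_nm_per_c * (temp_c - ref_temp_c)

def filter_in_band(peak_nm, center_nm=850.0, half_width_nm=10.0):
    """Check whether the (drifted) LED peak still falls in the filter
    passband."""
    return abs(peak_nm - center_nm) <= half_width_nm

# at 25 C the 850 nm LED sits at the filter center; at 85 C the peak has
# drifted well outside the passband, so the apparatus would adjust the
# LED drive, the sensor, or the filter arrangement
cold_ok = filter_in_band(led_peak_wavelength(850.0, 25.0))
hot_ok = filter_in_band(led_peak_wavelength(850.0, 85.0))
```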
