H04N13/243

NON-RIGID STEREO VISION CAMERA SYSTEM

A long-baseline, long-depth-range stereo vision system is provided that is suitable for use in non-rigid assemblies, where relative motion between two or more cameras of the system does not degrade estimates of a depth map. The stereo vision system may include a processor that tracks camera parameters as a function of time to rectify images from the cameras even during fast and slow perturbations to camera positions. The system requires neither factory calibration nor manual calibration during regular operation, which simplifies its manufacture.
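Under a standard pinhole stereo model, recovered depth scales directly with the baseline, which illustrates why continuously re-estimating camera parameters matters in a non-rigid assembly. A minimal sketch (the focal length, baseline values, and function name are illustrative, not from the patent):

```python
import math

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo relation: Z = f * B / d (Z, B in metres; f, d in pixels)."""
    if disparity_px <= 0:
        return math.inf  # zero disparity corresponds to a point at infinity
    return focal_px * baseline_m / disparity_px

# A non-rigid mount lets the baseline drift over time; re-estimating it
# keeps the depth map consistent without a factory calibration.
f = 1000.0  # focal length in pixels (assumed)
d = 20.0    # measured disparity in pixels
print(depth_from_disparity(d, f, baseline_m=0.50))  # 25.0
print(depth_from_disparity(d, f, baseline_m=0.52))  # deeper after baseline drift
```

The same disparity maps to a different depth once the baseline changes, so a fixed factory calibration would bias every depth estimate on a flexing mount.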

Image pickup device and electronic system including the same
11570333 · 2023-01-31

An image pickup device includes first and second cameras, and first and second image signal processors (ISP). The first camera obtains a first image of an object. The second camera obtains a second image of the object. The first ISP performs a first auto focusing (AF), a first auto white balancing (AWB) and a first auto exposing (AE) for the first camera based on a first region-of-interest (ROI) in the first image, and obtains a first distance between the object and the first camera based on a result of the first AF. The second ISP calculates first disparity information associated with the first and second images based on the first distance, moves a second ROI in the second image based on the first disparity information, and performs a second AF, a second AWB and a second AE for the second camera based on the moved second ROI.
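The disparity that the second ISP derives from the AF distance follows the pinhole relation d = f·B/Z; the second ROI is then shifted by that disparity. A minimal sketch (focal length, baseline, and helper names are assumed, not from the abstract):

```python
def disparity_from_distance(distance_m, focal_px, baseline_m):
    """Expected horizontal disparity (pixels) for an object at a given depth."""
    return focal_px * baseline_m / distance_m

def shift_roi(roi, disparity_px):
    """Move a first-camera ROI (x, y, w, h) into the second camera's image.
    For a left-to-right camera pair, the ROI shifts left by the disparity."""
    x, y, w, h = roi
    return (x - int(round(disparity_px)), y, w, h)

# The distance comes from the first camera's AF result (per the abstract);
# the focal length and baseline here are example values.
d = disparity_from_distance(distance_m=2.0, focal_px=1400.0, baseline_m=0.02)
print(shift_roi((500, 300, 100, 100), d))  # (486, 300, 100, 100)
```

Aligning the second ROI this way lets the second camera's AF/AWB/AE operate on the same physical object rather than on whatever happens to sit at the first ROI's coordinates.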

Autonomous unmanned vehicles for responding to situations

Autonomous unmanned vehicles (UVs) for responding to situations are described. Embodiments include UVs that launch upon detection of a situation, operate in the area of the situation, and collect and send information about the situation. The UVs may launch from a vehicle involved in the situation, a vehicle responding to the situation, or from a fixed station. In other embodiments, the UVs also provide communications relays to the situation and may facilitate access to the situation by responders. The UVs further may act as decoupled sensors for vehicles. In still other embodiments, the collected information may be used to recreate the situation as it happened.

Method and apparatus of adaptive infrared projection control

A processor or control circuit of an apparatus receives data of an image based on sensing by one or more image sensors. The processor or control circuit also detects a region of interest (ROI) in the image. The processor or control circuit then adaptively controls a light projector with respect to projecting light toward the ROI.
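One plausible form of such adaptive control is a proportional loop that drives the ROI's mean intensity toward a target by adjusting projector power. A hedged sketch (the target level, gain, and function name are invented for illustration; the abstract does not specify a control law):

```python
def projector_level(roi_pixels, target=128, gain=0.5, current=0.5):
    """Proportional control: raise IR projector power when the ROI is darker
    than the target mean intensity, lower it when brighter.
    Returns a power level clamped to [0, 1]."""
    mean = sum(roi_pixels) / len(roi_pixels)
    error = (target - mean) / 255.0
    return min(1.0, max(0.0, current + gain * error))

dark_roi = [40] * 100    # under-illuminated ROI: power should increase
bright_roi = [220] * 100 # over-illuminated ROI: power should decrease
print(projector_level(dark_roi))    # above 0.5
print(projector_level(bright_roi))  # below 0.5
```

Steering the projector from ROI statistics, rather than whole-frame statistics, concentrates illumination where the detected region actually needs it.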

Encoding apparatus and encoding method, decoding apparatus and decoding method
11716487 · 2023-08-01

There is provided an encoding apparatus, an encoding method, a decoding apparatus, and a decoding method that make it possible to acquire two-dimensional image data of a viewpoint corresponding to a predetermined display image generation method, together with depth image data, independently of the viewpoint at the time of image pickup. A conversion unit generates, from three-dimensional data of an image pickup object, two-dimensional image data of a plurality of viewpoints corresponding to a predetermined display image generation method, and depth image data indicating the position of each pixel of the image pickup object in the depth direction. An encoding unit encodes the two-dimensional image data and the depth image data generated by the conversion unit. A transmission unit transmits the two-dimensional image data and the depth image data encoded by the encoding unit. The present disclosure can be applied, for example, to an encoding apparatus.
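Depth image data requires mapping metric depth into a fixed-bit-width image before encoding; a common (though here assumed, not specified by the abstract) convention is inverse-depth quantization to 16 bits, which preserves more precision near the camera:

```python
def encode_depth(depth_m, z_near=0.5, z_far=100.0):
    """Quantize metric depth in [z_near, z_far] to a 16-bit code using
    inverse-depth spacing (finer steps close to the camera)."""
    inv = (1.0 / depth_m - 1.0 / z_far) / (1.0 / z_near - 1.0 / z_far)
    return round(inv * 65535)

def decode_depth(code, z_near=0.5, z_far=100.0):
    """Invert the quantization back to metric depth."""
    inv = code / 65535 * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far
    return 1.0 / inv

z = 3.0
code = encode_depth(z)
print(code, decode_depth(code))  # round-trips to ~3.0 m
```

Packing depth this way lets the depth image ride through ordinary 16-bit image codecs alongside the two-dimensional image data.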

SYSTEM AND METHOD FOR DETERMINING DIRECTIONALITY OF IMAGERY USING HEAD TRACKING
20230231983 · 2023-07-20

There is provided a system and method for reinstating directionality of onscreen displays of three-dimensional (3D) imagery using sensor data capturing eye location of a user. The method can include: receiving the sensor data capturing the eye location of the user; tracking the location of the eyes of the user relative to a screen using the captured sensor data; determining an updated rendering of the onscreen imagery using off-axis projective geometry based on the tracked location of the eyes of the user to simulate an angled viewpoint of the onscreen imagery from the perspective of the location of the user; and outputting the updated rendering of the onscreen imagery on a display screen.
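Off-axis projective geometry from a tracked eye position is commonly realized as an asymmetric view frustum; the sketch below uses the standard glFrustum-style construction (the screen size, near/far planes, and function name are illustrative, not from the application):

```python
def off_axis_projection(eye, screen_w, screen_h, near=0.1, far=100.0):
    """Asymmetric-frustum projection for a screen centred at the origin in
    the z=0 plane, viewed from eye = (ex, ey, ez) with ez > 0. As the
    tracked eye moves off-centre, the frustum skews, simulating an angled
    viewpoint of the onscreen imagery."""
    ex, ey, ez = eye
    # Frustum bounds on the near plane, scaled from the screen rectangle.
    l = (-screen_w / 2 - ex) * near / ez
    r = ( screen_w / 2 - ex) * near / ez
    b = (-screen_h / 2 - ey) * near / ez
    t = ( screen_h / 2 - ey) * near / ez
    # Standard glFrustum-style matrix (row-major).
    return [
        [2 * near / (r - l), 0, (r + l) / (r - l), 0],
        [0, 2 * near / (t - b), (t + b) / (t - b), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0],
    ]

# A centred eye yields a symmetric frustum (zero skew terms); an off-centre
# eye yields non-zero skew in the third column.
centred = off_axis_projection((0.0, 0.0, 0.6), 0.5, 0.3)
shifted = off_axis_projection((0.1, 0.0, 0.6), 0.5, 0.3)
print(centred[0][2], shifted[0][2])
```

Recomputing this matrix every frame from the tracked eye location is what produces the updated rendering described in the abstract: the rendered scene appears fixed in space while the viewer moves.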