G01S3/00

Method and apparatus for mitigating a change in an orientation of an antenna

Aspects of the subject disclosure may include a system configured for receiving sensing data from an orientation detector coupled to an antenna, determining, according to the sensing data, that an aperture of the antenna has shifted from a first orientation to a second orientation, and configuring a transmitter to generate an adjusted electromagnetic wave that is supplied to a feed point of the antenna to offset the shift in orientation of the antenna. Other embodiments are disclosed.
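The compensation loop described above can be sketched minimally; the linear one-to-one mapping between the sensed aperture shift and a counteracting phase offset is a hypothetical model (as are all names here), since the abstract only states that an adjusted wave is supplied to the feed point:

```python
def compensated_feed_phase(sensed_deg: float, nominal_deg: float,
                           base_phase_deg: float) -> float:
    """Return a feed-point phase adjusted to offset the aperture shift.

    Hypothetical linear model: the correction is equal and opposite to
    the orientation shift reported by the detector.
    """
    shift = sensed_deg - nominal_deg          # shift from first to second orientation
    return (base_phase_deg - shift) % 360.0   # counteracting phase offset
```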

DIGITAL PIXEL WITH EXTENDED DYNAMIC RANGE
20210029286 · 2021-01-28

Examples of an apparatus are disclosed. In some examples, an apparatus includes a photodiode configured to generate a charge in response to incident light; a measurement capacitor to store at least a part of the charge to generate a voltage; and an analog-to-digital converter (ADC) circuit configured to: in a first measurement period, compare the voltage at the measurement capacitor against a static threshold voltage to generate a first output; in a second measurement period, compare the voltage against a varying threshold voltage to generate a second output, wherein the varying threshold voltage varies with time according to a pre-determined pattern; and generate a final output representing an intensity of the incident light based on either the first output or the second output.
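The two measurement periods can be sketched as follows. This is one illustrative reading, assuming the static-threshold comparison acts as a time-to-saturation code for bright light and the ramp comparison quantizes the settled voltage for dim light; the names and sample values are invented:

```python
def pixel_adc_output(samples, static_threshold, ramp):
    """Combine two comparator measurements into one digital output.

    samples: capacitor voltage over time; ramp: time-varying threshold.
    """
    # First period: record when (if ever) the voltage crosses the static
    # threshold -- a time-to-saturation code for high-intensity light.
    first = next((t for t, v in enumerate(samples) if v >= static_threshold), None)
    if first is not None:
        return ("time_to_saturation", first)
    # Second period: compare the settled voltage against a rising ramp;
    # the crossing index quantizes the voltage for low-intensity light.
    settled = samples[-1]
    second = next((t for t, thr in enumerate(ramp) if thr >= settled), len(ramp) - 1)
    return ("ramp_code", second)
```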

Image pickup device and method of tracking subject thereof

The present invention provides an image pickup device that recognizes the object the user is attempting to capture as the subject, tracks the movement of that subject, and can continue tracking the subject even when it leaves the capturing area, so that the subject can always be reliably brought into focus. The image pickup device includes a main camera that captures the subject, an EVF (electronic viewfinder) that displays the image captured by the main camera, a sub-camera that captures the subject using a wider capturing region than the main camera, and a processing unit that extracts the subject from the images captured by the main camera and the sub-camera, tracks the extracted subject, and brings the subject into focus when an image of the subject is actually captured. When the subject moves outside the capturing region of the main camera, the processing unit tracks the subject extracted from the image captured by the sub-camera.
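The handoff between the two cameras can be sketched as a simple containment test; the axis-aligned field-of-view rectangles and all names below are hypothetical stand-ins for the device's actual geometry:

```python
def tracking_source(subject_xy, main_fov, sub_fov):
    """Choose which camera's image to track the subject in.

    FOVs are axis-aligned rectangles (x0, y0, x1, y1) in a shared frame;
    the sub-camera's rectangle is assumed to enclose the main camera's.
    """
    def inside(pt, rect):
        x, y = pt
        x0, y0, x1, y1 = rect
        return x0 <= x <= x1 and y0 <= y <= y1

    if inside(subject_xy, main_fov):
        return "main"   # subject in frame: track and focus normally
    if inside(subject_xy, sub_fov):
        return "sub"    # left the main frame: wider sub-camera keeps tracking
    return "lost"
```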

Polarization illumination using acousto-optic structured light in 3D depth sensing
10904514 · 2021-01-26

A depth camera assembly (DCA) includes a polarized structured light generator, an imaging device and a controller. The structured light generator illuminates a local area with one or more polarized structured light patterns in accordance with emission instructions from the controller. The structured light generator comprises an illumination source, an acousto-optic device, and a polarizing element. The acousto-optic device generates a structured light pattern from an optical beam emitted from the illumination source. The polarizing element generates the one or more polarized structured light patterns using the structured light pattern. The imaging device captures portions of the one or more polarized structured light patterns scattered or reflected from the local area. The controller determines depth information, a degree of polarization, and an index of refraction map for the local area based at least in part on the captured portions of the one or more scattered or reflected polarized structured light patterns.
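One standard quantity the controller could derive from the captured polarized patterns is the degree of linear polarization. The formula below is the textbook definition from the maximum and minimum intensities observed through an analyzer, not a specific computation claimed in the abstract:

```python
def degree_of_polarization(i_max: float, i_min: float) -> float:
    """Degree of linear polarization from analyzer intensity extremes.

    DoP = (I_max - I_min) / (I_max + I_min);
    0 = unpolarized light, 1 = fully linearly polarized.
    """
    return (i_max - i_min) / (i_max + i_min)
```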

Positional zero latency

Based on view tracking data, a viewer's view direction to a three-dimensional (3D) scene depicted by a first video image is determined. The first video image has been streamed in a video stream to a streaming client device before a first time point and rendered by the streaming client device to the viewer at the first time point. Based on the viewer's view direction, a target view portion is identified in a second video image to be streamed in the video stream to the streaming client device and rendered at a second time point subsequent to the first time point. The target view portion is encoded into the video stream with a higher target spatiotemporal resolution than that used to encode the remaining non-target view portions of the second video image.
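A toy sketch of the allocation step: map the view direction onto a target tile and give only that tile the high spatiotemporal budget. The tile grid, bitrate figures, and names are illustrative assumptions, not details from the abstract:

```python
def allocate_budget(view_dir_xy, grid, high_kbps, low_kbps):
    """Assign a per-tile bitrate: high for the viewed tile, low elsewhere.

    view_dir_xy: normalized (x, y) in [0, 1); grid: (cols, rows).
    """
    cols, rows = grid
    target = (int(view_dir_xy[0] * cols), int(view_dir_xy[1] * rows))
    return {
        (c, r): (high_kbps if (c, r) == target else low_kbps)
        for c in range(cols)
        for r in range(rows)
    }
```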

AUTOMATIC EXPOSURE MODULE FOR AN IMAGE ACQUISITION SYSTEM
20200404149 · 2020-12-24

A method for automatically determining exposure settings for an image acquisition system comprises maintaining a plurality of look-up tables, each associated with a corresponding light condition and storing image exposure settings for corresponding distance values between a subject and the image acquisition system. An image of a subject is acquired from a camera module, and the light condition occurring during the acquisition is determined based on the acquired image. A distance between the subject and the camera module during the acquisition is calculated. The method then determines whether a correction of the image exposure settings for the camera module is required based on the calculated distance and the determined light condition. Responsive to correction being required, the method selects image exposure settings corresponding to the calculated distance from the look-up table corresponding to the determined light condition, and acquires a new image using the selected image exposure settings.
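The table-driven selection can be sketched as below. The light-condition keys, distance keys, and (shutter, ISO) entries are invented placeholders, and nearest-distance lookup stands in for whatever matching or interpolation the method actually uses:

```python
# Hypothetical LUTs: light condition -> {distance_cm: (shutter_s, iso)}
EXPOSURE_LUTS = {
    "low_light": {30: (1 / 30, 800), 60: (1 / 60, 1600)},
    "daylight": {30: (1 / 250, 100), 60: (1 / 500, 100)},
}

def select_exposure(light_condition: str, distance_cm: float):
    """Pick exposure settings from the LUT for the measured conditions."""
    lut = EXPOSURE_LUTS[light_condition]
    nearest = min(lut, key=lambda d: abs(d - distance_cm))  # nearest distance key
    return lut[nearest]
```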