H04N9/797

SURGICAL MICROSCOPE HAVING A CONNECTION REGION FOR ATTACHING A PROTECTIVE GLASS MODULE
20230093637 · 2023-03-23

A surgical microscope includes an image capture unit having an image sensor, a detection beam path, an image evaluation unit, and a connection region for attaching a protective glass module with an objective protective glass. The image sensor has a detection region which includes a used detection region for capturing the object region and a partial detection region that is not assigned to the used detection region. The image capture unit is configured such that, when the protective glass module with the objective protective glass is arranged at the connection region, a detail of the protective glass module with the objective protective glass is capturable by the partial detection region of the image sensor. The image evaluation unit is configured to generate a signal when an objective protective glass is detectable by evaluating the image data of the partial detection region of the image sensor.
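To make the partial-detection-region idea concrete, here is a minimal Python sketch. The region coordinates, the brightness threshold, and the detection criterion are all assumptions for illustration; the patent does not disclose a specific evaluation method.

```python
import numpy as np

# Hypothetical split of the sensor's detection region: a "used" region that
# images the object, and a small "partial" region that only sees a detail of
# the protective glass module. All coordinates and the brightness-based
# criterion below are illustrative assumptions.
USED = (slice(0, 100), slice(0, 100))       # used detection region (object image)
PARTIAL = (slice(100, 110), slice(0, 100))  # partial region viewing the glass module

def glass_present(frame: np.ndarray, threshold: float = 50.0) -> bool:
    """Signal that a protective glass is detectable in the partial region."""
    detail = frame[PARTIAL]
    return float(detail.mean()) > threshold

# Usage: a frame whose partial-region rows are bright (e.g., light reflected
# off the glass edge) triggers the signal.
frame = np.zeros((110, 100))
frame[100:110, :] = 200.0
assert glass_present(frame)
```

The used region would continue to feed the normal imaging path; only the small partial strip is evaluated for the presence signal.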

Determination of luminance values using image signal processing pipeline

Apparatuses, systems, and techniques to receive, at one or more processors associated with an image signal processing (ISP) pipeline for a camera, an image generated using an image sensor of the camera, wherein the image comprises a plurality of channels associated with color information of the image; process, by the one or more processors, the plurality of channels of the image to generate a plurality of luminance and/or radiance values; generate, by the one or more processors, an updated version of the image using the plurality of luminance and/or radiance values; and output the updated version of the image.
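A minimal sketch of such an ISP stage follows. The Rec.709 luma weights and the square-root tone-mapping update are illustrative choices on my part; the abstract does not specify how the luminance values are used to generate the updated image.

```python
import numpy as np

# Sketch of one ISP stage: derive a per-pixel luminance plane from the RGB
# channels, then emit an updated image. Rec.709 weights and the sqrt
# gamma-compression update are assumptions, not the patent's actual pipeline.
REC709 = np.array([0.2126, 0.7152, 0.0722])

def luminance(rgb: np.ndarray) -> np.ndarray:
    """Weighted sum over the color channels (last axis); weights sum to 1."""
    return rgb @ REC709

def update_image(rgb: np.ndarray) -> np.ndarray:
    """Scale each pixel so its luminance is gamma-compressed (sqrt curve)."""
    y = luminance(rgb)
    gain = np.sqrt(y / 255.0) * 255.0 / np.maximum(y, 1e-6)
    return rgb * gain[..., None]
```

Because the gain is computed from luminance and applied uniformly to all three channels, hue is preserved while brightness is remapped.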

Auto White Balance Adjusting Method and Auto White Balance Adjusting System for Calibrating Images by Using Dual Color Spaces

An auto white balance adjusting method includes determining a white pixel area according to a standard of a first color space, selecting a plurality of pixels of an image according to the white pixel area, generating an average color value of the plurality of pixels in the first color space, converting the average color value in the first color space to three primary color gains in a second color space, generating three primary color target gains according to the three primary color gains and a color temperature curve, and gradually adjusting a white balance of the image to meet the three primary color target gains according to the average color value in the first color space and the three primary color gains in the second color space. The first color space and the second color space are different.
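The steps above can be sketched in Python as follows. YCbCr as the first color space, RGB as the second, the chroma limit defining the white pixel area, and the fixed step factor for the gradual adjustment are all assumptions; the color-temperature curve is reduced to a trivial stand-in.

```python
import numpy as np

# Sketch of the dual-color-space AWB flow. Assumed details: YCbCr is the
# first color space (white pixels have near-zero chroma), RGB is the second,
# and the color-temperature curve is omitted.

def white_pixels(ycbcr: np.ndarray, chroma_limit: float = 8.0) -> np.ndarray:
    """Select pixels inside the white area of the first color space."""
    cb, cr = ycbcr[..., 1] - 128.0, ycbcr[..., 2] - 128.0
    return ycbcr[(np.abs(cb) < chroma_limit) & (np.abs(cr) < chroma_limit)]

def rgb_gains_from_avg(avg_ycbcr: np.ndarray) -> np.ndarray:
    """Convert the average YCbCr value into per-channel RGB gains (BT.601)."""
    y, cb, cr = avg_ycbcr[0], avg_ycbcr[1] - 128.0, avg_ycbcr[2] - 128.0
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    gray = (r + g + b) / 3.0
    return gray / np.array([r, g, b])  # gains that pull toward R = G = B

def step_gains(current: np.ndarray, target: np.ndarray, step: float = 0.1) -> np.ndarray:
    """Gradually move the applied gains toward the target gains."""
    return current + step * (target - current)
```

Repeating `step_gains` over successive frames produces the gradual convergence the claim describes, avoiding a visible color jump.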

IMAGE SENSOR
20220232179 · 2022-07-21

An image sensor includes a first unit pixel including a first sub-pixel and a second sub-pixel, a second unit pixel including a third sub-pixel and a fourth sub-pixel, a timing controller configured to apply a first effective integration time to the first sub-pixel and the fourth sub-pixel, such that a first sensing signal and a fourth sensing signal are generated from the first sub-pixel and the fourth sub-pixel, respectively, and to apply a second effective integration time shorter than the first effective integration time to the second sub-pixel and the third sub-pixel, such that a second sensing signal and a third sensing signal are generated from the second sub-pixel and the third sub-pixel, respectively, and an analog-to-digital converter configured to perform an averaging operation on the first sensing signal and the fourth sensing signal or on the second sensing signal and the third sensing signal.
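A toy numeric model of this readout scheme may help. The linear exposure model, the specific integration times, and the full-well clipping level are assumptions; only the pairing (first/fourth sub-pixels long, second/third short) and the pairwise averaging come from the abstract.

```python
# Toy model of the described readout. Assumptions: scene radiance is a scalar,
# the sub-pixel signal is linear in the effective integration time, and the
# ADC's averaging operation is a simple arithmetic mean of the pair.
LONG_T, SHORT_T = 8.0, 1.0  # first (long) and second (short) integration times

def sense(radiance: float, t: float, full_well: float = 1000.0) -> float:
    """Sub-pixel signal: linear in exposure time, clipped at saturation."""
    return min(radiance * t, full_well)

def read_unit_pixels(radiance: float):
    """First/fourth sub-pixels get LONG_T; second/third get SHORT_T."""
    s1, s4 = sense(radiance, LONG_T), sense(radiance, LONG_T)
    s2, s3 = sense(radiance, SHORT_T), sense(radiance, SHORT_T)
    long_avg = (s1 + s4) / 2.0   # ADC averages the long-exposure pair
    short_avg = (s2 + s3) / 2.0  # ...or the short-exposure pair
    return long_avg, short_avg
```

The long-exposure average preserves detail in dark scenes while the short-exposure average survives saturation, which is the usual motivation for mixing two integration times within one pixel group.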

Apparatus and method for recording and storing video

A video recording device includes an image capturing module, a storage unit, and a processing unit. The image capturing module is configured to capture a raw video in a first video format. The processing unit is configured to convert video sessions other than the most recently captured video content from the first video format into a second video format. The second video format is inferior to the first video format.
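The storage policy this describes can be sketched briefly. The `Session` record, the `compress` stand-in, and the 10:1 size ratio are hypothetical; the abstract names no concrete formats.

```python
from dataclasses import dataclass

# Sketch of the policy: every session except the most recently captured one
# is converted from the raw first format to an inferior, more compact second
# format. Formats and the compression ratio are illustrative assumptions.

@dataclass
class Session:
    name: str
    fmt: str        # "raw" (first format) or "compressed" (second format)
    size_mb: float

def compress(s: Session) -> Session:
    """Stand-in transcode: the second format is inferior but much smaller."""
    return Session(s.name, "compressed", s.size_mb / 10.0)

def reclaim_storage(sessions: list[Session]) -> list[Session]:
    """Convert every session except the latest; keep the latest raw."""
    if not sessions:
        return sessions
    *old, latest = sessions
    return [compress(s) if s.fmt == "raw" else s for s in old] + [latest]
```

Keeping only the newest session in the raw first format bounds storage growth while preserving full quality for the recording most likely to be reviewed or re-edited.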

Apparatus and methods for tracking salient features

Apparatus and methods for detecting and utilizing saliency in digital images. In one implementation, salient objects may be detected based on analysis of pixel characteristics. The least frequently occurring pixel values may be deemed salient. Pixel values in an image may be compared to a reference. Color distance may be determined based on a difference between the reference color and the pixel color. Individual image channels may be scaled when determining saliency in a multi-channel image. Areas of high saliency may be analyzed to determine object position, shape, and/or color. Multiple saliency maps may be combined additively or multiplicatively in order to improve detection performance (e.g., reduce the number of false positives). The methodologies described herein may enable robust tracking of objects while utilizing fewer computational resources, and an efficient implementation may allow them to be used, for example, on board a robot (or autonomous vehicle) or a mobile computing platform.
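The rarity-based saliency idea can be sketched with a histogram. The bin count, the min-max normalization, and the combination helper are illustrative choices, not the patent's disclosed parameters.

```python
import numpy as np

# Sketch of "least frequently occurring values are salient": histogram the
# pixel values, then score each pixel by the inverse frequency of its bin.
# Bin count and normalization are illustrative assumptions.

def saliency_map(img: np.ndarray, bins: int = 16) -> np.ndarray:
    """Rarer pixel values receive higher saliency scores, normalized to [0, 1]."""
    hist, edges = np.histogram(img, bins=bins, range=(0, 256))
    idx = np.clip(np.digitize(img, edges[1:-1]), 0, bins - 1)
    freq = hist[idx].astype(float) / img.size
    sal = 1.0 - freq
    return (sal - sal.min()) / max(sal.max() - sal.min(), 1e-12)

def combine(maps, mode="mul"):
    """Combine per-channel saliency maps additively or multiplicatively."""
    stacked = np.stack(maps)
    return stacked.prod(axis=0) if mode == "mul" else stacked.mean(axis=0)
```

Multiplicative combination only keeps locations that every channel finds salient, which is one way to reduce false positives as the abstract suggests.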