H04N13/239

Avian detection systems and methods

Provided herein are detection systems and related methods for detecting moving objects in the airspace surrounding the detection system. In an aspect, the moving object is a flying animal and the detection system comprises a first imager and a second imager that together determine the position of the moving object; for moving objects within a user-selected distance of the system, the system determines whether the moving object is a flying animal, such as a bird or bat. The systems and methods are compatible with wind turbines, identifying avians of interest in the airspace around wind turbines and, if necessary, taking action to minimize avian strikes by a wind turbine blade.
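The distance gating described above can be sketched with standard rectified-stereo triangulation. This is a minimal illustration, not the patent's method; the focal length, baseline, and function names are assumptions, and it presumes the two imagers are calibrated and rectified so depth follows from horizontal disparity alone.

```python
def stereo_depth(focal_px, baseline_m, x_left, x_right):
    """Depth of a matched point from a rectified stereo pair.

    focal_px: focal length in pixels; baseline_m: imager separation in meters;
    x_left/x_right: horizontal pixel coordinates of the same object in each image.
    """
    disparity = x_left - x_right
    if disparity <= 0:
        return None  # no valid depth: point at infinity or a bad match
    return focal_px * baseline_m / disparity


def within_selected_range(depth_m, user_selected_max_m):
    """Gate further classification (bird/bat vs. other) to nearby objects only."""
    return depth_m is not None and depth_m <= user_selected_max_m
```

For example, with a 1000 px focal length and 0.5 m baseline, a 20 px disparity places the object at 25 m, inside a 100 m user-selected range but outside a 10 m one.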

Utilizing dual cameras for continuous camera capture
11575877 · 2023-02-07

An eyewear device that adjusts an on time and an off time of a pair of cameras to control heat of the cameras and of the eyewear device. Each of the pair of cameras has a duty cycle determining when the respective camera is on and off. A camera control chart contains the duty cycles. The eyewear may have a temperature sensor such that the on and off times of the cameras are a function of the temperature sensor.
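The "camera control chart" mapping temperature to on/off times can be sketched as a simple lookup. The temperature bands and duty-cycle values below are invented for illustration; the abstract does not disclose specific numbers.

```python
# Hypothetical camera control chart: upper temperature bound (°C) -> (on_s, off_s).
# Bands must be listed in ascending order of temperature.
CONTROL_CHART = [
    (40.0, (10.0, 0.0)),  # cool: cameras run continuously
    (50.0, (7.0, 3.0)),   # warm: 70% duty cycle to shed heat
    (60.0, (4.0, 6.0)),   # hot: 40% duty cycle
]


def duty_cycle_for(temp_c):
    """Return (on seconds, off seconds) for the current temperature reading.

    Above the last band, the cameras are kept off until the device cools.
    """
    for upper_bound_c, cycle in CONTROL_CHART:
        if temp_c < upper_bound_c:
            return cycle
    return (0.0, 10.0)  # over-temperature: cameras off
```

A controller would poll the temperature sensor each period and apply the returned on/off times to both cameras.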

Method for image processing of image data for image and visual effects on a two-dimensional display wall

A scene captured of a live action scene, while a display wall is positioned to be part of the live action scene, may be processed. To perform the processing, image data of the live action scene having a live actor and the display wall displaying a first rendering of a precursor image is received. Further, precursor metadata for the precursor image displayed on the display wall and display wall metadata for the display wall are determined. An image matte is accessed, where the image matte indicates a first portion associated with the live actor and a second portion associated with the precursor image on the display wall in the live action scene. Pixel display values to add or modify an image effect or a visual effect are determined, and the image data is adjusted using the pixel display values and the image matte.
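The matte-guided adjustment can be sketched per pixel: the matte selects whether each pixel receives the actor-side or wall-side effect. This is a simplified one-channel illustration with hypothetical function names; real mattes are typically soft-edged and images multi-channel.

```python
def adjust_with_matte(image, matte, actor_effect, wall_effect):
    """Apply different pixel effects to actor and display-wall regions.

    image: flat list of pixel values; matte: parallel list where 1 marks the
    live actor and 0 marks the precursor image on the display wall.
    actor_effect/wall_effect: callables mapping a pixel value to a new value.
    """
    return [
        actor_effect(px) if m else wall_effect(px)
        for px, m in zip(image, matte)
    ]
```

For instance, brightening the actor while dimming the wall region keeps the two effects cleanly separated by the matte.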

Systems and methods for improved 3-D data reconstruction from stereo-temporal image sequences

In some aspects, the techniques described herein relate to systems, methods, and computer readable media for data pre-processing for stereo-temporal image sequences to improve three-dimensional data reconstruction. In some aspects, the techniques described herein relate to systems, methods, and computer readable media for improved correspondence refinement for image areas affected by oversaturation. In some aspects, the techniques described herein relate to systems, methods, and computer readable media configured to fill missing correspondences to improve three-dimensional (3-D) reconstruction. The techniques include identifying image points without correspondences, using existing correspondences and/or other information to generate approximated correspondences, and cross-checking the approximated correspondences to determine whether the approximated correspondences should be used for the image processing.
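The fill-and-cross-check idea in the last sentence can be sketched along one scanline of disparities: approximate each missing correspondence from its neighbors, then accept the approximation only if it is consistent with them. The interpolation scheme and tolerance are assumptions for illustration, not the patent's disclosed procedure.

```python
def fill_missing_correspondences(disparities, tolerance=2.0):
    """Fill gaps in a scanline of disparities (None = missing correspondence).

    Each gap is approximated as the mean of its nearest valid neighbors,
    then cross-checked: the approximation is kept only if it deviates from
    both neighbors by no more than `tolerance` pixels.
    """
    filled = list(disparities)
    for i, d in enumerate(filled):
        if d is not None:
            continue
        left = next((filled[j] for j in range(i - 1, -1, -1)
                     if filled[j] is not None), None)
        right = next((filled[j] for j in range(i + 1, len(filled))
                      if filled[j] is not None), None)
        if left is None or right is None:
            continue  # not enough context to approximate
        approx = (left + right) / 2.0
        # Cross-check: reject approximations that disagree with either neighbor.
        if abs(approx - left) <= tolerance and abs(approx - right) <= tolerance:
            filled[i] = approx
    return filled
```

A gap between similar neighbors is filled; a gap spanning a large disparity jump (likely a depth discontinuity) is left missing rather than bridged with a bad value.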

Structured light projector and electronic device including the same

Provided is a structured light projector including a light source configured to emit light, and a nanostructure array configured to form a dot pattern based on the light emitted by the light source, the nanostructure array including a plurality of super cells each respectively including a plurality of nanostructures, wherein each of the plurality of super cells includes a first sub cell that includes a plurality of first nanostructures having a first shape distribution and a second sub cell that includes a plurality of second nanostructures having a second shape distribution.

AUGMENTED VISUALIZATION FOR A SURGICAL ROBOT USING A CAPTURED VISIBLE IMAGE COMBINED WITH A FLUORESCENCE IMAGE AND A CAPTURED VISIBLE IMAGE

An endoscope with an optical channel is held and positioned by a robotic surgical system. A capture unit captures (1) a visible first image at a first time and (2) a visible second image combined with a fluorescence image at a second time. An image processing system receives (1) the visible first image and (2) the visible second image combined with the fluorescence image and generates at least one fluorescence image. A display system outputs an output image including an artificial fluorescence image.
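One way to recover a fluorescence image from the two captures is a per-pixel difference: the visible-only first image subtracted from the visible-plus-fluorescence second image. This is a hedged sketch, not the patent's disclosed processing; it assumes the scene and exposure are essentially unchanged between the two capture times and works on a single flattened channel.

```python
def extract_fluorescence(visible_first, combined_second):
    """Estimate the fluorescence component from two sequential captures.

    visible_first: pixel values of the visible-only image (time 1).
    combined_second: pixel values of the visible + fluorescence image (time 2).
    Negative differences (noise, slight motion) are clamped to zero.
    """
    return [max(c - v, 0) for v, c in zip(visible_first, combined_second)]
```

The resulting image could then be tinted and overlaid on the visible image to form the artificial-fluorescence output the display system shows.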

SELF-RECTIFICATION OF STEREO CAMERA
20180007345 · 2018-01-04

Embodiments include a method for self-rectification of a stereo camera, wherein the stereo camera comprises a first camera and a second camera. The method comprises creating image pairs from first images taken by the first camera and second images taken by the second camera, such that each image pair comprises two images taken at essentially the same time by the first camera and the second camera, respectively. The method further comprises creating, for each image pair, matching point pairs from corresponding points in the two images of the pair, such that each matching point pair comprises one point from each of the first and second images of the respective image pair. For each matching point pair, a disparity is calculated, so that a plurality of disparities is created for each image pair, and the resulting plurality of disparities is taken into account for the self-rectification.
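The disparity statistics this method relies on can be sketched as follows: in a well-rectified pair, matched points share the same row, so vertical disparities cluster near zero, and a systematic offset signals that rectification needs updating. The threshold and function names below are assumptions for illustration.

```python
def vertical_disparities(point_pairs):
    """Vertical disparities for one image pair.

    point_pairs: list of ((x1, y1), (x2, y2)) matched points, one point from
    each image of the pair. Returns y1 - y2 for each match.
    """
    return [y1 - y2 for (x1, y1), (x2, y2) in point_pairs]


def needs_rectification(point_pairs, threshold_px=0.5):
    """Flag the pair for self-rectification if the mean vertical disparity
    across all matches exceeds the threshold."""
    ds = vertical_disparities(point_pairs)
    mean = sum(ds) / len(ds)
    return abs(mean) > threshold_px
```

A full self-rectification would go further and estimate a correcting transform from the disparity distribution; this sketch only shows the detection step.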

MODULAR CAMERA BLOCKS FOR VIRTUAL REALITY CAPTURE

An apparatus comprises: a camera module for obtaining a first image, the camera module having at least one port, each of the at least one ports being associated with an attachment position for receiving a second camera module for obtaining a second image; a processor for detecting a position of a second camera module and providing, to an image processing controller, information relating to at least one of the position of the second camera module and the first image obtained by the camera module; and a memory for storing the information relating to at least one of the position of the second camera module and the first image obtained by the camera module.

Image Processing Apparatus, Image Processing Method, and Image Communication System

Methods and apparatus provide for: capturing an image of an object, which includes a face of a person wearing an optical display apparatus by which to observe a stereoscopic image that contains a first parallax image and a second parallax image obtained when the object in a three-dimensional (3D) space is viewed from different viewpoints; identifying the optical display apparatus included in the image of the object; and generating an image of the face of the person that does not include the optical display apparatus by excluding the identified optical display apparatus, and instead by adding features of the face of the person to a region in which the identified optical display apparatus is excluded.