H04N23/16

SYSTEMS AND METHODS FOR GENERATING A DIGITAL IMAGE
20180109771 · 2018-04-19 ·

A system, method, and computer program product for generating a digital image are disclosed. In use, a first image set is captured by a first image sensor, the first image set including two or more first source images and a plurality of chrominance values, and a second image set is captured by a second image sensor, the second image set including two or more second source images and a plurality of luminance values. Next, a first image of the first source images and a second image of the first source images are combined to form a first pair of source images, and a first image of the second source images and a second image of the second source images are combined to form a second pair of source images. Additionally, a first resulting image is generated by combining the first pair of source images with the second pair of source images. Additional systems, methods, and computer program products are also presented.
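
The two-stage combination described above can be sketched as follows. This is a minimal illustration only: `blend_pair`, `fuse_chroma_luma`, the simple averaging, and the toy 4-pixel images are hypothetical stand-ins, not the claimed method.

```python
def blend_pair(img_a, img_b, w=0.5):
    """Combine two same-sensor source images pixel-wise (assumed: weighted average)."""
    return [w * a + (1 - w) * b for a, b in zip(img_a, img_b)]

def fuse_chroma_luma(chroma, luma):
    """Pair each chrominance sample with a luminance sample to form a resulting pixel."""
    return list(zip(luma, chroma))

# Toy 4-pixel exposure pairs from each sensor
first_pair = blend_pair([0.2, 0.4, 0.6, 0.8], [0.3, 0.5, 0.7, 0.9])   # chrominance sensor
second_pair = blend_pair([0.1, 0.2, 0.3, 0.4], [0.2, 0.3, 0.4, 0.5])  # luminance sensor
result = fuse_chroma_luma(first_pair, second_pair)
```

The point of the sketch is the structure: each sensor's source images are first reduced to one blended pair, and only then are the two pairs fused across sensors.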

DEVICE FOR DETECTING AND TRACKING OBJECTS IN A ZONE OF INTEREST
20240397171 · 2024-11-28 ·

A device for detecting and tracking objects in an area of interest, including: a control terminal receiving direction information relating to objects entering the area of interest from a radar or a direct input from an operator, a motorized yoke having 3 axes of rotation with direct axial drive for the support and the omnidirectional displacement of an assembly comprising: a neuromorphic camera, an IR laser module including a pulsed infrared laser source to illuminate the object to be identified, and an image processing module for obtaining, from an IR image received by the neuromorphic camera, event images of each of the objects present in the area of interest, for locating and identifying these objects and for controlling the orientation of the motorized yoke in order to track their respective displacement in the area of interest on a display screen of the control terminal.
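
The locate-and-track behavior can be illustrated with a toy closed loop: compute the centroid of the event image, then steer the yoke toward it. The proportional gain, frame center, and function names here are illustrative assumptions, not the device's actual control law.

```python
def event_centroid(events):
    """Centroid of event pixel coordinates from one neuromorphic camera frame."""
    xs = [x for x, y in events]
    ys = [y for x, y in events]
    n = len(events)
    return (sum(xs) / n, sum(ys) / n)

def track_step(pan, tilt, events, gain=0.1, center=(320.0, 240.0)):
    """One control iteration: nudge the motorized yoke toward the event centroid
    (assumed: simple proportional control on pixel error)."""
    cx, cy = event_centroid(events)
    pan += gain * (cx - center[0])
    tilt += gain * (cy - center[1])
    return pan, tilt
```

When the object's events straddle the frame center, the error is zero and the yoke holds position; otherwise each iteration reduces the pixel offset.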

Camera system of mobile device including geometry phase lens

A camera system of a mobile device includes: a sensor module disposed in a first body connected to a rotation member of the mobile device; and a lens module disposed in a second body connected to the rotation member. When the first body and the second body are rotated with respect to the rotation member to overlap each other, optical axes of the sensor module and the lens module correspond to each other and are operated as a common camera system, and the common camera system provides a first photographing mode and a second photographing mode with different viewing angles based on two focuses generated by a first geometry phase lens included in the lens module.
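
The two viewing angles that follow from two focal points can be shown with the standard thin-lens field-of-view relation, FOV = 2·atan(w / 2f). The sensor width and the two effective focal lengths below are hypothetical values, not figures from the patent.

```python
import math

def viewing_angle_deg(sensor_width_mm, focal_length_mm):
    """Horizontal field of view under a thin-lens model: 2 * atan(w / 2f)."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Hypothetical dual-focus geometry phase lens: two effective focal lengths
wide = viewing_angle_deg(6.4, 4.0)   # first photographing mode
tele = viewing_angle_deg(6.4, 12.0)  # second photographing mode
```

The shorter effective focal length yields the wider viewing angle, which is how one lens with two focuses can provide two photographing modes.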

WEARABLE ELECTRONIC DEVICE THAT TRACKS GAZE AND FACE
20240380875 · 2024-11-14 ·

A wearable electronic device is provided. The wearable electronic device includes a processor and memory communicatively coupled to the processor. The wearable electronic device includes a first tracking device including first lights corresponding to a first area of a user wearing the wearable electronic device and first cameras corresponding to the first area, and a second tracking device including second lights corresponding to a second area of the user and second cameras corresponding to the second area. The memory stores one or more computer programs including computer-executable instructions that, when executed by the processor, cause the wearable electronic device to generate a first signal related to the exposure of a first primary camera among the first cameras before an exposure time of the first primary camera and input the generated first signal as a signal notifying the second cameras of the start of a frame.
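
The timing relationship, a frame-start signal emitted some lead time before the primary camera's exposure, can be sketched as below. The microsecond units and the 500 µs lead time are assumptions for illustration only.

```python
def schedule_sync_signal(exposure_start_us, lead_time_us=500):
    """Time at which to emit the frame-start notification: a fixed lead
    before the first primary camera's exposure begins (assumed scheme)."""
    signal_time = exposure_start_us - lead_time_us
    if signal_time < 0:
        raise ValueError("exposure starts too soon for the requested lead time")
    return signal_time

# The second cameras would latch this signal as their frame-start marker
frame_start_for_second = schedule_sync_signal(10_000)
```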

CONFIGURABLE PLATFORM

Provided herein are systems for fluorescence imaging of an object and methods of use thereof, the systems comprising: an image sensor assembly comprising at least one image sensor; and an optical assembly configured to transmit light emitted from the object to the image sensor assembly, the optical assembly comprising at least one notch filter configured to block light in a plurality of fluorescence excitation wavebands while transmitting fluorescence light that is emitted from the object, wherein the optical assembly is configured to project the emitted fluorescence light as one or more fluorescence images onto the at least one image sensor of the image sensor assembly.
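
An idealized multi-notch transmission function makes the filter behavior concrete: block every wavelength inside an excitation band, pass everything else. The specific bands near 488 nm and 561 nm are common laser excitation lines chosen here for illustration, not bands specified by the patent.

```python
def notch_transmission(wavelength_nm, blocked_bands):
    """Idealized multi-notch filter: 0.0 inside any blocked excitation band,
    1.0 (fully transmitted) elsewhere."""
    for lo, hi in blocked_bands:
        if lo <= wavelength_nm <= hi:
            return 0.0
    return 1.0

# Hypothetical excitation bands around 488 nm and 561 nm laser lines
bands = [(480, 495), (555, 570)]
```

Emitted fluorescence, which is Stokes-shifted to longer wavelengths than the excitation, falls outside the notches and reaches the image sensor.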

METHOD AND APPARATUS FOR LEARNING HUMAN POSE ESTIMATION IN LOW-LIGHT CONDITIONS

Provided is an apparatus for learning human pose estimation by configuring a dataset for human pose estimation by simultaneously obtaining a well-lit image and a low-light image, performing annotation in the well-lit image, and transmitting the annotation to the low-light image. By using the well-lit image included in the dataset as an input of a teacher model and the low-light image as an input of a student model, the student model learns human pose estimation at a high accuracy in low-light conditions by using privileged information of the teacher model.
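
The privileged-information transfer amounts to matching the student's output on the low-light image to the teacher's output on the paired well-lit image. The mean-squared-error objective and the toy joint-confidence vectors below are a simplified stand-in for whatever loss the apparatus actually uses.

```python
def distillation_loss(student_out, teacher_out):
    """Mean squared error between the student's prediction (low-light input)
    and the teacher's prediction (paired well-lit input)."""
    return sum((s - t) ** 2 for s, t in zip(student_out, teacher_out)) / len(student_out)

# Toy per-joint confidence vectors for one pose
teacher = [0.9, 0.8, 0.95]   # from the well-lit image
student = [0.7, 0.75, 0.9]   # from the paired low-light image
loss = distillation_loss(student, teacher)
```

Because the two images show the same scene, annotations made on the well-lit image transfer directly to the low-light one, and the loss goes to zero exactly when the student reproduces the teacher's estimate.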