Patent classifications
H04N9/76
Apparatuses, systems and methods for generating color video with a monochrome sensor
Apparatuses, systems and methods for generating color video with a monochrome sensor include the acts of (i) selectively energizing each of a plurality of light sources in a sequence, (ii) capturing a monochrome image of the illuminated sample at a monochrome sensor at each stage of the sequence, and (iii) generating a color video from the monochrome images. The sequence can have a series of stages, with each stage corresponding to activation of a different wavelength of light from the light sources to illuminate a sample. A monochrome video can additionally be generated by compiling a plurality of monochrome images captured at the monochrome sensor under a single light source into a series of monochrome video frames.
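The frame-assembly step (iii) can be sketched as follows. This is an illustrative sketch, not the patented implementation: images are modeled as 2-D lists of intensities, the illumination cycle is assumed to be red, green, blue, and all function names are assumptions.

```python
def combine_rgb_frame(red_img, green_img, blue_img):
    """Zip three monochrome captures into one color frame of (R, G, B) pixels."""
    frame = []
    for r_row, g_row, b_row in zip(red_img, green_img, blue_img):
        frame.append([(r, g, b) for r, g, b in zip(r_row, g_row, b_row)])
    return frame

def color_video(captures):
    """captures: list of (red_img, green_img, blue_img) tuples, one tuple per
    illumination cycle; returns the sequence of assembled color frames."""
    return [combine_rgb_frame(r, g, b) for r, g, b in captures]
```

Each color frame thus consumes one full pass of the illumination sequence, which is why the monochrome sensor's capture rate must exceed the desired color frame rate by the number of light sources.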
METHOD OF LAYER BLENDING AND RECONSTRUCTION BASED ON THE ALPHA CHANNEL
A device and method for blending image data with the alpha channel and reconstructing the image after transmission. An encoder blends the alpha channel and the red-green-blue (RGB) image data through a layer-blending method supported by a general application processor (AP). Transporter image data, such as a checkerboard pattern, is blended with (1 − alpha) channel data to obtain a transporter. The blended image data (RGB + alpha) is mixed with the transporter to obtain mixed image data, which is then transmitted through the existing transmission interface. After receiving the mixed image data, a decoder performs reconstruction processing to recover the original RGB image data and the alpha channel.
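One way the encode/decode round trip can work is sketched below (single channel, flat pixel lists). This is a hedged reconstruction of the idea rather than the patent's method: it assumes a checkerboard transporter alternating 0 and 255, and assumes RGB and alpha are constant over each adjacent pixel pair so the decoder can solve for both unknowns.

```python
CHECKER = (0.0, 255.0)  # transporter values for even/odd pixel positions

def encode(rgb, alpha):
    """Mix image data with the (1 - alpha)-weighted transporter."""
    return [rgb[i] * alpha[i] + CHECKER[i % 2] * (1.0 - alpha[i])
            for i in range(len(rgb))]

def decode(mixed):
    """Recover (rgb, alpha) from each pixel pair (checker = 0, then 255)."""
    rgb, alpha = [], []
    for i in range(0, len(mixed), 2):
        m0, m1 = mixed[i], mixed[i + 1]
        a = 1.0 - (m1 - m0) / 255.0   # m1 - m0 = 255 * (1 - a)
        c = m0 / a if a > 0 else 0.0  # m0 = rgb * a
        rgb += [c, c]
        alpha += [a, a]
    return rgb, alpha
```

The appeal of such a scheme is that the mixed image is an ordinary RGB signal, so it passes through an existing transmission interface that has no alpha plane.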
System and method for generating video content with hue-preservation in virtual production
A system is provided for generating video content with hue-preservation in virtual production. The system comprises a memory for storing instructions and a processor configured to execute the instructions. Based on the executed instructions, the processor is configured to control a saturation of scene linear data based on a mapping from a first color gamut, corresponding to a first encoding format of raw data, to a second color gamut corresponding to a defined color space. The processor is further configured to determine standard dynamic range (SDR) video content in the defined color space based on the scene linear data. Because a single scaling factor is applied to the three primary color values that describe the first color gamut, the hue of the SDR video content is preserved.
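The hue-preservation property rests on a general fact: scaling all three primaries by one common factor changes luminance and saturation but leaves the channel ratios, and hence the hue angle, unchanged. A minimal sketch, assuming an illustrative compressive tone curve (the function names and the curve are assumptions, not the claimed system):

```python
def tone_curve(x):
    """Stand-in monotone SDR mapping; any such curve would do."""
    return x / (1.0 + x)

def hue_preserving_map(r, g, b):
    """Derive one scale factor from the max channel and apply it to all three."""
    m = max(r, g, b)
    if m == 0:
        return (0.0, 0.0, 0.0)
    s = tone_curve(m) / m  # single factor shared by R, G and B
    return (r * s, g * s, b * s)
```

Applying the curve per channel instead would compress the channels unequally and shift the hue; the shared factor is what avoids that.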
SYSTEMS AND METHODS FOR HDR VIDEO CAPTURE WITH A MOBILE DEVICE
The invention relates to systems and methods for high dynamic range (HDR) image capture and video processing in mobile devices. Aspects of the invention include a mobile device, such as a smartphone or digital mobile camera, including at least two image sensors fixed in a co-planar arrangement to a substrate and an optical splitting system configured to reflect at least about 90% of incident light received through an aperture of the mobile device onto the co-planar image sensors, to thereby capture an HDR image. In some embodiments, greater than about 95% of the incident light received through the aperture of the device is reflected onto the image sensors.
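Such a splitter gives the two sensors unequal exposures of the same scene, which a merge step can exploit. A hedged per-pixel sketch follows; the 90/10 split ratio, 8-bit full scale, and function names are assumptions for illustration, not the patented processing chain.

```python
SPLIT = 0.9          # assumed fraction of incident light on the bright sensor
FULL_SCALE = 255.0   # assumed sensor saturation level

def merge_hdr(bright_px, dark_px):
    """Return a linear radiance estimate from one pixel of each sensor."""
    if bright_px < FULL_SCALE:
        return bright_px / SPLIT          # bright sensor is not clipped
    return dark_px / (1.0 - SPLIT)        # fall back to the darker exposure
```

Because both sensors see the scene through the same aperture at the same instant, the two exposures are co-registered in time, avoiding the ghosting of sequential bracketing.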
Systems and methods for digital photography
A system, method, and computer program product are provided for rendering a combined image. In use, two or more source images, including at least one strobe image and at least one ambient image, are loaded. A pixel-level correction is estimated for at least one of the two or more source images based on a pixel-level correction function. At least one pixel of the two or more source images is color-corrected based on the pixel-level correction. A first blend weight associated with the two or more source images is initialized, and a first combined image is rendered from the two or more source images based on the color correction and the first blend weight. Additional systems, methods, and computer program products are also presented.
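The correct-then-blend sequence can be sketched per pixel as below. This is a minimal illustration under stated assumptions: the pixel-level correction is reduced to a precomputed per-channel gain on the strobe pixel, the blend weight is a scalar, and all names are hypothetical.

```python
def blend_pixel(strobe, ambient, gain, w):
    """strobe, ambient: (r, g, b) tuples; gain: assumed per-channel correction
    for the strobe pixel; w: blend weight in [0, 1]."""
    corrected = tuple(c * g for c, g in zip(strobe, gain))
    return tuple(w * s + (1.0 - w) * a for s, a in zip(corrected, ambient))
```

Correcting before blending matters: flash and ambient light typically have different color temperatures, so blending uncorrected pixels would mix two white points in one output pixel.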
Method for displaying a model of a surrounding area, control unit and vehicle
A method including recording a first and a second camera image, the first and second camera images having an overlap region. The method includes: assigning pixels of the first camera image and pixels of the second camera image to predefined points of a three-dimensional lattice structure, the predefined points being situated in a region of the lattice structure that represents the overlap region; ascertaining a color-information difference for each predefined point as a function of the assigned color information; ascertaining a quality value at each predefined point as a function of the ascertained color-information difference; determining a global color transformation matrix as a function of the color-information differences, each weighted by the corresponding quality value; and adapting the second camera image as a function of the determined color transformation matrix.
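The last two steps can be sketched as follows, under simplifying assumptions: the global transformation is restricted to per-channel gains (a diagonal color matrix), each overlap point contributes the first image's color as the reference, and the quality values act as least-squares weights. Names are illustrative, not the claimed method.

```python
def diagonal_color_transform(ref_pts, src_pts, weights):
    """Weighted least squares per channel: minimizing
    sum_i w_i * (ref_i - g * src_i)**2 gives g = sum(w*ref*src) / sum(w*src*src)."""
    gains = []
    for c in range(3):
        num = sum(w * r[c] * s[c] for r, s, w in zip(ref_pts, src_pts, weights))
        den = sum(w * s[c] * s[c] for s, w in zip(src_pts, weights))
        gains.append(num / den if den else 1.0)
    return gains

def adapt_image(pixels, gains):
    """Apply the fitted gains to every pixel of the second camera image."""
    return [tuple(ch * g for ch, g in zip(px, gains)) for px in pixels]
```

Down-weighting points with large color differences (low quality values) keeps moving objects or misregistered lattice points from skewing the global fit, which is the role the quality value plays above.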