Patent classifications
H04N25/61
CORRECTING DISTORTION FROM CAMERA PITCH ANGLE
One disclosed example provides a videoconferencing system comprising a processor and a storage device storing instructions executable by the processor to obtain an image of a scene acquired via a camera, the image of the scene comprising image distortion arising from a camera pitch angle at which the image of the scene was acquired. The instructions are further executable to apply a projection mapping to the image of the scene to map the image of the scene to a projection comprising a tilt parameter that is based upon the camera pitch angle at which the image of the scene was acquired, thereby obtaining a corrected image, and output the corrected image.
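The abstract describes mapping the image to a projection whose tilt parameter comes from the camera pitch angle. A minimal sketch of one common way to express such a correction, assuming a simple pinhole camera model with hypothetical intrinsics (the abstract does not specify the projection), is a homography built from a rotation about the camera's horizontal axis:

```python
import numpy as np

def pitch_homography(pitch_rad, fx, fy, cx, cy):
    """Homography that re-projects pixels as if the camera were level.
    fx, fy, cx, cy are hypothetical pinhole intrinsics (focal lengths
    and principal point in pixels)."""
    K = np.array([[fx, 0.0, cx],
                  [0.0, fy, cy],
                  [0.0, 0.0, 1.0]])
    c, s = np.cos(pitch_rad), np.sin(pitch_rad)
    # Rotation about the horizontal (x) axis by the pitch angle
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0, c, -s],
                  [0.0, s, c]])
    return K @ R @ np.linalg.inv(K)

def warp_point(H, x, y):
    """Apply a homography to a single pixel coordinate."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

With zero pitch the mapping is the identity; with a nonzero pitch the principal point shifts vertically by `fy * tan(pitch)`, which matches the keystone-style distortion a tilted camera introduces.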
Photographing system and photographing system control method
Provided is an image sensor including a pixel array formed by arranging pixels, which generate an electrical signal in response to light; a memory for storing a register value of the image sensor; and a sensor controller for configuring the register value, wherein the register value includes information for defining a region to be processed in the pixel array, and when a change request for changing at least one of a position and a size of the region to be processed is received, the sensor controller provides, to the image sensor or the memory, a register modification command for adjusting the register value so as to correspond to the change request.
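The controller's job reduces to diffing the requested region-of-interest against the stored register values and emitting writes only for the registers that change. A small sketch, using made-up register addresses (real sensors define these in their datasheets), might look like:

```python
# Hypothetical register addresses for the region to be processed
# (not taken from any real sensor datasheet)
REG_ROI_X, REG_ROI_Y = 0x10, 0x12
REG_ROI_W, REG_ROI_H = 0x14, 0x16

def roi_change_commands(registers, new_x=None, new_y=None,
                        new_w=None, new_h=None):
    """Return (address, value) register writes that bring the stored
    register values in line with a change request for the position
    and/or size of the region to be processed."""
    requested = {REG_ROI_X: new_x, REG_ROI_Y: new_y,
                 REG_ROI_W: new_w, REG_ROI_H: new_h}
    commands = []
    for addr, value in requested.items():
        # Only emit a write when the request differs from the stored value
        if value is not None and registers.get(addr) != value:
            commands.append((addr, value))
    return commands
```

A request that repeats the current width, for example, produces no write for that register, so only the position change reaches the sensor.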
IMAGING SYSTEM AND METHOD OF CREATING COMPOSITE IMAGES
An imaging system and a method of creating composite images are provided. The imaging system includes one or more lens assemblies coupled to a sensor. When reflected light from an object enters the imaging system, light incident on the metalens filter systems is filtered, and the corresponding sensors capture the filtered light as metalens images. Each metalens filter system focuses the light at a specific wavelength, creating the metalens images. The metalens images are sent to the processor, which combines them into one or more composite images with reduced chromatic aberrations.
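Since each metalens image is captured in its own sharply focused wavelength band, the combining step can be as simple as stacking the single-band captures into the channels of one composite frame. A toy sketch of that combining step (the abstract does not specify how the processor merges the images):

```python
import numpy as np

def combine_metalens_images(red, green, blue):
    """Stack three single-wavelength metalens captures into one
    composite image. Because each metalens focuses its own band
    sharply, the composite avoids the chromatic blur a single
    broadband lens would introduce (illustrative only)."""
    assert red.shape == green.shape == blue.shape
    return np.stack([red, green, blue], axis=-1)
```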
Medical imaging device with multiple imaging modes
Improved fluorescent imaging and other sensor-data imaging processes, including hyperspectral imaging, devices, and systems are provided to enhance endoscopes with multiple-wavelength capabilities and to provide sequential imaging and display. A first optical device is provided for endoscopy imaging in a white light mode and a fluoresced light mode, with an imaging unit including one or more image sensors. A mechanism in the first optical device automatically adjusts the focus of the first optical device using one or more deformable, variable-focus lenses, wherein the automatic focus adjustment compensates for a chromatic focal difference between the light collected at distinct wavelength bands, caused by the dispersive or diffractive properties of the optical materials or the optical design employed in the construction of the first or second optical devices, or both. Variable-spectrum imaging is further enhanced with the use of adjustable spectral filters.
Camera with reconfigurable lens assembly
Systems and processes for cameras with a reconfigurable lens assembly are described. For example, some methods include automatically detecting that an accessory lens structure has been mounted to an image capture device including a mother lens and an image sensor configured to detect light incident through the mother lens, such that an accessory lens of the accessory lens structure is positioned covering the mother lens; responsive to detecting that the accessory lens structure has been mounted, automatically identifying the accessory lens from among a set of multiple supported accessory lenses; accessing an image captured using the image sensor when the accessory lens structure is positioned covering the mother lens; determining a warp mapping based on identification of the accessory lens; applying the warp mapping to the image to obtain a warped image; and transmitting, storing, or displaying an output image based on the warped image.
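The warp mapping in this abstract is selected per accessory lens and then applied to the captured image. One plausible form of such a mapping, assuming a radial polynomial distortion model and a made-up coefficient table (the abstract does not disclose the model or values), maps each output pixel back toward its source location:

```python
# Hypothetical per-lens radial warp coefficients: lens id -> (k1, k2)
WARP_TABLE = {
    "wide_angle": (-0.18, 0.02),
    "macro": (0.05, 0.00),
}

def warp_mapping(lens_id, x, y, cx, cy, norm=1000.0):
    """Map a pixel through a radial polynomial warp chosen by the
    identified accessory lens. (cx, cy) is the distortion centre and
    `norm` a normalising radius in pixels; all values are illustrative."""
    k1, k2 = WARP_TABLE[lens_id]
    dx, dy = (x - cx) / norm, (y - cy) / norm
    r2 = dx * dx + dy * dy
    # Polynomial radial scale factor, as in common lens-distortion models
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return cx + dx * scale * norm, cy + dy * scale * norm
```

Applying this mapping at every output pixel (with interpolation of the source image) yields the warped image the abstract describes; identifying the accessory lens simply selects which coefficient row to use.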
DEPTH ACQUISITION DEVICE AND DEPTH ACQUISITION METHOD
A depth acquisition device includes a memory and a processor. The processor performs: acquiring timing information indicating a timing at which a light source irradiates a subject with infrared light; acquiring, from the memory, an infrared light image generated by imaging a scene including the subject with the infrared light according to the timing indicated by the timing information; acquiring, from the memory, a visible light image generated by imaging a substantially same scene as the scene of the infrared light image, with visible light from a substantially same viewpoint as a viewpoint of imaging the infrared light image at a substantially same time as a time of imaging the infrared light image; detecting a flare region from the infrared light image; and estimating a depth of the flare region based on the infrared light image, the visible light image, and the flare region.
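Infrared depth readings inside a flare region are unreliable, so the visible-light image of the same scene can guide the estimate. A crude sketch of that idea, detecting flare as near-saturated infrared pixels and in-filling depth from non-flare pixels with similar visible intensity (this is an illustrative stand-in, not the patented estimation method):

```python
import numpy as np

def estimate_flare_depth(depth, ir, visible, saturation=250, tol=10):
    """Fill in depth over the flare region using the mean depth of
    non-flare pixels whose visible-light intensity is similar.
    `saturation` and `tol` are illustrative thresholds."""
    flare = ir >= saturation                 # flare: near-saturated IR pixels
    out = depth.astype(float).copy()
    vis = visible.astype(int)                # avoid unsigned wrap-around
    for idx in zip(*np.nonzero(flare)):
        # Non-flare pixels that look similar in the visible image
        similar = (~flare) & (np.abs(vis - vis[idx]) < tol)
        if similar.any():
            out[idx] = depth[similar].mean()
    return out
```

Because the visible and infrared images are captured from substantially the same viewpoint at substantially the same time, visible-intensity similarity is a reasonable proxy for "same surface" when replacing the corrupted depth values.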
Methods and apparatus for using a controllable physical light filter as part of an image capture system and for processing captured images
Methods and apparatus for using a controllable filter, e.g., a liquid crystal panel, in front of a camera are described. The filter is controlled based on the luminosity of objects in a scene being captured by the camera to reduce or eliminate luminosity-related image defects such as flaring, blooming, or ghosting. Multiple cameras and filters can be used to capture multiple images as part of a depth determination process in which pixel values captured by cameras at different locations are matched to determine depth, e.g., the distance from the camera or camera system to objects in the environment. Pixel values are normalized in some embodiments based on the amount of filtering applied to a sensor region and the sensor exposure time. The filtering allows for regional sensor exposure control at an individual camera even though the overall exposure time of the pixel sensors may be, and often will be, the same.
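The normalization step can be sketched directly from the abstract: dividing each raw reading by the fraction of light the filter passed over that region and by the exposure time makes pixels from differently filtered regions (or cameras) comparable for matching. A minimal sketch, with a simple linear-response assumption:

```python
def normalize_pixel(raw_value, transmission, exposure_s):
    """Recover a scene-referred intensity from a raw sensor reading,
    given the fraction of light the controllable filter passed over
    that sensor region (0-1) and the exposure time in seconds.
    Assumes a linear sensor response (illustrative model)."""
    return raw_value / (transmission * exposure_s)
```

For example, a bright region filtered to 80% transmission and a dim region filtered to 10% can both be brought to the same scene-referred scale before pixel matching, even though the two regions shared one global exposure time.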
METHOD OF DIGITALLY PROCESSING A PLURALITY OF PIXELS AND TEMPERATURE MEASUREMENT APPARATUS
A method of digitally processing a plurality of pixels of an image captured using an array of sensing pixels of an optical sensor device. The method comprises identifying a measurement pixel of the plurality of pixels corresponding to a measurement point on a target to be measured. The method then comprises identifying a number of pixels of the plurality of pixels neighbouring the measurement pixel, the number of pixels having a number of intensity values, respectively. A curve is then fitted to the number of pixels and the number of respective intensity values. An estimated intensity value is then determined from the curve in respect of the measurement pixel, thereby simulating a predetermined field of view in respect of the measurement pixel narrower than an actual field of view of the measurement pixel.
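The fitting-and-evaluating step above can be sketched in one dimension: fit a polynomial to the intensities of pixels neighbouring the measurement pixel, then evaluate the curve at the measurement pixel's position. This is a simplified 1-D illustration of the idea, using an ordinary least-squares polynomial fit (the patent does not specify the curve type):

```python
import numpy as np

def estimate_center_intensity(neighbour_positions, neighbour_intensities,
                              centre, degree=2):
    """Fit a polynomial to neighbouring pixels' intensities and evaluate
    it at the measurement pixel, simulating a narrower field of view
    than the pixel's actual field of view (1-D sketch)."""
    coeffs = np.polyfit(neighbour_positions, neighbour_intensities, degree)
    return float(np.polyval(coeffs, centre))
```

If the underlying intensity profile peaks at the measurement point, the fitted curve recovers the peak value even when the neighbouring samples sit on its shoulders, which is the effect of simulating a narrower field of view.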
Image processing apparatus and image processing method to generate and display an image based on a vehicle movement
An image processing apparatus is provided. An image photographed by a camera that photographs the surroundings of the vehicle is received. An overhead image, a vehicle travel direction image, and a vehicle lateral side image corresponding to a vehicle movement direction which is either left or right of the vehicle is generated based on the image. The generated image is displayed on a display unit. For example, a display image is generated or updated in accordance with a travel direction of the vehicle and direction-of-rotation information of a steering wheel. In a case where the vehicle is backed counter-clockwise, the overhead image, the vehicle rear image, and a vehicle left side image are generated and displayed. In a case where the vehicle is backed clockwise, the overhead image, the vehicle rear image, and a vehicle right side image are generated and displayed.
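The view-selection rule in the abstract is a simple mapping from the steering wheel's direction of rotation while reversing to the set of images to generate. A sketch of that selection logic (view names are illustrative):

```python
def select_reversing_views(direction_of_rotation):
    """Choose which images to generate for a reversing vehicle:
    counter-clockwise steering selects the left side view,
    clockwise steering selects the right side view."""
    side = "left" if direction_of_rotation == "counter-clockwise" else "right"
    return ["overhead", "rear", f"{side}_side"]
```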
Wide-angle stereoscopic vision with cameras having different parameters
A stereoscopic vision system uses at least two cameras having different parameters to image a scene and create stereoscopic views. The different parameters of the two cameras can be intrinsic or extrinsic, including, for example, the distortion profile of the lens in the cameras, the field of view of the lens, the orientation of the cameras, the positions of the cameras, the color spectrum of the cameras, the frame rate of the cameras, the exposure time of the cameras, the gain of the cameras, the aperture size of the lenses, or the like. An image processing apparatus is then used to process the images from the at least two different cameras to provide optimal stereoscopic vision to a display.
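Once images from the two differently parameterized cameras have been rectified to a common model, depth recovery follows the standard pinhole stereo relation Z = f·B / d. A minimal sketch of that final step (rectification itself, which absorbs the differing intrinsics and extrinsics, is omitted):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Standard pinhole stereo relation Z = f * B / d: depth from the
    rectified cameras' focal length (pixels), baseline (metres), and
    the disparity of a matched pixel pair (pixels)."""
    return focal_px * baseline_m / disparity_px
```

For instance, with an 800-pixel focal length and a 10 cm baseline, a 40-pixel disparity corresponds to a point 2 m from the camera pair.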