Patent classifications
H04N23/80
STITCHING QUALITY ASSESSMENT FOR SURROUND VIEW SYSTEMS
Stitching of multiple images into a composite representation can be performed using a set of stitching parameters determined based, at least in part, upon a subjective stitching quality assessment value. A stitched image can be compared against its constituent images to obtain one or more objective quality metrics. These objective quality metrics can be fed, as input, to a trained classifier, which can infer a subjective quality assessment metric for the stitched (or otherwise composited) image. This subjective quality assessment metric can be used to adjust one or more compositing parameter values in order to provide at least a minimum subjective quality assessment value for composited images.
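The loop described in this abstract — objective metrics compared against constituent images, a trained classifier inferring a subjective score, and compositing parameters adjusted until a minimum score is met — could be sketched as follows. All function names, the blend-width parameter, and the linear score model are illustrative assumptions, with a simple formula standing in for the trained classifier:

```python
def objective_metrics(seam_errors):
    """Objective metrics comparing the stitched image to its constituents
    (assumed here to be mean and max per-seam error)."""
    mean_err = sum(seam_errors) / len(seam_errors)
    max_err = max(seam_errors)
    return mean_err, max_err

def predict_subjective_score(mean_err, max_err):
    """Stand-in for the trained classifier: maps objective metrics
    to a subjective quality score in [0, 1]."""
    return max(0.0, 1.0 - 0.5 * mean_err - 0.25 * max_err)

def tune_blend_width(seam_errors, min_score=0.8, width=4, max_width=64):
    """Adjust a compositing parameter (blend width) until the inferred
    subjective score reaches the required minimum."""
    while width < max_width:
        # Assumed model: wider blending attenuates seam errors proportionally.
        damped = [e * 4.0 / width for e in seam_errors]
        score = predict_subjective_score(*objective_metrics(damped))
        if score >= min_score:
            return width, score
        width *= 2
    return width, score
```

In a real pipeline the classifier would be trained on human ratings; the point here is only the control flow of metric extraction, score inference, and parameter adjustment.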
Image pickup apparatus and information processing apparatus that are capable of automatically adding appropriate rotation matrix during photographing, control method for image pickup apparatus, and storage medium
An apparatus includes a pickup unit, a detecting unit configured to detect an attitude of the pickup unit, a rotation matrix generating unit configured to generate a rotation matrix by any of a plurality of rotation matrix generating methods, an encoder configured to encode image data of a moving image, a selecting unit configured to select one of the plurality of rotation matrix generating methods according to an operation mode of the pickup unit when the moving image is photographed, and a file generating unit configured to generate a data file by including the rotation matrix generated by the selected method in the encoded image data, in a format in which a reproducing apparatus can use the rotation matrix to correct the angle at which the moving image is displayed.
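A minimal sketch of the mode-dependent selection and file generation described above might look like the following. The handheld/tripod mode split, the function names, and the reduction of the rotation to an in-plane 2D matrix are all assumptions for illustration:

```python
import math

def rotation_from_attitude(roll_deg):
    """Generate a 2D rotation matrix from the detected attitude
    (simplified here to in-plane roll)."""
    r = math.radians(roll_deg)
    return [[math.cos(r), -math.sin(r)], [math.sin(r), math.cos(r)]]

def identity_matrix():
    """Generator for a locked orientation: no correction needed."""
    return [[1.0, 0.0], [0.0, 1.0]]

# Assumed operation modes: 'handheld' uses the detected attitude,
# 'tripod' locks the correction to identity.
GENERATORS = {"handheld": rotation_from_attitude,
              "tripod": lambda _roll: identity_matrix()}

def make_data_file(mode, roll_deg, encoded_frames):
    """Embed the matrix from the selected generating method alongside the
    encoded image data so a player can correct the display angle."""
    matrix = GENERATORS[mode](roll_deg)
    return {"rotation_matrix": matrix, "frames": encoded_frames}
```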
OBJECT TRACKING BY EVENT CAMERA
A tracking system is disclosed utilizing one or more dynamic vision sensors (e.g., an event camera) configured to generate luminance-transition events associated with a target object; a depth estimation unit configured to generate, based on the luminance-transition events, depth data/signals indicative of a distance of the target object from the event camera; a spatial tracking unit configured to generate, based on the luminance-transition events, spatial tracking data/signals indicative of transitions of the target object within a scene; and an error correction unit configured to process the depth and spatial tracking data/signals and generate error-correcting data/signals for the tracking of the target object by the one or more dynamic vision sensors.
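The three units in this abstract can be sketched as three small functions. The event tuple layout, the inverse-count depth model, and the blend-based correction are assumptions chosen only to make the data flow concrete:

```python
def spatial_track(events):
    """Spatial tracking: centroid of luminance-transition events,
    where each event is an (x, y, polarity) tuple."""
    xs = [e[0] for e in events]
    ys = [e[1] for e in events]
    return sum(xs) / len(xs), sum(ys) / len(ys)

def estimate_depth(events, k=1000.0):
    """Assumed depth model: the event count from the target falls off
    with distance, so depth is approximated as k / count."""
    return k / len(events)

def correct(prediction, measurement, gain=0.5):
    """Error-correcting update: blend the predicted track state
    toward the measured state by a fixed gain."""
    return tuple(p + gain * (m - p) for p, m in zip(prediction, measurement))
```

A practical system would replace the depth heuristic with stereo or learned estimation and the fixed-gain blend with a proper filter (e.g., Kalman), but the unit boundaries match the abstract.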
IMAGE SENSOR, CAMERA MODULE, AND ELECTRONIC DEVICE INCLUDING THE SAME
An image sensor includes a plurality of pixels arranged in a predetermined aspect ratio to sense an image in an optical axis direction, wherein the predetermined aspect ratio is higher than 4/3 and lower than 16/9.
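The claimed range — strictly higher than 4:3 and strictly lower than 16:9 — can be checked exactly with rational arithmetic (the function name is illustrative):

```python
from fractions import Fraction

def in_claimed_range(width_px, height_px):
    """Check whether a sensor's aspect ratio lies strictly between
    4:3 and 16:9, the range recited in the claim."""
    ratio = Fraction(width_px, height_px)
    return Fraction(4, 3) < ratio < Fraction(16, 9)
```

For example, a 3:2 sensor (3000 × 2000 px) falls inside the range, while exact 4:3 and 16:9 sensors fall outside it because the bounds are exclusive.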
IMAGING DEVICE AND IMAGING METHOD
An imaging device, according to one embodiment of the present invention, comprises: an input unit for receiving first Bayer data having a first resolution and a noise level; and a convolutional neural network for outputting second Bayer data having a second resolution by using the noise level and the first Bayer data.
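The interface described — first-resolution Bayer data plus a noise level in, second-resolution Bayer data out — can be sketched with a trivial stand-in for the convolutional neural network. The noise-weighted smoothing and nearest-neighbour upsampling below are placeholders, not the claimed network:

```python
def upscale_bayer(bayer, noise_level, scale=2):
    """Stand-in for the CNN: accepts first-resolution Bayer data (2D list)
    and a noise level, returns second-resolution Bayer data."""
    h, w = len(bayer), len(bayer[0])
    # Assumed denoise step: blend each sample toward the global mean
    # in proportion to the reported noise level.
    mean = sum(sum(row) for row in bayer) / (h * w)
    denoised = [[(1 - noise_level) * v + noise_level * mean for v in row]
                for row in bayer]
    # Upsample to the second resolution (nearest neighbour).
    return [[denoised[y // scale][x // scale] for x in range(w * scale)]
            for y in range(h * scale)]
```

A real implementation would condition convolutional layers on the noise level; this sketch only fixes the shapes and data flow.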
IMAGE PROCESSING DEVICE, IMAGE DISPLAY SYSTEM, METHOD, AND PROGRAM
An image processing device of an embodiment includes a control unit that generates a composite image and outputs it to a display device, the composite image being acquired by combining a first image, captured with a first exposure time and having a first resolution, with a second image that corresponds to a part of a region of the first image, is captured with a second exposure time shorter than the first exposure time, and has a second resolution higher than the first resolution, the first image and the second image being input from an image sensor.
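The combination step can be sketched as upsampling the long-exposure, low-resolution frame to the second resolution and overlaying the short-exposure, high-resolution region. The 2D-list image layout, the `roi` coordinate convention, and the nearest-neighbour upsampling are assumptions for illustration:

```python
def composite(first_image, second_image, roi, scale=2):
    """Combine a long-exposure low-resolution frame with a short-exposure
    high-resolution crop. roi = (x, y) is the crop's top-left corner in
    first-image coordinates; output is at the second resolution."""
    x0, y0 = roi
    h1, w1 = len(first_image), len(first_image[0])
    # Upsample the first image to the second resolution (nearest neighbour).
    out = [[first_image[y // scale][x // scale] for x in range(w1 * scale)]
           for y in range(h1 * scale)]
    # Overlay the high-resolution second image onto its region.
    for dy, row in enumerate(second_image):
        for dx, v in enumerate(row):
            out[y0 * scale + dy][x0 * scale + dx] = v
    return out
```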
System and method for image stitching
A system for stitching images together is disclosed. The images are sometimes referred to as frames, such as frames in a video sequence. The system comprises one or more imagers (e.g., cameras) that work in coordination with a matching number of custom code modules. The system achieves image stitching by using approximately one third of the Field of View (FOV) of each imager and by increasing the number of imagers above a predetermined threshold. The system displays the stitched images or frames on a computer monitor, in both still-image and video contexts. Normally these tasks would involve a great deal of computation, but the system achieves these effects while managing the computational load. In stitching the images together, it is sometimes necessary to introduce some image distortion (faceting) in the combined image. The system ensures no gaps in any captured view and helps a viewer achieve full situational awareness.
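A rough coverage calculation suggested by this abstract: if only about one third of each imager's FOV contributes to the mosaic, gap-free coverage of a target angle dictates a minimum imager count. The one-third fraction and 360° target are taken from the abstract and an assumption, respectively; rational arithmetic avoids float rounding at the boundary:

```python
from fractions import Fraction
import math

def imagers_needed(fov_deg, usable_fraction=Fraction(1, 3), coverage_deg=360):
    """Minimum imagers for gap-free coverage when only a fraction of each
    imager's FOV contributes to the stitched view."""
    usable = Fraction(fov_deg) * usable_fraction
    return math.ceil(Fraction(coverage_deg) / usable)

def meets_threshold(n_imagers, fov_deg):
    """The abstract's 'predetermined threshold' read as: enough imagers
    for gap-free coverage (an interpretation, not the claimed value)."""
    return n_imagers >= imagers_needed(fov_deg)
```

For 90° imagers, only about 30° of each is used, so twelve imagers are needed for a full 360° view.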