G06T3/047

Computing system for rectifying ultra-wide fisheye lens images
10846831 · 2020-11-24

Various technologies described herein pertain to rectification of a fisheye image. A computing system receives the fisheye image. Responsive to receiving the fisheye image, the computing system applies a first lookup function to a first portion of the fisheye image to mitigate spatial distortions of the fisheye image. The computing system also applies a second lookup function to a second portion of the fisheye image to mitigate the spatial distortions. The first lookup function maps first pixels in the first portion to a first rectilinear image corresponding to the first portion when viewed from a first perspective of a first virtual camera. The second lookup function maps second pixels in the second portion to a second rectilinear image corresponding to the second portion when viewed from a second perspective of a second virtual camera. The computing system then outputs the first rectilinear image and the second rectilinear image.
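As a rough illustration of the lookup-function approach, the sketch below builds a lookup table mapping each rectilinear output pixel of one virtual perspective camera back to a fisheye source pixel, and applies it with nearest-neighbour sampling. It assumes an equidistant fisheye model and a yaw-rotated virtual camera; all names and parameters are illustrative, not taken from the patent. A second virtual camera covering the other portion would reuse the same function with a different yaw.

```python
import numpy as np

def apply_lookup(fisheye, map_x, map_y):
    """Apply a precomputed lookup function: for each output pixel,
    fetch the fisheye source pixel at (map_y, map_x), nearest neighbour."""
    xs = np.clip(np.round(map_x).astype(int), 0, fisheye.shape[1] - 1)
    ys = np.clip(np.round(map_y).astype(int), 0, fisheye.shape[0] - 1)
    return fisheye[ys, xs]

def build_rectilinear_lut(out_w, out_h, f_out, f_fish, cx, cy, yaw):
    """Lookup table for one virtual perspective camera (equidistant
    fisheye model assumed; yaw rotates the virtual camera about the
    vertical axis to cover one portion of the fisheye image)."""
    u, v = np.meshgrid(np.arange(out_w), np.arange(out_h))
    # Ray direction for each rectilinear output pixel.
    x = (u - out_w / 2) / f_out
    y = (v - out_h / 2) / f_out
    z = np.ones_like(x, dtype=float)
    # Rotate the rays by the virtual camera's yaw.
    xr = np.cos(yaw) * x + np.sin(yaw) * z
    zr = -np.sin(yaw) * x + np.cos(yaw) * z
    # Equidistant fisheye projection: radius proportional to off-axis angle.
    norm = np.sqrt(xr**2 + y**2 + zr**2)
    theta = np.arccos(np.clip(zr / norm, -1.0, 1.0))
    phi = np.arctan2(y, xr)
    r = f_fish * theta
    return cx + r * np.cos(phi), cy + r * np.sin(phi)
```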

IMAGING SYSTEMS AND METHODS

At least one combined image may be created from a plurality of images captured by a plurality of cameras. A sensor unit may receive the plurality of images from the plurality of cameras. At least one processor in communication with the sensor unit may correlate each received image with calibration data for the camera from which the image was received. The calibration data may comprise camera position data and characteristic data. The processor may combine at least two of the received images into the at least one combined image by orienting the at least two images relative to one another, based on the calibration data for the cameras from which they were received, and merging the aligned images into the at least one combined image.
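A minimal sketch of the merge step, assuming the calibration position data reduces to integer pixel offsets on a shared canvas (a hypothetical simplification; real calibration data would also encode orientation and lens characteristics):

```python
import numpy as np

def combine(images, offsets, out_shape):
    """Place each image on a shared canvas at the offset given by its
    camera's calibration position data, averaging where images overlap."""
    canvas = np.zeros(out_shape, dtype=float)
    weight = np.zeros(out_shape, dtype=float)
    for img, (ox, oy) in zip(images, offsets):
        h, w = img.shape
        canvas[oy:oy + h, ox:ox + w] += img
        weight[oy:oy + h, ox:ox + w] += 1
    # Avoid division by zero where no image contributed.
    return canvas / np.maximum(weight, 1)
```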

Imaging apparatus, image processing apparatus, image processing method, and medium
10839479 · 2020-11-17

An imaging apparatus is provided. An imaging unit captures an image with use of a fisheye lens. An image conversion unit converts an input image obtained from the imaging unit into a panoramic image by performing geometrical conversion on the input image such that a region of the input image in which the distance from the optical-axis point is smaller than a set distance becomes a perspective projection, and a region in which the distance is larger than the set distance becomes a stereographic projection. The set distance is determined based on the accuracy of fisheye-lens distortion correction for the fisheye lens.
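The radial behaviour of such a hybrid projection can be sketched as follows (illustrative only; `f` stands in for a focal length and `theta_switch` plays the role of the set distance, with an offset added to keep the two projection pieces continuous at the boundary):

```python
import math

def hybrid_radius(theta, f, theta_switch):
    """Output radius for incidence angle theta (radians): perspective
    projection (r = f*tan(theta)) inside the switch angle, stereographic
    (r = 2f*tan(theta/2)) outside, shifted so the curve is continuous."""
    if theta <= theta_switch:
        return f * math.tan(theta)
    r_switch_persp = f * math.tan(theta_switch)
    r_switch_stereo = 2 * f * math.tan(theta_switch / 2)
    return 2 * f * math.tan(theta / 2) + (r_switch_persp - r_switch_stereo)
```

The stereographic branch grows far more slowly near 90 degrees than the perspective branch, which is why it is used for the periphery.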

Object tracking device and object tracking method
11869199 · 2024-01-09

An object tracking device includes a storage unit that stores in advance a reference for a movement amount of an object between frames for each position or area on a fisheye image, a determining unit that determines, based on a position of the object in a first frame image and the reference for a movement amount associated with the position of the object in the first frame image, a position of a search area in a second frame image subsequent to the first frame image, and a search unit that searches the search area in the second frame image for the object to specify a position of the object in the second frame image.
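The search-area determination might look like the following sketch, assuming the stored reference is a per-grid-cell movement bound (the cell size and data layout are assumptions, not from the patent):

```python
def cell_of(pos, cell_size):
    """Grid cell containing an (x, y) position on the fisheye image."""
    return (int(pos[0] // cell_size), int(pos[1] // cell_size))

def search_area(pos, movement_ref, cell_size=64):
    """movement_ref maps a grid cell to (max_dx, max_dy), the stored
    per-position movement reference; the search rectangle in the next
    frame is the frame-1 position expanded by that bound, returned as
    (x_min, y_min, x_max, y_max)."""
    dx, dy = movement_ref[cell_of(pos, cell_size)]
    x, y = pos
    return (x - dx, y - dy, x + dx, y + dy)
```

Positions near the image periphery of a fisheye lens typically move fewer pixels per frame than positions near the center, which is what the per-position reference captures.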

Single image ultra-wide fisheye camera calibration via deep learning
11871110 · 2024-01-09

Techniques related to calibrating fisheye cameras using a single image are discussed. Such techniques include applying a first pretrained convolutional neural network to an input fisheye image to generate camera model parameters excluding a principal point, and applying a second pretrained convolutional neural network to the fisheye image and a difference of the fisheye image and a projection of the fisheye image using the camera model parameters to generate the principal point.

Spherical coordinates calibration method for linking spherical coordinates to texture coordinates

A calibration method for linking spherical coordinates to texture coordinates is provided. The method comprises: installing a plurality of lamps forming a horizontal semicircular arc and rotation equipment located at the arc's center; mounting an N-lens camera on the rotation equipment; causing the N-lens camera to spin about a spin axis passing through the two ends of the arc and capture a plurality of lens images at different spin angles; and determining longitude and latitude coordinates of a plurality of calibration points according to the spin angles and the texture coordinates of the calibration points in the lens images, thereby creating a link between the spherical coordinates and the texture coordinates. The positions of the lamps respectively represent different latitudes, and the spin angles respectively represent different longitudes. The camera and the lamps are mounted at the same height.
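Assuming evenly spaced lamps along the semicircular arc, the longitude/latitude assignment for a calibration point can be sketched as follows (illustrative, not the patent's exact procedure):

```python
def calibration_point(lamp_index, n_lamps, spin_angle_deg):
    """Latitude from the lamp's position along the semicircular arc
    (spanning -90 to +90 degrees for evenly spaced lamps), longitude
    directly from the rig's spin angle."""
    latitude = -90.0 + 180.0 * lamp_index / (n_lamps - 1)
    longitude = spin_angle_deg
    return longitude, latitude
```

Each captured lens image then pairs these (longitude, latitude) values with the texture coordinates at which the corresponding lamp appears.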

Method and apparatus for generating indoor panoramic video

Embodiments of the present application disclose a method and apparatus for generating an indoor panoramic video. For each frame of a fish-eye video, the coordinates of each pixel in the image coordinate system are converted into coordinates in a spherical coordinate system, yielding a hemispherical fish-eye image. Frustum parameters for each of N texture images, one per viewing angle, are determined according to the shape of a preset living room, and the N texture images are then obtained from the hemispherical fish-eye image based on those parameters. The N texture images are rendered onto the N faces inside the preset living room to generate the panoramic video image corresponding to the frame. A panoramic video image having a stereoscopic effect can thus be generated. Real-time performance is improved because no complicated image-stitching algorithm is used, and camera cost is reduced because neither several cameras nor an aerial camera is needed.
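The image-to-spherical conversion can be sketched as below, assuming an equidistant circular fisheye with a 180-degree field of view (the lens model and all names are assumptions, not from the application):

```python
import numpy as np

def pixel_to_sphere(u, v, cx, cy, radius):
    """Map a fisheye pixel (u, v) to a point on the unit hemisphere.
    (cx, cy) is the image center and radius the fisheye circle radius;
    an equidistant model with a 180-degree FOV is assumed."""
    dx, dy = u - cx, v - cy
    r = np.hypot(dx, dy)
    theta = (r / radius) * (np.pi / 2)   # angle from the optical axis
    phi = np.arctan2(dy, dx)             # azimuth around the axis
    return (np.sin(theta) * np.cos(phi),
            np.sin(theta) * np.sin(phi),
            np.cos(theta))
```

The resulting hemisphere of points is what the per-viewing-angle frusta then sample to produce the N texture images.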

Circular fisheye camera array rectification
10825131 · 2020-11-03

Techniques related to image rectification for fisheye images are discussed. Such techniques may include iteratively determining warpings for equirectangular images corresponding to the fisheye images using alternating feature mappings between neighboring ones of the equirectangular images until mean vertical disparities between the features are reduced below a threshold, and warping the equirectangular images to rectified equirectangular images.
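A sketch of the iteration's control flow, with the feature matcher and warping estimator left as caller-supplied functions (the disparity metric and stopping rule follow the abstract; everything else is an assumption):

```python
import numpy as np

def mean_vertical_disparity(feats_a, feats_b):
    """Mean absolute row difference between matched feature pairs
    (each array is N x 2 in (col, row) order) in two neighbouring
    equirectangular images."""
    return float(np.mean(np.abs(feats_a[:, 1] - feats_b[:, 1])))

def rectify(images, match, warp, threshold=0.5, max_iters=50):
    """Alternately estimate warpings from feature matches between
    neighbouring images (wrapping around the circular array) until the
    mean vertical disparity falls below the threshold. `match` returns
    matched feature arrays; `warp` applies an estimated warping."""
    for _ in range(max_iters):
        disparities = []
        for i in range(len(images)):
            a, b = match(images[i], images[(i + 1) % len(images)])
            disparities.append(mean_vertical_disparity(a, b))
            images[i] = warp(images[i], a, b)
        if max(disparities) < threshold:
            break
    return images
```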

Virtual lens simulation for video and photo cropping

In a video capture system, a virtual lens is simulated when applying a crop or zoom effect to an input video. An input video frame is received from the input video that has a first field of view and an input lens distortion caused by a lens used to capture the input video frame. A selection of a sub-frame representing a portion of the input video frame is obtained that has a second field of view smaller than the first field of view. The sub-frame is processed to remap the input lens distortion to a desired lens distortion in the sub-frame. The processed sub-frame is then outputted.
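One way to realize such a remap is to compose the distortion curves radially: push each sub-frame pixel's radius through the desired (virtual) lens curve, then through the inverse of the capture lens curve, and sample the source frame there. The sketch below assumes normalized radial distortion functions supplied by the caller; all names and the nearest-neighbour sampling are illustrative, not the patent's method.

```python
import numpy as np

def simulate_virtual_lens(frame, center, out_size, desired, inv_input):
    """Sample a sub-frame around `center` from `frame`, remapping each
    pixel's normalized radius through the desired-lens curve and the
    inverse capture-lens curve (both map radius -> radius)."""
    h, w = out_size
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    dx = (u - w / 2) / (w / 2)       # normalized sub-frame coordinates
    dy = (v - h / 2) / (h / 2)
    r = np.hypot(dx, dy)
    # Radial scale factor; identity at the exact center (r == 0).
    scale = np.where(r > 0, inv_input(desired(r)) / np.maximum(r, 1e-9), 1.0)
    src_x = np.clip(center[0] + dx * scale * (w / 2), 0, frame.shape[1] - 1)
    src_y = np.clip(center[1] + dy * scale * (h / 2), 0, frame.shape[0] - 1)
    return frame[src_y.astype(int), src_x.astype(int)]
```

With identity curves for both lenses, the function reduces to a plain crop around the chosen center.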

Systems and methods for modifying image distortion (curvature) for viewing distance in post capture
10817976 · 2020-10-27

Systems and methods for modifying image distortion (curvature) for viewing distance in post capture. Presentation of imaging content on a content display device may be characterized by a presentation field of view (FOV). The presentation FOV may be configured based on the screen dimensions of the display device and the distance between the viewer and the screen. Imaging content may be obtained by an activity capture device characterized by a wide capture field of view lens (e.g., fish-eye). Images may be transformed into rectilinear representation for viewing. When viewing images using a presentation FOV that may be narrower than the capture FOV, transformed rectilinear images may appear distorted. A transformation operation may be configured to account for the presentation FOV-capture FOV mismatch. In some implementations, the transformation may include a fish-eye to rectilinear transformation characterized by a transformation strength that may be configured based on a ratio of the presentation FOV to the capture FOV.
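The FOV geometry and the strength ratio can be sketched as follows (a minimal illustration, assuming the strength is simply the clamped ratio of presentation to capture FOV; the exact formula is not given in the abstract):

```python
import math

def presentation_fov(screen_width, viewing_distance):
    """Horizontal presentation FOV (degrees) from the screen width and
    the viewer-to-screen distance, both in the same units."""
    return 2 * math.degrees(math.atan(screen_width / (2 * viewing_distance)))

def transform_strength(presentation_fov_deg, capture_fov_deg):
    """Strength of the fish-eye-to-rectilinear transformation as the
    presentation/capture FOV ratio, clamped to [0, 1]: full correction
    is applied only when the presentation FOV matches the capture FOV."""
    return max(0.0, min(1.0, presentation_fov_deg / capture_fov_deg))
```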