Patent classifications
G06T3/12
Systems and methods for modifying image distortion (curvature) for viewing distance in post capture
Systems and methods for modifying image distortion (curvature) for viewing distance in post capture. Presentation of imaging content on a content display device may be characterized by a presentation field of view (FOV). The presentation FOV may be configured based on the screen dimensions of the display device and the distance between the viewer and the screen. Imaging content may be obtained by an activity capture device characterized by a wide capture field of view lens (e.g., fish-eye). Images may be transformed into a rectilinear representation for viewing. When images are viewed using a presentation FOV that may be narrower than the capture FOV, transformed rectilinear images may appear distorted. A transformation operation may be configured to account for the mismatch between the presentation FOV and the capture FOV. In some implementations, the transformation may include a fish-eye-to-rectilinear transformation characterized by a transformation strength that may be configured based on a ratio of the presentation FOV to the capture FOV.
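The strength-scaled transformation described above can be sketched as a one-dimensional radius remap. The function name, the equidistant fisheye assumption, and the blend-by-ratio scheme below are illustrative; the abstract only states that the transformation strength depends on the ratio of presentation FOV to capture FOV:

```python
import math

def remap_radius(r_fish, capture_fov_deg, presentation_fov_deg):
    """Map a normalized fisheye radius (0..1) to a rectilinear radius.

    Illustrative sketch: the transformation strength scales with the
    ratio of presentation FOV to capture FOV, so narrower presentations
    receive a weaker rectilinear correction.
    """
    strength = presentation_fov_deg / capture_fov_deg  # in (0, 1]
    half_fov = math.radians(capture_fov_deg) / 2.0
    # Equidistant fisheye model: viewing angle grows linearly with radius.
    theta = r_fish * half_fov
    # Full rectilinear correction would map theta -> tan(theta);
    # scale the angle by `strength` to soften the correction, then
    # renormalize so that r_fish = 1 still maps to the image edge.
    return math.tan(theta * strength) / math.tan(half_fov * strength)
```

With a 120-degree capture FOV presented at 60 degrees, the edge stays fixed while interior radii are compressed toward the center, which is the qualitative effect of a partial rectilinear correction.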
Methods, systems, and devices for adjusting image content for streaming panoramic video content
Aspects of the subject disclosure may include, for example, obtaining image content over a communication network, determining a predicted viewpoint of a user associated with the image content, and adjusting the image content to equirectangular image content according to the predicted viewpoint. Further aspects can include downscaling the equirectangular image content according to a display capability of a mobile device resulting in a downscaled equirectangular image content, cropping the downscaled equirectangular image content resulting in a cropped equirectangular image content, and providing, over the communication network, the cropped equirectangular image content to the mobile device. Other embodiments are disclosed.
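The viewpoint-based cropping step might look like the following sketch, which computes a pixel rectangle on an equirectangular frame centered on the predicted viewpoint. All names and the clamping behavior are assumptions; a real implementation would also wrap the crop across the 360-degree seam:

```python
def crop_equirect(width, height, yaw_deg, pitch_deg, fov_h_deg, fov_v_deg):
    """Return (left, top, right, bottom) pixel bounds of a crop of an
    equirectangular frame centered on a predicted viewpoint.

    Simplified sketch: vertical bounds are clamped to the frame, and
    horizontal wrap-around at the 360-degree seam is not handled.
    """
    cx = (yaw_deg % 360.0) / 360.0 * width      # yaw 0..360 -> full width
    cy = (90.0 - pitch_deg) / 180.0 * height    # pitch +90 (up) -> row 0
    w = fov_h_deg / 360.0 * width
    h = fov_v_deg / 180.0 * height
    left = int(round(cx - w / 2))
    top = int(round(max(0.0, cy - h / 2)))
    right = int(round(cx + w / 2))
    bottom = int(round(min(float(height), cy + h / 2)))
    return left, top, right, bottom
```

The downscaling step mentioned in the abstract would precede this, shrinking the full frame to match the mobile device's display capability before the crop is taken.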
Image processing for 360-degree camera
In some implementations, a 360-degree camera includes two wide-angle lenses that provide a spherical view of a scene. The 360-degree camera is configured to be connected to a computing device for rendering the captured images. The user interface can present several camera views simultaneously, where each camera view is operated independently of the others, such that the user may change a view during display by changing its orientation. A plurality of rendering processes is executed in parallel to provide the data for each of the views, and the output from each process is combined for presentation on the display. Additionally, a plurality of multiple-view layouts is provided, such as a circular view inside a rectangular view, a view that splits the display into equal separate views, and three or four independent camera views displayed simultaneously.
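The "equal separate views" layout mentioned above can be computed as a list of simple rectangles; this helper is purely illustrative and not the patent's actual interface:

```python
def split_layout(width, height, n_views):
    """Partition a display of width x height pixels into n equal
    side-by-side view rectangles (x, y, w, h), one per camera view.

    Illustrative sketch of the 'equal separate views' layout; each
    rectangle would be filled by its own independent rendering process.
    """
    w = width // n_views
    return [(i * w, 0, w, height) for i in range(n_views)]
```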
System and method for fisheye image processing
A system and method for fisheye image processing is disclosed. A particular embodiment can be configured to: receive fisheye image data from at least one fisheye lens camera associated with an autonomous vehicle, the fisheye image data representing at least one fisheye image frame; partition the fisheye image frame into a plurality of image portions representing portions of the fisheye image frame; warp each of the plurality of image portions to map an arc of a camera projected view into a line corresponding to a mapped target view, the mapped target view being generally orthogonal to a line between a camera center and a center of the arc of the camera projected view; combine the plurality of warped image portions to form a combined resulting fisheye image data set representing recovered or distortion-reduced fisheye image data corresponding to the fisheye image frame; generate auto-calibration data representing a correspondence between pixels in the at least one fisheye image frame and corresponding pixels in the combined resulting fisheye image data set; and provide the combined resulting fisheye image data set as an output for other autonomous vehicle subsystems.
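The arc-to-line warp in the claim can be illustrated with a one-dimensional lookup table relating pixels on the flat target line to arc-length positions on the camera's projected arc. The equidistant-arc assumption and all names below are illustrative, since the abstract specifies only that an arc is mapped into a line orthogonal to the camera-center-to-arc-center direction:

```python
import math

def arc_to_line_lut(line_width, focal_px, arc_radius_px):
    """Build a 1-D lookup table: for each pixel x on the flat target
    line, the arc-length position on the projected arc that should be
    sampled.  Sketch only; assumes an equidistant arc parameterization.
    """
    half = line_width // 2
    lut = []
    for x in range(-half, half):
        phi = math.atan2(x, focal_px)       # viewing angle to this target pixel
        lut.append(phi * arc_radius_px)     # arc-length position on the arc
    return lut
```

A full implementation would apply such a mapping per image portion and per row, then blend the warped portions into the combined, distortion-reduced result the claim describes.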
Method and device for performing mapping on spherical panoramic image
Disclosed are a method and device for image pasting on a spherical panoramic image. The method may comprise: establishing a spherical coordinate system for a first spherical panoramic image; mapping the first spherical panoramic image onto a spherical surface to obtain a spherical projection image of the first spherical panoramic image; determining a first image pasting area in the spherical projection image according to a selection by the user and transforming the corresponding image into a plane image; transforming the image to be pasted into the shape of the plane image; mapping the transformed image to be pasted to the spherical coordinate system; rotating the mapped image to be pasted to a position at which it overlaps the first image pasting area; transforming the rotated image to be pasted into a second spherical panoramic image; determining a second image pasting area corresponding to the first image pasting area; and interpolating the second spherical panoramic image into the second image pasting area to complete the pasting. Distortion-free image pasting on a spherical panoramic image may thus be realized.
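The mapping between a spherical panoramic image and the spherical coordinate system rests on the standard equirectangular-to-sphere relation; the axis convention chosen below is an assumption:

```python
import math

def equirect_to_sphere(u, v):
    """Map normalized equirectangular coordinates (u, v) in [0, 1]
    to a unit-sphere point (x, y, z).

    Convention (assumed): u spans longitude -pi..pi, v spans latitude
    from +pi/2 at the top row to -pi/2 at the bottom row, z is up.
    """
    lon = (u - 0.5) * 2.0 * math.pi
    lat = (0.5 - v) * math.pi
    x = math.cos(lat) * math.cos(lon)
    y = math.cos(lat) * math.sin(lon)
    z = math.sin(lat)
    return x, y, z
```

Rotating the pasted image on the sphere then amounts to applying a 3-D rotation to these points before converting back to panorama coordinates.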
Method of acquiring optimized spherical image using multiple cameras
Provided is a method of acquiring an optimized closed curved surface image using multiple cameras and, more particularly, an image providing method that produces a single image using a plurality of cameras fixed to a rig and having different capturing viewpoints. The method includes a projection operation of acquiring a single image by projecting the images acquired from the plurality of cameras onto a projection surface of a closed curved surface based on parameter information, a rendering operation of fitting the acquired single image into a quadrangular frame for each region through a non-uniform sampling process, and a viewing operation of mapping the sampled image to a viewing sphere.
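The non-uniform sampling process is not specified beyond its name; as one plausible sketch, a sine-eased spacing concentrates samples toward the two ends of a frame dimension while spreading them out in the middle. The easing curve is an assumption, not the patent's scheme:

```python
import math

def nonuniform_samples(n):
    """Return n sample positions in [0, 1] with sine-eased spacing:
    samples cluster near 0 and 1 and spread apart toward the center.

    Illustrative stand-in for the patent's non-uniform sampling when
    fitting the curved-surface image into a quadrangular frame.
    """
    return [0.5 * (1.0 - math.cos(math.pi * i / (n - 1))) for i in range(n)]
```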
METHOD, APPARATUS, AND RECORDING MEDIUM FOR PROCESSING IMAGE
A method of processing an image by a device includes obtaining one or more images, including captured images of objects in a target space; generating metadata including information about the mapping between the one or more images and a three-dimensional (3D) mesh model used to generate a virtual reality (VR) image of the target space; and transmitting the one or more images and the metadata to a terminal.
IMAGE PROCESSING APPARATUS AND METHOD
An image processing apparatus includes a conversion unit, a generation unit, and a processing unit. The conversion unit is configured to convert an original image received from a plurality of cameras into a top view image. The generation unit is configured to generate a surround view monitor (SVM) image by synthesizing a bottom image extracted from the top view image and a wall image extracted from the original image. The processing unit is configured to generate a modified SVM image by adjusting the area of the bottom image and the area of the wall image according to an adjustment condition, and to process the modified SVM image into a display signal.
CAPTURING AND TRANSFORMING WIDE-ANGLE VIDEO INFORMATION
Systems and methods for capturing and transforming wide-angle video information are disclosed. Exemplary implementations may: guide light to an image sensor by a fisheye lens; capture wide-angled video information having a horizontal angle-of-view of 200 degrees or more and a vertical angle-of-view of 180 degrees or more; select a portion of the captured wide-angled video information; transform the selected portion into a rectilinear projection that represents the video sequence; and transform the rectilinear projection into a viewable video sequence that has a format suitable for playback in a virtual reality headset.
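The rectilinear projection used in the transform step follows the standard r = f*tan(theta) relation; the square output image and the angle conventions below are assumptions made for illustration:

```python
import math

def rectilinear_project(theta, phi, fov_deg, size):
    """Project a viewing direction onto a square rectilinear image.

    theta: angle from the optical axis (radians); phi: angle around it.
    Sketch of the 'transform the selected portion into a rectilinear
    projection' step, using the standard pinhole relation r = f*tan(theta).
    """
    # Focal length in pixels chosen so the given FOV spans the image.
    f = (size / 2.0) / math.tan(math.radians(fov_deg) / 2.0)
    r = f * math.tan(theta)
    x = size / 2.0 + r * math.cos(phi)
    y = size / 2.0 + r * math.sin(phi)
    return x, y
```

Mapping every output pixel back through the inverse of this relation into the fisheye frame, then resampling, yields the viewable rectilinear sequence suitable for headset playback.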
CALIBRATING CAMERAS AND COMPUTING POINT PROJECTIONS USING NON-CENTRAL CAMERA MODEL INVOLVING AXIAL VIEWPOINT SHIFT
An example system for identification of three-dimensional points includes a receiver to receive coordinates of a two-dimensional point in an image, and a set of calibration parameters. The system also includes a 2D-to-3D point identifier to identify a three-dimensional point in a scene corresponding to the 2D point using the calibration parameters and a non-central camera model including an axial viewpoint shift function comprising a function of a radius of a projected point in an ideal image plane.
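The axial viewpoint shift can be sketched as sliding the ray origin along the optical axis by an amount that depends on the projected radius. The polynomial form and coefficient layout below are assumptions; the abstract says only that the shift is a function of the radius of the projected point in the ideal image plane:

```python
import math

def backproject_axial_shift(px, py, f, cx, cy, shift_coeffs):
    """Back-project an image point to a 3-D ray under a non-central
    model with an axial viewpoint shift.

    Sketch: the ray origin moves along the optical (z) axis by a
    polynomial in the projected radius r; shift_coeffs[i] multiplies r**i.
    """
    x = (px - cx) / f
    y = (py - cy) / f
    r = math.hypot(x, y)                      # radius in the ideal image plane
    shift = sum(c * r ** i for i, c in enumerate(shift_coeffs))
    origin = (0.0, 0.0, shift)                # viewpoint shifted along the axis
    direction = (x, y, 1.0)                   # un-normalized ray direction
    return origin, direction
```

In a central camera model the origin would always be (0, 0, 0); the radius-dependent shift is exactly what makes the model non-central.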