AUTOMATED PANORAMIC IMAGE CONNECTION FOR AUTOPLAY WITH TRANSITIONS

Connecting images to generate a virtual tour is provided. The system receives images and metadata captured by a first client device, and detects features of the images to establish an order. The system identifies, based at least in part on the images and the metadata, a position for a virtual camera. The system connects, in the order, the images with the position of the virtual camera persisting across the images. The system generates a virtual tour from the connected images with a linear path along the persistent position of the virtual camera based on a configuration file comprising a constraint configured to disable branching along the linear path. The system delivers a viewer application that executes in a client application on a second client device. The system streams, to the viewer application, the virtual tour based at least in part on the configuration file.
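The linear, no-branching connection step described above can be sketched compactly. The following is a minimal illustration, not the patented implementation; the function name, the dictionary-based configuration, and the `disable_branching` key are all assumptions for demonstration.

```python
# Illustrative sketch: connect ordered images into a linear virtual tour with
# a persistent virtual-camera position, honoring a configuration constraint
# that disables branching. All names here are hypothetical.

def connect_images(images, camera_position, config):
    """Link images in order; each node has at most one successor (no branching)."""
    if not config.get("disable_branching", False):
        raise ValueError("this sketch only supports linear, non-branching tours")
    tour = []
    for i, image in enumerate(images):
        tour.append({
            "image": image,
            "camera": camera_position,  # the camera position persists across images
            "next": i + 1 if i + 1 < len(images) else None,  # single successor
        })
    return tour

config = {"disable_branching": True}
tour = connect_images(["img0.jpg", "img1.jpg", "img2.jpg"], (0.0, 1.6, 0.0), config)
```

A viewer application streaming this structure would simply follow each node's single `next` link, which is what makes the path linear.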

Encoding method, playing method and apparatus for image stabilization of panoramic video, and method for evaluating image stabilization algorithm
11627390 · 2023-04-11

An encoding method, a playing method, and an apparatus for image stabilization of panoramic video, together with a method for evaluating an image stabilization algorithm, are provided. The image stabilization method for panoramic video is applicable to an electronic apparatus that includes a processor. In the method, a plurality of image frames of a panoramic video are captured, and each image frame is transformed into a plurality of projection frames on the faces of a cubemap. Variations of the triaxial displacements and attitude angles between time-adjacent projection frames on each face are then calculated, smoothed, and recorded as movement information. While the panoramic video is played, it is corrected using the recorded movement information. This reduces the amount of calculation required to stabilize the captured video.
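The smoothing step in the abstract above can be illustrated with a short sketch. A 6-component vector per frame pair (triaxial displacement dx, dy, dz plus roll, pitch, yaw) and a simple moving-average filter are assumptions; the patent does not specify the smoothing kernel.

```python
# Hedged sketch of smoothing per-face motion variations: each entry is a
# 6-tuple (dx, dy, dz, roll, pitch, yaw) between time-adjacent projection
# frames; a centered moving average produces the recorded movement information.

def smooth_variations(variations, window=3):
    """Moving-average smoothing of a list of 6-component motion vectors."""
    smoothed = []
    for i in range(len(variations)):
        lo = max(0, i - window // 2)
        hi = min(len(variations), i + window // 2 + 1)
        block = variations[lo:hi]  # neighbors inside the window, clipped at ends
        smoothed.append(tuple(sum(v[k] for v in block) / len(block) for k in range(6)))
    return smoothed
```

Because the smoothed values are recorded once at encoding time, playback only needs to apply them, which matches the abstract's claim of reduced calculation during stabilization.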

Method for processing immersive video and method for producing immersive video

A method for processing an immersive video includes: performing pruning for view images; generating an atlas by packing a patch extracted as a result of the pruning; deriving an offset for the patch included in the atlas; and correcting pixel values in the patch using the derived offset.
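The final correction step lends itself to a minimal sketch. The additive form of the correction and the 8-bit sample range are assumptions for illustration, not the patent's exact formulation.

```python
# Illustrative sketch: apply a per-patch offset to the patch's pixel values,
# clamping to the valid sample range (8-bit assumed here).

def correct_patch(pixels, offset, bit_depth=8):
    """Add the derived offset to every sample in the patch, with clamping."""
    max_val = (1 << bit_depth) - 1
    return [[min(max(p + offset, 0), max_val) for p in row] for row in pixels]
```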

Systems and Methods to Perform 3D Localization of Target Objects in Point Cloud Data Using A Corresponding 2D Image

The present invention relates to systems and methods for performing 3D localization of target objects in point cloud data using a corresponding 2D image. According to an illustrative embodiment of the present disclosure, a target environment is imaged with a camera to generate a 2D panorama and scanned with a scanner to generate a 3D point cloud. The 2D panorama is mapped to the point cloud with a one-to-one grid map. The target objects are detected and localized in 2D before being mapped back to the 3D point cloud.
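Because the grid map is one-to-one (one 3D point per panorama pixel), lifting a 2D detection to 3D reduces to gathering the points under the detection box. The nested-list grid layout and the `None` marker for pixels with no scan return are assumptions in this sketch.

```python
# Hypothetical sketch: map a 2D detection box back to 3D via a one-to-one
# grid map, where grid[row][col] holds the (x, y, z) point for that pixel.

def localize_in_3d(grid, box):
    """box = (row_min, col_min, row_max, col_max), half-open ranges."""
    r0, c0, r1, c1 = box
    return [grid[r][c] for r in range(r0, r1) for c in range(c0, c1)
            if grid[r][c] is not None]  # skip pixels with no scan return
```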

HYBRID CUBEMAP PROJECTION FOR 360-DEGREE VIDEO CODING
20230199219 · 2023-06-22

A system, method, and/or instrumentality may be provided for coding a 360-degree video. A picture of the 360-degree video may be received. The picture may include one or more faces associated with one or more projection formats. A first projection format indication may be received that indicates a first projection format may be associated with a first face. A second projection format indication may be received that indicates a second projection format may be associated with a second face. Based on the first projection format, a first transform function associated with the first face may be determined. Based on the second projection format, a second transform function associated with the second face may be determined. At least one decoding process may be performed on the first face using the first transform function and/or at least one decoding process may be performed on the second face using the second transform function.
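The per-face selection of a transform function can be sketched briefly. The two formats shown, a plain cubemap identity mapping and an equi-angular-style arctangent remapping, are common examples of such transform functions, not necessarily the patent's exact definitions; the format indication strings are assumptions.

```python
# Illustrative per-face transform selection for a hybrid cubemap decoder:
# each face's signaled projection-format indication picks its transform.
import math

def cmp_transform(u):
    """Plain cubemap: identity mapping of the normalized coordinate."""
    return u

def eac_transform(u):
    """Equi-angular style remapping of a coordinate in [-1, 1]."""
    return (4.0 / math.pi) * math.atan(u)

TRANSFORMS = {"CMP": cmp_transform, "EAC": eac_transform}

def decode_face(face_samples, format_indication):
    transform = TRANSFORMS[format_indication]
    return [transform(u) for u in face_samples]
```

The design point is that the transform is resolved per face, so one picture may mix formats across its faces.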

Motion-based content navigation
09848130 · 2017-12-19

A set of sequential images is accessed. Measures of background stability across the set of images are determined, and the set of images is stabilized based on the determined measures. The images are cropped and stored in sequential order. A first image from the set of cropped images is displayed, and data indicating a change in orientation is received. Responsive to a determination that the change in orientation is associated with forward progress, an image after the first image in the set of sequential images is displayed. Responsive to a determination that the change in orientation is associated with backward progress, an image before the first image in the set of sequential images is displayed. The set of images can include a selected face of an individual and can be ordered chronologically, allowing a user to view older and younger images of the individual when navigating the set.
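The navigation rule reduces to a small step function. The sign convention mapping positive orientation change to forward progress, and the clamping at sequence boundaries, are assumptions in this sketch.

```python
# Minimal sketch of orientation-driven navigation through a sequential set:
# forward progress advances one image, backward progress steps back one,
# clamped at the ends of the sequence.

def navigate(index, delta_orientation, count):
    if delta_orientation > 0:   # change associated with forward progress
        return min(index + 1, count - 1)
    if delta_orientation < 0:   # change associated with backward progress
        return max(index - 1, 0)
    return index                # no meaningful change in orientation
```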

EXTENDED VIEWPORT USING MOBILE DISPLAY AGGREGATION TO ACCESS EXTRA MEDIA CONTENT

Aspects of the subject disclosure may include, for example, identifying a media content item having first and second portions adapted for presentation according to first and second viewports that facilitate access to an extended portion of the media content item otherwise inaccessible via the first viewport. First and second display devices are associated together, configuration parameters are determined, and a viewport configuration is identified for the first and second viewports according to the configuration parameters. The first and second portions of the media content item are received by the first and second display devices to facilitate first and second presentations of those portions. The first and second presentations, according to the viewport configuration, provide a collective display of the extended portion of the media content item. Other embodiments are disclosed.
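The partitioning implied by the viewport configuration can be sketched as a simple split of the extended content's width across the associated displays. The column-range representation and left-to-right ordering are assumptions; the patent's configuration parameters are richer than this.

```python
# Hypothetical sketch: divide an extended media frame's width between two
# (or more) associated display devices so their presentations collectively
# cover content inaccessible to a single viewport.

def split_for_viewports(frame_width, widths):
    """Return (start, end) column ranges for each viewport, left to right."""
    if sum(widths) > frame_width:
        raise ValueError("viewports exceed the extended content width")
    ranges, x = [], 0
    for w in widths:
        ranges.append((x, x + w))
        x += w
    return ranges
```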

METHOD, APPARATUS AND STREAM OF FORMATTING AN IMMERSIVE VIDEO FOR LEGACY AND IMMERSIVE RENDERING DEVICES
20170339440 · 2017-11-23

The present disclosure relates to methods, apparatus, and systems for generating, transmitting, and decoding a backward-compatible immersive video stream. The stream carries data representative of an immersive video, composed of a frame organized according to a layout having a first area encoded according to a rectangle mapping, a second area encoded according to a mapping transitional from the rectangle mapping to an immersive mapping, and a third area encoded according to the immersive mapping. In order to be backward compatible, the stream further includes first information representative of the size and the location of the first area within the video frame, and second information comprising at least the type of the selected layout, the field of view of the first part, the size of the second area within the video frame, and a reference direction.
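The backward-compatibility mechanism can be illustrated with a sketch of how a legacy renderer would use the first signaled information (size and location of the rectangle-mapped first area) to crop a conventionally viewable picture out of the composed frame. The nested-list frame representation is an assumption.

```python
# Illustrative sketch: a legacy device ignores the immersive areas and crops
# only the rectangle-mapped first area, using the signaled size and location.

def extract_first_area(frame, location, size):
    """location = (x, y) top-left corner; size = (width, height)."""
    x, y = location
    w, h = size
    return [row[x:x + w] for row in frame[y:y + h]]
```

An immersive renderer, by contrast, would use the second information (layout type, field of view, transition-area size, reference direction) to reconstruct the full mapping.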

METHOD AND APPARATUS OF ENCODING/DECODING IMAGE DATA BASED ON TREE STRUCTURE-BASED BLOCK DIVISION
20230171428 · 2023-06-01

Disclosed are methods and apparatuses for image data encoding/decoding. A method of decoding an image includes receiving a bitstream in which the image is encoded; obtaining index information for specifying a block division type of a current block in the image; and determining the block division type of the current block from a candidate group pre-defined in the decoding apparatus. The candidate group includes a plurality of candidate division types, including at least one of a non-division, a first quad-division, a second quad-division, a binary-division or a triple-division. The method also includes dividing the current block into a plurality of sub-blocks; and decoding each of the sub-blocks with reference to syntax information obtained from the bitstream.
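The index-to-division-type lookup at the core of this abstract can be sketched directly. The ordering of candidates within the pre-defined group is an assumption; the point is only that the signaled index selects from a group fixed in the decoder.

```python
# Minimal sketch: the index obtained from the bitstream selects a block
# division type from a candidate group pre-defined in the decoding apparatus.

CANDIDATE_GROUP = ["non-division", "first-quad", "second-quad", "binary", "triple"]

def division_type(index):
    if not 0 <= index < len(CANDIDATE_GROUP):
        raise ValueError("index outside the pre-defined candidate group")
    return CANDIDATE_GROUP[index]
```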

Video processing method for remapping sample locations in projection-based frame with hemisphere cubemap projection layout to locations on sphere and associated video processing apparatus
11263722 · 2022-03-01

A video processing method includes: decoding a part of a bitstream to generate a decoded frame, where the decoded frame is a projection-based frame that includes projection faces in a hemisphere cubemap projection layout; and remapping sample locations of the projection-based frame to locations on the sphere, where a sample location within the projection-based frame is converted into a local sample location within a projection face packed in the projection-based frame; in response to adjustment criteria being met, an adjusted local sample location within the projection face is generated by applying adjustment to one coordinate value of the local sample location within the projection face, and the adjusted local sample location within the projection face is remapped to a location on the sphere; and in response to the adjustment criteria not being met, the local sample location within the projection face is remapped to a location on the sphere.
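The conditional adjustment described above can be sketched as follows. The specific adjustment criterion (proximity of the local sample to a face border), the clamping-style adjustment of one coordinate, and the placeholder face-to-plane normalization are all illustrative assumptions, not the patent's definitions.

```python
# Hedged sketch: a local sample location within a packed projection face is
# conditionally adjusted in one coordinate before being remapped toward the
# sphere. The criterion and the mapping below are placeholders.

def to_sphere(u, v, face_size):
    """Placeholder remap: normalize face coordinates to [-1, 1] on the face plane."""
    s = 2.0 * u / (face_size - 1) - 1.0
    t = 2.0 * v / (face_size - 1) - 1.0
    return (s, t)

def remap_local_sample(u, v, face_size, edge_margin=0.5):
    # adjustment criteria (assumed): the sample lies within edge_margin of a border
    if u < edge_margin or u > face_size - 1 - edge_margin:
        u = min(max(u, edge_margin), face_size - 1 - edge_margin)  # adjust one coordinate
    return to_sphere(u, v, face_size)
```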