H04N2013/0085

Multiplexed multi-view scanning aerial cameras
11831856 · 2023-11-28 · ·

A scanning camera for capturing images along two or more curved scan paths, the scanning camera comprising a camera assembly associated with each scan path, each camera assembly comprising an image sensor and a lens; a scanning mirror; and a drive coupled to the scanning mirror; wherein the drive is operative to rotate the scanning mirror about a spin axis according to a spin angle; the spin axis is tilted relative to each camera optical axis; the scanning mirror is tilted relative to the spin axis and each camera optical axis; the scanning mirror is positioned to reflect an imaging beam into each lens in turn; and each image sensor is operative to capture each image along a respective one of the scan paths by sampling the imaging beam at a corresponding spin angle.
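The core geometry here is that spinning a tilted mirror sweeps a reflected imaging beam along a curved path. A minimal sketch of that geometry, not the patented design (the tilt angle, axis orientation, and beam direction below are illustrative assumptions):

```python
import math

def rotate_z(v, angle):
    """Rotate vector v about the z (spin) axis by angle radians."""
    x, y, z = v
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y, z)

def reflect(d, n):
    """Reflect direction d off a planar mirror with unit normal n."""
    dot = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2 * dot * b for a, b in zip(d, n))

# Mirror normal tilted 45 degrees from the spin (z) axis at spin angle 0.
tilt = math.radians(45.0)
n0 = (math.sin(tilt), 0.0, math.cos(tilt))

# An imaging beam arriving along -z.
beam = (0.0, 0.0, -1.0)

# Sweeping the spin angle sweeps the reflected beam along a curved path;
# each camera assembly samples the beam at its own spin angle.
for spin_deg in (0, 30, 60):
    n = rotate_z(n0, math.radians(spin_deg))
    print(spin_deg, reflect(beam, n))
```

At spin angle 0 the 45-degree mirror folds the vertical beam into the horizontal plane; intermediate spin angles trace the curved scan path.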

METHOD AND APPARATUS OF ENCODING/DECODING IMAGE DATA BASED ON TREE STRUCTURE-BASED BLOCK DIVISION
20230224497 · 2023-07-13 · ·

Disclosed are methods and apparatuses for image data encoding/decoding. A method of decoding an image includes receiving a bitstream in which the image is encoded; obtaining index information for specifying a block division type of a current block in the image; and determining the block division type of the current block from a candidate group pre-defined in the decoding apparatus. The candidate group includes a plurality of candidate division types, including at least one of a non-division, a first quad-division, a second quad-division, a binary-division or a triple-division. The method also includes dividing the current block into a plurality of sub-blocks; and decoding each of the sub-blocks with reference to syntax information obtained from the bitstream.
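The decoding flow above selects one division type for the current block from a pre-defined candidate group via a parsed index, then splits the block accordingly. A minimal sketch under assumed type names and layouts (the actual candidate group, signaling, and split ratios are defined by the codec specification; the 1:2:1 triple split below is one common convention):

```python
# Hypothetical division-type names for illustration only.
CANDIDATES = ["none", "quad", "binary_horizontal", "binary_vertical",
              "triple_horizontal", "triple_vertical"]

def divide(x, y, w, h, division):
    """Return sub-block rectangles (x, y, w, h) for a division type."""
    if division == "none":
        return [(x, y, w, h)]
    if division == "quad":
        hw, hh = w // 2, h // 2
        return [(x, y, hw, hh), (x + hw, y, hw, hh),
                (x, y + hh, hw, hh), (x + hw, y + hh, hw, hh)]
    if division == "binary_vertical":
        hw = w // 2
        return [(x, y, hw, h), (x + hw, y, hw, h)]
    if division == "binary_horizontal":
        hh = h // 2
        return [(x, y, w, hh), (x, y + hh, w, hh)]
    if division == "triple_vertical":      # assumed 1:2:1 split
        q = w // 4
        return [(x, y, q, h), (x + q, y, 2 * q, h), (x + 3 * q, y, q, h)]
    if division == "triple_horizontal":    # assumed 1:2:1 split
        q = h // 4
        return [(x, y, w, q), (x, y + q, w, 2 * q), (x, y + 3 * q, w, q)]
    raise ValueError(division)

# The index obtained from the bitstream selects a candidate; each resulting
# sub-block is then decoded (and possibly divided further, forming a tree).
index = 5
sub_blocks = divide(0, 0, 64, 64, CANDIDATES[index])
print(sub_blocks)
```

Applying the division recursively to each sub-block yields the tree-structured partitioning the title refers to.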

Virtual reality environment based manipulation of multi-layered multi-view interactive digital media representations

Various embodiments of the present disclosure relate generally to systems and methods for generating multi-view interactive digital media representations in a virtual reality environment. According to particular embodiments, a plurality of images is fused into a first content model and a first context model, both of which include multi-view interactive digital media representations of objects. Next, a virtual reality environment is generated using the first content model and the first context model. The virtual reality environment includes a first layer and a second layer. The user can navigate through and within the virtual reality environment to switch between multiple viewpoints of the content model via corresponding physical movements. The first layer includes the first content model, the second layer includes a second content model, and selection of the first layer provides access to the second layer with the second content model.

POSITION ESTIMATING METHOD, POSITION ESTIMATING SYSTEM, AND POSITION ESTIMATING APPARATUS
20220284599 · 2022-09-08 · ·

In order to appropriately estimate the position of a moving object on the basis of a captured image, a position estimating apparatus 100 includes an obtaining section 110 configured to obtain image coordinates representing the image position of a moving object 300 in an image captured by an imaging apparatus 200, and a converting section 130 configured to convert the image coordinates of the moving object 300 to real space coordinates of the moving object 300, based on information obtained from a correspondence between real space coordinate information indicating a predetermined point in a real space and image coordinate information representing the predetermined point.

Method and apparatus for buffer management in cloud based virtual reality services

Provided is a method for creating virtual reality content, storing it in a transmission buffer, and thereafter managing the transmission buffer. A server creates the virtual reality content based on the user's motion information, stores it in the transmission buffer, and is allowed to modify the content stored in the transmission buffer based on subsequently received motion information, so that the most recent user motion is appropriately reflected in the virtual reality content. This makes it possible to provide a more immersive virtual reality service.
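The key idea is that a frame sitting unsent in the transmission buffer may be replaced when newer motion information arrives. A minimal sketch, assuming a single-slot buffer and a stand-in `render` function (both hypothetical simplifications of the patented scheme):

```python
import threading

class TransmissionBuffer:
    """Single-slot buffer: a newer frame overwrites a not-yet-sent one,
    so the most recent user motion is always what gets transmitted."""
    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None

    def put(self, frame):
        with self._lock:
            self._frame = frame  # overwrite any stale, unsent frame

    def take(self):
        with self._lock:
            frame, self._frame = self._frame, None
            return frame

def render(motion):
    # Stand-in for rendering VR content from head-pose/motion input.
    return f"frame@{motion}"

buf = TransmissionBuffer()
buf.put(render("pose_t0"))
buf.put(render("pose_t1"))   # newer motion arrives before transmission
print(buf.take())            # -> frame@pose_t1
```

Overwriting rather than queueing trades dropped frames for lower motion-to-photon latency, which is what makes the service feel more immersive.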

Glasses-free determination of absolute motion

During operation, an electronic device may capture images using multiple image sensors having different fields of view and positions. Then, the electronic device may determine, based at least in part on an apparent size of an anatomical feature in the images (such as an interpupillary distance) and a predefined or predetermined size of the anatomical feature, absolute motion of at least a portion of the individual along a direction between at least the portion of the individual and the electronic device. Moreover, the electronic device may compute, based at least in part on an estimated distance along the direction corresponding to the apparent size and the predefined or predetermined size, and on angular information associated with one or more objects in the images relative to the positions, absolute motion of at least the portion of the individual in a plane that is perpendicular to the direction.
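The distance estimate behind this rests on the pinhole-camera relation: a feature of known physical size S that appears s pixels wide at focal length f (in pixels) lies at depth Z = f·S/s. A minimal sketch, with an assumed focal length and a population-average interpupillary distance of 63 mm:

```python
def distance_from_ipd(focal_px, real_ipd_m, apparent_ipd_px):
    """Pinhole model: Z = f * S / s, with focal length f in pixels,
    true interpupillary distance S in metres, apparent size s in pixels."""
    return focal_px * real_ipd_m / apparent_ipd_px

# Assumed camera intrinsics and anatomical size (illustrative values).
f_px = 1400.0
ipd_m = 0.063

z0 = distance_from_ipd(f_px, ipd_m, 120.0)  # earlier frame
z1 = distance_from_ipd(f_px, ipd_m, 140.0)  # later frame: face looks larger
print(z1 - z0)  # negative: the user moved toward the device
```

Differencing the per-frame depth estimates gives absolute motion along the viewing direction; combining the depth with angular offsets of tracked features yields motion in the perpendicular plane.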

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, REPRODUCTION PROCESSING DEVICE, AND REPRODUCTION PROCESSING METHOD

An atlas processing unit generates atlas identification information that associates a texture image with its corresponding depth image and with post-decoding information for rendering. The texture image comprises a reference two-dimensional image for each projection direction, formed by projecting three-dimensional data from a predetermined viewpoint position in a plurality of projection directions, together with a complementary image used to generate, from the reference two-dimensional image, a moved two-dimensional image based on a viewpoint position moved within a limited range from the predetermined viewpoint position. The post-decoding information, one piece per reference and moved two-dimensional image, includes first post-decoding information indicating that it describes a "3DoF+" region of the texture image in which the complementary image is stored. An encoding unit encodes the texture image and the depth image to generate a texture layer and a depth layer.

Navigated surgical system with eye to XR headset display calibration
11382713 · 2022-07-12 · ·

A camera tracking system for computer assisted navigation during surgery operatively determines a first pose of a second extended-reality (XR) headset relative to stereo tracking cameras located on a first XR headset based on first tracking information from the stereo tracking cameras. The camera tracking system determines a second pose of the eyes of a user wearing the second XR headset relative to the stereo tracking cameras located on the first XR headset based on second tracking information from the stereo tracking cameras. The camera tracking system also calibrates an eye-to-display relationship, defining the pose of the eyes of the user wearing the second XR headset relative to a display device of the second XR headset, based on the determined first and second poses. The camera tracking system also controls where symbols are displayed on the display device of the second XR headset based on the eye-to-display relationship.
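The calibration step amounts to pose composition: given the display's pose and the eyes' pose, both expressed in the tracking cameras' frame, the eye-to-display relationship is the display pose inverted and composed with the eye pose. A minimal sketch with 4x4 rigid transforms (the pure-translation poses below are invented to keep the arithmetic readable; real poses include rotation):

```python
def matmul(A, B):
    """Multiply two 4x4 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def inv_rigid(T):
    """Invert a rigid (rotation + translation) 4x4 transform."""
    Rt = [[T[j][i] for j in range(3)] for i in range(3)]       # R transposed
    t = [T[i][3] for i in range(3)]
    mt = [-sum(Rt[i][j] * t[j] for j in range(3)) for i in range(3)]
    return [Rt[0] + [mt[0]], Rt[1] + [mt[1]], Rt[2] + [mt[2]],
            [0.0, 0.0, 0.0, 1.0]]

def translation(x, y, z):
    return [[1.0, 0.0, 0.0, x], [0.0, 1.0, 0.0, y],
            [0.0, 0.0, 1.0, z], [0.0, 0.0, 0.0, 1.0]]

# Poses measured by the first headset's stereo tracking cameras
# (assumed pure translations along the viewing axis, in metres).
T_cam_display = translation(0.0, 0.0, 0.50)   # second headset's display
T_cam_eyes    = translation(0.0, 0.0, 0.55)   # wearer's eyes

# Eye-to-display relationship: express the eyes in the display's frame.
T_display_eyes = matmul(inv_rigid(T_cam_display), T_cam_eyes)
print(T_display_eyes[2][3])  # eyes sit about 0.05 m behind the display plane
```

With this relationship fixed, the system can project symbols at display positions that line up correctly with the wearer's eye positions.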