H04N13/189

METHODS AND SYSTEMS FOR CREATING VIRTUAL AND AUGMENTED REALITY

Configurations are disclosed for presenting virtual reality and augmented reality experiences to users. The system may comprise an image capturing device to capture one or more images, the one or more images corresponding to a field of view of a user of a head-mounted augmented reality device, and a processor communicatively coupled to the image capturing device to extract a set of map points from the one or more images, to identify a set of sparse points and a set of dense points from the extracted set of map points, and to perform a normalization on the set of map points.
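The abstract does not specify how the map points are normalized. A common choice in multi-view geometry, assumed here purely for illustration, is Hartley-style normalization: translate the point set so its centroid is at the origin, then scale so the mean distance from the origin is √2. A minimal sketch:

```python
import math

def normalize_map_points(points):
    """Hartley-style normalization of 2D map points: shift the
    centroid to the origin, then scale so the mean distance of
    the points from the origin is sqrt(2)."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    shifted = [(x - cx, y - cy) for x, y in points]
    mean_dist = sum(math.hypot(x, y) for x, y in shifted) / n
    scale = math.sqrt(2) / mean_dist
    return [(x * scale, y * scale) for x, y in shifted]
```

Normalizing extracted points this way is a standard conditioning step before later geometric estimation (e.g. fitting a fundamental matrix), though the patent itself does not name a specific scheme.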

THREE-DIMENSIONAL SPACE CAMERA AND PHOTOGRAPHING METHOD THEREFOR

A 3D camera and a photographing method thereof are provided. The 3D camera includes an image 1 photographing unit and an image 2 photographing unit in the same optical system, and a processing system that processes data from the two photographing units. The processing system includes a control unit that controls image photographing by the image 1 and image 2 photographing units, a recording and storage unit that processes data from the control unit, and a 3D coordinate calculation unit that performs calculations on data from the recording and storage unit. The 3D coordinates of an object point in 3D space, or of all space object points constituting a 3D object directly facing the camera, are calculated by simultaneously photographing one or more pairs of images of those points.
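The abstract leaves the coordinate calculation unstated. For two photographing units separated by a known baseline with parallel optical axes, the standard parallel-stereo triangulation is a reasonable sketch of how 3D coordinates could be recovered from a simultaneously captured image pair (the focal length and baseline values below are illustrative, not from the patent):

```python
def triangulate_point(x_left, y_left, x_right, focal_px, baseline):
    """Recover 3D coordinates (X, Y, Z) of an object point from its
    pixel coordinates in a simultaneously captured stereo pair,
    assuming parallel optical axes and a horizontal baseline."""
    disparity = x_left - x_right          # horizontal pixel offset between views
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    z = focal_px * baseline / disparity   # depth from similar triangles
    x = z * x_left / focal_px
    y = z * y_left / focal_px
    return x, y, z
```

For example, with a focal length of 800 px, a 0.1 m baseline, and a 20 px disparity, the point lies 4 m in front of the camera pair.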

APPARATUS AND METHOD FOR PROVIDING SPLIT-RENDERED VIRTUAL REALITY IMAGE

An edge server for providing a virtual reality (VR) image is proposed. The server may include a rendering synchronization unit that synchronizes a visual field and a margin with a virtual reality device. The server may also include a rendering unit that generates a rendered image by rendering a visual field area corresponding to the visual field and a margin area corresponding to the margin, based on a rotation center in the entire virtual reality image. The server may further include an encoding unit that generates a reduced margin area by dividing the resolution of the margin area by a scaling factor, and that encodes the visual field area and the reduced margin area to generate a split virtual reality image including both encoded areas. The server may further include a streaming transmission unit that transmits the split virtual reality image to the virtual reality device.
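As a toy illustration of the split described above, simple subsampling stands in for the real resolution scaling and encoding, and the side-by-side frame layout is an assumption, not taken from the patent:

```python
def split_frame(frame, fov_width, scale_factor):
    """Split a rendered frame (a list of pixel rows) into a
    full-resolution visual-field area and a margin area whose
    resolution is divided by scale_factor in both dimensions."""
    fov_area = [row[:fov_width] for row in frame]
    # Subsample every scale_factor-th row and column of the margin.
    reduced_margin = [row[fov_width::scale_factor] for row in frame[::scale_factor]]
    return fov_area, reduced_margin
```

With an 8x8 frame, a visual-field width of 4, and a scaling factor of 2, the visual field stays 8x4 while the margin shrinks to 4x2, cutting the bits spent on peripheral content the user is less likely to notice.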

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM

An information processing apparatus generates a virtual viewpoint video based on a virtual viewpoint, using a motion picture obtained by imaging an imaging region with a plurality of imaging devices. The information processing apparatus displays, on a display, a standard image corresponding to the imaging region, a plurality of virtual viewpoint paths disposed in the standard image and representing trajectories of movement of the virtual viewpoint, an indicator indicating a reproduction position of the virtual viewpoint video, and a reference image based on the virtual viewpoint image, among the plurality of virtual viewpoint images constituting the virtual viewpoint video, viewed from the virtual viewpoint corresponding to the reproduction position on the virtual viewpoint path.
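One way to read the indicator/path relationship above: the reproduction position selects a point along the viewpoint path. A sketch of mapping a normalized reproduction position onto a piecewise-linear path (the 2D path representation and normalization to [0, 1] are assumptions for illustration):

```python
import math

def viewpoint_at(path, t):
    """Return the virtual-viewpoint position for a reproduction
    position t in [0, 1] along a piecewise-linear path of 2D points."""
    lengths = [math.dist(a, b) for a, b in zip(path, path[1:])]
    target = t * sum(lengths)
    for (x0, y0), (x1, y1), seg in zip(path, path[1:], lengths):
        if target <= seg:
            u = target / seg if seg else 0.0
            return (x0 + u * (x1 - x0), y0 + u * (y1 - y0))
        target -= seg
    return path[-1]  # t == 1.0 (or numerical leftover): end of path
```

Dragging the indicator then amounts to re-evaluating this mapping and rendering the virtual viewpoint image at the returned position.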

Methods for controlling scene, camera and viewing parameters for altering perception of 3D imagery

Mathematical relationships between the scene geometry, camera parameters, and viewing environment are used to control stereography to obtain various results influencing the viewer's perception of 3D imagery. The methods may include setting a horizontal shift, convergence distance, and camera interaxial parameter to achieve various effects. The methods may be implemented in a computer-implemented tool for interactively modifying scene parameters during a 2D-to-3D conversion process, which may then trigger the re-rendering of the 3D content on the fly.
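The kind of relationship the abstract alludes to can be sketched as follows (the shifted-sensor model and sign convention are my assumptions, not taken from the patent): for interaxial separation t, convergence distance C, and focal length f in pixels, a point at depth Z lands at screen disparity d = f·t·(1/C − 1/Z), so points at Z = C sit exactly on the screen plane.

```python
def screen_disparity(depth, interaxial, convergence, focal_px):
    """Horizontal screen disparity (in pixels) of a point at the
    given depth for a shifted-sensor stereo pair: zero at the
    convergence distance, negative (in front of the screen) for
    nearer points, positive (behind the screen) for farther ones."""
    return focal_px * interaxial * (1.0 / convergence - 1.0 / depth)
```

Under this model, adjusting the interaxial scales all disparities linearly (deepening or flattening the scene), while changing the convergence distance, or applying an equivalent horizontal image shift, slides the whole scene through the screen plane, which matches the kinds of perceptual controls the abstract describes.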

COMPOSITING NON-IMMERSIVE MEDIA CONTENT TO GENERATE AN ADAPTABLE IMMERSIVE CONTENT METAVERSE

In one example, a method performed by a processing system including at least one processor includes acquiring a first item of media content from a user, where the first item of media content depicts a subject, acquiring a second item of media content, where the second item of media content depicts the subject, compositing the first item of media content and the second item of media content to create, within a metaverse of immersive content, an item of immersive content that depicts the subject, presenting the item of immersive content on a device operated by the user, and adapting the presenting of the item of immersive content in response to a choice made by the user.
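The compositing step itself is left abstract; as a minimal stand-in, the classic Porter-Duff "over" operator blends a pixel from one item of media content over the corresponding pixel of another (assumed here for illustration, not specified by the patent):

```python
def composite_over(fg, bg, alpha):
    """Blend a foreground RGB pixel over an opaque background RGB
    pixel with the given foreground opacity (Porter-Duff 'over')."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must be in [0, 1]")
    return tuple(f * alpha + b * (1.0 - alpha) for f, b in zip(fg, bg))
```

A full pipeline would also need to align the two depictions of the subject spatially and temporally before blending; this sketch only shows the per-pixel combination at the end of that pipeline.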
