G06T3/16

TRANSMISSION APPARATUS, TRANSMISSION METHOD, RECEPTION APPARATUS, AND RECEPTION METHOD
20200294188 · 2020-09-17

Display performance in VR reproduction is improved.

Encoded streams corresponding to the divided regions (partitions) of a wide viewing angle image are transmitted together with information on the number of pixels and the frame rate of each divided region. On the reception side, the number of divided regions to be decoded for the display region can easily be set to the decodable maximum on the basis of the decoding capacity and the per-region pixel-count and frame-rate information. The frequency of switching the encoded stream as the display region moves can therefore be minimized, improving display performance in VR reproduction.
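The reception-side calculation described above can be sketched as a simple capacity check: the decoder budget in pixels per second divided by the per-partition load. The 4K decoder capacity and partition geometry below are illustrative assumptions, not values from the abstract.

```python
def max_decodable_partitions(decoder_capacity_pps, partition_pixels, frame_rate):
    """Return the maximum number of equally sized partitions that a decoder
    with the given capacity (pixels per second) can decode simultaneously."""
    load_per_partition = partition_pixels * frame_rate  # pixels/second per partition
    return int(decoder_capacity_pps // load_per_partition)

# Example: a 4K@60 decoder budget with 960x1080 partitions at 30 fps.
capacity = 3840 * 2160 * 60
print(max_decodable_partitions(capacity, 960 * 1080, 30))  # → 16
```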

PRECISE 360-DEGREE IMAGE PRODUCING METHOD AND APPARATUS USING ACTUAL DEPTH INFORMATION

The 360-degree image producing method according to an embodiment of the present invention includes: an information receiving step of receiving 360-degree image producing information including a plurality of camera images, pose information, position information, depth information, a camera model, and a 360-degree model; a target selecting step of selecting, using the position information, the 360-degree model, and the depth information, a depth information point corresponding to a target pixel included in the 360-degree image from among a plurality of points included in the depth information; an image pixel value acquiring step of acquiring a pixel value of a pixel of a camera image corresponding to the depth information point from among the plurality of camera images; and a target pixel constructing step of constructing a pixel value of the target pixel from the acquired pixel value.
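The acquiring and constructing steps amount to projecting the selected depth point into a camera and reading off the pixel. A minimal sketch follows, assuming a simplified pinhole camera model (the abstract's camera model is not specified); `project_point_to_camera` and its `R`, `t`, `K` parameters are hypothetical names for illustration.

```python
import numpy as np

def project_point_to_camera(point_w, R, t, K):
    """Project a 3D world point into a pinhole camera (rotation R,
    translation t, intrinsics K); return pixel coords or None if behind."""
    p_cam = R @ point_w + t
    if p_cam[2] <= 0:
        return None  # point is behind the camera
    uvw = K @ p_cam
    return int(round(uvw[0] / uvw[2])), int(round(uvw[1] / uvw[2]))

def construct_target_pixel(depth_point, cameras, images):
    """Take the pixel value from the first camera that sees the depth point
    (a stand-in for the abstract's camera-selection step)."""
    for (R, t, K), img in zip(cameras, images):
        uv = project_point_to_camera(depth_point, R, t, K)
        if uv is not None:
            u, v = uv
            h, w = img.shape[:2]
            if 0 <= u < w and 0 <= v < h:
                return img[v, u]
    return None
```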

System and method for multimodal, motion-aware radar imaging

A radar imaging system for reconstructing a radar reflectivity image of a scene including an object moving within the scene includes an optical sensor that tracks the object over a period of time including multiple time steps to produce, for each time step, a deformation of a nominal shape of the object, and an electromagnetic sensor that acquires snapshots of the scene over the multiple time steps to produce a set of radar reflectivity images of the object with deformed shapes defined by the corresponding deformations of the nominal shape. The system also includes a processor configured to determine, for each time step, using the deformation determined for that time step, a transformation between the radar reflectivity image of the object acquired by the electromagnetic sensor at that time step and a radar reflectivity image of the object in a prototypical pose, and to combine the radar reflectivity images with deformed shapes, transformed with the corresponding transformations, to produce the radar reflectivity image of the object in the prototypical pose.
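The combining step can be illustrated by undoing a per-snapshot transformation and averaging the aligned images. A pure translation is used here as a stand-in for the deformation-derived transformation, which the abstract leaves general.

```python
import numpy as np

def combine_snapshots(snapshots, shifts):
    """Undo a per-snapshot translation (hypothetical simplification of the
    deformation-derived transformation) and average the aligned radar
    reflectivity images into one image in the prototypical pose."""
    aligned = [np.roll(img, -s, axis=0) for img, s in zip(snapshots, shifts)]
    return np.mean(aligned, axis=0)
```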

Video data processing method and apparatus

Example video data processing methods and apparatus are disclosed. One example method includes obtaining viewport information by a server. The server obtains spatial object information based on the viewport information, where the spatial object information is used to describe a specified spatial object in panoramic space. The server obtains a first bitstream that is obtained by encoding image data in the specified spatial object. The server obtains a second bitstream that is obtained by encoding image data in the panoramic space. The server transmits the first bitstream and the second bitstream to a client.
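A minimal sketch of the server-side selection: return the high-quality bitstream covering the viewport's spatial object together with the panoramic bitstream. The yaw-range tile layout and stream identifiers are assumptions for illustration; the abstract does not describe how spatial objects are keyed.

```python
def select_streams(viewport_yaw, tile_streams, panorama_stream):
    """Pick the bitstream whose spatial object covers the viewport yaw
    (tiles keyed by [yaw_start, yaw_end) ranges in degrees, a hypothetical
    layout), plus the panoramic bitstream as a fallback for the full space."""
    for (yaw_start, yaw_end), stream in tile_streams.items():
        if yaw_start <= viewport_yaw < yaw_end:
            return stream, panorama_stream
    return panorama_stream, panorama_stream
```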

Method, apparatus and electronic device for displaying an image and storage medium

Embodiments of the present application provide a method, an apparatus, an electronic device for displaying an image, and a storage medium. The method and apparatus are applied to an electronic device. The method comprises: determining a display angle of a 3D wallpaper containing elements, wherein the 3D wallpaper is obtained by pasting an overall spherical panoramic image containing all the elements onto a 3D model; determining a graphic to be displayed corresponding to the display angle in the overall spherical panoramic image; and rendering and displaying the graphic to be displayed. In these embodiments, when the graphic corresponding to the display angle of the 3D wallpaper is rendered, operations on unnecessary occluded parts are avoided, reducing the amount of computation in displaying the 3D wallpaper and saving computing resources.
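Determining the graphic to be displayed for a display angle can be sketched as mapping the angle and field of view to a column range of an equirectangular panorama, so only those columns are rendered. The equirectangular layout is an assumption; the abstract only states that a spherical panorama is pasted onto a 3D model.

```python
def visible_columns(display_yaw_deg, fov_deg, image_width):
    """Map a display yaw angle and horizontal FOV to the column range of an
    equirectangular panorama that needs rendering. If left > right, the
    visible region wraps around the panorama seam."""
    def col(yaw):
        return int((yaw % 360) / 360 * image_width)
    left = col(display_yaw_deg - fov_deg / 2)
    right = col(display_yaw_deg + fov_deg / 2)
    return left, right

print(visible_columns(180, 90, 3600))  # → (1350, 2250)
```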

Method and apparatus for managing immersive data

Provided are a method and an apparatus for managing immersive data in an immersive system. The method includes: generating a truncated three-dimensional (3D) geometry including a truncated plane corresponding to a field of view (FOV) of a user, obtaining the immersive data comprising a plurality of frames based on the FOV of the user, mapping a frame from among the plurality of frames onto the truncated plane formed according to the FOV of the user, and projecting the frame onto the FOV of the user.
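The size of the truncated plane follows directly from the user's FOV and its distance from the viewpoint, via the usual frustum geometry. A small sketch under that assumption (the abstract does not give the formula):

```python
import math

def truncated_plane_size(fov_h_deg, fov_v_deg, distance):
    """Width and height of the truncated (near) plane of a viewing frustum
    at the given distance, derived from horizontal and vertical FOV."""
    width = 2 * distance * math.tan(math.radians(fov_h_deg) / 2)
    height = 2 * distance * math.tan(math.radians(fov_v_deg) / 2)
    return width, height
```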

Dynamic split screen
10740957 · 2020-08-11 · ·

A set of images is mapped to a three-dimensional shape. A two-dimensional image that includes at least two portions is generated based on the three-dimensional shape, and a transformation of at least a subset of the set of images is rendered within at least one of the portions. The two-dimensional image is provided to a display device to cause the display device to simultaneously display at least the two portions on a display surface.
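Generating a two-portion image for simultaneous display can be sketched as composing two renders side by side. The side-by-side layout and single-channel images are simplifying assumptions; the patent covers arbitrary portion arrangements.

```python
import numpy as np

def split_screen(left_img, right_img):
    """Compose two 2D (grayscale) renders into one side-by-side image so a
    single display surface shows both portions at once. Shorter portions are
    zero-padded to a common height."""
    h = max(left_img.shape[0], right_img.shape[0])
    def pad(img):
        out = np.zeros((h, img.shape[1]), dtype=img.dtype)
        out[:img.shape[0], :] = img
        return out
    return np.hstack([pad(left_img), pad(right_img)])
```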

IMAGE DISTANCE CALCULATOR AND COMPUTER-READABLE, NON-TRANSITORY STORAGE MEDIUM STORING IMAGE DISTANCE CALCULATION PROGRAM

In an image distance calculator (100), a CPU (104) extracts a frame image from moving images of an object captured by a camera and generates a slice image on the basis of a temporal change in a pixel line on the y-axis at a point x0 in the frame image. The CPU calculates a spotting point on the basis of correspondences between pixels in the slice image and pixels in the frame image, obtains pixels in the frame image corresponding to pixels in the slice image by a back-trace process, and segments the frame image and the slice image into regions. It then determines, for each segmented region of the slice image, a corresponding region in the frame image, calculates a ratio value from an average q of the numbers of pixels in the corresponding region in the frame image and an average p of the numbers of pixels in the segmented region of the slice image, and calculates the distance z from the camera to the object for each corresponding region using a predetermined distance function.
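The final step reduces to computing a ratio of the two pixel-count averages and passing it to the distance function. The abstract does not define either the ratio's orientation or the distance function, so both are hypothetical placeholders below.

```python
def region_distance(avg_frame_pixels_q, avg_slice_pixels_p, distance_fn):
    """Compute the ratio value from averages p and q (assumed here as p/q;
    the abstract does not fix the orientation) and apply a caller-supplied
    predetermined distance function."""
    ratio = avg_slice_pixels_p / avg_frame_pixels_q
    return distance_fn(ratio)

# Hypothetical inverse-linear distance function, for illustration only.
print(region_distance(40, 10, lambda r: 2.0 / r))  # → 8.0
```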

DEVICE FOR DISPLAYING CONTENT AND OPERATION METHOD THEREFOR

Various embodiments of the present invention relate to a device for displaying content and an operation method therefor. An electronic device of the present invention comprises a display device, at least one processor, and a memory coupled to the processor. The memory may store instructions that, when executed, cause the processor to: obtain, in response to detection of a thumbnail generation event for the content, at least one deformed image corresponding to a viewpoint change on the basis of an image contained in the content; map the obtained at least one deformed image to a three-dimensional object; obtain at least one partial image comprising at least a portion of the image mapped to the three-dimensional object; generate thumbnail content comprising the obtained at least one partial image; and control the display device to display the thumbnail content. Other embodiments may be possible.
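A simplified sketch of extracting partial images at several viewpoints and tiling them into thumbnail content. Sampling evenly spaced yaw offsets from an equirectangular panorama stands in for the abstract's viewpoint-change deformation and 3D mapping, which are not specified in detail.

```python
import numpy as np

def generate_thumbnail_strip(panorama, num_views, view_width):
    """Extract num_views partial images at evenly spaced yaw offsets from an
    equirectangular panorama (a stand-in for viewpoint-changed deformed
    images) and tile them into one thumbnail strip."""
    h, w = panorama.shape[:2]
    offsets = [int(i * w / num_views) for i in range(num_views)]
    views = [np.roll(panorama, -off, axis=1)[:, :view_width] for off in offsets]
    return np.hstack(views)
```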

System and method of providing real-time dynamic imagery of a medical procedure site using multiple modalities

A system and method of providing composite real-time dynamic imagery of a medical procedure site from multiple modalities which continuously and immediately depicts the current state and condition of the medical procedure site synchronously with respect to each modality and without undue latency is disclosed. The composite real-time dynamic imagery may be provided by spatially registering multiple real-time dynamic video streams from the multiple modalities to each other. Spatially registering the multiple real-time dynamic video streams to each other may provide a continuous and immediate depiction of the medical procedure site with an unobstructed and detailed view of a region of interest at the medical procedure site at multiple depths. A user may thereby view a single, accurate, and current composite real-time dynamic imagery of a region of interest at the medical procedure site as the user performs a medical procedure.
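Once the streams are spatially registered, compositing a frame reduces to a weighted blend of the aligned modality frames. The weighted-average blend below is an illustrative choice; the abstract describes the registration and composition only in general terms.

```python
import numpy as np

def composite_registered(frames, weights):
    """Blend spatially registered frames from multiple modalities into one
    composite frame (registration is assumed to have been done already).
    Weights are normalized so the blend is a convex combination."""
    stack = np.stack(frames, axis=0).astype(float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, stack, axes=1)
```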