Patent classifications
H04N13/279
STEREOSCOPIC-IMAGE PLAYBACK DEVICE AND METHOD FOR GENERATING STEREOSCOPIC IMAGES
A method for generating stereoscopic images is provided. The method includes: creating a three-dimensional mesh to obtain a stereoscopic scene and capturing a two-dimensional image of the stereoscopic scene; performing image preprocessing to obtain a first image in response to the two-dimensional image not being a side-by-side image; utilizing a graphics processing pipeline to perform depth estimation on the first image to obtain a depth image, to update the three-dimensional mesh according to a depth setting of the depth image, and to map the three-dimensional mesh to a corresponding coordinate system; utilizing the graphics processing pipeline to project the first image onto the mapped three-dimensional mesh to obtain an output three-dimensional mesh, and to capture an output side-by-side image from the output three-dimensional mesh; and utilizing the graphics processing pipeline to weave the left-eye and right-eye images into an output image, and to display the output image.
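The stages this abstract describes (depth estimation, per-eye rendering, side-by-side weaving) can be illustrated with a toy sketch. This is a minimal illustration under assumed simplifications, not the patented GPU pipeline: depth is faked from brightness, and the mesh projection step is replaced by a simple depth-dependent pixel shift.

```python
# Toy sketch of the abstract's stages (all function names are hypothetical):
# 1) estimate a depth image, 2) shift pixels per eye by a depth-derived
# disparity, 3) weave the two views into one side-by-side output frame.

def estimate_depth(image):
    """Placeholder depth estimation: brighter pixels are treated as nearer."""
    return [[px / 255.0 for px in row] for row in image]

def render_eye(image, depth, eye, max_disparity=2):
    """Horizontally shift each pixel by a depth-dependent disparity.
    eye = -1 for the left view, +1 for the right view."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            shift = int(round(eye * max_disparity * depth[y][x]))
            nx = min(max(x + shift, 0), w - 1)
            out[y][nx] = image[y][x]
    return out

def weave_side_by_side(left, right):
    """Concatenate the left- and right-eye views into a side-by-side frame."""
    return [l_row + r_row for l_row, r_row in zip(left, right)]

gray = [[0, 128, 255], [255, 128, 0]]  # tiny 2x3 grayscale "image"
depth = estimate_depth(gray)
sbs = weave_side_by_side(render_eye(gray, depth, -1),
                         render_eye(gray, depth, +1))
```

The side-by-side frame has the same height as the input and twice its width, which is the layout the abstract's "output side-by-side image" refers to.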
Encoding apparatus and encoding method, decoding apparatus and decoding method
There is provided an encoding apparatus, an encoding method, a decoding apparatus, and a decoding method that make it possible to acquire two-dimensional image data of a viewpoint corresponding to a predetermined display image generation method and depth image data without depending upon the viewpoint upon image pickup. A conversion unit generates, from three-dimensional data of an image pickup object, two-dimensional image data of a plurality of viewpoints corresponding to a predetermined display image generation method and depth image data indicative of a position of each of pixels in a depthwise direction of the image pickup object. An encoding unit encodes the two-dimensional image data and the depth image data generated by the conversion unit. A transmission unit transmits the two-dimensional image data and the depth image data encoded by the encoding unit. The present disclosure can be applied, for example, to an encoding apparatus and so forth.
Pet treat
A composition and process for making pet food treats is described herein. Auxiliary ingredients are combined with meat to form a meat mixture. The meat mixture is formed into portions. The portions of meat mixture are positioned on a chew stick that comprises rawhide. The pet treat gives the appearance of a grilled shish kabob, where the meat portions are meant for initial taste, while the chew stick provides the dog with a longer-lasting chewing portion.
IMMERSIVE DISPLAYS
A method of displaying images on an immersive display. The method includes receiving information from an external sensor or input device of the immersive display; based on the information received, detecting an object that conflicts with a virtual reality space; adjusting at least one dimension of the virtual reality space to provide an adjusted virtual reality to accommodate the object; and displaying the adjusted virtual reality on the display of the immersive display.
VIDEO PROCESSING DEVICE AND MANIFEST FILE FOR VIDEO STREAMING
One aspect of this disclosure relates to a video processing device comprising a processor for processing a manifest file for video streaming for a user. The manifest file comprises at least a plurality of positions defined for a scene that are associated with pre-rendered omnidirectional or volumetric video segments stored on a server system. The manifest file may also contain a plurality of resource locators for retrieving omnidirectional or volumetric video segments from the server system. Each resource locator may be associated with a position defined for the scene. The video processing device may be configured to associate a position of the user with a first position for the scene in the manifest file to retrieve a first omnidirectional or volumetric video segment associated with the first position using a first resource locator from the manifest file.
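The position-to-segment association the abstract describes can be sketched as a nearest-position lookup. The manifest structure, field names, and URLs below are assumptions for illustration only; the disclosure does not specify this schema or this distance metric.

```python
# Hypothetical sketch: a manifest maps scene positions to resource locators,
# and the client picks the segment whose defined position is nearest the user.
import math

# Illustrative manifest (field names and URLs are assumptions, not a spec):
manifest = {
    "positions": [
        {"pos": (0.0, 0.0, 0.0), "url": "https://example.com/seg_center.mp4"},
        {"pos": (2.0, 0.0, 0.0), "url": "https://example.com/seg_right.mp4"},
        {"pos": (0.0, 0.0, 2.0), "url": "https://example.com/seg_back.mp4"},
    ]
}

def segment_for_user(manifest, user_pos):
    """Return the resource locator whose defined scene position is closest
    to the user's position (simple Euclidean distance)."""
    return min(manifest["positions"],
               key=lambda entry: math.dist(entry["pos"], user_pos))["url"]

url = segment_for_user(manifest, (1.2, 0.0, 0.1))
```

A real client would re-run this lookup as the user moves and fetch the newly associated segment from the server system.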
VEHICLE MIRROR IMAGE SIMULATION
A method of providing an image includes obtaining at least one first image of a surrounding area (52) from a first camera (26, 33, 38A, 38B, 40A, and 40B). At least one second image of the surrounding area (52) is obtained from a second camera (26, 33, 38A, 38B, 40A, and 40B). The at least one first image is fused with the at least one second image to generate a three-dimensional model (51) of the surrounding area (52). A first image (54A) of the three-dimensional model is provided to a display by determining a first position of an operator. A second image (54B) of the three-dimensional model is provided to the display by determining when the operator is in a second position to simulate motion parallax.
SYSTEM AND METHOD FOR DETERMINING DIRECTIONALITY OF IMAGERY USING HEAD TRACKING
There is provided a system and method for reinstating directionality of onscreen displays of three-dimensional (3D) imagery using sensor data capturing eye location of a user. The method can include: receiving the sensor data capturing the eye location of the user; tracking the location of the eyes of the user relative to a screen using the captured sensor data; determining an updated rendering of the onscreen imagery using off-axis projective geometry based on the tracked location of the eyes of the user to simulate an angled viewpoint of the onscreen imagery from the perspective of the location of the user; and outputting the updated rendering of the onscreen imagery on a display screen.
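The off-axis projective geometry mentioned in the abstract is commonly built by skewing the view frustum toward the tracked eye position. The sketch below is an illustration under assumed geometry (screen centered in the z = 0 plane, eye at positive z looking down -z), following the well-known generalized perspective projection construction rather than this system's actual rendering code.

```python
# Minimal off-axis frustum sketch (assumed geometry; not the claimed method).

def off_axis_frustum(eye, screen_w, screen_h, near):
    """Return asymmetric frustum extents (l, r, b, t) at the near plane for
    an eye at (ex, ey, ez) relative to a centered screen of the given size."""
    ex, ey, ez = eye
    scale = near / ez  # project the screen edges onto the near plane
    l = (-screen_w / 2 - ex) * scale
    r = ( screen_w / 2 - ex) * scale
    b = (-screen_h / 2 - ey) * scale
    t = ( screen_h / 2 - ey) * scale
    return l, r, b, t

def frustum_matrix(l, r, b, t, n, f):
    """OpenGL-style projection matrix for an asymmetric frustum."""
    return [
        [2 * n / (r - l), 0.0, (r + l) / (r - l), 0.0],
        [0.0, 2 * n / (t - b), (t + b) / (t - b), 0.0],
        [0.0, 0.0, -(f + n) / (f - n), -2 * f * n / (f - n)],
        [0.0, 0.0, -1.0, 0.0],
    ]

# A centered eye yields a symmetric frustum; a tracked, off-center eye skews it,
# which is what simulates the angled viewpoint described in the abstract.
sym = off_axis_frustum((0.0, 0.0, 0.6), 0.5, 0.3, 0.1)
skew = off_axis_frustum((0.2, 0.0, 0.6), 0.5, 0.3, 0.1)
```

Re-deriving the frustum every frame from the tracked eye location keeps the rendered imagery registered to the physical screen as the user moves.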