Patent classifications
H04N13/117
Virtual and augmented reality systems and methods
A method for displaying virtual content to a user includes determining an accommodation of the user's eyes. The method also includes delivering, through a first waveguide of a stack of waveguides, light rays having a first wavefront curvature based at least in part on the determined accommodation, wherein the first wavefront curvature corresponds to a focal distance of the determined accommodation. The method further includes delivering, through a second waveguide of the stack of waveguides, light rays having a second wavefront curvature, the second wavefront curvature associated with a predetermined margin of the focal distance of the determined accommodation.
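The selection step described above can be sketched in code. The following is a minimal, hypothetical illustration (not the patented implementation): waveguide wavefront curvature is expressed in diopters (1 / focal distance in meters), and the function names, stack representation, and default margin are all assumptions.

```python
# Hypothetical sketch: choosing waveguides from a stack based on the
# user's determined accommodation. Curvatures are in diopters
# (1 / focal distance in meters); all names are illustrative.

def select_waveguides(focal_distance_m, stack_diopters, margin_diopters=0.5):
    """Return (primary, secondary) waveguide indices for a focal distance.

    primary   -- waveguide whose wavefront curvature best matches the
                 determined accommodation
    secondary -- a different waveguide within a predetermined margin of
                 the target curvature, or None if no such waveguide exists
    """
    target = 1.0 / focal_distance_m  # accommodation in diopters
    # Primary: curvature closest to the determined accommodation.
    primary = min(range(len(stack_diopters)),
                  key=lambda i: abs(stack_diopters[i] - target))
    # Secondary: nearest *other* waveguide inside the margin.
    candidates = [i for i in range(len(stack_diopters))
                  if i != primary
                  and abs(stack_diopters[i] - target) <= margin_diopters]
    secondary = min(candidates,
                    key=lambda i: abs(stack_diopters[i] - target),
                    default=None)
    return primary, secondary
```

For a stack with curvatures of 0.5, 1.0, 2.0, and 3.0 diopters and a determined focal distance of 1 m (1 diopter), the primary waveguide is the 1.0-diopter element and the 0.5-diopter element falls within a 0.5-diopter margin.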
Pet treat
A composition and process for making pet food treats is described herein. Auxiliary ingredients are combined to form a meat mixture. The meat mixture is formed into portions. The portions of meat mixture are positioned on a chew stick that comprises rawhide. The pet treat gives the appearance of a grilled shish kabob: the meat portions provide the initial taste, while the chew stick gives the dog a longer-lasting chew.
VIDEO PROCESSING DEVICE AND MANIFEST FILE FOR VIDEO STREAMING
One aspect of this disclosure relates to a video processing device comprising a processor for processing a manifest file for video streaming for a user. The manifest file comprises at least a plurality of positions defined for a scene that are associated with pre-rendered omnidirectional or volumetric video segments stored on a server system. The manifest file may also contain a plurality of resource locators for retrieving omnidirectional or volumetric video segments from the server system. Each resource locator may be associated with a position defined for the scene. The video processing device may be configured to associate a position of the user with a first position for the scene in the manifest file to retrieve a first omnidirectional or volumetric video segment associated with the first position using a first resource locator from the manifest file.
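The position-to-segment association described above can be illustrated with a small sketch. The manifest structure, field names, and URLs below are invented for illustration; the disclosure does not specify any particular manifest format.

```python
import math

# Illustrative manifest: each entry pairs a position defined for the
# scene with a resource locator for a pre-rendered video segment.
# Field names and URLs are assumptions, not a real manifest format.
MANIFEST = {
    "positions": [
        {"xyz": (0.0, 0.0, 0.0), "url": "https://server.example/seg_a.mp4"},
        {"xyz": (2.0, 0.0, 0.0), "url": "https://server.example/seg_b.mp4"},
        {"xyz": (0.0, 0.0, 3.0), "url": "https://server.example/seg_c.mp4"},
    ]
}

def locate_segment(manifest, user_xyz):
    """Associate the user's position with the nearest position defined
    in the manifest and return the resource locator of the associated
    video segment."""
    nearest = min(manifest["positions"],
                  key=lambda entry: math.dist(entry["xyz"], user_xyz))
    return nearest["url"]
```

Here the device simply picks the manifest position nearest to the user; a real implementation could use any association rule (regions, thresholds, interpolation between positions).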
VEHICLE MIRROR IMAGE SIMULATION
A method of providing an image includes obtaining at least one first image of a surrounding area (52) from a first camera (26, 33, 38A, 38B, 40A, and 40B). At least one second image of the surrounding area (52) is obtained from a second camera (26, 33, 38A, 38B, 40A, and 40B). The at least one first image is fused with the at least one second image to generate a three-dimensional model (51) of the surrounding area (52). A first image (54A) of the three-dimensional model is provided to a display by determining a first position of an operator. A second image (54B) of the three-dimensional model is provided to the display by determining when the operator is in a second position to simulate motion parallax.
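The view-selection step above can be sketched as follows. This is a hedged illustration only: `render_view` is a hypothetical placeholder for a real renderer, and the mapping from head offset to viewpoint (including the `baseline` scale) is an assumption.

```python
# Hedged sketch of motion-parallax simulation: once camera images are
# fused into a 3D model, the displayed view shifts with the operator's
# head position, as a physical mirror's reflection would.

def render_view(model, viewpoint):
    # Placeholder: a real implementation would project the 3D model
    # from the given viewpoint; here we just tag the view.
    return {"model": model, "viewpoint": viewpoint}

def display_image(model, operator_offset_m, baseline=0.1):
    """Map the operator's lateral head offset (meters) to a shifted
    virtual viewpoint, so moving between a first and second position
    produces parallax between the displayed images."""
    # The virtual camera moves opposite the head, mirror-fashion.
    viewpoint = (-operator_offset_m * baseline, 0.0, 0.0)
    return render_view(model, viewpoint)
```

Rendering one view per operator position, rather than a fixed view, is what distinguishes this simulated mirror from a plain camera feed.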
DISPLAY DEVICE AND DISPLAY METHOD
In accordance with an embodiment, a display device includes a display unit that performs control for displaying a display image related to a predetermined object as a two-dimensional image in a virtual space, and a parameter acquisition unit that acquires a first parameter related to a viewpoint in the virtual space and a second parameter defining a change in the predetermined object. The display unit changes an inclination of the two-dimensional image in the virtual space based on the first parameter and performs the control for displaying the display image related to the predetermined object based on the first parameter and the second parameter.
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
A setting reception unit obtains information identifying an object selected by a user from foreground objects as a target to become part of the background. A backgrounded-target determination unit identifies the model ID of the selected object based on the obtained object-identifying information and three-dimensional shape data. Based on the three-dimensional shape data, the determination unit identifies a foreground ID corresponding to the identified model ID in a captured image from an actual camera. The determination unit obtains coordinate information and mask information in foreground data corresponding to the identified foreground ID, generates a correction foreground mask, and sends the mask to a background correction unit in an image processing unit. The background correction unit generates a correction image by masking the captured image using the mask, superimposes the correction image onto the background image, and outputs the result as a corrected background image.
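The final masking and superimposition step can be sketched with NumPy arrays standing in for camera frames. This is a minimal illustration under assumed array shapes and names, not the apparatus's actual pipeline.

```python
import numpy as np

# Minimal sketch of the background-correction step: pixels of the
# selected foreground object (per the correction foreground mask) are
# copied from the captured image onto the background image.

def correct_background(captured, background, mask):
    """Return a corrected background image.

    captured, background -- (H, W, 3) uint8 images
    mask -- (H, W) boolean array, True where the selected foreground
            object should become part of the background
    """
    corrected = background.copy()          # leave the input untouched
    corrected[mask] = captured[mask]       # superimpose masked pixels
    return corrected
```

Boolean-mask indexing applies the same (H, W) mask across all three color channels, which matches the described behavior of masking the captured image and compositing it onto the background.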