Patent classifications
H04N19/27
Embedding Animation in Electronic Mail, Text Messages and Websites
Provided are techniques for providing animation in electronic communications. An image is generated by capturing multiple photographs with a camera or video camera; the first photograph is called the key photo. Using a graphics program, the photos subsequent to the key photo are edited to cut out an element common to them. The cut images are pasted into the key photo as layers. The modified key photo, including the layers, is stored as a web-enabled graphics file, which is then transmitted with the electronic communication. When the communication is received, the key photo is displayed, and each layer is displayed and removed in the order in which it was taken, with a short delay between photos. In this manner, a movie is generated with much smaller files than is currently possible.
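The playback loop this abstract describes can be sketched as follows. This is an illustrative reading, not the patented implementation: the function name, the `show` display callback, and the use of strings as stand-ins for image objects are all assumptions.

```python
import time

def play_layered_animation(key_photo, layers, delay=0.1, show=print):
    """Display the key photo, then show and remove each layer in the
    order it was captured, with a short delay between photos."""
    sequence = [key_photo]
    show(key_photo)
    for layer in layers:                  # layers stored in capture order
        composite = f"{key_photo}+{layer}"
        show(composite)                   # key photo with this layer on top
        sequence.append(composite)
        time.sleep(delay)                 # short delay between photos
    show(key_photo)                       # final frame: all layers removed
    sequence.append(key_photo)
    return sequence
```

Because only the key photo plus small cut-out layers are stored, the transmitted file can be far smaller than a full video clip of the same scene.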
TECHNIQUE FOR RECORDING AUGMENTED REALITY DATA
Disclosed is an improved approach for generating recordings from augmented reality systems from the perspective of a camera within the system. Instead of re-using virtual content rendered from the perspective of the user's eyes for AR recordings, additional virtual content is rendered from a separate perspective specifically for the AR recording. That additional virtual content is combined with image frames generated by a camera to form the AR recording.
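The per-frame combination step can be illustrated as a simple alpha blend of the separately rendered virtual content over the camera frame. This is a minimal sketch under assumed conventions (flat lists of grayscale pixels, a per-pixel opacity mask in [0, 1]); the actual system composites rendered imagery, not pixel lists.

```python
def composite_ar_frame(camera_frame, virtual_frame, alpha_mask):
    """Blend virtual content, rendered from the recording camera's own
    perspective, over the corresponding camera image frame."""
    return [
        round(a * v + (1 - a) * c)  # standard alpha blend per pixel
        for c, v, a in zip(camera_frame, virtual_frame, alpha_mask)
    ]
```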
Image compression optimization
Particular embodiments may access one or more images configured to be used for generating an artificial reality (AR) effect. For each image, one or more compressed images may be generated using different compression settings, respectively. For each compressed image, a quality score may be computed based on that compressed image and the associated image from which the compressed image is generated. For each image, a desired quality threshold may be determined, and an optimal compression setting for that image may be determined based on the desired quality threshold and quality scores associated with the one or more compressed images generated from that image, wherein the optimal compression setting corresponds to one of the plurality of different compression settings. Each of the one or more images may be compressed using the associated optimal compression setting to generate and output one or more optimally-compressed images.
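The selection logic above can be sketched in a few lines. The sketch substitutes a toy quantizer for the real compression settings and PSNR for the quality score, both labeled assumptions; the abstract does not name a specific codec or metric.

```python
import math

def psnr(original, compressed, peak=255.0):
    """Quality score: peak signal-to-noise ratio between two pixel lists."""
    mse = sum((o - c) ** 2 for o, c in zip(original, compressed)) / len(original)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

def quantize(pixels, step):
    """Toy 'compression' stand-in: a coarser step means stronger compression."""
    return [round(p / step) * step for p in pixels]

def optimal_setting(image, settings, quality_threshold):
    """Pick the strongest compression setting whose quality score still
    meets the desired quality threshold for this image."""
    best = None
    for step in sorted(settings):          # weakest to strongest compression
        if psnr(image, quantize(image, step)) >= quality_threshold:
            best = step                    # stronger setting still acceptable
    return best
```

Running each AR-effect image through `optimal_setting` yields a per-image setting, so images that tolerate aggressive compression are shrunk harder than those that do not.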
Encoding device and encoding method
An encoding device includes a memory and a processor coupled to the memory. The processor is configured to extract a first line-drawing region from a first image in a plurality of images; generate a third image by replacing the first line-drawing region of the first image with the corresponding region of a second image that precedes the first image in the plurality of images; generate video encoding information by performing video encoding processing based on the third image; generate line-drawing encoding information by performing line-drawing encoding processing based on the first line-drawing region; and transmit the video encoding information and the line-drawing encoding information to a decoding device.
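The split between the two encoders can be sketched as follows. Images are modeled as flat pixel lists with a binary mask marking the line-drawing region; these representations are assumptions for illustration, not the patented codec.

```python
def split_line_drawing(first_image, second_image, mask):
    """Build the 'third image': wherever `mask` marks a line-drawing
    pixel, substitute the co-located pixel from the preceding (second)
    image; elsewhere keep the first image. Also extract the line-drawing
    region itself for the separate line-drawing encoder."""
    third_image = [
        s if m else f for f, s, m in zip(first_image, second_image, mask)
    ]
    line_region = [f if m else None for f, m in zip(first_image, mask)]
    return third_image, line_region
```

The third image, now free of the line drawing, compresses well with an ordinary video encoder, while the extracted region goes to an encoder specialized for line art.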
SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR GENERATING REMOTE VIEWS IN A VIRTUAL MOBILE DEVICE PLATFORM USING EFFICIENT COLOR SPACE CONVERSION AND FRAME ENCODING
Embodiments disclosed herein provide systems, methods and computer readable media for generating remote views in a virtual mobile device platform. A virtual mobile device platform may be coupled to a physical mobile device over a network and generate frames of data for generating views on the physical device. These frames can be generated using an efficient display encoding pipeline on the virtual mobile device platform. Such efficiencies may include, for example, the synchronization of various processes or operations, the governing of various processing rates, the elimination of duplicative or redundant processing, the application of different encoding schemes, the efficient detection of duplicative or redundant data or the combination of certain operations.
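One of the efficiencies named above, eliminating duplicative or redundant data, can be sketched by hashing each frame and sending only a marker when a frame repeats. The function name, tuple format, and hash choice are illustrative assumptions, not the platform's actual API.

```python
import hashlib

def encode_frames(frames):
    """Skip redundant frames: hash each frame's bytes and emit a tiny
    'repeat' marker when the hash matches the previously sent frame."""
    output, last_digest = [], None
    for frame in frames:
        digest = hashlib.sha256(frame).hexdigest()
        if digest == last_digest:
            output.append(("repeat",))       # redundant frame: tiny marker
        else:
            output.append(("frame", frame))  # new content: full encode
            last_digest = digest
    return output
```

For a mostly static mobile UI, long runs of identical frames collapse to single-byte markers, which is exactly the kind of saving a remote-view pipeline needs.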
Mixed reality coding with overlays
A system includes a camera to capture real world content and a semiconductor package apparatus. The semiconductor package apparatus includes a substrate and logic. The logic includes a graphics pipeline to generate rendered content, a base layer encoder to encode the real world content into a base layer, a first layer encoder to encode the rendered content into a first non-base layer, a multiplexer to interleave the base layer with the first non-base layer to obtain a single output signal carrying mixed reality content, and a transmitter to transmit the single output signal. The system further includes a second layer encoder to encode map data into a second non-base layer; the multiplexer also interleaves the second non-base layer with the first non-base layer and the base layer. The first and second layer encoders encode the rendered content and the map data into overlay auxiliary pictures.
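The multiplexer's interleaving can be sketched as a round-robin merge of per-layer units into one stream, each unit tagged with a layer id as in scalable coding. Layer-id values and the tuple format are illustrative assumptions.

```python
def multiplex(base_units, overlay_units, map_units=()):
    """Interleave base-layer units (real world content) with first
    non-base units (rendered content) and optional second non-base
    units (map data) into a single tagged output stream."""
    stream = []
    for i, base in enumerate(base_units):
        stream.append((0, base))                  # layer 0: base (camera)
        if i < len(overlay_units):
            stream.append((1, overlay_units[i]))  # layer 1: rendered overlay
        if i < len(map_units):
            stream.append((2, map_units[i]))      # layer 2: map data
    return stream
```

A receiver can then demultiplex by layer id, decoding only the base layer if it cannot handle the mixed reality overlays.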
VIDEO COMPRESSION FOR VIDEO GAMES
A video compression system and method may be used to compress video data using both resolution compression and texture compression. The compression may involve converting the video format from a first format to a second format and then performing resolution compression across blocks of pixels within each frame of the video. The resolution compressed data may then be arranged as data triplets spanning three consecutive frames of the video. The data triplets may be texture compressed using ETC or other texture compression techniques. The compressed video may be part of other applications, such as a video to be played within a video game. A client device may be able to decompress the compressed video by reversing the texture compression, reversing the resolution compression, and performing a format conversion to generate uncompressed video data that can be used to play the video.
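The triplet arrangement step can be sketched as grouping co-located resolution-compressed values across three consecutive frames, so each triplet can be handed to a texture compressor such as ETC as one unit. Treating frames as flat value lists and dropping trailing frames that do not fill a triplet are simplifications of this sketch.

```python
def make_triplets(frames):
    """Arrange per-frame resolution-compressed data as data triplets
    spanning three consecutive frames of the video."""
    triplets = []
    for i in range(0, len(frames) - len(frames) % 3, 3):
        f0, f1, f2 = frames[i], frames[i + 1], frames[i + 2]
        # co-located values across the three frames form one triplet
        triplets.append(list(zip(f0, f1, f2)))
    return triplets
```

Packing three frames' worth of data into the three channels a texture block expects is what lets an ordinary GPU texture decoder unpack the video on the client.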
Fast and accurate block matching for computer generated content
A set of software applications configured to perform interframe and/or intraframe encoding operations based on data communicated between a graphics application and a graphics processor. The graphics application transmits a 3D model to the graphics processor to be rendered into a 2D frame of video data. The graphics application also transmits graphics commands to the graphics processor indicating specific transformations to be applied to the 3D model as well as textures that should be mapped onto portions of the 3D model. Based on these transformations, an interframe module can determine blocks of pixels that repeat across sequential frames. Based on the mapped textures, an intraframe module can determine blocks of pixels that repeat within an individual frame. A codec encodes the frames of video data into compressed form based on blocks of pixels that repeat across frames or within frames.
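The intraframe case can be illustrated by finding blocks of pixels that repeat within one frame. Note the assumption: the patent derives repeats from graphics commands and mapped textures, whereas this sketch detects them by comparing block contents directly.

```python
def repeated_blocks(frame, block_size):
    """Find pixel blocks that occur more than once within a single frame.
    `frame` is a 2D list of pixels; returns a map from block content to
    the top-left coordinates of each occurrence."""
    seen = {}
    rows, cols = len(frame), len(frame[0])
    for y in range(0, rows - block_size + 1, block_size):
        for x in range(0, cols - block_size + 1, block_size):
            block = tuple(
                tuple(frame[y + dy][x + dx] for dx in range(block_size))
                for dy in range(block_size)
            )
            seen.setdefault(block, []).append((y, x))
    return {b: pos for b, pos in seen.items() if len(pos) > 1}
```

A codec that knows block (2, 0) equals block (0, 0) can encode the repeat as a reference instead of re-encoding the pixels, which is the saving the interframe and intraframe modules provide.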