Patent classifications
H04N21/42202
METHOD, DEVICE, STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT FOR DETECTING IMAGE FRAME LOSS
An image frame loss detection method is performed by a computer device and includes: acquiring first coded data respectively corresponding to a plurality of first image frames; obtaining second coded data corresponding to at least one second image frame, the at least one second image frame being generated by a terminal device through image rendering of a color signal based on the first coded data respectively corresponding to the plurality of first image frames; and comparing the first coded data respectively corresponding to the plurality of first image frames with the second coded data corresponding to the at least one second image frame to determine whether a frame loss occurs. The first coded data and the second coded data each include color-coded data respectively corresponding to M image blocks of a corresponding image frame, and each of the M image blocks has a color in that image frame.
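The comparison step can be sketched as follows. This is a hypothetical illustration, not the patented implementation: it assumes each frame's M color-coded blocks can be collected into a per-frame code, and that a sent frame whose code never appears among the rendered frames was lost. All names are illustrative.

```python
# Hypothetical sketch of the frame-loss check: each frame carries M
# color-coded image blocks; a sent frame whose block codes are never
# recovered from the rendered output is presumed lost.

def detect_lost_frames(sent_codes, rendered_codes):
    """Return the per-block codes of sent frames that do not appear
    among the rendered frames (i.e. frames presumed lost)."""
    rendered = {tuple(code) for code in rendered_codes}
    return [code for code in sent_codes if tuple(code) not in rendered]

# Usage: three frames sent (M = 3 blocks each), the middle one dropped.
sent = [[1, 1, 1], [2, 2, 2], [3, 3, 3]]
rendered = [[1, 1, 1], [3, 3, 3]]
print(detect_lost_frames(sent, rendered))   # → [[2, 2, 2]]
```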
METHOD AND DEVICE FOR LATENCY REDUCTION OF AN IMAGE PROCESSING PIPELINE
In some implementations, a method includes: determining a complexity value for first image data associated with a physical environment that corresponds to a first time period; determining an estimated composite setup time based on the complexity value for the first image data and on virtual content for compositing with the first image data; and, in accordance with a determination that the estimated composite setup time exceeds a threshold time: forgoing rendering the virtual content from the perspective that corresponds to the camera pose of the device relative to the physical environment during the first time period, and compositing a previous render of the virtual content for a previous time period with the first image data to generate the graphical environment for the first time period.
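The threshold decision above can be sketched in a few lines. The cost model, threshold value, and function names here are assumptions for illustration; the patent specifies only that an estimated composite setup time is compared against a threshold.

```python
# Minimal sketch of the latency-reduction decision: if a fresh composite
# would miss the frame deadline, reuse the previous render of the virtual
# content instead of rendering for the current camera pose.

THRESHOLD_MS = 8.0   # illustrative frame budget

def estimate_setup_ms(complexity, virtual_content_cost):
    # Stand-in estimator; a real pipeline would profile these costs.
    return complexity * 0.5 + virtual_content_cost

def composite(image_data, complexity, virtual_content_cost,
              previous_render, render_fn):
    """Composite image data with either a fresh or a reused render."""
    if estimate_setup_ms(complexity, virtual_content_cost) > THRESHOLD_MS:
        virtual = previous_render   # forgo rendering for this period
    else:
        virtual = render_fn()       # render from the current pose
    return (image_data, virtual)
```

For example, `composite("img", 20.0, 5.0, "old", lambda: "new")` falls over budget and reuses the previous render.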
Set-top box with enhanced controls
A set-top box with enhanced content, and a system and method for use of the same, are disclosed. In one embodiment, a wireless transceiver is located within a housing that also includes an interconnected television input, television output, processor, and memory. The set-top box may establish a pairing with a proximate wireless-enabled interactive programmable device having a display. Content, such as music, may be imported from the proximate wireless-enabled interactive programmable device and provided to the television. While the music is playing, the set-top box may generate and provide to the television a control signal that includes instructions to adjust the brightness of the television by dimming it.
Tracking User Interaction from a Receiving Device
Measuring and tracking user interaction with a television receiver, such as a set-top box or cable box. The television receiver may create and display a matrix code that includes temporal information, user identification information, geographic information, and/or a user selection. The matrix code may be captured by a matrix reading device and transmitted to a monitoring entity. Optionally, the matrix reading device may decode the matrix code and transmit associated data to the monitoring entity. The monitoring entity may use the code or data to track and distinguish between user interactions at different points in time.
CLOUD BASED VISION
A method for receiving a real-time video feed of a region of interest includes generating, at a processor of a first device, a request for a real-time video stream of the region of interest. The request indicates a location of the region of interest. The method also includes transmitting the request to one or more other devices via a network to query whether another device is situated to capture a portion of the region of interest. The method also includes receiving the real-time video stream of the region of interest from a second device of the one or more other devices. The second device includes a camera having a field of view that includes at least a portion of the region of interest.
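The request/response exchange described above can be sketched as follows. The field names, JSON transport, and the crude coverage test are assumptions for illustration; the patent says only that the request indicates the region's location and is queried across devices over a network.

```python
# Hypothetical sketch: a first device builds a stream request naming a
# region of interest; a queried device answers whether its camera's field
# of view covers part of that region.

import json

def build_stream_request(lat, lon, radius_m):
    return json.dumps({"type": "realtime_stream_request",
                       "region": {"lat": lat, "lon": lon,
                                  "radius_m": radius_m}})

def can_serve(request_json, camera_lat, camera_lon, fov_radius_m=100.0):
    """Return True if the camera plausibly covers part of the region
    (crude planar distance in metres, valid only for small separations)."""
    region = json.loads(request_json)["region"]
    dist = ((region["lat"] - camera_lat) ** 2 +
            (region["lon"] - camera_lon) ** 2) ** 0.5 * 111_000
    return dist <= region["radius_m"] + fov_radius_m
```

A camera at the requested coordinates answers `True`; one a full degree of latitude away answers `False`.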
DISPLAY APPARATUS, DISPLAY METHOD, AND COMPUTER PROGRAM
The goal is to display videos in parallel without losing information or wasting the usable display region.
The aspect ratio of the large screen of an information processing apparatus 100 is 16:9, which is compatible with a Hi-Vision video. When the large screen is used in a portrait layout and divided vertically into three small screens, each small screen has an aspect ratio of 9:(16/3), i.e., 16:9.48. Relative to the original 16:9 video content, the size ratio in inches is 9/16 = 56.25% (the area ratio is (9/16)² ≈ 31.64%). Accordingly, the usable display region can be used efficiently.
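The arithmetic above checks out numerically; a short verification, with the panel dimensions taken directly from the abstract:

```python
# A 16:9 panel used in portrait is 9 wide by 16 tall; splitting it into
# three vertically stacked regions gives each region an aspect ratio of
# 9:(16/3), which equals 16:9.48 (both ratios reduce to 27:16).

w, h = 9, 16                     # portrait orientation of a 16:9 panel
small_h = h / 3                  # three stacked small screens
ratio = w / small_h              # 27/16 = 1.6875
linear_scale = w / 16            # small-screen width vs. 16:9 source: 56.25%
area_scale = linear_scale ** 2   # 31.64% of the source area
```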
Image-based techniques for stabilizing positioning estimates
A device implementing a system for estimating device location includes at least one processor configured to receive a first estimated position of the device at a first time. The at least one processor is further configured to capture, using an image sensor of the device, images during a time period defined by the first time and a second time, and determine, based on the images, a second estimated position of the device, the second estimated position being relative to the first estimated position. The at least one processor is further configured to receive a third estimated position of the device at the second time, and estimate a location of the device based on the second estimated position and the third estimated position.
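The estimation step can be sketched as a simple fusion: the image-derived relative displacement dead-reckons forward from the first fix, and the result is blended with the new absolute fix. The equal 0.5 weight and all names are assumptions; the patent does not specify the blending rule.

```python
# Hypothetical sketch: a relative (image-based) displacement bridges two
# absolute position fixes, and the final location estimate blends the
# dead-reckoned position with the second fix.

def fuse(first_fix, relative_displacement, second_fix, weight=0.5):
    dead_reckoned = (first_fix[0] + relative_displacement[0],
                     first_fix[1] + relative_displacement[1])
    return (weight * dead_reckoned[0] + (1 - weight) * second_fix[0],
            weight * dead_reckoned[1] + (1 - weight) * second_fix[1])

# Usage: start at the origin, images indicate a (3, 4) move, the second
# fix reads (3.2, 3.8); the blended estimate lands between the two.
print(fuse((0.0, 0.0), (3.0, 4.0), (3.2, 3.8)))
```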
CREATIVE INTENT SCALABILITY VIA PHYSIOLOGICAL MONITORING
Creative intent input describing emotion expectations and narrative information relating to media content is received. Expected physiologically observable states relating to the media content are generated based on the creative intent input. An audiovisual content signal with the media content and media metadata comprising the expected physiologically observable states is provided to a playback device. The audiovisual content signal causes the playback device to use physiological monitoring signals to determine, with respect to a viewer, assessed physiologically observable states relating to the media content and to generate, based on the expected physiologically observable states and the assessed physiologically observable states, modified media content to be rendered to the viewer.
Method and apparatus for transmitting video content using edge computing service
An example method, performed by an edge data network, of transmitting video content includes: obtaining first bearing information from an electronic device connected to the edge data network; determining second predicted bearing information based on the first bearing information; determining a second predicted partial image corresponding to the second predicted bearing information; transmitting, to the electronic device, a second predicted frame generated by encoding the second predicted partial image; obtaining, from the electronic device, second bearing information corresponding to a second partial image; comparing the second predicted bearing information to the obtained second bearing information; generating, based on a result of the comparing, a compensation frame using at least two of a first partial image corresponding to the first bearing information, the second predicted partial image, or the second partial image corresponding to the second bearing information; and transmitting the generated compensation frame to the electronic device based on the result of the comparing.
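The comparison step that gates the compensation frame can be sketched as follows. The angular tolerance, names, and placeholder blend are assumptions for illustration; the patent states only that predicted and obtained bearing information are compared and that a compensation frame is generated from at least two of the partial images.

```python
# Illustrative sketch: if the predicted bearing differs from the actual
# bearing by more than a tolerance, build a compensation frame from the
# available partial images.

TOLERANCE_DEG = 5.0   # illustrative threshold

def needs_compensation(predicted_bearing, actual_bearing):
    diff = abs(predicted_bearing - actual_bearing) % 360.0
    return min(diff, 360.0 - diff) > TOLERANCE_DEG

def make_compensation_frame(first_partial, predicted_partial, second_partial):
    # Placeholder: a real implementation would stitch/warp image regions.
    return [first_partial, predicted_partial, second_partial]

print(needs_compensation(10.0, 12.0))    # → False (within tolerance)
print(needs_compensation(10.0, 350.0))   # → True  (20° apart, wrapped)
```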
Predictive Power Management Using Machine Learning
Systems and methods are described for power control, using a trained model, in a media receiver device having a plurality of electronic components. Input signals are received from at least one electronic component. An alertness state of the device is determined using a machine-learning-based determiner trained to process the received input signals and predict a subsequent alertness state that identifies at least one additional component or device. Power consumption by the identified components and devices is then controlled based on the predicted alertness state.
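The control loop can be sketched with a stub in place of the trained determiner. The lookup table, state names, and component names below are all assumptions; the patent uses a trained machine-learning model, not a fixed mapping.

```python
# Hedged sketch: input signals map (here via a stub lookup standing in for
# a trained model) to a predicted alertness state that names the
# components to power up; everything else stays powered down.

MODEL = {   # stub for the trained determiner
    ("remote_keypress",): ("active", ["display", "decoder"]),
    ("idle_timeout",):    ("standby", ["network_interface"]),
}

def control_power(input_signals):
    state, components = MODEL.get(tuple(input_signals), ("sleep", []))
    return {"state": state, "powered_components": components}

print(control_power(["remote_keypress"]))
# → {'state': 'active', 'powered_components': ['display', 'decoder']}
```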