Patent classifications
H04N21/42201
CONTROLLING PROGRESS OF AUDIO-VIDEO CONTENT BASED ON SENSOR DATA OF MULTIPLE USERS, COMPOSITE NEURO-PHYSIOLOGICAL STATE AND/OR CONTENT ENGAGEMENT POWER
Provided is a system for controlling progress of audio-video content based on sensor data of multiple users, a composite neuro-physiological state (CNS), and/or content engagement power (CEP). Sensor data is received from sensors positioned on an electronic device of a first user to sense neuro-physiological responses of the first user and of second users who are in the field of view (FOV) of the sensors. Based on the sensor data and at least one of a CNS value for a social interaction application and a CEP value for immersive content, recommendations of action items for the first user are predicted. Content of a feedback loop, created based on the sensor data, the CNS value, the CEP value, and the predicted recommendations, is rendered on an output unit of the electronic device during play of the at least one of the social interaction application and the immersive content experience. Progress of the social interaction and the immersive content experience is controlled by the first user based on the predicted recommendations.
METHOD AND DEVICE FOR LATENCY REDUCTION OF AN IMAGE PROCESSING PIPELINE
In some implementations, a method includes: determining a complexity value for first image data associated with a physical environment that corresponds to a first time period; determining an estimated composite setup time based on the complexity value for the first image data and virtual content for compositing with the first image data; and, in accordance with a determination that the estimated composite setup time exceeds a threshold time: forgoing rendering the virtual content from the perspective that corresponds to the camera pose of the device relative to the physical environment during the first time period; and compositing a previous render of the virtual content for a previous time period with the first image data to generate the graphical environment for the first time period.
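The fallback logic can be sketched as a simple budget check. This is an illustrative sketch only; the threshold, the cost model, and all names (`estimate_setup_time_ms`, `composite_frame`) are assumptions, not from the patent.

```python
THRESHOLD_MS = 11.0  # assumed frame budget, e.g. roughly one frame at 90 Hz

def estimate_setup_time_ms(complexity, virtual_content_cost):
    """Toy cost model: setup time grows with scene complexity."""
    return 2.0 + 0.5 * complexity + virtual_content_cost

def composite_frame(complexity, virtual_content_cost, previous_render):
    est = estimate_setup_time_ms(complexity, virtual_content_cost)
    if est > THRESHOLD_MS:
        # Estimated setup exceeds the budget: forgo a fresh render and
        # composite the previous render with the new image data instead.
        return ("reused", previous_render)
    return ("fresh", "render@t")  # placeholder for a newly rendered frame

print(composite_frame(4.0, 2.0, "render@t-1"))   # under budget: fresh render
print(composite_frame(40.0, 2.0, "render@t-1"))  # over budget: reuse previous
```

The design point is that reusing a slightly stale render trades visual accuracy for bounded latency on complex frames.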
Thumbnail Image Replacement
Methods for recognizing thumbnails may include operations of receiving an identification of a thumbnail source for content, receiving the thumbnail, computing a hash value for the thumbnail, and associating the hash value with the thumbnail. Operations for content characterization may include launching an image analysis application, selecting a top-level category to apply to a thumbnail, providing the thumbnail to the image analysis application, applying the selected top-level category to the thumbnail to determine if the thumbnail satisfies the top-level category, if satisfied, associating the top-level category with the thumbnail, and repeating one or more of the above operations with respect to a second category. Operations may include receiving an identification of a node to receive a thumbnail, obtaining a node-selected category, receiving a proposed thumbnail to provide to the node, and determining if the proposed thumbnail has been previously recognized and categorized.
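The hash-and-categorize flow can be sketched as follows. This is a hypothetical illustration: `hashlib.sha256` stands in for whatever hash the system actually uses, and the category test is a toy stand-in for the image analysis application.

```python
import hashlib

recognized = {}  # hash value -> set of categories the thumbnail satisfied

def thumbnail_hash(data: bytes) -> str:
    """Compute a hash value for the thumbnail bytes."""
    return hashlib.sha256(data).hexdigest()

def categorize(data: bytes, categories, satisfies):
    """Apply each top-level category test in turn and record matches."""
    h = thumbnail_hash(data)
    matched = {c for c in categories if satisfies(data, c)}
    recognized[h] = matched
    return h, matched

def previously_recognized(data: bytes):
    """Return prior categories if this thumbnail was seen before, else None."""
    return recognized.get(thumbnail_hash(data))

# Toy category test: a category "matches" if its name appears in the bytes.
satisfies = lambda data, cat: cat.encode() in data
h, cats = categorize(b"beach sunset", ["beach", "city"], satisfies)
print(cats)                                            # {'beach'}
print(previously_recognized(b"beach sunset") == cats)  # True
```

Keying the category results by hash is what lets a later "proposed thumbnail" lookup short-circuit re-analysis of an already-recognized image.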
METHODS AND APPARATUS TO DETERMINE USER PRESENCE
Methods and apparatus to determine user presence are disclosed. A disclosed example monitoring device to determine a presence of a user in a metering environment includes a mount to couple the monitoring device to a wearable device to be worn by the user, the wearable device to receive content from a content device, a sensor to detect motion of the user, and a transmitter to transmit motion data pertaining to the detected motion of the user for the determination of the presence of the user.
Radio frequency sensing in a television environment
Techniques are provided for performing radio frequency (RF) sensing to determine the viewing status of a television user. This can be used to determine user behavior during the playback of content (e.g., whether a user is watching the content), which can be used as a data point for determining the user's level of interest in the content. Using the status of the television user, embodiments can provide additional or alternative functionality, such as powering down and/or powering up the television. Furthermore, RF sensing may be performed by existing television hardware, such as a Wi-Fi transceiver, and may therefore provide RF sensing functionality to a television with little or no added cost.
Systems and methods for video delivery based upon saccadic eye motion
A method is provided for displaying immersive video content according to eye movement of a viewer. The method includes the steps of detecting, using an eye tracking device, a field of view of at least one eye of the viewer; transmitting eye tracking coordinates from the detected field of view to an eye tracking processor; identifying a region on a video display corresponding to the transmitted eye tracking coordinates; adapting the immersive video content from a video storage device at a first resolution for a first portion of the immersive video content and a second resolution for a second portion of the immersive video content, the first resolution being higher than the second resolution; displaying the first portion of the immersive video content on the video display within a zone; and displaying the second portion of the immersive video content on the video display outside of the zone.
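The two-resolution selection can be sketched as a per-pixel zone test. This is a minimal illustration under assumed names and a circular zone; the patent does not specify the zone shape or these parameters.

```python
def resolution_for(pixel_x, pixel_y, gaze_x, gaze_y,
                   zone_radius, high_res, low_res):
    """Pick the high resolution inside the gaze zone, the low one outside."""
    dx, dy = pixel_x - gaze_x, pixel_y - gaze_y
    inside = dx * dx + dy * dy <= zone_radius * zone_radius
    return high_res if inside else low_res

# Gaze at (1000, 500) with a 200-pixel zone radius:
print(resolution_for(960, 540, 1000, 500, 200, "4K", "1080p"))  # inside zone
print(resolution_for(100, 100, 1000, 500, 200, "4K", "1080p"))  # outside zone
```

The saccadic-motion angle is that only the zone around the current fixation needs full resolution, so bandwidth can be spent where the fovea is actually pointed.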
UNOBTRUSIVELY ENHANCING VIDEO CONTENT WITH EXTRINSIC DATA
The playback of video content upon a display is enhanced through the unobtrusive presentation of extrinsic data upon the same display. A video content feature is rendered on a display. A quantity of extrinsic data relevant to a current time in the video content feature is determined based at least in part on viewing history of a user. A graphical element presenting at least a portion of the extrinsic data is rendered on the display while the video content feature is also being rendered on the display.
DISPLAY APPARATUS, DISPLAY METHOD, AND COMPUTER PROGRAM
Videos are to be displayed in parallel without losing information or reducing the efficiency of the usable display region.
The aspect ratio of the large screen of an information processing apparatus 100 is 16:9, which is compatible with a Hi-Vision video. In a case where the large screen is used in a portrait layout, if the large screen is divided into three small screens in the vertical direction, the aspect ratio of each small screen after the division is 9:16/3, or approximately 16:9.48. With respect to the original video content at 16:9, the linear scaling ratio is 9/16 = 56.25% (the area ratio is (9/16)² = 31.64%). Accordingly, the usable display region can be efficiently used.
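The arithmetic above can be checked directly: a 16:9 panel used in portrait is 9 units wide and 16 tall, and splitting it into three stacked regions gives each region a near-16:9 shape.

```python
w, h = 9, 16             # portrait orientation of a 16:9 panel
region_h = h / 3         # height of each of the three stacked regions
ratio = w / region_h     # 27/16 = 1.6875, i.e. roughly 16:9.48
print(round(16 / ratio, 2))    # 9.48 -> each region is about 16:9.48

scale = 9 / 16           # 16:9 content scaled down to fit the 9-unit width
print(scale)             # 0.5625 (56.25% linear scaling)
print(round(scale ** 2, 4))    # 0.3164 (31.64% of the original area)
```

Because 16:9.48 is close to 16:9, each region shows a full 16:9 video with only a thin unused strip, which is the efficiency claim being made.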
CREATIVE INTENT SCALABILITY VIA PHYSIOLOGICAL MONITORING
Creative intent input describing emotion expectations and narrative information relating to media content is received. Expected physiologically observable states relating to the media content are generated based on the creative intent input. An audiovisual content signal with the media content and media metadata comprising the physiologically observable states is provided to a playback apparatus. The audiovisual content signal causes the playback device to use physiological monitoring signals to determine, with respect to a viewer, assessed physiologically observable states relating to the media content and generate, based on the expected physiologically observable states and the assessed physiologically observable states, modified media content to be rendered to the viewer.
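The playback-side comparison can be sketched as a lookup keyed on the divergence between the authored expectation and the assessed state. This is a hypothetical sketch: the state labels and modification table are illustrative assumptions, not from the patent.

```python
def select_rendering(expected, assessed, modifications):
    """Return a content modification when the assessed state diverges
    from the creator's expected state, else None (render as authored)."""
    if expected == assessed:
        return None  # creative intent is being met
    return modifications.get((expected, assessed))

# Toy modification table: (expected, assessed) -> adjustment to apply.
mods = {("aroused", "calm"): "boost score dynamics",
        ("calm", "aroused"): "soften cut rate"}

print(select_rendering("aroused", "calm", mods))  # viewer under-engaged
print(select_rendering("calm", "calm", mods))     # None: no change needed
```

The point of carrying expected states in the media metadata is that the playback device can make this decision locally, per viewer, without contacting the content creator.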
Cloud-based Production of High-Quality Virtual And Augmented Reality Video Of User Activities
A computer-implemented method includes obtaining first data, including telemetry data and beatmap synchronization data, from a user device such as a virtual reality (VR) headset. The telemetry data relates to actions and/or movements of a person wearing the user device in a real-world environment. The telemetry data and beatmap synchronization data are used to produce one or more video segments of a virtual person in a virtual world environment. The video production may take place in the cloud, away from the user device. The video production may include post-production and visual effects. The video production may be higher quality and/or resolution than images displayed in real-time on the user device.