H04N21/44218

Media content based on playback zone awareness
11556998 · 2023-01-17 ·

Systems and methods are provided for delivering media content based on playback zone awareness. In one aspect, a computing system receives, via a network interface, zone data from a media playback system, wherein the zone data includes an indication of a particular zone of the media playback system, and wherein the particular zone comprises at least one playback device. The computing system identifies audio content based on (i) the indication of the particular zone and (ii) contextual data associated with the particular zone, and provides, via the network interface, an indication of the identified audio content to the media playback system.
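The selection step described in this abstract can be sketched as a simple mapping from a zone identifier plus contextual data to a content identifier. The zone names, context keys, and content ids below are invented for illustration; the patent does not specify them.

```python
# Hypothetical sketch of zone-aware content selection.
def identify_audio_content(zone, context):
    """Map a playback zone plus contextual data to a content id."""
    if zone == "kitchen" and context.get("time_of_day") == "morning":
        return "news-briefing"
    if zone == "bedroom" and context.get("time_of_day") == "evening":
        return "sleep-sounds"
    # fall back when zone/context do not match a known rule
    return "default-playlist"
```

In a real system the mapping would likely be learned or server-side rather than a rule table; the sketch only shows the claimed inputs (zone indication, contextual data) and output (an indication of identified audio content).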

Systems and methods to enhance interactive engagement with shared content by a contextual virtual agent

Systems and methods are described to enhance interactive engagement during simultaneous delivery of serial or digital content (e.g., audio, video) to a plurality of users. A machine-based awareness of the context of the content and/or one or more user reactions to the presentation of the content may be used as a basis to interrupt content delivery in order to intersperse a snippet that includes a virtual agent with an awareness of the context(s) of the content and/or the one or more user reactions. This “contextual virtual agent” (CVA) enacts actions and/or dialog based on the one or more machine-classified contexts coupled with identified interests and/or aspirations of individuals within the group of users. The CVA may also base its activities on a machine-based awareness of “future” content that has not yet been delivered to the group but that has been classified by natural language and/or computer vision processing. Interrupting the delivery of content substantially simultaneously to a group of users and initiating dialog regarding the content by a CVA enhances opportunities for users to engage with each other about their shared interactive experience.

ELECTRONIC DEVICE, SERVER AND METHODS FOR VIEWPORT PREDICTION BASED ON HEAD AND EYE GAZE

A method performed by an electronic device for requesting tiles relating to a viewport of an ongoing omnidirectional video stream is provided. The ongoing omnidirectional video stream is provided by a server to be displayed to a user of the electronic device. The electronic device predicts, for an impending time period, a future head gaze of the user in relation to a current head gaze of the user, based on: a current head gaze relative to a position of the user's shoulders, a limitation of the head gaze bounded by that shoulder position, and a current eye gaze and eye movements of the user. The electronic device then sends a request to the server. The request requests tiles relating to the viewport for the impending time period, selected based on the predicted future head gaze of the user.
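A minimal sketch of the prediction step, reduced to one axis (yaw): eye movement is extrapolated forward as a proxy for where the head will turn, and the result is clamped to an anatomical range around the shoulder orientation. The extrapolation rule, the one-second horizon, and the 90-degree limit are assumptions, not values from the abstract.

```python
def predict_head_yaw(head_yaw, shoulder_yaw, eye_yaw_velocity,
                     horizon_s=1.0, max_head_turn=90.0):
    """Predict future head yaw (degrees) for an impending time period.

    Eye movements are assumed to lead head movements, and the head is
    assumed unable to rotate more than max_head_turn degrees away from
    the shoulder orientation (illustrative bound).
    """
    # extrapolate: eyes lead the head in the direction they are moving
    predicted = head_yaw + eye_yaw_velocity * horizon_s
    # clamp to the shoulder-bounded range of head motion
    lo = shoulder_yaw - max_head_turn
    hi = shoulder_yaw + max_head_turn
    return max(lo, min(hi, predicted))
```

A tile request would then cover the viewport centered on the predicted yaw rather than the current one.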

A Method, An Apparatus and a Computer Program Product for Video Encoding and Video Decoding
20230012201 · 2023-01-12 ·

The embodiments relate to a method including: generating a bitstream defining a presentation including omnidirectional visual media content; encoding into the bitstream a parameter to indicate viewport-control options for viewing the presentation, wherein the viewport-control options include options controllable by a receiving device and options not controllable by the receiving device; sending the bitstream to the receiving device; receiving one of the indicated viewport-control options from the receiving device as a response; and streaming the presentation to the receiving device. When the response includes an indication of a viewport control controllable by the receiving device, the method also includes receiving information on viewport definitions from the receiving device during streaming of the presentation and adapting the presentation accordingly; when the response includes an indication of a viewport control not controllable by the receiving device, the presentation is streamed to the receiving device according to the viewport control specified in the response.
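The negotiation described above can be sketched as a validation step on the sender side: check that the receiver's response is one of the signalled options, and note whether viewport updates should be expected during streaming. The option names and the dictionary shape are invented for illustration.

```python
def negotiate_viewport_control(offered, response):
    """Validate the receiver's chosen viewport-control option.

    offered: {"controllable": [...], "fixed": [...]} as signalled in the
    bitstream (hypothetical representation). Returns True when the sender
    should expect viewport definitions from the receiver while streaming.
    """
    if response not in offered["controllable"] + offered["fixed"]:
        raise ValueError("response is not an offered viewport-control option")
    return response in offered["controllable"]
```

The boolean result then selects between the two streaming branches in the abstract: adapt to receiver-supplied viewport definitions, or stream according to the fixed option named in the response.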

Systems and methods for video delivery based upon saccadic eye motion

A method is provided for displaying immersive video content according to eye movement of a viewer. The method includes the steps of detecting, using an eye tracking device, a field of view of at least one eye of the viewer; transmitting eye tracking coordinates from the detected field of view to an eye tracking processor; identifying a region on a video display corresponding to the transmitted eye tracking coordinates; adapting the immersive video content from a video storage device at a first resolution for a first portion of the immersive video content and a second resolution for a second portion of the immersive video content, the first resolution being higher than the second resolution; displaying the first portion of the immersive video content on the video display within a zone; and displaying the second portion of the immersive video content on the video display outside of the zone.
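The zone test at the core of this method reduces to a distance check: content inside a radius around the gaze point is served at the first (higher) resolution, content outside at the second. The circular zone and the coordinate convention are assumptions; the abstract does not define the zone's shape.

```python
def resolution_for_tile(tile_center, gaze_point, zone_radius):
    """Choose 'high' inside the gaze zone and 'low' outside it (sketch).

    tile_center, gaze_point: (x, y) display coordinates;
    zone_radius: radius of the assumed circular zone around the gaze point.
    """
    dx = tile_center[0] - gaze_point[0]
    dy = tile_center[1] - gaze_point[1]
    inside = (dx * dx + dy * dy) ** 0.5 <= zone_radius
    return "high" if inside else "low"
```

During saccades the gaze point moves faster than the zone can be re-rendered, which is why the patent ties delivery to saccadic eye motion rather than to the instantaneous fixation alone.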

Secondary content insertion in 360-degree video

Secondary content such as an advertisement may be inserted based on users' interests in 360-degree video streaming. Users may have different interests and may watch different areas within a 360-degree video. The information about the area(s) of 360-degree scenes that users watch the most may be used to select ad(s) relevant to their interests. One or more secondary content viewports may be defined within a 360-degree video frame. Secondary content viewport parameter(s) may be tracked. For example, statistics of the user's head orientation for some time leading up to the presentation of the ad(s) may be collected. Secondary content may be determined based on the tracked secondary content viewport parameter(s).
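One plausible reading of the tracking step: accumulate dwell counts of head-orientation samples per defined viewport, then place the secondary content in the viewport watched most. The yaw-range representation of viewports is an assumption for illustration.

```python
from collections import Counter

def select_ad_viewport(head_yaw_samples, viewports):
    """Pick the secondary-content viewport the user looked toward most.

    head_yaw_samples: head yaw angles (degrees) collected before the ad;
    viewports: {name: (min_yaw, max_yaw)} regions of the 360-degree frame.
    Returns the name with the highest dwell count, or None if no samples
    fall into any viewport.
    """
    dwell = Counter()
    for yaw in head_yaw_samples:
        for name, (lo, hi) in viewports.items():
            if lo <= yaw < hi:
                dwell[name] += 1
    return dwell.most_common(1)[0][0] if dwell else None
```

Aggregating such counts across many users would give the "areas users watch the most" statistic the abstract mentions.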

Display device and controlling method of display device

A display device and a method capable of rotating a display based on a type of a user command are provided. The display device according to the disclosure receives a user command while first content is displayed on the display, the display being configured to operate in a first orientation while displaying the first content; maintains the display in the first orientation when the received user command is a command to control a feature corresponding to the first content; determines, based on a type of second content, whether to control the display to operate in the first orientation or in a second orientation different from the first orientation when the received user command is a command to display the second content on the display; and controls a motor to rotate the display based on the determined first orientation or second orientation.
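The decision logic splits on command type before it ever considers content: feature-control commands leave the orientation alone, while display commands pick an orientation from the new content's type. The command and content type labels below are invented; the abstract does not name concrete content types.

```python
def target_orientation(command_type, current_orientation, content_type=None):
    """Decide the display orientation for a user command (sketch only).

    command_type: "control-feature" (e.g., volume for current content) or
    "display-content" (switch to second content).
    """
    if command_type == "control-feature":
        # feature commands never rotate the display
        return current_orientation
    # orientation chosen by the type of the second content (assumed rule:
    # short-form/vertical video prefers portrait)
    return "portrait" if content_type == "short-form" else "landscape"
```

Only when the returned orientation differs from the current one would the motor be driven.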

Methods and apparatus to determine an audience composition based on voice recognition

Methods, apparatus, systems and articles of manufacture are disclosed. An example apparatus includes a controller to cause a people meter to emit a prompt for input of audience identification information at a first time and determine a first audience count based on the input, an audio detector to determine a second audience count based on signatures generated from audio data captured in the media environment, and a comparator to cause the people meter to not emit the prompt for at least a first time period after the first time when the first audience count is equal to the second audience count.
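The comparator's rule can be sketched directly: suppress the prompt while the meter-reported count and the audio-derived count agree and the suppression window since the last prompt has not elapsed. The one-hour window is an invented placeholder for the "first time period" in the abstract.

```python
def should_prompt(meter_count, audio_count, elapsed_s, min_interval_s=3600):
    """Decide whether the people meter should emit its identification prompt.

    meter_count: audience count from the last prompted input;
    audio_count: count inferred from signatures of captured audio;
    elapsed_s: seconds since the last prompt. The prompt is suppressed
    while the two counts agree and the window has not expired.
    """
    if meter_count == audio_count and elapsed_s < min_interval_s:
        return False
    return True
```

The effect is fewer interruptions of the audience when passive audio measurement already confirms the panel composition.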

SYSTEM AND METHOD FOR PROVIDING CONTENT IN AUTONOMOUS VEHICLES BASED ON PERCEPTION DYNAMICALLY DETERMINED AT REAL-TIME
20180007414 · 2018-01-04 ·

In one embodiment, an image analysis is performed on an image captured using a camera mounted on an autonomous vehicle, the image representing an exterior environment of the autonomous vehicle. Localization information surrounding the autonomous vehicle is obtained at a point in time. A perception of an audience external to the autonomous vehicle is determined based on the image analysis and the localization information. One or more content items are received from one or more content servers over a network in response to the perception of the audience. A first content item selected from the one or more content items is displayed on a display device mounted on an exterior surface of the autonomous vehicle.
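The selection stage of this pipeline might look like the following sketch: filter candidate items by the vehicle's locality, then prefer items targeted at the perceived audience size. The catalog fields, the crowd threshold, and the ranking rule are all invented; the abstract only states that content is chosen from server-supplied items in response to the perceived audience.

```python
def select_exterior_content(detected_people, locality, catalog):
    """Pick a content item for the vehicle's external display.

    detected_people: audience size from image analysis;
    locality: area name from localization;
    catalog: list of {"id", "locality", "target"} dicts from content servers.
    """
    candidates = [c for c in catalog if c["locality"] in (locality, "any")]
    if not candidates:
        return None
    # assumed heuristic: crowds get crowd-targeted items
    key = "crowd" if detected_people >= 5 else "individual"
    targeted = [c for c in candidates if c.get("target") == key]
    return (targeted or candidates)[0]["id"]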

Systems and Methods for Assessing Viewer Engagement
20180007431 · 2018-01-04 ·

A system for quantifying viewer engagement with a video playing on a display includes at least one camera to acquire image data of a viewing area in front of the display. A microphone acquires audio data emitted by a speaker coupled to the display. The system also includes a memory to store processor-executable instructions and a processor. Upon execution of the processor-executable instructions, the processor receives the image data and the audio data and determines an identity of the video shown on the display based on the audio data. The processor also estimates a first number of people present in the viewing area and a second number of people engaged with the video. The processor further quantifies viewer engagement with the video based on the first number of people and the second number of people.
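One plausible way to combine the two estimates is as a ratio: the engaged count over the present count. The abstract does not give a formula, so this specific metric is an assumption.

```python
def engagement_score(people_present, people_engaged):
    """Quantify viewer engagement as the engaged fraction of people present.

    Assumed metric, not specified in the abstract; returns 0.0 for an
    empty viewing area to avoid division by zero.
    """
    if people_present == 0:
        return 0.0
    return people_engaged / people_present
```

Paired with the audio-based video identification, this yields a per-title engagement figure for the monitored household.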