Patent classifications
H04N21/00
Generating a video frame for a user interface operation during content presentation
In some implementations, a device includes one or more processors and a non-transitory memory. In some implementations, a method includes obtaining a request to perform a user interface operation at a client device while the client device is playing a media content item in a buffered content presentation mode. In some implementations, the method includes identifying a first image that represents a current playback position of the media content item at the client device. In some implementations, the method includes identifying a second image that represents a user interface element associated with the user interface operation. In some implementations, the method includes generating a first video frame that corresponds to the user interface operation by blending the first image with the second image. In some implementations, the method includes triggering the client device to present the first video frame in a real-time content presentation mode.
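The blending step described above could be sketched as a simple per-pixel alpha blend. The function name, pixel representation, and alpha value below are illustrative assumptions; the abstract does not specify a particular blend function:

```python
def blend_frame(content_pixels, overlay_pixels, alpha=0.6):
    """Blend a UI-element image onto a content image, pixel by pixel.

    content_pixels: pixels of the image at the current playback position.
    overlay_pixels: pixels of the user interface element image.
    alpha: overlay opacity (assumed value; not specified in the abstract).
    Each pixel is an (r, g, b) tuple; the result is the blended frame.
    """
    return [
        tuple(int(alpha * o + (1 - alpha) * c) for c, o in zip(cp, op))
        for cp, op in zip(content_pixels, overlay_pixels)
    ]
```

The blended result would then be encoded as a video frame and delivered in the real-time presentation mode.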
Devices, methods, and graphical user interfaces for moving a current focus using a touch-sensitive remote control
An electronic device provides, to a display, data to present a user interface that includes a plurality of user interface objects, and a current focus on a first user interface object of the plurality of user interface objects. While the display is presenting the user interface, the electronic device receives from the user input device an input that corresponds to a gesture detected on the touch-sensitive surface of the user input device. The gesture includes a movement of a contact across the touch-sensitive surface followed by a lift-off of the contact from the touch-sensitive surface. The electronic device, in accordance with a determination that the gesture satisfies movement criteria, provides, to the display, data to move the current focus in the user interface from the first user interface object to a second user interface object of the plurality of user interface objects.
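The movement-criteria check could be modeled as below. The threshold value, the one-dimensional row of focusable objects, and the horizontal-only gesture are simplifying assumptions for illustration; the abstract leaves the criteria unspecified:

```python
def next_focus(num_objects, current_index, dx, threshold=40.0):
    """Decide whether a swipe moves the current focus to a neighbor.

    dx: the contact's total horizontal movement before lift-off, in points.
    threshold: assumed minimum movement for the gesture to satisfy
    the movement criteria. Focus is clamped to the row of objects.
    """
    if abs(dx) < threshold:
        return current_index          # criteria not satisfied; focus stays
    step = 1 if dx > 0 else -1
    return max(0, min(num_objects - 1, current_index + step))
```

A real implementation would also weigh velocity and direction, but the clamp shows why focus cannot move past the last object in a row.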
VIRTUAL REALITY RESOURCE SCHEDULING OF PROCESSES IN A CLOUD-BASED VIRTUAL REALITY PROCESSING SYSTEM

Systems and methods are disclosed to receive a request for a virtual reality render project that includes information specifying virtual reality video data to be used to create a virtual reality render; determine a plurality of virtual reality jobs required to create the virtual reality render from the virtual reality video data; determine the availability of a plurality of virtual reality nodes across the network; create a virtual reality render map that specifies a processing sequence of the plurality of virtual reality jobs across the plurality of virtual reality nodes to create the virtual reality render, the virtual reality render map being created based on at least the availability of the plurality of virtual reality nodes; and process the plurality of virtual reality jobs at the plurality of virtual reality nodes to create the virtual reality render.
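An availability-based render map could be sketched as a simple assignment of jobs to available nodes. The round-robin policy and the data shapes are illustrative assumptions; the abstract only requires that the map be based on node availability:

```python
def build_render_map(jobs, nodes):
    """Assign VR render jobs to available nodes, round-robin.

    jobs: ordered list of job names (the processing sequence).
    nodes: dict mapping node name -> availability (bool).
    Returns a dict mapping each job to the node that will process it.
    """
    available = [name for name, is_free in nodes.items() if is_free]
    if not available:
        raise RuntimeError("no virtual reality nodes available")
    return {job: available[i % len(available)] for i, job in enumerate(jobs)}
```

For example, with nodes `{"a": True, "b": False, "c": True}`, three jobs would be spread across nodes `a` and `c`, skipping the unavailable `b`.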
Display dependent analytics
Apparatus and methods are disclosed for display-related analysis of call data in an IPBX. In an example embodiment, an apparatus is configured to route data/VoIP calls via a data-communications server. An interface circuit is configured to select parameters of interest based on capabilities of a set of devices and to generate subscription requests that subscribe the devices to the parameters of interest. A processing circuit is configured to generate call summary metrics from call event messages for calls routed by the server, with the subscription requests being associated with the parameters of interest. The call summary metrics are evaluated in connection with the parameters of interest as subscribed to by the devices, and results of the evaluation are provided to the communication devices.
CONTROL METHOD FOR IMAGE OUTPUT DEVICE AND DISPLAY SYSTEM
A control method for an image output device includes performing wireless communication with a display device using a first frequency band and a second frequency band, acquiring communication traffic of the first frequency band and communication traffic of the second frequency band, and, when the communication traffic of the first frequency band is smaller than the communication traffic of the second frequency band, transmitting audio data using the first frequency band and transmitting image data using the second frequency band.
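The claimed routing rule maps directly to a comparison of the two bands' measured traffic. The sketch below mirrors that rule; the return format is an illustrative assumption:

```python
def assign_streams(traffic_band1, traffic_band2):
    """Route audio over the less-loaded band and image data over the other.

    When the first band's traffic is smaller than the second band's,
    audio is sent over band 1 and image data over band 2 (as claimed);
    otherwise the assignment is reversed.
    """
    if traffic_band1 < traffic_band2:
        return {"audio": "band1", "image": "band2"}
    return {"audio": "band2", "image": "band1"}
```

The rationale is bandwidth: image data dominates the payload, so it takes the band with more measured headroom in the opposite direction, while latency-sensitive audio takes the quieter band.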
SLOW-MOTION DEVICE
The invention relates to a device for capturing, storing, and displaying digital images, having an image acquisition unit for acquiring an image sequence of an object arranged in the detection field of the image acquisition unit, a storage unit for saving the image sequence of the object, a display unit for displaying the image sequence, and a control unit for controlling the image acquisition unit, the storage unit, and/or the display unit. The control unit acts on the image acquisition unit via image control means such that, during a given image acquisition interval, the image sequence captured by the image acquisition unit is provided to the storage unit as a slow-motion image sequence. This sequence is displayed on the display unit and can be saved in the storage unit, thereby further enhancing the memorability of a celebration.
Systems and Methods for Creating an Immersive Video Content Environment
There is provided a system including a non-transitory memory storing an executable code and a hardware processor executing the code to receive a video content including a plurality of frames showing a scene from a perspective of a real camera, create a three-dimensional (3D) model of the scene using the plurality of frames, store the 3D model of the scene in the non-transitory memory, construct a synthetic view of the scene showing additional perspectives from one or more virtual cameras at one or more locations in the scene, transmit the synthetic view of the scene for display on a display, display a scene of the video content on the display, track a position of a viewer moving in a room, and adjust the display of the scene based on the position of the viewer in the room relative to the display.
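The final step, adjusting the displayed scene from the viewer's tracked position, could be approximated with a simple head-parallax model. The function, the linear model, and the `scale` constant are illustrative assumptions, not from the abstract:

```python
def parallax_offset(viewer_x, display_x, depth, scale=0.1):
    """Horizontal screen offset for a scene layer, given viewer position.

    viewer_x, display_x: horizontal room coordinates of viewer and display.
    depth: layer distance behind the screen; deeper layers shift less.
    scale: assumed room-to-screen conversion factor.
    """
    return scale * (viewer_x - display_x) / max(depth, 1e-6)
```

Applying a per-layer offset like this to the 3D model's layers makes the rendered perspective follow the viewer as they move through the room, which is the effect the synthetic virtual-camera views enable.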