Patent classifications
H04L65/60
Systems and methods of universal video embedding
Systems and methods described in this application are directed to universal online video embedding through a single platform. Videos are stored across the internet in many different formats on a wide variety of video platforms, websites, and video publishers that make video content available online. Systems and methods of the inventive subject matter facilitate handling and embedding of videos from any number of different video sources through a single platform by, for example, initializing known video platforms having available APIs or SDKs to streamline embedding of those videos. In the absence of an API or SDK, the service platform can work through several steps to determine how best to present the video to a client, whether that involves embedding the video or executing a callback that causes an end-user application to open a webpage URL in a web browser to access the video.
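The decision flow above can be sketched as follows. This is an illustrative assumption, not the patented implementation: the platform table, the `resolve_embedding` helper, and the callback mechanism are all hypothetical names chosen for the example.

```python
# Hypothetical sketch of the embed-or-callback decision described in the
# abstract. Platform names and SDK identifiers are illustrative assumptions.
KNOWN_PLATFORMS = {
    "youtube.com": "youtube_sdk",
    "vimeo.com": "vimeo_sdk",
}

def resolve_embedding(url, open_in_browser):
    """Decide how to present a video from an arbitrary source URL."""
    domain = url.split("/")[2] if "//" in url else url.split("/")[0]
    for platform, sdk in KNOWN_PLATFORMS.items():
        if domain.endswith(platform):
            # Known platform with an available API/SDK: embed directly.
            return ("embed", sdk)
    # No API or SDK available: execute a callback that causes the
    # end-user application to open the webpage URL in a web browser.
    open_in_browser(url)
    return ("callback", url)
```

A client would call `resolve_embedding` once per video source and branch on the returned mode, so a single code path serves every source regardless of where the video is hosted.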
Filtering video content items
Methods and systems for filtering video content items are described herein. The system identifies a plurality of video content items that are linked to respective image content items. The system determines, for each of the plurality of video content items, whether a video content item corresponds to a respective image content item. The system causes to be provided information identifying the plurality of video content items. For each video content item of the plurality of video content items that corresponds to a respective image content item, the system causes to be provided an indicator that correspondence has been verified.
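The steps above can be sketched as a small Python routine. The correspondence check here (tag overlap) is a stand-in assumption, since the abstract does not specify a particular matching method; the field names are likewise hypothetical.

```python
# Illustrative sketch of the filtering flow: identify videos linked to
# images, check correspondence, and attach a verified indicator.
def corresponds(video, image):
    """Assumed check: video and image share at least one content tag."""
    return bool(set(video["tags"]) & set(image["tags"]))

def filter_video_items(videos, images_by_id):
    """Return each video item with an indicator of verified correspondence."""
    results = []
    for video in videos:
        image = images_by_id.get(video["image_id"])
        verified = image is not None and corresponds(video, image)
        results.append({"id": video["id"], "verified": verified})
    return results
```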
IN-BAND VIDEO COMMUNICATION
A method for video management within a CCTV system includes receiving, at a computing device via one or more intermediate devices in the CCTV system, a video stream generated by a sensor device of the CCTV system. The video stream includes a plurality of video frames. The computing device sends, via the one or more intermediate devices of the CCTV system, an instruction to the sensor device. The computing device receives, via the one or more intermediate devices of the CCTV system, one or more frames of the plurality of video frames embedded with metadata associated with performance of the instruction by the sensor device. Performance of the CCTV system is evaluated using the metadata embedded within the one or more video frames.
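The in-band exchange can be sketched in two halves: the sensor embeds metadata about performing an instruction into frames it already streams, and the computing device extracts that metadata to evaluate the system. The frame dictionaries, latency field, and budget threshold are illustrative assumptions, not the patent's actual encoding.

```python
# Sensor side: attach performance metadata in-band, inside a video frame.
def embed_metadata(frame, instruction_id, latency_ms):
    frame["metadata"] = {"instruction": instruction_id,
                         "latency_ms": latency_ms}
    return frame

# Device side: extract embedded metadata from received frames and
# evaluate CCTV system performance from it.
def evaluate_performance(frames, max_latency_ms=100):
    latencies = [f["metadata"]["latency_ms"]
                 for f in frames if "metadata" in f]
    if not latencies:
        return None  # no in-band metadata received
    return {"mean_latency_ms": sum(latencies) / len(latencies),
            "within_budget": max(latencies) <= max_latency_ms}
```

Carrying the metadata inside the frames themselves means it traverses the same intermediate devices as the video, so no separate control channel back from the sensor is needed.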
Shared speech processing network for multiple speech applications
A device to process speech includes a speech processing network that includes an input configured to receive audio data corresponding to audio captured by one or more microphones. The speech processing network also includes one or more network layers configured to process the audio data to generate a network output. The speech processing network includes an output configured to be coupled to multiple speech application modules to enable the network output to be provided as a common input to each of the multiple speech application modules. A first speech application module corresponds to a speaker verifier, and a second speech application module corresponds to a speech recognition network.
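The shared-frontend idea above can be sketched minimally: the shared network layers run once per audio buffer, and the single network output is fanned out to every attached application module. The class name, the trivial normalization standing in for learned layers, and the lambda modules are all assumptions for illustration.

```python
# Sketch of a shared speech processing network whose output is provided
# as a common input to multiple speech application modules.
class SharedSpeechNetwork:
    def __init__(self, modules):
        # modules: e.g. a speaker verifier and a speech recognizer,
        # each consuming the same shared network output.
        self.modules = modules

    def shared_layers(self, audio):
        # Placeholder for the shared network layers (in practice a
        # learned feature extractor); here, simple peak normalization.
        peak = max(abs(x) for x in audio) or 1.0
        return [x / peak for x in audio]

    def process(self, audio):
        features = self.shared_layers(audio)  # computed once per buffer
        # Fan the common network output out to every application module.
        return {name: fn(features) for name, fn in self.modules.items()}
```

The design choice is that the expensive shared layers execute once regardless of how many application modules are attached, rather than each module running its own frontend on the raw audio.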
SYNCHRONIZING FILTER METADATA WITH A MULTIMEDIA PRESENTATION
A method, system and apparatus for applying and synchronizing filter information with a multimedia presentation, such as a movie provided in a video-on-demand context, to suppress objectionable content. In one example, filter information, which includes an indicia of a portion of the multimedia presentation including objectionable content and a type of suppression action, is provided on either a set-top-box or a video-on-demand server. A user selects a particular video-on-demand presentation, and the selection is transmitted to the set-top-box. Additionally, whether in a video-on-demand, DVD, or other environment, it may be necessary to synchronize the filter information with the multimedia content so that the proper objectionable content is suppressed.
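The filter-application step can be sketched as follows: each filter entry marks a time range of objectionable content together with a suppression action, and the player consults the filter list at the current playback position. The field names and the particular actions (`mute`, `skip`) are assumptions chosen for the example.

```python
# Illustrative filter metadata: time ranges plus a suppression action.
FILTERS = [
    {"start": 10.0, "end": 12.5, "action": "mute"},
    {"start": 30.0, "end": 45.0, "action": "skip"},
]

def suppress(position, filters):
    """Return the action to take at a playback position (seconds)."""
    for f in filters:
        if f["start"] <= position < f["end"]:
            if f["action"] == "skip":
                # Skipping: jump playback past the filtered range.
                return ("seek", f["end"])
            return ("mute", f["end"])  # mute until the range ends
    return ("play", position)
```

Synchronization matters because the time ranges in the filter metadata only suppress the intended content if the player's clock agrees with the timeline the ranges were authored against.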