Patent classifications
H04N21/44008
METHOD FOR PLACING DELIVERY INFORMATION, ELECTRONIC DEVICE, AND STORAGE MEDIUM
The present disclosure relates to a method for placing delivery information, an electronic device, and a storage medium. The method includes: dividing a candidate video for placing the delivery information into a plurality of video frames; determining a plurality of key frames from the plurality of video frames based on a correlation degree of every two adjacent video frames; determining a target frame from the plurality of key frames based on existing conversion information and sampling conversion information corresponding to each of the plurality of key frames; and placing a delivery component corresponding to the target frame into the target frame.
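The two selection steps above (key frames by adjacent-frame correlation, then a target frame by conversion information) can be sketched roughly as follows. This is a minimal illustration, not the patented method: frames are assumed to be feature vectors, correlation is taken as cosine similarity, the threshold is invented, and the scoring rule (sampled conversion minus existing conversion) is one plausible reading of the abstract.

```python
def correlation(a, b):
    """Cosine similarity between two frame feature vectors (an assumption)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def select_key_frames(frames, threshold=0.9):
    """A frame starts a key frame when its correlation with the
    previous frame falls below the threshold (a shot change)."""
    keys = [0]  # the first frame is always treated as a key frame
    for i in range(1, len(frames)):
        if correlation(frames[i - 1], frames[i]) < threshold:
            keys.append(i)
    return keys

def select_target_frame(key_frames, existing_conv, sampling_conv):
    """Pick the key frame whose sampled conversion most exceeds the
    conversion already achieved there (illustrative scoring rule)."""
    return max(key_frames, key=lambda i: sampling_conv[i] - existing_conv[i])
```

For example, with four frames where the third is a clear shot change, `select_key_frames` returns indices 0 and 2, and `select_target_frame` then ranks those two by conversion gain.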
SYSTEMS AND METHODS FOR RECOMMENDING CONTENT USING PROGRESS BARS
During playback of a content item, a media signature corresponding to a first portion of the content item is identified. A number of media signatures representing portions of a plurality of other content items may have been previously identified and stored. Each stored media signature may also include an identifier of an associated content item and a timestamp corresponding to a position in the associated content item at which the signature is located. If it is determined that the identified media signature matches a stored media signature, a progress bar is generated for display comprising an identifier of the content item associated with the matching stored media signature, and a progress indicator corresponding to a timestamp associated with the stored media signature.
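The matching flow above can be sketched as a signature store keyed by fingerprint. This is an illustrative stand-in only: a raw hash is not a real media signature, and the store layout and progress computation are assumptions.

```python
import hashlib

SIGNATURE_STORE = {}  # fingerprint -> (content_id, timestamp_seconds)

def fingerprint(portion: bytes) -> str:
    """Stand-in media signature: a hash of the raw portion bytes."""
    return hashlib.sha256(portion).hexdigest()

def store_signature(portion, content_id, timestamp):
    """Record a previously identified portion of another content item."""
    SIGNATURE_STORE[fingerprint(portion)] = (content_id, timestamp)

def progress_bar_for(portion, duration_by_id):
    """If the playing portion matches a stored signature, return the
    associated identifier and a progress indicator (fraction elapsed)."""
    match = SIGNATURE_STORE.get(fingerprint(portion))
    if match is None:
        return None
    content_id, timestamp = match
    return {"content_id": content_id,
            "progress": timestamp / duration_by_id[content_id]}
```

The progress indicator here is simply the stored timestamp divided by the item's total duration, which is enough to position a marker on a rendered progress bar.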
INFORMATION PROCESSING SYSTEM AND INFORMATION PROCESSING METHOD
An information processing system for obtaining an audio content file for video data providing video content representing a sport event, including: a receiver configured to receive a data stream including the video data; a preference data obtainer configured to obtain preference data, wherein the preference data indicate a selected competitor participating in the sport event; a category identifier obtainer configured to obtain a category identifier from a machine learning algorithm into which the video data is input, wherein the machine learning algorithm is trained to classify a scene represented in the video content into a category of a predetermined set of categories associated with the sport event, wherein the category identifier indicates the category into which the scene is classified; an audio content file obtainer configured to obtain, based on the obtained category identifier and the obtained preference data, the audio content file from a prestored set of audio content files, wherein the audio content file provides audio content associated with the category of the scene and the preference data; and a synchronizer configured to synchronize the audio content and the video content for synchronized playback of the scene by a media player configured to play back the video content and the audio content file.
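The classify-then-select pipeline above can be sketched end to end. Everything here is a toy assumption: the classifier is a stub standing in for the trained model, and the category names, audio file names, and lookup keys are invented.

```python
# Prestored audio files keyed by (category, preferred competitor) -- invented data.
AUDIO_FILES = {
    ("goal", "Team A"): "cheer_team_a.wav",
    ("goal", "Team B"): "cheer_team_b.wav",
    ("foul", "Team A"): "boo_team_a.wav",
}

def classify_scene(video_features):
    """Stub for the machine learning classifier: maps scene features
    to a category identifier from the predetermined set."""
    return "goal" if "ball_in_net" in video_features else "foul"

def obtain_audio(video_features, preference):
    """Select the audio content file from the category and preference."""
    category = classify_scene(video_features)
    return AUDIO_FILES.get((category, preference))

def synchronize(scene_start, audio_file):
    """Pair the audio file with the scene's start time so the media
    player can play both back together."""
    return {"play_at": scene_start, "audio": audio_file}
```

The point of the sketch is the data flow: video features produce a category identifier, the identifier plus the viewer's preference select one prestored file, and the synchronizer only has to align timestamps.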
SYSTEMS AND METHODS FOR DETERMINING TYPES OF REFERENCES IN CONTENT AND MAPPING TO PARTICULAR APPLICATIONS
Systems and methods are provided herein for determining types of references within a content item and mapping them to particular applications. A content management application identifies an entity and a context of the entity at a location within the content item. The content management application may identify the entity and the context of the entity in real time as a first user device processes the content item, or the content management application may identify and store the entity and the context of the entity in a database before providing the content item. After determining a presence of a second user device associated with a profile, the content management application determines at least one application associated with the entity and the context of the entity on the second user device and launches the application to create an immersive content consumption experience.
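The entity/context-to-application mapping above reduces to a keyed lookup gated by what is installed on the second device. The registry contents, profile shape, and app names below are illustrative assumptions, not the disclosed system.

```python
# Invented mapping from (entity, context) pairs to application identifiers.
APP_REGISTRY = {
    ("pizza", "ordering"): "food-delivery-app",
    ("pizza", "recipe"): "cooking-app",
    ("song", "playing"): "music-app",
}

def resolve_app(entity, context, installed_apps):
    """Map an entity and its context, identified in the content item,
    to an application actually present on the second user device."""
    app = APP_REGISTRY.get((entity, context))
    return app if app in installed_apps else None
```

Note that the same entity resolves to different applications depending on context, which is the core of the abstract: "pizza" in an ordering context launches a delivery app, while "pizza" in a recipe context launches a cooking app.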
CONSTRUCTION OF ENVIRONMENT VIEWS FROM SELECTIVELY DETERMINED ENVIRONMENT IMAGES
A computing system may include a client device and a server. The client device may be configured to access a stream of image frames that depict an environment, determine, from the stream of image frames, environment images that satisfy selection criteria, and transmit the environment images to the server. The server may be configured to receive the environment images from the client device, construct a spatial view of the environment based on position data included with the environment images, and navigate the spatial view, including by receiving a movement direction and progressing from a current environment image depicted for the spatial view to a next environment image based on the movement direction.
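The client-side selection and server-side navigation can be sketched in a few lines. The selection criterion (minimum camera movement since the last kept frame) and the one-dimensional positions are simplifying assumptions; real position data would be multi-dimensional.

```python
def select_environment_images(frames, min_move=1.0):
    """Client side: keep a frame only when the camera has moved far
    enough since the last selected frame. Each frame is (position, image)."""
    selected = []
    for pos, img in frames:
        if not selected or abs(pos - selected[-1][0]) >= min_move:
            selected.append((pos, img))
    return selected

def navigate(selected, current_index, direction):
    """Server side: step from the current environment image to the next
    one in the movement direction (+1 forward, -1 backward), clamped."""
    nxt = current_index + direction
    return max(0, min(nxt, len(selected) - 1))
```

Filtering on the client keeps redundant near-duplicate frames off the network, while the server only ever works with the sparse, position-tagged set when constructing and traversing the spatial view.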
Dynamic content serving using a media device
Methods, systems, devices, and computer-program products are described herein for providing dynamic content serving. The dynamic content serving technology can identify, in real-time, programming arriving at a client device, identify a specific media segment being received and/or displayed, and determine which pre-stored substitute media segment may be used to replace the identified segment. A picture-in-picture channel can be used to display the substitute media segment.
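The substitution step can be sketched as a lookup that routes a replacement into the picture-in-picture channel. Identifying the segment by an exact identifier here is a stand-in for whatever real-time recognition the system performs; the segment and ad names are invented.

```python
# Invented table of pre-stored substitute segments.
SUBSTITUTES = {"ad-break-17": "local-ad-42"}

def serve(segment_id):
    """Decide what the client displays: if a substitute exists for the
    identified segment, show it via the picture-in-picture channel."""
    sub = SUBSTITUTES.get(segment_id)
    return {"main": segment_id, "pip": sub}
```

Segments without a substitute simply pass through with an empty picture-in-picture slot.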
Methods, devices, and systems for embedding visual advertisements in video content
Aspects of the subject disclosure may include, for example, embodiments that include obtaining video content, wherein the video content comprises a plurality of frames; monitoring, by an image sensor, a facial feature of a user to determine a visual focus of the user in relation to the video content; and detecting, from a group of frames of the plurality of frames, at least a reduction in movements of objects in the group of frames. Further embodiments include determining, according to the monitoring and the detecting, a measure of attention of the user within a region of the group of frames; determining that the measure of attention of the user within the region of the group of frames satisfies a threshold; and embedding, via a communication device, a visual advertisement in the region in at least a portion of subsequent frames of the plurality of frames. Other embodiments are disclosed.
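The attention-gated embedding decision can be sketched as below. The gaze representation (sample points), the motion metric (a per-frame scalar), and both thresholds are assumptions standing in for the sensor and vision pipeline.

```python
def in_region(point, region):
    """Is a gaze sample (x, y) inside the rectangle (x0, y0, x1, y1)?"""
    (x, y), (x0, y0, x1, y1) = point, region
    return x0 <= x <= x1 and y0 <= y <= y1

def measure_attention(gaze_points, region):
    """Measure of attention: fraction of gaze samples inside the region."""
    hits = sum(1 for p in gaze_points if in_region(p, region))
    return hits / len(gaze_points) if gaze_points else 0.0

def should_embed(gaze_points, frame_motions, region,
                 attention_threshold=0.7, motion_threshold=0.2):
    """Embed the advertisement only when object motion across the group
    of frames is low AND the user's attention on the region clears the
    threshold -- the two conditions the abstract combines."""
    low_motion = max(frame_motions) <= motion_threshold
    return low_motion and measure_attention(gaze_points, region) >= attention_threshold
```

The two-condition gate mirrors the claim structure: reduced object movement makes the region a stable place to draw, and sustained visual focus makes it a place worth drawing in.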
Generating structured data from screen recordings
Generating structured data from screen recordings is disclosed, including: obtaining, from a client device, a screen recording of a user's activities on the client device with respect to a task; performing, at a server, video validation on the screen recording, including by determining whether the screen recording matches a set of validation parameters associated with the task; and generating a set of structured data based at least in part on the video validation.
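The validate-then-extract flow can be sketched as follows. The metadata field names, the validation parameters, and the shape of the structured output are invented for illustration; real extraction would analyze the recorded frames.

```python
def validate_recording(recording, params):
    """Server-side video validation: check the screen recording's
    metadata against the task's validation parameters."""
    return (recording["duration_s"] >= params["min_duration_s"]
            and recording["app"] == params["expected_app"])

def generate_structured_data(recording, params):
    """Produce structured data only from recordings that pass validation."""
    if not validate_recording(recording, params):
        return {"valid": False, "events": []}
    # Stand-in: a real system would extract events from the video frames.
    return {"valid": True, "events": recording.get("ui_events", [])}
```

Gating extraction on validation keeps recordings that do not match the task (too short, wrong application) from polluting the structured dataset.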
Method and apparatus for locating video playing node, device and storage medium
The disclosure provides a method for locating a video playing node, and relates to the fields of big data and video processing. The method includes: selecting a target video from a plurality of videos; and sending the target video, a plurality of subtitle text segments of the target video, and start time information of each of the plurality of subtitle text segments to a client, to cause the client to display the plurality of subtitle text segments and determine, in response to a trigger operation on any subtitle text segment of the plurality of subtitle text segments, a start playing node of the target video based on the start time information of that subtitle text segment. The disclosure further provides an apparatus for locating a video playing node, an electronic device, and a storage medium.
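The client-side behavior above can be sketched in a few lines, assuming each subtitle segment is a `(start_time_seconds, text)` pair; the helper names are invented.

```python
def find_segment(subtitles, query):
    """Locate the index of the first subtitle segment whose text
    contains the query, or None if no segment matches."""
    for i, (_start, text) in enumerate(subtitles):
        if query in text:
            return i
    return None

def start_node_for(subtitles, clicked_index):
    """On a trigger operation (click) on a displayed segment, return the
    playback start node from that segment's start time information."""
    start, _text = subtitles[clicked_index]
    return start
```

Because each segment carries its own start time, the client can seek without consulting the server again after the initial transfer.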
Target character video clip playing method, system and apparatus, and storage medium
Provided are a target character video clip playing method, system, and apparatus, and a storage medium. The method comprises: performing target character recognition on an entire video using image recognition technology, locating a plurality of video clips containing the target characters, and obtaining a first playing time period set corresponding to the video clips; obtaining, according to the audio clips corresponding to each character marked within the entire video, a second playing time period set corresponding to the audio clips of the respective characters; merging the time periods included in the playing time period sets to obtain a sum playing time period set for the target characters; and playing the video of the target characters according to the ordering of the playing timelines within the sum playing time period set.
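The merging step above is essentially an interval union: the image-based and audio-based time period sets are combined, and overlapping or touching periods collapse into one. A minimal sketch, assuming each period is a `(start, end)` pair in seconds:

```python
def merge_periods(image_periods, audio_periods):
    """Merge the first (image-based) and second (audio-based) playing
    time period sets into the sum playing time period set."""
    periods = sorted(image_periods + audio_periods)
    merged = []
    for start, end in periods:
        if merged and start <= merged[-1][1]:
            # Overlapping or touching the previous period: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```

Sorting first means each new period can only interact with the most recently merged one, so the union is built in a single pass; the result is already in timeline order, ready for sequential playback.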