Patent classifications
H04N21/4666
Deep reinforcement learning for personalized screen content optimization
Systems and methods are described for selecting content item identifiers for display. The system may identify a set of content items that are likely to be requested in the future based on a history of content item requests. The system then selects a first plurality of content categories using a category selection neural net and selects a first set of recommended content items for the first plurality of content categories. The system increases a reward score for the first plurality of content categories based on receiving a request for a content item that is included in the first set of recommended content items. The system also decreases the reward score for the first plurality of content categories based on determining that the requested content item is included in the set of content items that are likely to be requested in the future. The neural net is trained based on the reward score of the first plurality of content categories to reinforce reward score maximization. The trained neural net is then used to select content items for display.
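The select-recommend-reward loop described in this abstract might be sketched as follows. This is only an illustrative stand-in: the category names, the softmax sampling (in place of the category selection neural net), and the fixed reward delta are all assumptions, not the patent's actual implementation.

```python
import math
import random

def select_categories(reward_scores, k=1, temperature=1.0):
    """Softmax-sample k categories by reward score (a toy stand-in
    for the category-selection neural net)."""
    cats = list(reward_scores)
    weights = [math.exp(reward_scores[c] / temperature) for c in cats]
    return random.choices(cats, weights=weights, k=k)

def update_reward(reward_scores, categories, requested_item,
                  recommended_items, predicted_items, delta=1.0):
    """Increase the score when the requested item was among the
    recommendations; decrease it when the item was already in the
    likely-to-be-requested set (the recommendation added nothing)."""
    for c in categories:
        if requested_item in recommended_items:
            reward_scores[c] += delta
        if requested_item in predicted_items:
            reward_scores[c] -= delta
    return reward_scores

scores = {"drama": 0.0, "news": 0.0, "sports": 0.0}
scores = update_reward(scores, ["drama"], "item42",
                       recommended_items={"item42", "item7"},
                       predicted_items=set())
picked = select_categories(scores, k=1)
```

After the update, "drama" has the highest score, so it becomes the most likely category to be sampled on the next pass.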
Method, server and computer-readable medium for recommending nodes of interactive content
Disclosed are a method, a server and a computer-readable medium for recommending nodes of interactive content. When recommendation request information requesting a recommendation node for a specific node of an interactive content is received from a user generating that content, a first embedding value is calculated for a first set that includes the specific node, and a second embedding value is calculated for each second set that includes each of a plurality of nodes of one or more other interactive contents stored on the service server. A similarity between the first embedding value and each second embedding value is then calculated, and the next node of the node corresponding to the second embedding value selected based on that similarity is provided to the user as the recommendation node.
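The embed-compare-recommend flow above can be sketched with cosine similarity over set embeddings. The mean-of-node-vectors embedding and the toy data are assumptions made for illustration; the patent does not specify the embedding function.

```python
import math

def embed_set(node_vectors):
    """Embedding value for a set of nodes: here, the element-wise
    mean of the node vectors (an assumed embedding function)."""
    dim = len(node_vectors[0])
    return [sum(v[i] for v in node_vectors) / len(node_vectors)
            for i in range(dim)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def recommend_next(first_set, other_contents):
    """Return the next node of the node whose set embedding is most
    similar to the first set's embedding."""
    query = embed_set([vec for _, vec in first_set])
    best, best_sim = None, -2.0
    for content in other_contents:
        emb = embed_set([vec for _, vec in content["set"]])
        sim = cosine(query, emb)
        if sim > best_sim:
            best, best_sim = content["next_node"], sim
    return best

first = [("n1", [1.0, 0.0]), ("n2", [0.9, 0.1])]
others = [
    {"set": [("a1", [1.0, 0.05])], "next_node": "a2"},
    {"set": [("b1", [0.0, 1.0])], "next_node": "b2"},
]
recommended = recommend_next(first, others)
```

Here the first set's embedding is closest to the first other content's, so that content's next node ("a2") is returned as the recommendation node.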
VIDEO CLIPPING METHOD AND MODEL TRAINING METHOD
Provided are a video clipping method and a model training method, relating to the field of video technologies and, in particular, to short video technologies. The video clipping method includes: acquiring interaction behavior data for an original video file; determining interaction heat at respective time points of the original video file according to the interaction behavior data; selecting the N time points with the highest interaction heat as interest points of the original video file, where N is a positive integer; and clipping the original video file based on the respective interest points to obtain N clipped video files. High-quality short video files can thereby be generated.
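The heat-then-clip steps above reduce to a small pipeline. The per-second bucketing, the event data, and the fixed clip window are illustrative assumptions.

```python
from collections import Counter

def interaction_heat(event_times):
    """Interaction heat per time point: count interaction events
    (likes, comments, replays) bucketed to whole seconds."""
    return Counter(int(t) for t in event_times)

def top_interest_points(heat, n):
    """Select the N time points with the highest interaction heat."""
    return [t for t, _ in heat.most_common(n)]

def clip_ranges(points, half_window=5):
    """One clipped file per interest point: a window of
    2 * half_window seconds centered on it."""
    return [(max(0, p - half_window), p + half_window) for p in points]

events = [12.1, 12.7, 12.9, 30.2, 30.4, 55.0]   # event timestamps (s)
heat = interaction_heat(events)                  # {12: 3, 30: 2, 55: 1}
points = top_interest_points(heat, n=2)          # two hottest seconds
clips = clip_ranges(points)                      # (start, end) pairs
```

Each `(start, end)` pair corresponds to one clipped short video file around an interest point.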
SYSTEMS AND METHODS FOR GENERATING A PERSONALITY PROFILE BASED ON USER DATA FROM DIFFERENT SOURCES
Implementations claimed and described herein provide systems and methods for behavior assessment for an individual. In one implementation, user data for the individual is obtained from one or more digital sources. Categorized user data is created by transforming the user data into a platform independent format. The categorized user data is associated with a plurality of content-based bins. One or more behavioral insight categories are determined from the categorized user data. A plurality of behavioral metrics is determined based on the one or more behavioral insight categories and the categorized user data. A personality profile for the individual is generated by converting the plurality of behavioral metrics into one or more scores. A risk assessment for the individual is generated based on the personality profile.
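The bin-transform-score pipeline might be sketched as below. The bin names, the minutes-based metric, and the 0-100 score scale are invented for illustration; the patent does not disclose its actual metrics.

```python
# Hypothetical mapping from raw topics to content-based bins
# (the behavioral insight categories here are made up).
CONTENT_BINS = {"fitness": "health", "recipes": "health",
                "poker": "risk", "crypto": "risk"}

def categorize(user_data):
    """Transform source-specific records into a platform-independent
    format: total minutes per content-based bin."""
    binned = {}
    for record in user_data:
        bin_name = CONTENT_BINS.get(record["topic"])
        if bin_name:
            binned[bin_name] = binned.get(bin_name, 0) + record["minutes"]
    return binned

def personality_profile(binned):
    """Convert the behavioral metrics into 0-100 scores per
    insight category (share of total engagement, as an assumption)."""
    total = sum(binned.values()) or 1
    return {bin_name: round(100 * minutes / total)
            for bin_name, minutes in binned.items()}

data = [{"topic": "poker", "minutes": 30},
        {"topic": "fitness", "minutes": 10}]
profile = personality_profile(categorize(data))
```

A downstream risk assessment could then read the "risk" score from this profile.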
Apparatus and method for generating a video record using audio
An apparatus and method for generating a video record using audio are presented. The apparatus comprises at least a processor and a memory communicatively connected to the at least a processor. The memory contains instructions configuring the at least a processor to receive a user input from a user, select a set of record generation questions for the user as a function of the user input, transmit an audio question to the user as a function of the selected set of record generation questions, record a user response as a function of the audio question, and generate a video record as a function of the recorded user responses.
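The question-selection and record-assembly flow might look like the following sketch. The question bank, topics, and the record structure are invented examples, not the apparatus's actual data.

```python
# Hypothetical bank of record generation questions, keyed by topic.
QUESTION_BANK = {
    "career": ["What was your first job?", "What are you proudest of?"],
    "family": ["Where did you grow up?", "Who influenced you most?"],
}

def select_questions(user_input):
    """Select a set of record generation questions as a function of
    the user input (here, a stated topic)."""
    return QUESTION_BANK.get(user_input["topic"], [])

def generate_video_record(responses):
    """Stand-in for video assembly: summarize the recorded user
    responses into one record."""
    return {"segments": len(responses),
            "transcript": " ".join(responses)}

questions = select_questions({"topic": "career"})
record = generate_video_record(["I delivered papers.",
                                "Raising my kids."])
```

In the claimed apparatus each question would be transmitted as audio and each response recorded before assembly; this sketch only shows the data flow.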
Media manipulation using cognitive state metric analysis
Data on a user interacting with a media presentation is collected at a client device. The data includes facial image data of the user. The facial image data is analyzed to extract cognitive state content of the user. One or more emotional intensity metrics are generated. The metrics are based on the cognitive state content. The media presentation is manipulated, based on the emotional intensity metrics and the cognitive state content. An engagement score for the media presentation is provided. The engagement score is based on the emotional intensity metric. A facial expression metric and a cognitive state metric are generated for the user. The manipulating includes optimization of the previously viewed media presentation. The optimization changes various aspects of the media presentation, including the length of different portions of the media presentation, the overall length of the media presentation, character selection, music selection, advertisement placement, and brand reveal time.
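The engagement scoring and length manipulation described above might be sketched as follows. The mean-intensity score and the trim-below-threshold rule are assumptions for illustration, not the patent's optimization.

```python
def engagement_score(intensity_metrics):
    """Engagement score for the presentation: the mean emotional
    intensity metric across its sampled segments."""
    return sum(intensity_metrics) / len(intensity_metrics)

def manipulate(segment_lengths, intensity_metrics, threshold=0.5):
    """Shorten portions whose measured intensity fell below the
    threshold; keep high-intensity portions at full length."""
    return [length if m >= threshold else length // 2
            for length, m in zip(segment_lengths, intensity_metrics)]

metrics = [0.9, 0.2, 0.7]          # per-segment emotional intensity
lengths = [30, 30, 30]             # seconds per segment
score = engagement_score(metrics)
new_lengths = manipulate(lengths, metrics)
```

The middle segment, which drew low emotional intensity, is halved, changing both its portion length and the overall presentation length.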
METHODS AND APPARATUS FOR MEASURING ENGAGEMENT DURING MEDIA EXPOSURE
Methods, apparatus, systems, and articles of manufacture are disclosed for measuring engagement during media exposure. An example apparatus includes at least one memory, machine readable instructions, and processor circuitry to at least one of instantiate or execute the machine readable instructions to identify media presented via a media device in a media presentation environment, identify ambient audio detected in the media presentation environment, determine whether the ambient audio is distractive to presentation of the media in the media presentation environment, and adjust a media exposure report based on a determination that the ambient audio is distractive.
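The distraction check and report adjustment might reduce to the following sketch. Using RMS level as the distraction signal, the 0.3 threshold, and the engaged/not-engaged report field are all illustrative assumptions.

```python
def rms(samples):
    """Root-mean-square level of an audio sample window."""
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def is_distractive(ambient_samples, threshold=0.3):
    """Treat ambient audio as distractive to the media presentation
    when its level exceeds a fixed threshold (an assumption)."""
    return rms(ambient_samples) > threshold

def adjust_report(exposure_minutes, ambient_samples):
    """Adjust the media exposure report: exposure is still logged,
    but flagged as not engaged when the ambient audio is distractive."""
    engaged = not is_distractive(ambient_samples)
    return {"minutes": exposure_minutes, "engaged": engaged}

quiet = [0.01, -0.02, 0.01, 0.0]
loud_chatter = [0.5, -0.6, 0.55, -0.4]
report = adjust_report(30, loud_chatter)
```

With quiet ambient audio the same exposure would be reported as engaged.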
Automatic trailer detection in multimedia content
The disclosed computer-implemented method may include accessing media segments that correspond to respective media items. At least one of the media segments may be divided into discrete video shots. The method may also include matching the discrete video shots in the media segments to corresponding video shots in the corresponding media items according to various matching factors. The method may further include generating a relative similarity score between the matched video shots in the media segments and the corresponding video shots in the media items, and training a machine learning model to automatically identify video shots in the media items according to the generated relative similarity score between matched video shots. Various other methods, systems, and computer-readable media are also disclosed.
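The shot-matching step above can be sketched with toy feature vectors. Representing shots as two-dimensional vectors and scoring them by inverse mean absolute difference are assumptions; the patent leaves the matching factors open.

```python
def shot_similarity(shot_a, shot_b):
    """Relative similarity between two shots: the inverse of the
    mean absolute difference of their feature vectors (assumed)."""
    diff = sum(abs(x - y) for x, y in zip(shot_a, shot_b)) / len(shot_a)
    return 1.0 / (1.0 + diff)

def match_shots(segment_shots, item_shots):
    """For each discrete video shot in the media segment (e.g. a
    trailer), find the most similar shot in the corresponding media
    item and record the relative similarity score."""
    matches = []
    for i, seg in enumerate(segment_shots):
        scores = [shot_similarity(seg, item) for item in item_shots]
        j = max(range(len(scores)), key=scores.__getitem__)
        matches.append((i, j, scores[j]))
    return matches

trailer = [[0.1, 0.9], [0.8, 0.2]]               # segment shot features
movie = [[0.8, 0.25], [0.1, 0.85], [0.5, 0.5]]   # item shot features
pairs = match_shots(trailer, movie)
```

The resulting `(segment_shot, item_shot, score)` triples are the kind of supervision signal the abstract describes feeding into machine learning model training.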
Video jitter detection method and apparatus
The present disclosure provides a video jitter detection method and an apparatus. The video jitter detection method includes: acquiring a video; and inputting the video into a detection model to obtain an evaluation value of the video, where the evaluation value is used to indicate a degree of jitter of the video, and where the detection model is obtained by training with video samples in a video sample set as inputs and the evaluation values of those video samples as outputs. By inputting the video to be detected into the detection model, its evaluation value can be acquired and whether the video is jittery determined, which realizes end-to-end video jitter detection and improves the accuracy and robustness of video jitter detection.
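As a toy analogue of training on (video sample, evaluation value) pairs, the sketch below fits a single weight on a hand-crafted jitter feature. This is not the patent's detection model; the frame-offset feature, the linear model, and the training data are all assumptions.

```python
def jitter_feature(frame_offsets):
    """A crude per-video feature: mean absolute frame-to-frame
    displacement (larger means more jitter)."""
    deltas = [abs(b - a) for a, b in zip(frame_offsets, frame_offsets[1:])]
    return sum(deltas) / len(deltas)

def fit(features, labels, lr=0.1, steps=200):
    """Fit evaluation_value ~ w * feature by gradient descent on
    squared error (a stand-in for training the detection model)."""
    w = 0.0
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in zip(features, labels))
        w -= lr * grad / len(features)
    return w

# Video samples as per-frame offsets, with assigned evaluation values.
train_videos = [[0, 5, 0, 5, 0], [0, 1, 0, 1, 0], [0, 0, 0, 0, 0]]
features = [jitter_feature(v) for v in train_videos]  # [5.0, 1.0, 0.0]
labels = [10.0, 2.0, 0.0]                             # jitter evaluations
w = fit(features, labels)
score = w * jitter_feature([0, 3, 0, 3, 0])           # new video to detect
```

Thresholding `score` then decides whether the new video counts as jittery.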
Video recommendation method and device, computer device and storage medium
A video recommendation method is provided, including: inputting a video to a first feature extraction network, performing feature extraction on at least one consecutive video frame in the video, and outputting a video feature of the video; inputting user data of a user to a second feature extraction network, performing feature extraction on the discrete user data, and outputting a user feature of the user; performing feature fusion based on the video feature and the user feature, and obtaining a recommendation probability of recommending the video to the user; and determining, according to the recommendation probability, whether to recommend the video to the user.
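The fusion step above might be sketched with a single sigmoid unit over the concatenated features. The stand-in feature vectors and the fixed weights (replacing the learned fusion layers and both extraction networks) are assumptions.

```python
import math

def fuse(video_feature, user_feature, weights, bias=0.0):
    """Concatenate the video feature and user feature, then map the
    fused vector to a recommendation probability with one sigmoid
    unit (a minimal stand-in for the learned fusion layers)."""
    fused = video_feature + user_feature
    z = sum(w * x for w, x in zip(weights, fused)) + bias
    return 1.0 / (1.0 + math.exp(-z))

video_feat = [0.9, 0.1]   # as if from the first (video) network
user_feat = [0.8, 0.3]    # as if from the second (user) network
prob = fuse(video_feat, user_feat, weights=[1.0, -1.0, 1.0, -1.0])
recommend = prob >= 0.5   # decide according to the probability
```

The final comparison against 0.5 implements the "determining, according to the recommendation probability, whether to recommend" step.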