Patent classifications
G06F16/483
Analyzing Objects Data to Generate a Textual Content Reporting Events
Systems, methods and non-transitory computer readable media for analyzing objects data to generate a textual content reporting events are provided. An indication of an event may be received. An indication of a group of one or more objects associated with the event may be received. For each object of the group of one or more objects, data associated with the object may be received. The data associated with the group of one or more objects may be analyzed to select an adjective. A particular description of the event may be generated. The particular description may be based on the group of one or more objects. The particular description may include the selected adjective. A textual content may be generated. The textual content may include the particular description. The generated textual content may be provided.
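The flow this abstract describes can be sketched briefly: object data is aggregated, an adjective is selected from it, and the adjective is embedded in a generated description of the event. All names (`pick_adjective`, the `speed` field, the adjective choices) are illustrative assumptions, not details from the patent.

```python
# Hypothetical sketch: receive an event and its associated objects,
# analyze the object data to select an adjective, and generate a
# textual description of the event that includes that adjective.

def pick_adjective(objects):
    """Select an adjective from aggregate object data (here: mean speed)."""
    avg_speed = sum(o.get("speed", 0) for o in objects) / len(objects)
    return "fast" if avg_speed > 10 else "slow"

def describe_event(event_name, objects):
    """Generate a particular description of the event from its objects."""
    adjective = pick_adjective(objects)
    return f"A {adjective} {event_name} involving {len(objects)} object(s)."

objects = [{"speed": 12}, {"speed": 15}]
print(describe_event("collision", objects))
# → A fast collision involving 2 object(s).
```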
System and method for automatic synchronization of video with music, and gaming applications related thereto
A computer system including a server having a processor and a memory, the memory having a video database and a music database, the video database storing at least one video file having a plurality of video file markers, and the music database storing at least one music file having a plurality of music file markers, wherein the server receives and decodes encoded data from computer readable code, identifies and retrieves from the music database a music file based on the decoded data, synchronizes the retrieved music file with one of the video files by aligning the video file markers of the video file with the music file markers for the retrieved music file to produce a synchronized video-music file, and transmits the synchronized video-music file to a display, wherein the video file markers are generated for each video file and the music file markers are generated for each music file.
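The claimed marker alignment could look roughly like the following, where markers are modeled as timestamps in seconds and each video marker is paired with its nearest music marker. This is a sketch under those assumptions; the patent does not specify how markers are represented or matched.

```python
# Illustrative marker alignment: pair each video-file marker with the
# closest music-file marker to form a synchronized timeline.

def synchronize(video_markers, music_markers):
    """Pair each video marker (seconds) with the nearest music marker."""
    pairs = []
    for v in video_markers:
        nearest = min(music_markers, key=lambda m: abs(m - v))
        pairs.append((v, nearest))
    return pairs

print(synchronize([0.0, 4.0, 8.0], [0.1, 3.9, 8.2]))
# → [(0.0, 0.1), (4.0, 3.9), (8.0, 8.2)]
```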
COMPUTING DEVICE AND CORRESPONDING METHOD FOR GENERATING DATA REPRESENTING TEXT
An example method involves (i) accessing first data representing text, wherein the text defines at least one position representing a particular type of grammatical break between two portions of the text; (ii) identifying, from among the at least one position, a position that is closest to a target position within the text; (iii) based on the identified position within the text, generating second data that represents a proper subset of the text, wherein the proper subset extends from an initial position within the text to the identified position within the text; and (iv) providing output based on the generated second data.
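Steps (i)–(iii) amount to truncating text at the grammatical break nearest a target position. A minimal sketch, assuming sentence-ending punctuation marks the break positions (the abstract leaves the break type open):

```python
# Truncate text at the grammatical-break position closest to `target`,
# so the result extends from the start of the text to that break.

def truncate_at_break(text, target):
    """Cut text at the break position nearest to the target index."""
    # Positions just after each sentence-ending punctuation mark.
    breaks = [i + 1 for i, ch in enumerate(text) if ch in ".!?"]
    if not breaks:
        return text
    closest = min(breaks, key=lambda b: abs(b - target))
    return text[:closest]

text = "First sentence. Second sentence. Third sentence."
print(truncate_at_break(text, target=20))
# → First sentence.
```

With a target of 20, the break after the first sentence (position 15) is closer than the one after the second (position 32), so the proper subset ends there.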
METHOD AND APPARATUS FOR CLOUD STREAMING SERVICE
A method and apparatus are provided for a cloud streaming service. A cloud streaming server receives first data corresponding to media source extension (MSE) media from a media source server when a request for content is received from a user device. The cloud streaming server then creates a first stream by transcoding the first data into a format suitable for processing at the user device, and transmits the created first stream to the user device. Further, the cloud streaming server receives second data corresponding to the content's remaining data (other than the first data), outputs an execution screen of the content by executing the second data, captures the outputted execution screen, and creates a second stream by encoding the captured screen.
VOICE QUERY REFINEMENT TO EMBED CONTEXT IN A VOICE QUERY
Systems and methods are described for providing contextual search results. The system may receive a search query during presentation of a video. If the query is ambiguous, the system accesses some of the frames of the video. The frames are analyzed to identify a performed action depicted in the frames. The system retrieves a keyword related to the identified action. The ambiguous query is augmented with the keyword, and the augmented search query is used to search for and output relevant search results.
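The refinement pipeline can be sketched as below. The frame analyzer is stubbed out and the action-to-keyword mapping is invented for illustration; a real system would use a trained video model here.

```python
# Hedged sketch of voice-query refinement: detect the action shown in
# the video frames, map it to a keyword, and append the keyword to the
# ambiguous query before searching.

ACTION_KEYWORDS = {"cooking": "recipe", "dancing": "choreography"}

def detect_action(frames):
    """Stand-in for a frame analyzer; returns a detected action label."""
    return frames[0]["action"]  # pretend a model produced this label

def refine_query(query, frames):
    """Augment an ambiguous query with a keyword for the detected action."""
    action = detect_action(frames)
    keyword = ACTION_KEYWORDS.get(action, "")
    return f"{query} {keyword}".strip()

frames = [{"action": "cooking"}]
print(refine_query("what is he making", frames))
# → what is he making recipe
```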
Spoiler prevention
Methods, systems and computer program products are provided for spoiler prevention. Media consumption applications may be placed in “spoiler-free” mode, for example, to prevent media content from spoiling first-hand user experience. A user may provide and/or authorize access to and use of spoiler prevention information. A user may request media content (e.g., while surfing the Internet). Digital media content to be presented to a user may be searched in real-time and/or pre-searched for spoiler content and/or associated spoiler indications relative to spoiler prevention information. Identified spoiler content may be concealed from users. A procedure may be provided for users to determine one or more reasons why content is concealed, to selectively reveal concealed content, and to provide feedback whether concealed content was or was not spoiler content for a user. Feedback may be used to improve spoiler prevention, for example, by retraining a machine learning model, which may be user-specific.
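The concealment step can be illustrated with a simple substring matcher. This is a deliberately minimal stand-in: the abstract describes matching against spoiler prevention information, possibly via a retrainable (and user-specific) machine learning model, not literal term replacement.

```python
# Illustrative only: conceal text segments that match a user's spoiler
# prevention terms by masking them with same-length block characters,
# so concealed content can later be selectively revealed.

def conceal_spoilers(text, spoiler_terms):
    """Replace each spoiler term with a same-length run of block chars."""
    for term in spoiler_terms:
        text = text.replace(term, "█" * len(term))
    return text

msg = "In the finale, the dragon burns the city."
print(conceal_spoilers(msg, ["dragon burns the city"]))
# → In the finale, the █████████████████████.
```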
MEDIA FILE GENERATION APPARATUS, MEDIA FILE PLAYBACK APPARATUS, MEDIA FILE GENERATION METHOD, MEDIA FILE PLAYBACK METHOD, PROGRAM, AND STORAGE MEDIUM
A plurality of pieces of image data and audio data are determined from a data area. Information on a slideshow group associated with a plurality of pieces of identification information identifying the respective pieces of image data and identification information identifying the audio data, and location information indicating locations, in the data area, of the plurality of pieces of image data and the audio data are stored in a metadata area. The metadata, the plurality of pieces of image data, and the audio data are stored in a single media file.
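A toy version of the described single-file layout: a metadata area records the identification information for the slideshow group together with the byte offsets and sizes of each image and the audio payload in the data area. The JSON-header container format used here is an invention for illustration, not the patent's actual file format.

```python
# Pack several images and one audio payload into a single blob:
# [4-byte big-endian metadata length][metadata area][data area].
import json
import struct

def build_media_file(images, audio):
    """images: list of (id, bytes); audio: bytes. Returns one blob."""
    data_area = b""
    entries = []
    for item_id, payload in images + [("audio", audio)]:
        entries.append({"id": item_id,
                        "offset": len(data_area),
                        "size": len(payload)})
        data_area += payload
    meta = json.dumps({"slideshow": entries}).encode()
    return struct.pack(">I", len(meta)) + meta + data_area

blob = build_media_file([("img1", b"\x89PNG"), ("img2", b"\x89PNG")],
                        b"ID3audio")
```

A playback side would read the metadata length, parse the metadata area, and use the recorded offsets to locate each piece of image data and the audio data within the data area.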