Patent classifications
G06F16/41
INFORMATION PROCESSING APPARATUS AND FILE RECORDING METHOD
Before a file logically divided into a plurality of groups is downloaded, an acquisition section acquires meta information set for the groups. An area management section reserves recording areas in a first storage and a second storage according to the meta information of the groups. A recording processing section records the groups into the first storage or the second storage according to the meta information of the groups.
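The reserve-then-record flow described in this abstract might be sketched, purely illustratively, as follows. All names here (GroupMeta, Storage, the "fast"/"bulk" priority convention) are invented for the sketch, not taken from the patent:

```python
# Hypothetical sketch: route file groups to one of two storages based on
# per-group meta information, reserving space before anything is recorded.
from dataclasses import dataclass, field

@dataclass
class GroupMeta:
    name: str
    size: int          # bytes the group will occupy
    priority: str      # assumption: "fast" -> first storage, else second

@dataclass
class Storage:
    capacity: int
    reserved: int = 0
    contents: dict = field(default_factory=dict)

    def reserve(self, size: int) -> bool:
        """Area management step: claim space up front, fail if full."""
        if self.reserved + size > self.capacity:
            return False
        self.reserved += size
        return True

    def record(self, name: str, data: bytes) -> None:
        self.contents[name] = data

def plan_and_record(groups, payloads, first: Storage, second: Storage):
    """Reserve areas per group meta, then record each group's payload."""
    placement = {}
    for meta in groups:                       # reservation pass
        target = first if meta.priority == "fast" else second
        if not target.reserve(meta.size):
            raise MemoryError(f"no space for {meta.name}")
        placement[meta.name] = target
    for meta in groups:                       # recording pass
        placement[meta.name].record(meta.name, payloads[meta.name])
    return placement
```

Reserving in a separate pass before recording mirrors the abstract's split between the area management section and the recording processing section.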
System and method for intelligent prioritization of media related to an incident
Techniques for prioritization of media related to an incident are provided. Confirmed incident related media may be retrieved, the confirmed incident related media having been confirmed as being associated with the incident. Artifacts of interest may be identified in the confirmed incident related media. Presence of the artifacts of interest in a plurality of received media may be determined. The plurality of received media may be prioritized based on the presence of the artifacts of interest.
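A minimal sketch of the prioritization idea, assuming artifacts of interest are reduced to simple string tags (the patent's media analysis is abstracted away):

```python
# Illustrative only: rank received media by how many artifacts of interest
# (string tags here) they share with media already confirmed as incident-related.
def extract_artifacts(confirmed_media):
    """Collect the set of artifact tags appearing in confirmed media."""
    artifacts = set()
    for item in confirmed_media:
        artifacts.update(item["tags"])
    return artifacts

def prioritize(received_media, artifacts):
    """Sort received media by descending count of matching artifacts."""
    def score(item):
        return len(artifacts & set(item["tags"]))
    return sorted(received_media, key=score, reverse=True)
```

A real system would detect artifacts (faces, vehicles, plates) in pixels; the set-intersection scoring stands in for that detection step.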
Systems And Methods For Recording Relevant Portions Of A Media Asset
Systems and methods are presented herein for recording portions of a media asset relevant to recording criteria. A media application receives input indicating the recording criteria and identifying a first keyword. The media application accesses a data structure to identify a first node associated with the first keyword. The data structure includes the first node and a plurality of nodes connected to the first node via a plurality of paths. On receiving audio component data for a portion of the media asset, the media application extracts a term from the audio component data and identifies a second node in the data structure that is associated with the extracted term. The media application calculates a path score for the portion of the media asset based on a path size in the data structure between the first node and the second node. When the path score exceeds a threshold, the portion of the media asset is recorded.
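The path-score step above can be sketched with a plain adjacency-list graph. The 1/(1+distance) scoring and the 0.4 threshold are assumptions for illustration; the patent does not specify a formula:

```python
# Hedged sketch: score a media portion by graph distance between the
# recording-criteria keyword node and a node matched from the transcript.
from collections import deque

def path_length(graph, start, goal):
    """BFS shortest-path length between two nodes; None if unreachable."""
    if start == goal:
        return 0
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, dist = frontier.popleft()
        for nxt in graph.get(node, []):
            if nxt == goal:
                return dist + 1
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return None

def path_score(graph, keyword_node, term_node):
    """Assumed scoring: shorter paths yield higher scores, 1/(1+distance)."""
    dist = path_length(graph, keyword_node, term_node)
    return 0.0 if dist is None else 1.0 / (1 + dist)

def should_record(graph, keyword_node, term_node, threshold=0.4):
    return path_score(graph, keyword_node, term_node) >= threshold
```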
PROVIDING SHARED CONTENT COLLECTIONS WITHIN A MESSAGING SYSTEM
Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for providing shared content collections. The program and method provide for receiving, from a first device of a first user, an indication of first user input to share a content collection between the first user and a second user selected by the first user, the content collection comprising at least one media content item, the second user corresponding to a contact of the first user; storing the content collection in association with the first user and the second user; receiving an indication of second user input to share the content collection with a third user selected by the second user, the third user corresponding to a contact of the second user; and associating the content collection with the third user.
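The sharing chain described in the abstract, where each member may extend access only to their own contacts, might be modeled like this (class and method names are invented for the sketch):

```python
# Minimal sketch: a content collection shared along a chain of contacts,
# each current member able to extend access to their own contacts only.
class ContentCollection:
    def __init__(self, owner, items):
        self.items = list(items)
        self.members = {owner}
        self.contacts = {}          # user -> set of that user's contacts

    def add_contact(self, user, contact):
        self.contacts.setdefault(user, set()).add(contact)

    def share(self, sharer, recipient):
        """A current member may share with one of their own contacts."""
        if sharer not in self.members:
            raise PermissionError("sharer is not a member")
        if recipient not in self.contacts.get(sharer, set()):
            raise PermissionError("recipient is not a contact of sharer")
        self.members.add(recipient)
```

This captures the abstract's chain: the first user shares with a second user (a contact of the first), and the second user can in turn share with a third user (a contact of the second).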
Automatic annotation for vehicle damage
Aspects described herein may allow an automated generation of an interactive multimedia content with annotations showing vehicle damage. In one method, a server may receive vehicle-specific identifying information of a vehicle. Image sensors may capture multimedia content showing aspects associated with exterior regions of the vehicle, and may send the multimedia content to the server. For each of the exterior regions of the vehicle, the server may determine, using a trained classification model, instances of damage. Furthermore, the server may generate an interactive multimedia content that shows images with annotations indicating instances of damage. The interactive multimedia content may be displayed via a user interface.
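A toy sketch of the per-region annotation loop. The classifier here is a stand-in stub (a brightness threshold), not the patent's trained classification model:

```python
# Illustrative pipeline: classify each exterior region's image and collect
# annotations for an interactive report. classify_damage is a toy stub.
def classify_damage(image_pixels):
    """Stub for a trained classifier: flags regions whose mean pixel
    intensity falls below an arbitrary threshold (illustration only)."""
    mean = sum(image_pixels) / len(image_pixels)
    return "dent" if mean < 100 else None

def annotate_vehicle(region_images):
    """Return {region: damage_label} for damaged regions only."""
    annotations = {}
    for region, pixels in region_images.items():
        label = classify_damage(pixels)
        if label:
            annotations[region] = label
    return annotations
```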
Processing audio and video
A wearable device may include an image sensor configured to capture a plurality of images from an environment, a microphone configured to capture sounds from the environment, and at least one processor. The at least one processor may be programmed to receive audio signals representative of the sounds captured by the microphone, and receive a first image including a representation of a first individual from among the plurality of images captured by the image sensor. The at least one processor may also be programmed to obtain a first audio segment from the audio signals using the first image. The first audio segment may include a first portion of the audio signals in which the first individual is speaking. The at least one processor may also be programmed to receive a second image including a representation of a second individual from among the plurality of images captured by the image sensor, and obtain a second audio segment from the audio signals using the second image. The second audio segment may include a second portion of the audio signals in which the second individual is speaking. The at least one processor may also be programmed to receive a third image including a representation of the first individual from among the plurality of images captured by the image sensor, and obtain a third audio segment from the audio signals using the third image. The third audio segment may include a third portion of the audio signals in which the first individual is speaking. The at least one processor may also be programmed to associate the first and third audio segments with the first individual and associate the second audio segment with the second individual.
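The association step at the end of the abstract, where segments from the same individual accumulate across non-adjacent images, might be sketched as follows. Face recognition is abstracted to a precomputed identifier per image:

```python
# Hedged sketch: pair each image's recognized individual with the concurrent
# audio segment, accumulating segments per speaker across repeat appearances.
def associate_segments(events):
    """events: list of (face_id, audio_segment) tuples in capture order.
    Returns {face_id: [segments...]}, so a speaker seen in the first and
    third images ends up with both of their segments grouped together."""
    by_speaker = {}
    for face_id, segment in events:
        by_speaker.setdefault(face_id, []).append(segment)
    return by_speaker
```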
APPARATUS AND METHOD FOR FORMING CONNECTIONS WITH UNSTRUCTURED DATA SOURCES
A non-transitory computer readable storage medium with instructions executed by a processor maintains a collection of data access connectors configured to access different sources of unstructured data. A user interface with prompts for designating a selected data access connector from the data access connectors is supplied. Unstructured data is received from the selected data access connector. Numeric vectors characterizing the unstructured data are created from the unstructured data. The numeric vectors are stored and indexed.
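The vectorize-store-index flow might look like the sketch below. The hashed bag-of-words vectorizer is a deliberately tiny stand-in for whatever embedding the system actually uses, and all names are invented:

```python
# Sketch under assumptions: connectors yield raw text, which is turned into
# numeric vectors (a toy deterministic bag-of-words here, standing in for a
# real embedding) and kept in an in-memory index.
def _bucket(token, dim):
    """Deterministic token -> bucket mapping (avoids Python's salted hash)."""
    return sum(ord(c) for c in token) % dim

def vectorize(text, dim=8):
    """Toy bag-of-words vector; a real system would use learned embeddings."""
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[_bucket(token, dim)] += 1.0
    return vec

class VectorIndex:
    def __init__(self):
        self.entries = []   # list of (doc_id, vector)

    def add(self, doc_id, text):
        self.entries.append((doc_id, vectorize(text)))

    def nearest(self, text):
        """Return the doc_id with the highest dot-product similarity."""
        q = vectorize(text)
        dot = lambda a, b: sum(x * y for x, y in zip(a, b))
        return max(self.entries, key=lambda e: dot(q, e[1]))[0]
```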
VIDEO PROCESSING OPTIMIZATION AND CONTENT SEARCHING
Techniques are disclosed for automatic scene detection and character extraction. In one example, audiovisual content with video frames, an audio recording, and timing information is received. A score, based on each frame's visual characteristics, is determined for a first frame and for subsequent frames. The first frame's score is compared with each subsequent frame's score to determine whether the difference between the scores exceeds a threshold. When it does, the subsequent frame is classified as the start of a new scene. The audiovisual content is segmented into scenes and textual characters are identified in at least one frame from each scene. The characters are stored and indexed in a searchable database with the timing information for the scene where the characters were identified. The audio recording is transcribed and the transcribed words are stored and indexed in the searchable database with timing information.
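The scene-boundary step above can be sketched with the frame "score" reduced to a single brightness number; the patent's actual visual-characteristics scoring would be richer:

```python
# Minimal sketch: start a new scene whenever the per-frame score jumps by
# more than a threshold relative to the previous frame's score.
def frame_score(frame):
    """Toy visual score: mean pixel intensity of the frame."""
    return sum(frame) / len(frame)

def segment_scenes(frames, threshold=50.0):
    """Return scenes as lists of frame indices."""
    if not frames:
        return []
    scenes = [[0]]
    prev = frame_score(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        score = frame_score(frame)
        if abs(score - prev) > threshold:
            scenes.append([])          # boundary: begin a new scene
        scenes[-1].append(i)
        prev = score
    return scenes
```

Each resulting scene would then feed the abstract's later steps: character extraction from a representative frame and indexing with the scene's timing information.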