Patent classifications
G06F16/7867
AUGMENTED REALITY GUIDANCE OVERLAY
Embodiments of the present invention provide computer-implemented methods, computer program products, and computer systems. Embodiments of the present invention can, in response to receiving a request, identify a core component from source material based on topic analysis. Embodiments of the present invention can then generate three-dimensional representations of physical core components associated with the request. Finally, embodiments of the present invention render the generated three-dimensional representations over the corresponding physical core components.
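As a toy illustration of the first step described above (topic analysis to identify a core component from source material), the sketch below scores candidate components by term frequency; the component list, the scoring, and all names are assumptions for the example, not details from the patent.

```python
from collections import Counter

def identify_core_component(source_material, known_components):
    """Pick the known component mentioned most often in the material.

    Returns None when no known component appears at all.
    """
    words = Counter(source_material.lower().split())
    best = max(known_components, key=lambda c: words[c])
    return best if words[best] > 0 else None
```

A real system would use proper topic modeling rather than raw term counts; the sketch only shows the shape of the step.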
Interactive entertainment content
A system can be configured to receive entertainment content requested by a user and identify content segments and content features from the entertainment content. The content segments can be utilized to identify portions of the entertainment content for enrichment and/or enhancement by the system. The content features can be utilized to associate the entertainment content and the content segments with supplemental content that includes or is associated with the content features. The content features can indicate genres, scene classifications, significant figures credited with creating the entertainment content, and other points of interest for users interested in the entertainment content. The associations between the entertainment content and the supplemental content can enable the system to engage the users by presenting the supplemental content determined to match interests of the users.
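The association step above can be sketched as a simple set-overlap match: a supplemental item is surfaced when it shares a feature with the content and also overlaps the user's interests. The item schema and function name are illustrative assumptions, not from the abstract.

```python
def match_supplemental(content_features, supplemental_items, user_interests):
    """Return ids of supplemental items that share a feature with the
    content and also overlap the user's interests."""
    results = []
    for item in supplemental_items:
        # Item must be associated with the content via a shared feature...
        shared = content_features & item["features"]
        # ...and must match at least one interest of the user.
        if shared and (item["features"] & user_interests):
            results.append(item["id"])
    return results
```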
Systems and methods for video archive and data extraction
Systems and methods for full motion video search are provided. In one aspect, a method includes receiving one or more search terms. The search terms include one or more of a characterization of the amount of man-made features in a video image and a characterization of the amount of natural features in the video image. The method further includes searching a full motion video database based on the one or more search terms.
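A minimal sketch of the search described above, assuming each indexed clip carries precomputed fractions of man-made and natural features; the index schema and tolerance are assumptions for the example.

```python
def search_fmv(index, man_made=None, natural=None, tol=0.1):
    """Return ids of clips whose feature fractions fall within `tol`
    of the requested characterizations (either term may be omitted)."""
    hits = []
    for clip in index:
        if man_made is not None and abs(clip["man_made"] - man_made) > tol:
            continue
        if natural is not None and abs(clip["natural"] - natural) > tol:
            continue
        hits.append(clip["id"])
    return hits
```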
On demand visual recall of objects/places
Aspects of the subject disclosure may include, for example, observing a plurality of objects viewed through a smart lens, wherein the plurality of objects are in a frame of an image viewed by the smart lens, determining an identification for an object of the plurality of objects, assigning tag information for the object based on the identification, storing the tag information for the object and the frame in which the object was observed, receiving a recall request for the object, retrieving the tag information for the object and the frame responsive to the receiving the recall request, and displaying the tag information and the frame. Other embodiments are disclosed.
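The observe-tag-store-recall flow above can be sketched with an in-memory store keyed by object identification; the class and its fields are illustrative assumptions, not from the disclosure.

```python
class ObjectRecallStore:
    def __init__(self):
        self._store = {}

    def observe(self, object_id, tag_info, frame):
        # Assign tag information to the object and store it together
        # with the frame in which the object was observed.
        self._store[object_id] = {"tag": tag_info, "frame": frame}

    def recall(self, object_id):
        # Retrieve tag info and frame in response to a recall request;
        # returns None if the object was never observed.
        return self._store.get(object_id)
```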
Imaging device, video retrieving method, video retrieving program, and information collecting device
A drive recorder according to an embodiment of the present disclosure includes: an imaging unit that is mounted on a vehicle and captures video of the surroundings of the vehicle; a video recording unit in which the captured video data are recorded; a network connecting unit that receives accident information including the time and date when an accident occurred and the place where the accident occurred; and a video retrieving unit that determines whether any video data captured in a predetermined time period and in a predetermined region are available in the video data recorded in the video recording unit, the predetermined time period including the time and date when the accident occurred, and the predetermined region including the place where the accident occurred.
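The availability check performed by the video retrieving unit can be sketched as follows: does any recorded sample fall within a time window around the accident and within a radius of its location? The segment schema, the planar distance metric, and the default window/radius are assumptions for the example.

```python
def has_accident_footage(segments, accident_time, accident_pos,
                         window=300, radius=0.5):
    """segments: list of {"time": t, "pos": (x, y)} recorded samples.
    Return True if any sample lies within `window` seconds of the
    accident time and within `radius` (same units as pos) of its place."""
    for seg in segments:
        dt = abs(seg["time"] - accident_time)
        dx = seg["pos"][0] - accident_pos[0]
        dy = seg["pos"][1] - accident_pos[1]
        if dt <= window and (dx * dx + dy * dy) ** 0.5 <= radius:
            return True
    return False
```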
Methods, systems, and media for associating scenes depicted in media content with a map of where the media content was produced
Methods, systems, and media for associating scenes depicted in media content with a map of where the media content was produced are provided. In some embodiments, a method for presenting map information with video information is provided, the method comprising: receiving a request for a video from a user device; determining if there is location information associated with portions of the video; in response to determining that there is location information associated with the video, causing first map information corresponding to the location information to be presented in a first format during presentation of the video; receiving an indication that the first map information has been selected; in response to receiving the indication, causing second map information corresponding to the portion of the video that was being presented to be presented by the user device, wherein the second map information is presented in a second format.
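The step of finding the location for the portion of the video currently being presented can be sketched as an interval lookup over per-portion location metadata; the portion schema is an assumption for the example.

```python
def location_for_position(portions, position):
    """portions: list of {"start": s, "end": e, "location": loc}.
    Return the location for the portion containing `position`,
    or None when no location information is associated with it."""
    for p in portions:
        if p["start"] <= position < p["end"]:
            return p["location"]
    return None
```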
METHOD OF IDENTIFYING AN ABRIDGED VERSION OF A VIDEO
A computer-implemented method of identifying whether a target video comprises an abridged version of a reference video includes evaluating condition a) that the target video does not comprise all shots of the reference video; condition b) that the target video includes groups of consecutive shots also included in the reference video; and condition c) that all shots which are present in both the target video and the reference video are in the same order. The method further includes identifying whether the target video comprises an abridged version of the reference video, and outputting a result of the identifying. The target video is identified as comprising an abridged version of the reference video on condition that conditions a), b) and c) are met. Also provided are a data processing apparatus for performing the method, and a computer program and a computer-readable storage medium comprising instructions to perform the method.
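Under a simplified reading of the three conditions, with shots represented by ids, the check can be sketched as: b) every target shot must come from the reference, a) the target must not contain all reference shots, and c) the shared shots must keep the reference order, i.e. the target must be a subsequence of the reference. This interpretation and all names are assumptions for the example.

```python
def is_abridged(target_shots, reference_shots):
    ref_set = set(reference_shots)
    # b) every target shot must also appear in the reference
    if any(s not in ref_set for s in target_shots):
        return False
    # a) the target must not contain all shots of the reference
    if set(target_shots) == ref_set:
        return False
    # c) shared shots must be in the same order: subsequence test
    # (membership on an iterator consumes it, so order is enforced)
    it = iter(reference_shots)
    return all(s in it for s in target_shots)
```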
VIDEO PROCESSING METHOD, VIDEO SEARCHING METHOD, TERMINAL DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
A video processing method, comprising: editing a video to be edited according to a scenario to obtain a target video (S100); acquiring feature parameters of the target video (S200); generating a keyword for the target video according to the feature parameters (S300); and associatively storing the keyword and the target video (S400).
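Steps S300 and S400 can be sketched as deriving keywords from the feature parameters and storing them associatively in a keyword-to-video index; the feature schema and function names are assumptions for the example.

```python
def generate_keywords(features):
    # S300: turn feature parameters (e.g. detected objects, scene
    # classification) into keywords for the target video.
    keywords = set(features.get("objects", []))
    if "scene" in features:
        keywords.add(features["scene"])
    return keywords

def store_video(store, video_id, features):
    # S400: associatively store each keyword with the target video.
    for kw in generate_keywords(features):
        store.setdefault(kw, []).append(video_id)
```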
VIDEO SEARCH SYSTEM, VIDEO SEARCH METHOD, AND COMPUTER PROGRAM
A video search system includes: an object tag acquisition unit that obtains an object tag associated with an object that appears in a video; a search query acquisition unit that obtains a search query; a similarity calculation unit that calculates a similarity degree between the object tag and the search query; and a video search unit that searches for a video corresponding to the search query on the basis of the similarity degree. According to such a video search system, it is possible to properly retrieve a video by using, for example, a search query expressed in natural language.
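The tag-versus-query similarity search can be sketched with token-set Jaccard overlap standing in for whatever similarity measure the system actually uses; the threshold and schema are assumptions for the example.

```python
def similarity(tag, query):
    """Jaccard overlap between the word sets of a tag and a query."""
    a, b = set(tag.lower().split()), set(query.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def search_videos(tagged_videos, query, threshold=0.3):
    """tagged_videos: list of (video_id, object_tag). Return ids whose
    tag similarity to the query meets the threshold, best match first."""
    scored = [(similarity(tag, query), vid) for vid, tag in tagged_videos]
    return [vid for score, vid in sorted(scored, reverse=True)
            if score >= threshold]
```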
System, device, and method for generating and utilizing content-aware metadata
System, device, and method for generating and utilizing content-aware metadata, particularly for playback of video and other content items. A method includes: receiving a video file, and receiving content-aware metadata about visual objects that are depicted in said video file; and dynamically adjusting or modifying playback of that video file, on a video playback device, based on the content-aware metadata. The modifications include content-aware cropping, summarizing, watermarking, overlaying of other content elements, modifying playback speed, adding user-selectable indicators or areas around or near visual objects to cause a pre-defined action upon user selection, or other adjustments or modification. Optionally, a modified and content-aware version of the video file is automatically generated or stored. Optionally, the content-aware metadata is stored internally or integrally within the video file, in its header or as a private channel; or is stored in an accompanying file.
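One of the adjustments listed above, content-aware cropping, can be sketched as computing a crop window centered on a visual object described by the metadata and clamped to the frame; the (x, y, w, h) bounding-box convention is an assumption for the example.

```python
def content_aware_crop(frame_size, object_box, crop_size):
    """Return a (x, y, w, h) crop window of `crop_size` centered on the
    object, clamped so the window stays inside the frame."""
    fw, fh = frame_size
    cw, ch = crop_size
    ox, oy, ow, oh = object_box
    cx, cy = ox + ow / 2, oy + oh / 2        # object center
    x = min(max(cx - cw / 2, 0), fw - cw)    # clamp horizontally
    y = min(max(cy - ch / 2, 0), fh - ch)    # clamp vertically
    return (int(x), int(y), cw, ch)
```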