Patent classifications
G06F16/7837
GENERATING AUGMENTED REALITY IMAGES FOR DISPLAY ON A MOBILE DEVICE BASED ON GROUND TRUTH IMAGE RENDERING
Systems and methods are disclosed herein for monitoring a location of a client device associated with a transportation service and generating augmented reality images for display on the client device. The systems and methods use sensor data from the client device and a device localization process to monitor the location of the client device by comparing renderings of images captured by the client device to renderings of the vicinity of the pickup location. The systems and methods determine navigation instructions from the user's current location to the pickup location and select one or more augmented reality elements associated with the navigation instructions and/or landmarks along the route to the pickup location. The systems and methods instruct the client device to overlay the selected augmented reality elements on a video feed of the client device.
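The device localization step described above (comparing a rendering of a captured image against pre-rendered views of the pickup vicinity) could be sketched as a nearest-match search over feature vectors. All names, the cosine-similarity choice, and the toy data are illustrative assumptions, not details from the patent:

```python
# Minimal localization sketch: the camera frame is reduced to a feature
# vector and compared against pre-rendered views of the pickup area; the
# closest rendering gives the estimated position of the client device.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def localize(captured_features, renderings):
    """renderings: mapping of pose id -> feature vector of a pre-rendered view."""
    return max(renderings,
               key=lambda pose: cosine_similarity(captured_features, renderings[pose]))

renderings = {
    "north_of_pickup": [0.9, 0.1, 0.0],
    "south_of_pickup": [0.1, 0.8, 0.3],
}
pose = localize([0.85, 0.15, 0.05], renderings)
```

The matched pose would then seed the navigation instructions and the choice of AR elements to overlay.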
METHOD AND SYSTEM FOR AUTOMATIC PRE-RECORDATION VIDEO REDACTION OF OBJECTS
A system and a method for automatic video redaction are provided herein. The method may include: receiving an input video comprising a sequence of frames captured by a camera, wherein the input video includes live video obtained directly from the camera and recordation of the video directly from the camera is disabled; performing visual analysis of the input video to detect portions of the frames in which one of a plurality of predefined objects, or a descriptor thereof, is detected; generating a redacted input video by replacing the portions of the frames with new portions of other visual content; and recording the redacted input video on a data storage device, wherein the generating of the redacted input video is carried out by a computer processor after the input video is captured by the camera and before the recording of the redacted input video on the data storage device.
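The core of the claim is that redaction happens between capture and recording. A minimal sketch of that loop, with a stub detector and solid-fill replacement standing in for the patent's unspecified detector and replacement content:

```python
# Pre-recordation redaction sketch: each live frame is scanned, matched
# regions are overwritten, and only then is the frame persisted. The
# value-9 "object" and the solid fill are illustrative assumptions.

def detect_regions(frame):
    """Stub detector: return (row, col, h, w) boxes where value 9 appears."""
    boxes = []
    for r, row in enumerate(frame):
        for c, v in enumerate(row):
            if v == 9:
                boxes.append((r, c, 1, 1))
    return boxes

def redact(frame, boxes, fill=0):
    for r, c, h, w in boxes:
        for rr in range(r, r + h):
            for cc in range(c, c + w):
                frame[rr][cc] = fill
    return frame

def record_stream(frames, storage):
    for frame in frames:            # frames arrive straight from the camera
        redact(frame, detect_regions(frame))
        storage.append(frame)       # only the redacted frame is ever recorded

storage = []
record_stream([[[1, 9], [9, 1]]], storage)
```

The unredacted frame never reaches `storage`, mirroring the "recordation directly from the camera is disabled" constraint.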
APPARATUS OF SELECTING VIDEO CONTENT FOR AUGMENTED REALITY, USER TERMINAL AND METHOD OF PROVIDING VIDEO CONTENT FOR AUGMENTED REALITY
A video content selecting apparatus for augmented reality is provided. The apparatus includes a communication interface; and an operation processor configured to: (a) collect a plurality of video contents through the Internet; (b) extract feature information and metadata for each of the plurality of video contents, and generate a hash value corresponding to the feature information by using a predetermined hashing function; (c) manage a database to include at least the hash value and the metadata of each of the plurality of video contents; (d) receive object information corresponding to an object in a real-world environment from a user terminal through the communication interface; (e) search the database based on the object information and select a video content corresponding to the object information from among the plurality of video contents; and (f) transmit the metadata of the selected video content to the user terminal through the communication interface.
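Steps (b) through (e) amount to keying a metadata database on a hash of extracted features. A sketch under our own assumptions (quantizing features before hashing so that near-identical features collide on the same key; the patent does not specify the hashing function):

```python
# Feature-hash indexing sketch: features are quantized, hashed with a
# fixed function, and used as the database key; object information from
# the terminal is hashed the same way to select matching content.
import hashlib

def feature_hash(features, precision=1):
    quantized = tuple(round(f, precision) for f in features)
    return hashlib.sha256(repr(quantized).encode()).hexdigest()

database = {}

def index_video(features, metadata):
    database[feature_hash(features)] = metadata

def lookup(object_features):
    return database.get(feature_hash(object_features))

index_video([0.12, 0.83], {"title": "bridge_tour.mp4"})
meta = lookup([0.11, 0.84])   # quantizes to the same key (0.1, 0.8)
```

Only the metadata is returned to the terminal, matching step (f).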
Systems and methods for video archive and data extraction
Systems and methods for full motion video search are provided. In one aspect, a method includes receiving one or more search terms. The search terms include one or more of a characterization of the amount of man-made features in a video image and a characterization of the amount of natural features in the video image. The method further includes searching a full motion video database based on the one or more search terms.
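A minimal reading of the claimed search: each archived clip carries characterizations of how much of the image is man-made versus natural, and a query filters on those characterizations. Field names and the categorical values are assumptions:

```python
# Attribute search over a full-motion-video archive: clips are annotated
# with man-made/natural characterizations and matched against query terms.
ARCHIVE = [
    {"clip": "urban_01", "man_made": "high", "natural": "low"},
    {"clip": "forest_07", "man_made": "low", "natural": "high"},
]

def search(archive, **terms):
    return [rec["clip"] for rec in archive
            if all(rec.get(k) == v for k, v in terms.items())]

hits = search(ARCHIVE, man_made="high")
```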
VIDEO SEARCH SYSTEM, VIDEO SEARCH METHOD, AND COMPUTER PROGRAM
A video search system includes: an object tag acquisition unit that obtains an object tag associated with an object that appears in a video; a search query acquisition unit that obtains a search query; a similarity calculation unit that calculates a similarity degree between the object tag and the search query; and a video search unit that searches for a video corresponding to the search query on the basis of the similarity degree. According to such a video search system, a video can be properly identified, for example, from a search query expressed in natural language.
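The pipeline can be sketched end to end with a simple similarity measure. Jaccard token overlap here stands in for whatever similarity the patent's calculation unit actually uses; the data is illustrative:

```python
# Tag/query similarity sketch: each video's object tags are scored against
# a natural-language query, and videos are ranked by their best tag.

def jaccard(a, b):
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def search_videos(videos, query):
    """videos: mapping of video id -> list of object tags."""
    scored = [(max(jaccard(tag, query) for tag in tags), vid)
              for vid, tags in videos.items()]
    return [vid for score, vid in sorted(scored, reverse=True) if score > 0]

videos = {"v1": ["red car", "street"], "v2": ["brown dog", "park"]}
results = search_videos(videos, "dog running in a park")
```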
System, device, and method for generating and utilizing content-aware metadata
System, device, and method for generating and utilizing content-aware metadata, particularly for playback of video and other content items. A method includes: receiving a video file, and receiving content-aware metadata about visual objects that are depicted in said video file; and dynamically adjusting or modifying playback of that video file, on a video playback device, based on the content-aware metadata. The modifications include content-aware cropping, summarizing, watermarking, overlaying of other content elements, modifying playback speed, adding user-selectable indicators or areas around or near visual objects to cause a pre-defined action upon user selection, or other adjustments or modifications. Optionally, a modified and content-aware version of the video file is automatically generated or stored. Optionally, the content-aware metadata is stored internally or integrally within the video file, in its header or as a private channel; or is stored in an accompanying file.
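One of the listed modifications, content-aware cropping, reduces to a pure calculation: center a crop window of the target size on the object's bounding box and clamp it to the frame. Parameter names are illustrative, not from the patent:

```python
# Content-aware crop sketch: given a bounding box from the metadata,
# centre an output window on the object, clamped to the frame edges.

def crop_window(frame_w, frame_h, box, out_w, out_h):
    """box: (x, y, w, h) of a visual object, from content-aware metadata."""
    x, y, w, h = box
    cx, cy = x + w // 2, y + h // 2
    left = min(max(cx - out_w // 2, 0), frame_w - out_w)
    top = min(max(cy - out_h // 2, 0), frame_h - out_h)
    return left, top, out_w, out_h

# A 1920x1080 frame, object near the left edge, 9:16 vertical crop:
win = crop_window(1920, 1080, (100, 400, 120, 200), 608, 1080)
```

Recomputing the window per frame as the box moves gives the dynamic playback adjustment the abstract describes.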
VIDEO PROCESSING APPARATUS, METHOD AND COMPUTER PROGRAM
A video processing apparatus configured to process a stream of video surveillance data, wherein the video surveillance data includes metadata associated with video data, the metadata describing at least one object in the video data. The apparatus comprises means for applying an image assessment algorithm to generate a reliability score for the metadata, and associating the reliability score with the metadata. The image assessment algorithm generates the reliability score based on an assessment of the image quality of the video data to which the metadata relates, to indicate a likelihood that the metadata accurately describes the object. An image enhancement module applies image enhancement to video data if the reliability score of metadata associated with the video data indicates a low likelihood that the metadata accurately describes the object.
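The assessment-and-route flow could be sketched as follows; pixel-value variance is our stand-in for the patent's unspecified image assessment algorithm, and the threshold is arbitrary:

```python
# Reliability scoring sketch: a cheap image-quality measure is attached to
# the metadata, and low-scoring video data is flagged for enhancement.

def quality_score(pixels):
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) ** 2 for p in pixels) / len(pixels)

def assess(metadata, pixels, threshold=100.0):
    metadata["reliability"] = quality_score(pixels)
    metadata["needs_enhancement"] = metadata["reliability"] < threshold
    return metadata

flat = assess({"object": "person"}, [128] * 16)   # uniform frame: no detail
```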
Time-series based analytics using video streams
Methods and systems for detecting and predicting anomalies include processing frames of a video stream to determine values of a feature corresponding to each frame. A feature time series is generated that corresponds to values of the identified feature over time. A matrix profile is generated that identifies similarities of sub-sequences of the time series to other sub-sequences of the feature time series. An anomaly is detected by determining that a value of the matrix profile exceeds a threshold value. An automatic action is performed responsive to the detected anomaly.
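The matrix profile step is concrete enough to sketch brute-force: for each window of the feature time series, store the distance to its nearest non-trivial neighbour; windows whose profile value exceeds a threshold have no close match anywhere else in the series and are flagged as anomalies. Window size, threshold, and the toy series are illustrative:

```python
# Brute-force matrix profile over a feature time series, as in the
# claimed method (production systems use faster algorithms such as STOMP).
import math

def matrix_profile(series, m):
    n = len(series) - m + 1
    profile = []
    for i in range(n):
        best = math.inf
        for j in range(n):
            if abs(i - j) < m:      # exclusion zone: skip trivial self-matches
                continue
            d = math.dist(series[i:i + m], series[j:j + m])
            best = min(best, d)
        profile.append(best)
    return profile

series = [0, 1, 0, 1, 0, 1, 9, 1, 0, 1, 0, 1]
profile = matrix_profile(series, 3)
anomalies = [i for i, v in enumerate(profile) if v > 4]
```

The spike at index 6 makes every window containing it far from all others, so those windows dominate the profile and trigger the automatic action.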
VOICE QUERY REFINEMENT TO EMBED CONTEXT IN A VOICE QUERY
Systems and methods are described for providing contextual search results. The system may receive a search query during presentation of a video. If the query is ambiguous, the system accesses some of the frames of the video and analyzes them to identify an action depicted in the frames. The system retrieves a keyword related to the identified action, augments the ambiguous query with that keyword, and uses the augmented search query to search for and output relevant search results.
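The refinement loop could be sketched as below. The ambiguity test (vague referents like "that" or "they") and the stubbed frame analysis are our assumptions; the patent leaves both open:

```python
# Voice-query refinement sketch: an ambiguous query is augmented with a
# keyword describing the action currently depicted in the video frames.
AMBIGUOUS_WORDS = {"this", "that", "it", "he", "she", "they"}

def is_ambiguous(query):
    return any(w in AMBIGUOUS_WORDS for w in query.lower().split())

def action_keyword(frames):
    """Stand-in for frame analysis: assume frames carry a recognized action."""
    return frames[-1]["action"]

def refine(query, frames):
    if is_ambiguous(query):
        return f"{query} {action_keyword(frames)}"
    return query

frames = [{"action": "snowboarding"}]
refined = refine("how do they do that", frames)
```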
MEDIA FILE PROCESSING METHOD, DEVICE, READABLE MEDIUM, AND ELECTRONIC APPARATUS
A media file processing method includes: recognizing content features of a target media file, wherein the content features include an image feature and/or a sound feature; determining a target aggregation theme of the target media file according to the recognized content features; assigning the target media file to the media files under the target aggregation theme; and synthesizing the media files under the target aggregation theme in response to a video clip instruction with respect to the target aggregation theme, to obtain a target video corresponding to the target aggregation theme.
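The aggregation flow can be sketched with a feature-to-theme mapping; the rules and the string concatenation standing in for real video synthesis are illustrative assumptions:

```python
# Theme aggregation sketch: recognized content features assign each file
# to a theme, and a clip instruction for a theme synthesizes its members.
THEME_RULES = {"beach": "summer trip", "sea": "summer trip", "cake": "birthday"}

def aggregate(files):
    """files: mapping of filename -> recognized content features."""
    themes = {}
    for name, features in files.items():
        for f in features:
            if f in THEME_RULES:
                themes.setdefault(THEME_RULES[f], []).append(name)
                break
    return themes

def synthesize(themes, theme):
    return "+".join(sorted(themes[theme]))   # stand-in for video synthesis

themes = aggregate({"a.mp4": ["beach"], "b.mp4": ["sea"], "c.jpg": ["cake"]})
clip = synthesize(themes, "summer trip")
```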