Patent classifications
G06F16/745
System and method for selecting scenes for browsing histories in augmented reality interfaces
Information may be provided for review by an augmented reality (AR) user. A system may store a plurality of scenes viewed by the AR user and, for each scene, information identifying (i) any pairs of real-life objects and AR augmentations presented in the scene and (ii) any pairs in the scene with which the user interacted. The system may also select from the stored plurality of scenes a first subset of scenes such that each pair with which the user interacted is presented in at least one scene in the first subset of scenes. Responsive to an instruction from the user to review historical AR information, the system may present to the user the selected first subset of scenes. The first subset may further be selected such that each pair with which the user interacted is presented in at least one scene in which the user interacted with that pair.
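The scene-selection step described above is a covering problem: choose scenes so that every (object, augmentation) pair the user interacted with appears in at least one selected scene where that interaction occurred. As a rough sketch only (the field names `presented` and `interacted` are hypothetical, and a greedy strategy is an assumption, not something the abstract specifies):

```python
def select_history_scenes(scenes):
    """Greedily pick a small subset of scenes so that every
    (object, augmentation) pair the user interacted with appears in at
    least one scene where that interaction occurred.

    `scenes` is a list of dicts with hypothetical keys:
      'presented'  - set of pairs shown in the scene
      'interacted' - set of pairs the user interacted with in the scene
    """
    uncovered = set()
    for scene in scenes:
        uncovered |= scene['interacted']

    subset = []
    while uncovered:
        # Choose the scene covering the most still-uncovered interacted pairs.
        best = max(scenes, key=lambda s: len(s['interacted'] & uncovered))
        gained = best['interacted'] & uncovered
        if not gained:
            break
        subset.append(best)
        uncovered -= gained
    return subset
```

Greedy set cover does not guarantee a minimum subset, but it does guarantee the coverage property the abstract requires.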
Methods, systems, and media for indicating viewership of a video
Methods, systems, and media for indicating viewership of a video are provided. In some embodiments, the method comprises: identifying a video; identifying a first group of users; determining, for each user in the first group, an affinity score with respect to the identified video; receiving, from a user device associated with a first user, a request to present a page that includes a representation of the video; identifying a second group of users connected to the first user; determining a viewership status, with respect to the video, of each user in the second group of users; identifying a subset of users in the second group of users based at least in part on the viewership status; and causing groups of indicators to be presented on the user device, wherein each indicator in the groups of indicators represents the viewership status of a corresponding user, and wherein the indicators are presented on the requested page in connection with the representation of the video.
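As an illustrative sketch only (not the claimed method; the cap of three indicators per group and the partition into watched/not-watched groups are assumptions), selecting the subset of connected users to represent as indicators might look like:

```python
def viewership_indicators(connections, watched, limit=3):
    """Given the requesting user's connections and the set of users whose
    viewership status is 'watched', pick a bounded subset of users to
    represent as indicator groups next to the video's representation."""
    watchers = [u for u in connections if u in watched]
    others = [u for u in connections if u not in watched]
    return {'watched': watchers[:limit], 'not_watched': others[:limit]}
```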
User interface for labeling, browsing, and searching semantic labels within video
A system for browsing, searching and/or viewing video content includes at least one user device and a server computer operably connected to the at least one user device. The server computer includes at least one processor operably connected to an electronic storage device, and the at least one processor is programmed with computer program instructions that, when executed, cause the server computer to present a first video on a user interface to the at least one user device. The user interface presents scenes of the first video and semantic labels associated with the scenes, and further presents confidence parameters associated with the scenes and the semantic labels. The server computer also obtains, during presentation of a first scene of the first video, a selection of a semantic label from a user of the at least one user device. It then causes, during the presentation of the first scene, a jump from the first scene to a second scene of the first video based on the selection of the semantic label, the second scene being associated with the selected semantic label, and the jump causing the second scene to be presented on the user interface. Finally, it updates the presentation of the semantic labels and the confidence parameters based on the jump such that the updated presentation on the user interface is associated with the second scene.
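The label-driven jump can be pictured as a small navigation structure. This is a minimal sketch under stated assumptions: scenes are a list of `(scene_id, {label: confidence})` pairs (these names are illustrative, not from the patent), and "jump" means the next scene after the current one that carries the selected label, wrapping around:

```python
class LabelBrowser:
    """Sketch of semantic-label navigation: each scene carries semantic
    labels with confidence parameters; selecting a label jumps to the
    next scene associated with it."""

    def __init__(self, scenes):
        # scenes: list of (scene_id, {label: confidence}) tuples
        self.scenes = scenes
        self.current = 0

    def labels_for_current(self):
        """Labels and confidence parameters shown for the current scene."""
        return self.scenes[self.current][1]

    def jump_to_label(self, label):
        """Advance to the next scene associated with `label`, wrapping
        around; returns the new scene's id, or None if no scene matches."""
        n = len(self.scenes)
        for offset in range(1, n + 1):
            idx = (self.current + offset) % n
            if label in self.scenes[idx][1]:
                self.current = idx
                return self.scenes[idx][0]
        return None
```

After each jump, `labels_for_current()` reflects the second scene, mirroring the abstract's requirement that the label/confidence presentation be updated to the destination scene.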
DISPLAY CONTROL DEVICE, SURVEILLANCE SUPPORT SYSTEM, DISPLAY CONTROL METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM
A display control device (10) includes an acquisition unit (11), a control unit (13), and an output unit (18). The acquisition unit (11) acquires video data captured by each of a plurality of image capturing devices. In response to detection of a target state of a monitoring target in two or more pieces of video data among the plurality of pieces of video data, the control unit (13) allocates a time order of output to a display device to each piece of target video data, the target video data being the video data in which the target state is detected. The output unit (18) sequentially outputs the target video data to the display device based on the allocated time order.
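The control unit's ordering step can be sketched very simply. Ordering by detection time is an assumption for illustration; the abstract only says a time order is allocated:

```python
def allocate_output_order(detections):
    """Given (camera_id, detection_time) pairs for the video data in
    which the target state was detected, allocate the time order in
    which the target video data is output to the display device."""
    return [cam for cam, t in sorted(detections, key=lambda d: d[1])]
```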
Query system with spoiler prevention
Systems and methods for generating a reply to a query are provided. A query about an event in a content recording is received during playback of the content recording. A type of the event is determined based on the query. A playback position in a timeline of the content recording is determined. Based on the type of the event, an event distribution table is obtained, the table comprising one or more event identifiers and one or more corresponding occurrence times for the one or more event identifiers in the timeline of the content recording. The playback position of the content recording is compared to the one or more occurrence times. A reply to the query is generated, for aural or visual presentation, the reply being based on a result of the comparing, the reply comprising data about at least one event corresponding to the one or more event identifiers.
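The spoiler-prevention logic reduces to filtering the event distribution table against the playback position. A minimal sketch, assuming the table is a mapping of event identifier to occurrence time in seconds and that the reply surfaces the most recent safe event (the reply wording is invented for illustration):

```python
def answer_query(event_table, playback_pos):
    """Reply only with events whose occurrence time does not exceed the
    current playback position, so events the viewer has not reached yet
    are never revealed.

    event_table: dict mapping event_id -> occurrence time (seconds)
    playback_pos: current playback position (seconds)
    """
    # Keep only events at or before the current playback position.
    safe = {eid: t for eid, t in event_table.items() if t <= playback_pos}
    if not safe:
        return "No such event has occurred yet."
    latest = max(safe, key=safe.get)
    return f"Most recent matching event: {latest} at {safe[latest]}s."
```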
Method and system for providing segment-based viewing of recorded sessions
An approach for providing segment-based viewing of recorded sessions is described. A video platform may determine one or more segments of a communication session based on content of the communication session. The video platform may also associate one or more segments with a recording of the communication session. The video platform may cause, at least in part, a presentation of the recording and one or more indicators for navigating playback of the recording based on the one or more segments, wherein the one or more indicators correspond to the one or more segments.
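The indicator/navigation relationship described above can be sketched in a few lines. Assumptions for illustration: each segment is a `(start_seconds, label)` pair, indicators are rendered at positions normalized to the recording's duration, and clicking an indicator seeks to the segment whose start is at or before that position:

```python
def segment_indicators(segments, duration):
    """Convert content-derived segments into normalized marker positions
    for a playback bar. Each segment is (start_seconds, label)."""
    return [(start / duration, label) for start, label in segments]


def nearest_segment(segments, position):
    """Return the segment whose start time is closest at or before
    `position` (seconds), i.e. the segment the indicator navigates to."""
    starts = [s for s, _ in segments if s <= position]
    if not starts:
        return segments[0]
    start = max(starts)
    return next(seg for seg in segments if seg[0] == start)
```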
Electronic device and control method
Disclosed are an artificial intelligence (AI) system using a machine learning algorithm such as deep learning, and an application thereof. The present disclosure provides an electronic device comprising: an input unit for receiving content data; a memory for storing information on the content data; an audio output unit for outputting the content data; and a processor, which acquires a plurality of data keywords by analyzing the input content data, matches and stores time stamps of the content data respectively corresponding to the plurality of acquired keywords, searches, when a user command is input, the stored data keywords for a data keyword corresponding to the input user command, and plays the content data from the time stamp corresponding to the found data keyword.
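The keyword-to-timestamp matching can be sketched as a simple index. This is only an illustration: whitespace tokenization stands in for the AI keyword analysis, and the `(timestamp, text)` transcript format is an assumption:

```python
def build_keyword_index(transcript):
    """Map keywords extracted from content data to the time stamps at
    which they occur. `transcript` is a list of (timestamp_s, text)
    pairs; naive tokenization stands in for the ML-based analysis."""
    index = {}
    for ts, text in transcript:
        for word in text.lower().split():
            index.setdefault(word, []).append(ts)
    return index


def play_from_keyword(index, command):
    """Return the first time stamp whose keyword matches a word in the
    user command, i.e. where playback should resume."""
    for word in command.lower().split():
        if word in index:
            return index[word][0]
    return None
```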
Mobile terminal, controlling method thereof, and recording medium thereof
A mobile terminal, controlling method thereof and recording medium thereof are disclosed, by which video contents can be efficiently edited. The present invention includes a touchscreen configured to display a video content and a controller controlling a progress bar for the video content to be displayed on the touchscreen, the controller controlling a first time indicator and a second time indicator to be displayed on the progress bar, the controller controlling a first scene at a first time corresponding to the first time indicator and a second scene at a second time corresponding to the second time indicator to be displayed on the touchscreen.
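Mapping the two progress-bar indicators to an editable clip range is straightforward arithmetic. A minimal sketch, assuming indicator positions arrive as fractions of the progress bar (an assumption; the abstract does not state the representation):

```python
def clip_bounds(duration, frac1, frac2):
    """Map two progress-bar indicator positions (fractions 0..1 of the
    bar) to the start and end times, in seconds, of the clip bounded by
    the first and second time indicators. Order-independent."""
    t1, t2 = sorted((frac1 * duration, frac2 * duration))
    return t1, t2
```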
System and Method for Automated Video Editing
A system and method for automated video editing. A reference media is selected and analyzed. At least one video may be acquired and synced to the audio of the reference media. Once synced, audio analysis is used to assemble an edited video. The audio analysis can be combined with additional information, including user inputs, video analysis, and metadata. The system and method for automated video editing may be applied to collaborative creation, simulated stop-motion animation, and real-time implementations.
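The assembly step described above can be pictured as building an edit decision list from audio-derived cut points. This sketch is illustrative only: the round-robin choice of source clip and the `(clip, start, end)` structure are assumptions, not the patented method:

```python
def assemble_edit(cut_times, clips):
    """Build a simple edit decision list: cut_times are boundaries
    derived from audio analysis of the reference media, and each
    interval between consecutive cuts is filled from the pool of synced
    clips, cycling round-robin."""
    edl = []
    for i in range(len(cut_times) - 1):
        start, end = cut_times[i], cut_times[i + 1]
        edl.append({'clip': clips[i % len(clips)], 'start': start, 'end': end})
    return edl
```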