Patent classifications
G06F16/748
Information processing apparatus, information processing method, and program for presenting reproduced video including service object and adding additional image indicating the service object
This information processing apparatus includes: a media reproduction unit that acquires and reproduces video data including a service object, for which a service that processes voice requests from a user is available; and a controller that adds an additional image to the reproduced video to inform the user about the service object, and saves identification information of the video data together with the start and end times of the additional image as a bookmark, optionally selected by the user, for the scene carrying the additional image.
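The bookmark described above could be modeled as a small record keyed by the video's identification information and the additional image's start and end times. A minimal sketch follows; the class and field names are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class Bookmark:
    # All names here are illustrative assumptions, not drawn from the patent.
    video_id: str      # identification information of the video data
    start_ms: int      # start time of the additional image (milliseconds)
    end_ms: int        # end time of the additional image (milliseconds)

def save_bookmark(bookmarks, video_id, start_ms, end_ms):
    """Record a user-selected bookmark for a scene shown with an additional image."""
    bookmark = Bookmark(video_id, start_ms, end_ms)
    bookmarks.append(bookmark)
    return bookmark

bookmarks = []
save_bookmark(bookmarks, "ep-0142", 12_000, 18_500)
```

With such a record, the player can later jump back to any bookmarked scene by seeking to `start_ms` in the video identified by `video_id`.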
Frictionless Authentication and Monitoring
An identity of a customer within an establishment is authenticated using a variety of captured biometric features obtained from sensors and/or video. Video capturing movements/interactions of the customer is analyzed in real time to identify the customer's behavior and actions. Any staff of the establishment who interact with the customer are identified from the video. Transaction data and other data retained for the customer by the establishment are aggregated and linked with the video and the customer identity. The linked data is analyzed in combination with the customer behavior and actions to determine responses within the establishment to customer-initiated transactions. In an embodiment, the customer is authorized to perform at least one transaction within the establishment based on the authenticated identity and linked data without a presentation by the customer of an identification card, a Personal Identification Number (PIN), a password, and/or verification by a staff member.
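The aggregation-and-linking step described above can be sketched as joining the establishment's retained records with video-derived events under one authenticated identity. The data model below is an illustrative assumption, not taken from the patent:

```python
def link_customer_data(identity, transactions, video_events):
    """Aggregate the establishment's retained data for one authenticated
    customer and link it with video-derived behavior events.
    The field names are illustrative assumptions."""
    return {
        "customer_id": identity,
        "transactions": [t for t in transactions if t["customer_id"] == identity],
        "video_events": [v for v in video_events if v["customer_id"] == identity],
    }

txns = [{"customer_id": "c1", "amount": 40}, {"customer_id": "c2", "amount": 9}]
observed = [{"customer_id": "c1", "action": "approached teller"}]
linked = link_customer_data("c1", txns, observed)
```

The linked structure is what a downstream analysis step would consume to decide how the establishment responds to a customer-initiated transaction.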
Digital transport adapter
One or more computing devices may be configured to identify information corresponding to a program change request associated with a multi-program data transmission. The information may comprise at least a link to a desired program within the multi-program data transmission. The one or more computing devices may communicate the link to the desired program to a client device over a specified time period. After the time period, the one or more computing devices may communicate the desired program to the client device using a single program data transmission. The single program data transmission may be derived from the multi-program data transmission.
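The final step above, deriving a single-program transmission from the multi-program one, amounts to filtering the multiplex down to one program's packets. The sketch below models the multiplex as interleaved `(program_id, packet)` pairs, an illustrative simplification of an MPEG-style multiplex rather than anything specified in the abstract:

```python
def derive_single_program(multi_program, program_id):
    """Derive a single-program transmission by filtering one program's
    packets out of a multi-program transmission. The (program_id, packet)
    model is an illustrative simplification, not from the abstract."""
    return [packet for pid, packet in multi_program if pid == program_id]

# A multi-program data transmission modeled as interleaved packets.
mux = [(1, "p1-a"), (2, "p2-a"), (1, "p1-b"), (2, "p2-b")]
single = derive_single_program(mux, 2)
```

During the specified time period the client would instead receive only the link to program 2 within the full multiplex; afterwards it receives `single` directly.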
Using manifest files to determine events in content items
Systems, methods, and apparatuses are described for monitoring events across a plurality of different services. A system may monitor manifest files for one or more content items. Manifest files may contain tags indicating events and insertion opportunities. When an event and/or insertion opportunity is detected, a switch from one content item to another may be triggered based on customized user priority preferences.
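Detecting manifest file tags can be sketched as a line scan over the manifest. The tag names below are illustrative, loosely modeled on HLS-style `#EXT-X-` tags; the abstract itself does not name specific tags:

```python
def find_manifest_events(manifest_lines):
    """Scan manifest lines for tags marking events or insertion
    opportunities. The tag names are assumptions loosely modeled on
    HLS-style '#EXT-X-' tags; the abstract names no specific tags."""
    hits = []
    for i, line in enumerate(manifest_lines):
        if line.startswith(("#EXT-X-CUE-OUT", "#EXT-X-DATERANGE")):
            hits.append((i, line))
    return hits

manifest = [
    "#EXTM3U",
    "#EXTINF:6.0,",
    "segment1.ts",
    "#EXT-X-CUE-OUT:30",
    "segment2.ts",
]
hits = find_manifest_events(manifest)
```

Each detected tag would then be checked against the user's priority preferences to decide whether to switch content items.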
Text-Driven Editor for Audio and Video Assembly
The disclosed technology is a system and computer-implemented method for assembling and editing a video program from spoken words or soundbites. The disclosed technology imports source audio/video clips in any of multiple formats. Spoken audio is transcribed into searchable text. The text transcript is synchronized to the video track by timecode markers. Each spoken word corresponds to a timecode marker, which in turn corresponds to one or more video frames. Using word-processing operations and text-editing functions, a user selects video segments by selecting the corresponding transcribed text segments. By selecting and arranging text, a corresponding video program is assembled. The selected video segments are assembled on a timeline display in any order chosen by the user. The sequence of video segments may be reordered and edited, as desired, to produce a finished video program for export.
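The word-to-timecode mapping at the heart of the method can be sketched as a list of `(word, timecode)` pairs, with a text selection resolved to a video segment's start and end times. The sample data and the end-padding value are illustrative assumptions, not part of the disclosure:

```python
# Each transcribed word carries a timecode marker (milliseconds) that maps
# it back to the video track. Words and times are made-up sample data.
transcript = [
    ("welcome", 0), ("to", 400), ("the", 600),
    ("show", 800), ("today", 1500), ("we", 2000),
]

def segment_for_selection(transcript, first, last, tail_ms=500):
    """Map a selected span of transcribed words to a video segment's start
    and end timecodes. 'tail_ms' pads past the final word's marker; the
    padding value is an assumption, not part of the disclosure."""
    start = transcript[first][1]
    end = transcript[last][1] + tail_ms
    return (start, end)

segment = segment_for_selection(transcript, 0, 3)  # selects "welcome to the show"
```

Assembling the program is then a matter of concatenating such segments on the timeline in whatever order the user arranges the text.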
SYSTEM AND METHOD PROVIDING A REMOTE VIDEO-BASED INTERVIEW
A system and method for providing an online introductory-video search and hiring app utilizes a web server. The web server is communicatively interconnected to an applicant computing device and an employer computing device over the Internet. The web server includes a searchable database, a memory having instructions stored thereon, and a processor configured to execute the instructions on the memory, causing the web server to perform the method. The method receives applicant profile data from the applicant computing device; receives an introductory video from the applicant computing device, the introductory video promoting the applicant as a potential employee; stores the applicant profile data and an introductory video link to the introductory video in a user record within the searchable database; stores the introductory video in a storage location accessible by the introductory video link; receives a search query from the employer computing device; generates search results based upon submission of the search query to the searchable database; retrieves the applicant profile data from the searchable database and the introductory video accessed by the introductory video link when the search results contain a match; and transmits the applicant profile data and the introductory video to the employer computing device.
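The query-matching step can be sketched as a filter over applicant records, each of which carries the link to its introductory video. The record field names are illustrative assumptions, not from the abstract:

```python
def search_applicants(records, query):
    """Match a query against applicant profile data and return each hit,
    which carries the link to its introductory video.
    Record field names are illustrative assumptions."""
    q = query.lower()
    return [r for r in records if q in r["profile"].lower()]

records = [
    {"profile": "Video editor, 5 years of broadcast experience",
     "video_link": "/videos/101"},
    {"profile": "Backend engineer", "video_link": "/videos/102"},
]
hits = search_applicants(records, "editor")
```

For each hit, the server would fetch the stored video via `video_link` and transmit it, together with the profile data, to the employer's device.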
VIDEO PREVIEWS FOR INTERACTIVE VIDEOS USING A MARKUP LANGUAGE
A device is configured to display a first video scene and a progress bar and to receive a user input that indicates a time instance value on the progress bar. The device is further configured to identify, based on the time instance value, a first source scene identifier for a second video scene and a first animation identifier that is linked with the second video scene. The device is further configured to identify computer programming code that is associated with the first source scene identifier and the first animation identifier and to compile the identified code to render the second video scene. The device is further configured to generate a scaled second video scene by reducing a size of the rendered second video scene to fit a preview frame and to display the scaled second video scene in the preview frame.
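The scaling step can be sketched as computing a reduction ratio from the rendered scene's dimensions and the preview frame's dimensions. Preserving the aspect ratio is an assumption on my part; the abstract only says the size is reduced to fit the frame:

```python
def scale_to_preview(scene_w, scene_h, frame_w, frame_h):
    """Reduce a rendered scene's size to fit a preview frame. Preserving
    the aspect ratio is an assumption; the abstract only says the size
    is reduced to fit the frame."""
    ratio = min(frame_w / scene_w, frame_h / scene_h)
    return (round(scene_w * ratio), round(scene_h * ratio))

preview_size = scale_to_preview(1920, 1080, 320, 240)
```

The scaled scene is then drawn inside the preview frame at the position indicated on the progress bar.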