
APPARATUS FOR RECORDING DATA TO PRODUCE A LOCALIZED PANORAMIC IMAGE OF A STREET AND METHOD RELATED THERETO
20210266459 · 2021-08-26 ·

The invention relates to an apparatus for recording data to produce a localized panoramic image of a street, comprising a camera, a device for satellite-based position and time determination, and a storage unit. According to the invention, a preparation device is provided which encodes the time data of the satellite-based position and time determination device into a format recordable by the camera and forwards it to the camera, the camera being designed to record these data and a continuous film simultaneously. The storage unit stores the film with the time data, as well as the position and time data of the satellite-based position and time determination device. The invention further relates to a corresponding method.
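
One way such a preparation device could encode satellite-derived time into a camera-recordable form is as an SMPTE-style HH:MM:SS:FF timecode string derived from the GPS fix. A minimal sketch under that assumption (function and variable names are illustrative, not taken from the patent):

```python
from datetime import datetime, timezone

def utc_to_timecode(t: datetime, fps: int = 25) -> str:
    """Encode a satellite-derived UTC time as an HH:MM:SS:FF timecode
    string that a camera can record alongside the continuous film."""
    frame = int(t.microsecond * fps / 1_000_000)  # sub-second -> frame number
    return f"{t.hour:02d}:{t.minute:02d}:{t.second:02d}:{frame:02d}"

# Example: a GPS fix at 14:30:05.480 UTC, recorded at 25 fps
fix = datetime(2021, 8, 26, 14, 30, 5, 480_000, tzinfo=timezone.utc)
print(utc_to_timecode(fix))  # 14:30:05:12
```

Storing this string per frame alongside the satellite position log lets the film be localized after the fact by matching timecodes against the position/time records.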

METHOD AND SYSTEM FOR SYNCHRONIZING PROCEDURE VIDEOS FOR COMPARATIVE LEARNING
20210006752 · 2021-01-07 ·

Embodiments described herein provide various examples of synchronizing the playback of a recorded video of a surgical procedure with a live video feed of a user performing the surgical procedure. In one aspect, a system can simultaneously receive a recorded video of a surgical procedure and a live video feed of a user performing the surgical procedure in a training session. More specifically, the recorded video is shown to the user as a training reference, and the surgical procedure includes a set of surgical tasks. The system next simultaneously monitors the playback of a current surgical task in the set of surgical tasks in the recorded video and the live video feed depicting the user performing the current surgical task. Next, the system detects that the end of the current surgical task has been reached during the playback of the recorded video. In response to determining that the user has not completed the current surgical task in the live video feed, the system pauses the playback of the recorded video until the user completes the current surgical task.
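
The pause-at-task-boundary policy described above can be sketched as a small control loop. This is a simulation of the logic only; the `live_done` predicate stands in for whatever video-analysis component detects task completion in the live feed (a hypothetical interface, not specified in the abstract):

```python
def drive_playback(tasks, live_done):
    """Simulate the training-session policy: play each recorded surgical
    task; at the task boundary, pause the recording until the live feed
    shows the trainee has completed that task, then resume."""
    log = []
    for task in tasks:
        log.append(f"play:{task}")
        if not live_done(task):          # trainee not finished at the boundary
            log.append(f"pause:{task}")  # hold the reference video
            # ...in a real system, block here until a completion event...
            log.append(f"resume:{task}")
    return log

# Trainee keeps pace on "incision" but lags on "suturing":
log = drive_playback(["incision", "suturing"], lambda t: t == "incision")
print(log)  # ['play:incision', 'play:suturing', 'pause:suturing', 'resume:suturing']
```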

ELECTRONIC APPARATUS AND METHOD FOR CONTROLLING THE SAME, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM

This invention provides an electronic apparatus comprising a TIME-CODE terminal configured to output a time code signal that an external apparatus uses to perform synchronization related to a video, and a control unit configured to output, from the TIME-CODE terminal, a time code signal in which a bit in a predetermined field is set to a predetermined value.
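
Setting a bit in a predetermined field of a timecode word is a simple masking operation. As context, SMPTE linear timecode frames carry flag bits (for example a drop-frame flag) alongside the BCD time digits; the bit position below is illustrative, not taken from the abstract:

```python
def set_flag(word: int, bit: int) -> int:
    """Set one predetermined bit in a timecode word
    (e.g. a flag bit in an 80-bit LTC frame)."""
    return word | (1 << bit)

def flag_is_set(word: int, bit: int) -> bool:
    """Test the predetermined bit on the receiving (external) apparatus."""
    return bool(word >> bit & 1)

tc_word = set_flag(0, 10)      # mark bit 10 before output on the terminal
print(flag_is_set(tc_word, 10))  # True
```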

Method and system for synchronizing procedure videos for comparative learning

Embodiments described herein provide various examples of preparing two procedure videos, in particular two surgical procedure videos, for comparative learning. In some embodiments, to allow comparative learning of two recorded surgical videos, each of the two recorded surgical videos is segmented into a sequence of predefined phases/steps. Next, corresponding phases/steps of the two segmented videos are individually time-synchronized in a pairwise manner, so that a given phase/step of one recorded video and the corresponding phase/step of the other segmented video have the same or substantially the same starting and ending times during comparative playback of the two recorded videos. The disclosed comparative-learning techniques can generally be applied to any type of procedure video that can be broken down into a sequence of predefined phases/steps, and can synchronize/slave one such procedure video to another procedure video of the same type at each segmented phase/step in the sequence.
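
One plausible realization of the pairwise phase synchronization is a piecewise-linear time mapping: within each matched phase, a playback position in one video is mapped to the same fractional progress in the other video's phase, so phase boundaries coincide. A sketch under that assumption (the segmentation itself is taken as given):

```python
def sync_time(t, phases_a, phases_b):
    """Map a playback time t in video A onto video B so that matching
    phases start and end together. phases_a and phases_b are lists of
    (start, end) seconds per predefined phase, in the same order."""
    for (a0, a1), (b0, b1) in zip(phases_a, phases_b):
        if a0 <= t <= a1:
            frac = (t - a0) / (a1 - a0)   # fractional progress within the phase
            return b0 + frac * (b1 - b0)  # same progress in video B's phase
    raise ValueError("time lies outside all segmented phases")

# Phase 2 spans 10-40 s in video A but 12-30 s in video B;
# halfway through A's phase maps to halfway through B's:
print(sync_time(25.0, [(0, 10), (10, 40)], [(0, 12), (12, 30)]))  # 21.0
```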

Editing and tracking changes in visual effects
10783926 · 2020-09-22 ·

A method for determining edits of a subject video reel comprises opening an original EDL, reading every line of the original EDL, identifying the event names representing each shot, and identifying a source file. Each event includes at least a camera time code giving the shot length and a location time code indicating the location of the shot in the source file. The method further comprises locating events and picking up the in and out camera time codes from the shot names, noting shot names and camera times for shots found to have common in and out times, identifying every VFX shot, and storing the VFX names. The software then compares the camera times for the shots in a first temporary file with the camera times for the same shots in a second temporary file, and prepares a result EDL file listing exclusively the VFX shots in which changes were found, detailing the changes.
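A rough sketch of the core comparison, assuming CMX3600-style EDL event lines (the regex, the `VFX` naming convention, and the function names are illustrative assumptions, not details from the patent):

```python
import re

# Event line: number, reel/shot name, track, edit type, then source in/out timecodes
EVENT = re.compile(
    r"^\d+\s+(\S+)\s+\S+\s+\S+\s+"
    r"(\d\d:\d\d:\d\d:\d\d)\s+(\d\d:\d\d:\d\d:\d\d)")

def read_edl(text):
    """Map shot name -> (camera in, camera out) from EDL event lines."""
    shots = {}
    for line in text.splitlines():
        m = EVENT.match(line)
        if m:
            shots[m.group(1)] = (m.group(2), m.group(3))
    return shots

def changed_vfx(old_edl, new_edl, is_vfx=lambda name: name.startswith("VFX")):
    """List the VFX shots whose in/out camera times differ between two cuts."""
    old, new = read_edl(old_edl), read_edl(new_edl)
    return [s for s in new if is_vfx(s) and old.get(s) != new[s]]

v1 = "001  VFX010  V  C  01:00:00:00 01:00:05:00 00:00:00:00 00:00:05:00"
v2 = "001  VFX010  V  C  01:00:01:00 01:00:05:00 00:00:00:00 00:00:04:00"
print(changed_vfx(v1, v2))  # ['VFX010']
```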

Automated video logging methods and systems
10616665 · 2020-04-07 ·

Exemplary embodiments of systems and methods are provided for automatically creating time-based video metadata for a video source and a video playback mechanism. An automated logging process can be provided for receiving a digital video stream, analyzing one or more frames of the digital video stream, extracting a time from each of the one or more frames analyzed, and creating a clock index file associating a time with each of the one or more analyzed frames. The process can further provide for parsing one or more received data files, extracting time-based metadata from the one or more parsed data files, and determining a frame of the digital video stream that correlates to the extracted time-based metadata.
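
The clock index described above can be thought of as a lookup from extracted times (e.g. an on-screen game clock read from frames) to frame numbers, against which timestamped metadata is correlated. A minimal sketch, with all names and the nearest-time policy as illustrative assumptions:

```python
def build_clock_index(frame_times):
    """frame_times: {frame_number: time extracted from that frame, in seconds}.
    Returns the clock index as (time, frame) pairs sorted by time."""
    return sorted((t, f) for f, t in frame_times.items())

def frame_for(index, t):
    """Determine the analyzed frame whose extracted time is closest to a
    time-based metadata timestamp t."""
    return min(index, key=lambda pair: abs(pair[0] - t))[1]

# One frame analyzed per second of stream time:
index = build_clock_index({0: 100.0, 30: 101.0, 60: 102.0})
print(frame_for(index, 101.2))  # 30
```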

SYSTEMS AND METHODS FOR MEDIA PRODUCTION AND EDITING
20200027483 · 2020-01-23 ·

The various embodiments disclosed herein relate to systems and methods for generating a derived media clip corresponding to a live event. In particular, the system comprises a processor configured to receive a plurality of content streams corresponding to the live event, each content stream corresponding to a content source. The processor is further configured to generate an annotated timeline for one or more of the plurality of content streams and receive a first user input requesting the derived media clip. The processor is then configured to generate the derived media clip based on the user input and the annotated timeline.
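
The flow in this abstract - annotated timelines per content stream, then a derived clip generated from a user request - could be modeled as below. The annotation fields, the tag-matching request, and the padding policy are all illustrative assumptions, not details from the patent:

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    stream: str   # the content source, e.g. "cam1"
    start: float  # seconds into the live event
    end: float
    tag: str      # timeline annotation, e.g. "goal"

def derive_clip(timeline, tag, pad=2.0):
    """Generate a derived media clip from every annotated span matching the
    user's request, padded on both sides (hypothetical policy)."""
    spans = [a for a in timeline if a.tag == tag]
    return [(a.stream, max(0.0, a.start - pad), a.end + pad) for a in spans]

timeline = [Annotation("cam1", 63.0, 71.0, "goal"),
            Annotation("cam2", 64.5, 70.0, "goal")]
print(derive_clip(timeline, "goal"))
# [('cam1', 61.0, 73.0), ('cam2', 62.5, 72.0)]
```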

Methods and systems for processing synchronous data tracks in a media editing system

A software architecture and framework based on plug-in software modules supports flexible handling of synchronous data streams by media production and editing applications. Plug-ins called by the applications convert data from synchronous data streams into a form that enables a user of such an application to view and edit time-synchronous data contained within such data streams. The synchronous data is displayed in a temporally aligned manner in a synchronous data track within a timeline display of the application user interface. In one example, closed caption data extracted from the ancillary portion of a video signal is displayed as text on a data track temporally synchronized with the source video track. Other plug-ins analyze media tracks to generate time-synchronous data which may also be displayed in a temporally aligned manner within a synchronous data track in a timeline.
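
The plug-in dispatch described above - host application calls a registered module to turn raw stream payloads into timeline-aligned items - could look roughly like this. The registry, the `"cea608"` key, and the packet shape are illustrative assumptions:

```python
# Registry of plug-ins: each turns raw payloads from a synchronous data
# stream into (time, text) items the editor lays out on a data track.
PLUGINS = {}

def register(stream_kind):
    def wrap(fn):
        PLUGINS[stream_kind] = fn
        return fn
    return wrap

@register("cea608")  # closed captions extracted from the ancillary data
def decode_captions(packets):
    return [(p["pts"], p["text"]) for p in packets]

def build_data_track(stream_kind, payload):
    """Host application: dispatch to whichever plug-in handles this stream,
    then sort so items align temporally with the source video track."""
    return sorted(PLUGINS[stream_kind](payload))

track = build_data_track("cea608",
                         [{"pts": 4.2, "text": "Hello"}, {"pts": 1.0, "text": "Hi"}])
print(track)  # [(1.0, 'Hi'), (4.2, 'Hello')]
```

Analysis plug-ins that generate time-synchronous data from media tracks would fit the same interface: register under a new key and return (time, value) items.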
