Patent classifications
G11B27/34
Digital jukebox device with improved user interfaces, and associated methods
Certain exemplary embodiments relate to entertainment systems and, more particularly, to systems that incorporate digital downloading jukebox features and improved user interfaces. For instance, a smart search may be provided, e.g., where search results vary based on the popularity of songs within the venue, in dependence on songs being promoted, etc. As another example, a tile-based approach to organizing groupings of songs is provided. Groupings may involve self-populating collections of songs that combine centrally-promoted songs, songs in a given genre that are popular across an audiovisual distribution network, and songs that are locally popular and match up with the given genre (e.g., because of shared attributes such as same or similar genre, artist, etc.). Different tile visual presentations also are contemplated, as are different physical jukebox designs. In certain example embodiments, a sealed core unit with the “brains” of the jukebox is insertable into a docking station.
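The "smart search" idea above (results varying with venue popularity and promotion status) can be sketched as a simple ranking function. The scoring formula below (play count plus a fixed boost for promoted titles) is a hypothetical illustration; the abstract does not specify how the signals are weighted.

```python
from dataclasses import dataclass

@dataclass
class Song:
    title: str
    venue_plays: int   # local popularity within the venue
    promoted: bool     # centrally promoted flag

def smart_search_rank(results, promo_boost=100):
    """Order matching songs by venue popularity, boosting promoted songs."""
    def score(song):
        return song.venue_plays + (promo_boost if song.promoted else 0)
    return sorted(results, key=score, reverse=True)

matches = [
    Song("Song A", venue_plays=50, promoted=False),
    Song("Song B", venue_plays=10, promoted=True),
    Song("Song C", venue_plays=80, promoted=False),
]
ranked = smart_search_rank(matches)
print([s.title for s in ranked])  # promoted Song B scores 110 and tops the list
```

The same shared-attribute scoring could feed the self-populating tile groupings, combining promoted, network-popular, and locally popular songs of a genre.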
Adapting runtime and providing content during an activity
Methods and systems are described for identifying and adapting the playback speed of content to be provided during an activity. The methods and systems receive an input including a start cue indicating a start of an activity and access an average duration and an intensity score for the activity. Then the system calculates an adjusted average runtime for the activity based on the average duration and the intensity score and identifies one or more content items, the one or more content items having a total runtime equivalent to the adjusted average runtime for the activity. The system adjusts the playback speed of the identified one or more content items such that the total runtime of playback of the one or more content items matches the average duration for the activity and provides the one or more content items for consumption.
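The pipeline described above (adjust the expected runtime by intensity, pick content to fill it, then set a playback speed so the content spans the activity) can be sketched as follows. The intensity adjustment formula and the greedy selection are hypothetical; the abstract does not specify either.

```python
def adjusted_runtime(average_duration, intensity_score, scale=0.1):
    # Hypothetical adjustment: higher intensity shortens the expected runtime.
    return average_duration * (1 - scale * intensity_score)

def select_content(runtimes, target):
    """Greedily pick content items (by runtime) until the target is reached."""
    chosen, total = [], 0.0
    for runtime in sorted(runtimes, reverse=True):
        if total + runtime <= target:
            chosen.append(runtime)
            total += runtime
    return chosen, total

def playback_speed(total_runtime, average_duration):
    # Speed factor so playback of the selected items fills the activity:
    # playing `total_runtime` of content at this speed takes `average_duration`.
    return total_runtime / average_duration

avg, intensity = 30.0, 2.0                 # e.g. a 30-minute run, intensity 2
target = adjusted_runtime(avg, intensity)  # 24.0 minutes of content
items, total = select_content([10.0, 8.0, 7.0, 5.0], target)
speed = playback_speed(total, avg)
print(target, total, round(speed, 2))      # speed < 1 slows playback to fit
```

A speed factor below 1 slows playback, stretching shorter content to match the activity's average duration, as the abstract describes.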
SYSTEM AND METHOD FOR GENERATING AND EDITING A VIDEO
The invention provides a system and a computer-implemented method for generating and editing a video. The method includes providing a mobile communication device comprising a camera, a display, a central processing unit (CPU), a video-generating application, and a memory; starting the video-generating application; opening the camera and providing camera tutorials, which comprise instructions for camera positioning, moving, and aligning while taking videos; taking videos of a scene following those instructions; and uploading the videos to the memory, editing them, and producing a composite video for the scene. The camera tutorials include a "moving forward/backward" tutorial directing a user first to hold the camera still and align a horizontal view line in the display with a marker line, and then to move forward or backward while taking a video of the scene. Editing the videos includes slowing them down and matching the rhythm of the music accompanying each video to the transitions between consecutive videos. The slowing down includes removing every other frame.
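Two of the editing steps above lend themselves to short sketches: dropping every other frame, and snapping clip transitions to music beats. Both helpers are hypothetical illustrations; the abstract does not give an algorithm for beat matching.

```python
def remove_every_other_frame(frames):
    """Keep frames at even indices, dropping every other frame,
    as in the abstract's slow-down step."""
    return frames[0::2]

def snap_transitions_to_beats(transitions, beat_interval):
    """Snap each clip-transition time (seconds) to the nearest music beat.
    beat_interval is seconds per beat, i.e. 60 / BPM. Hypothetical helper
    for matching transitions to the accompanying music's rhythm."""
    return [round(t / beat_interval) * beat_interval for t in transitions]

frames = list(range(8))            # stand-in for decoded video frames
kept = remove_every_other_frame(frames)
print(kept)                        # [0, 2, 4, 6]
print(snap_transitions_to_beats([2.3, 4.9], 0.5))  # at 120 BPM: [2.5, 5.0]
```

How the remaining frames are re-timed on playback after the drop is not specified in the abstract, so it is left out of the sketch.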
Simplifying digital content layers of an editing sequence
Embodiments are disclosed for simplifying digital content layers of an editing sequence. In one or more embodiments, the disclosed systems and methods receive an input including an editing sequence and a configuration for modifying it, the editing sequence including a set of video layers whose rendering by a rendering engine produces a rendered video sequence. The systems analyze the set of video layers, including the video segments present on them; determine a first subset of those video segments that is relevant to the rendering of the set of video layers; determine modifications to the set of video layers based on the determined first subset and the received configuration; and automatically apply the determined modifications to the set of video layers of the editing sequence.
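One way the "relevant subset" determination above could work is an occlusion test: a segment on a lower layer contributes nothing to the render if an opaque segment on a higher layer fully covers its time span. The model below is a deliberately simplified sketch; a real editor would also have to account for opacity, blend modes, and effects.

```python
def covered(seg, higher_segs):
    """True if seg's [start, end) interval is fully covered by segments
    on higher layers (assumed opaque and full-frame for this sketch)."""
    start, end = seg
    t = start
    for hs, he in sorted(higher_segs):
        if hs <= t < he:
            t = he
        if t >= end:
            return True
    return t >= end

def simplify_layers(layers):
    """layers[0] is the topmost layer; each layer is a list of
    (start, end) segments. Remove segments that are fully hidden."""
    result, above = [], []
    for layer in layers:
        result.append([seg for seg in layer if not covered(seg, above)])
        above.extend(layer)
    return result

layers = [[(0, 10)], [(2, 5), (12, 15)]]
print(simplify_layers(layers))  # (2, 5) is hidden under (0, 10) and dropped
```

The received configuration in the abstract could then control which such modifications (segment removal, layer merging) are actually applied.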
Event/object-of-interest centric timelapse video generation on camera device with the assistance of neural network input
An apparatus including an interface and a processor. The interface may be configured to receive pixel data generated by a capture device. The processor may be configured to generate video frames in response to the pixel data, perform computer vision operations on the video frames to detect objects, perform a classification of the objects detected based on characteristics of the objects, determine whether the classification of the objects corresponds to a user-defined event and generate encoded video frames from the video frames. The encoded video frames may be communicated to a cloud storage service. The encoded video frames may comprise a first sample of the video frames selected at a first rate when the user-defined event is not detected and a second sample of the video frames selected at a second rate while the user-defined event is detected. The second rate may be greater than the first rate.
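The two-rate sampling described above (sparse frames outside an event, denser frames during one) can be sketched as a stride-based selector. The stride values and the per-index sampling rule are illustrative; the apparatus would drive the event flags from its computer-vision classification.

```python
def timelapse_select(frames, event_flags, idle_stride=10, event_stride=2):
    """Select a sparse sample of frames normally and a denser sample
    while a user-defined event is detected. A smaller stride means a
    higher sampling rate; the simple idx % stride rule is a sketch."""
    selected = []
    for idx, frame in enumerate(frames):
        stride = event_stride if event_flags[idx] else idle_stride
        if idx % stride == 0:
            selected.append(frame)
    return selected

frames = list(range(20))                       # stand-in for video frames
flags = [10 <= i < 15 for i in range(20)]      # event spans frames 10-14
print(timelapse_select(frames, flags))         # dense sampling inside the event
```

The selected frames would then be encoded and sent to the cloud storage service, giving a timelapse that dwells on the detected events.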
Terminal, control method therefor, and recording medium in which program for implementing method is recorded
The present invention relates to a terminal capable of setting a trigger area and outputting additional contents in response to a touch input on the set trigger area, and to a control method therefor. The terminal according to the present invention may comprise a touch screen for displaying information and receiving touch input, and a control unit that outputs main contents on the touch screen and sets a trigger area linked to a plurality of additional contents on the main contents. Once the edit mode in which the trigger area can be set has ended and a viewer mode is running, the control unit outputs a list of the additional contents linked to the trigger area in response to a touch on the trigger area, and outputs the first additional contents corresponding to an item when that item is selected from the list.
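The viewer-mode behavior above (a touch inside a trigger area surfaces its linked additional contents, while edit mode suppresses that) can be sketched as a hit test. The rectangle representation and the `handle_touch` helper are hypothetical; the abstract does not specify how trigger areas are stored.

```python
from dataclasses import dataclass, field

@dataclass
class TriggerArea:
    rect: tuple                    # (x, y, width, height) on the main contents
    additional_contents: list = field(default_factory=list)

    def contains(self, x, y):
        ax, ay, w, h = self.rect
        return ax <= x < ax + w and ay <= y < ay + h

def handle_touch(areas, x, y, edit_mode):
    """In viewer mode (edit_mode is False), a touch inside a trigger area
    returns that area's linked list of additional contents; otherwise None."""
    if edit_mode:
        return None                # edit mode: touches reposition areas instead
    for area in areas:
        if area.contains(x, y):
            return area.additional_contents
    return None

area = TriggerArea((0, 0, 100, 50), ["clip A", "clip B"])
print(handle_touch([area], 10, 10, edit_mode=False))  # list is shown
print(handle_touch([area], 10, 10, edit_mode=True))   # suppressed in edit mode
```

Selecting an item from the returned list would then output the corresponding additional contents, per the abstract.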