Patent classifications
G11B27/005
Recording and playing video using orientation of device
A method and system for recording and playing a video are provided. The method includes receiving an input, from a first user, to start recording of the video. An orientation of a recording device is detected for a plurality of frames while recording the video. An input to stop recording of the video is then received. The video is stored in a video file and the orientation of the recording device for the plurality of frames is stored as metadata associated with the video file. The playing includes receiving an input to play the video from a second user. An orientation of a playing device is detected. The video file is accessed, and the video is played using the metadata, the input received from the second user, and the orientation of the playing device. The metadata is used to control the speed and direction of playback of the video.
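The recording/playback flow described in this abstract can be sketched as follows; all function and field names here are hypothetical illustrations, not taken from the patent itself:

```python
def record(frames_with_orientation):
    """Store frames as a 'video file' and per-frame orientation as metadata."""
    video = [frame for frame, _ in frames_with_orientation]
    metadata = {i: angle for i, (_, angle) in enumerate(frames_with_orientation)}
    return video, metadata

def playback_step(position, metadata, device_angle):
    """Advance or rewind depending on how the playing device is tilted
    relative to the orientation recorded for the current frame."""
    recorded_angle = metadata.get(position, 0.0)
    delta = device_angle - recorded_angle
    direction = 1 if delta >= 0 else -1        # tilt one way plays forward, the other reverses
    speed = min(4, 1 + int(abs(delta) // 15))  # larger tilt -> faster, capped at 4x
    return position + direction * speed
```

The mapping from tilt delta to speed and direction is invented for the sketch; the abstract only states that orientation metadata controls speed and direction.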
Wearable audio device with user own-voice recording
Various implementations include wearable audio devices configured to record a user's voice without recording other ambient acoustic signals, such as others talking nearby. In some particular aspects, a wearable audio device includes: a frame for contacting a head of a user; an electro-acoustic transducer within the frame and configured to output audio signals; at least one microphone; a voice activity detection (VAD) accelerometer; and a controller coupled with the electro-acoustic transducer, the at least one microphone and the VAD accelerometer, the controller configured in a first mode to: detect that the user is speaking; and record a voice of the user solely with signals from the VAD accelerometer in response to detecting that the user is speaking.
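A minimal sketch of the two-source recording idea: when voice activity is detected via the accelerometer, the controller's first mode records from the accelerometer alone, so ambient talkers picked up by the microphone are excluded. Names are illustrative, not from the patent:

```python
def select_recording_source(user_is_speaking, first_mode=True):
    """Return which signal feeds the recording in the controller's first mode."""
    if first_mode and user_is_speaking:
        # Bone-conduction (VAD accelerometer) signal carries the user's own
        # voice only, not nearby speech.
        return "vad_accelerometer"
    return "microphone"  # otherwise fall back to ambient capture
```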
SELECTING SUPPLEMENTAL AUDIO SEGMENTS BASED ON VIDEO ANALYSIS
Aspects of the present application correspond to generation of supplemental content based on processing information associated with content to be rendered. More specifically, aspects of the present application correspond to the generation of audio track information, such as music tracks, that are created for playback during the presentation of video content. Illustratively, one or more frames of the video content are processed by machine learned algorithm(s) to generate processing results indicative of one or more attributes characterizing individual frames of video content. A selection system can then identify a potential music track or other audio data in view of the processing results.
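One plausible shape for this pipeline: per-frame attributes emitted by some classifier are pooled into a clip-level profile and matched against tagged music tracks. The attribute labels and track catalogue below are invented for the sketch:

```python
from collections import Counter

def pool_attributes(per_frame_attrs):
    """Aggregate frame-level labels into a ranked profile for the whole clip."""
    counts = Counter(a for attrs in per_frame_attrs for a in attrs)
    return [label for label, _ in counts.most_common()]

def select_track(profile, catalogue):
    """Pick the track whose tags overlap the clip profile the most."""
    return max(catalogue, key=lambda t: len(set(t["tags"]) & set(profile)))
```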
Systems and methods for presenting auxiliary video relating to an object a user is interested in when the user returns to a frame of a video in which the object is depicted
Systems and methods are described herein for a media guidance application that detects, and responds to, a user's review of video content on a media device. The media guidance application detects a rewind operation during playback of a video comprising a media asset. In response, the media guidance application determines if the playback position reached during the rewind operation occurs during a first break in the media asset and, if so, identifies objects depicted in the video at the playback position, and presents auxiliary video relating to an object at a second break in the media asset.
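The decision flow in this abstract can be sketched as: after a rewind, if the reached position falls inside the first break, look up the objects depicted there and schedule auxiliary video for the second break. The data shapes are assumptions for illustration:

```python
def handle_rewind(position, breaks, objects_at):
    """breaks: list of (start, end) pairs; objects_at: position -> list of
    objects depicted in the video at that position."""
    first_start, first_end = breaks[0]
    if not (first_start <= position <= first_end):
        return None  # rewind did not land in the first break; do nothing
    objs = objects_at(position)
    if not objs:
        return None
    # Present auxiliary video relating to a depicted object at the second break.
    return {"object": objs[0], "present_at": breaks[1]}
```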
Detecting loss of attention during playing of media content in a personal electronic device
A mobile device has at least one media output device for presenting first media content, at least one communication interface that receives first data indicative of whether a consumer of the first media content is actively consuming the first media content, and at least one processor executing program code. The program code enables the mobile device to: during playing of the first media content, receive, via the communication interface, the first data; evaluate, based on a comparison of the first data with comparative data, whether the first data indicates that the consumer is not paying attention to the first media content; and in response to determining that the consumer is not paying attention, identify, by the processor, based on a time of receipt of the first data, a first time corresponding to a location within the first media content at which the consumer stopped paying attention during the playback of the first media content.
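A hedged sketch of the evaluation step: compare the incoming attention data against a threshold drawn from the comparative data, and when attention is lost, record the location within the content at the time the data arrived. The score field and threshold are illustrative assumptions:

```python
def check_attention(first_data, comparative_threshold, playback_position):
    """Return the content location at which the consumer stopped paying
    attention, or None if they are still engaged."""
    if first_data["attention_score"] >= comparative_threshold:
        return None
    # Attention lost: the playback position when the data arrived marks
    # where the consumer stopped paying attention.
    return playback_position
```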
METHOD AND APPARATUS FOR PRESENTING AUDIOVISUAL WORK, DEVICE, AND MEDIUM
This application provides a method and apparatus for presenting an audiovisual work, a device, and a medium, and belongs to the field of artificial intelligence. The method includes: displaying a map control of the audiovisual work, the map control displaying a map associated with a plot of the audiovisual work; determining a marker point on the map in response to a location marking operation on the map; and presenting a plot clip in the audiovisual work that corresponds to the marker point.
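The map-control lookup can be sketched as resolving a marker point to the plot clip whose map region contains it. The region/clip representation is invented for illustration:

```python
def clip_for_marker(marker, regions):
    """regions: list of (x0, y0, x1, y1, clip_id) rectangles on the plot map;
    return the clip corresponding to the marked location, or None."""
    x, y = marker
    for x0, y0, x1, y1, clip_id in regions:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return clip_id
    return None
```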
Movie advertising playback systems and methods
An ad in a movie can be a static ad having a position in the movie that cannot be moved, or a dynamic ad having a position in the movie that can be changed. When a viewer wishes to skip a portion of the movie containing the ad, the playback system determines whether the ad is static or dynamic. If the ad is static, only the portion of the movie preceding the static ad can be skipped; the ad is unskippable. This technique is referred to as “bounceback” since the end of the skip bounces back to the start of the static ad. If the ad is dynamic, it is moved to after the end of the skip. This technique is referred to as “slip-ad” since the ad slips to later in the movie. When a movie has multiple ads, some can be static and some can be dynamic.
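The two skip behaviours described above ("bounceback" for static ads, "slip-ad" for dynamic ones) can be sketched as follows; times are in seconds and the data model is an assumption, not the patent's:

```python
def resolve_skip(skip_start, skip_end, ad):
    """Return (new playhead, new ad position) after a requested skip."""
    if ad["static"]:
        # Bounceback: a skip may not jump past a static ad; playback
        # resumes at the start of the ad, which stays where it is.
        if skip_start < ad["position"] < skip_end:
            return ad["position"], ad["position"]
        return skip_end, ad["position"]
    # Slip-ad: a dynamic ad caught in the skip moves to just after its end.
    if skip_start < ad["position"] < skip_end:
        return skip_end, skip_end
    return skip_end, ad["position"]
```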
Programmatically controlling media content navigation based on corresponding textual content
A method, computer system, and a computer program product for content navigation within a media player is provided. The present invention may include displaying, by a computing device, a media content and a corresponding textual content. The present invention may include receiving, from a user, input regarding the textual content. The present invention may include modifying a playback of the media content based upon the input regarding the textual content to generate a modified media content. The present invention may include playing the modified media content.
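One way to realize text-driven navigation is a transcript that aligns text spans to media timestamps, so a user's selection in the textual content seeks playback to the matching range. The alignment format is an assumption for illustration:

```python
def seek_from_text(selected_text, transcript):
    """transcript: list of (sentence, start_s, end_s) alignments; return the
    playback range matching the user's text selection, or None."""
    for sentence, start, end in transcript:
        if selected_text in sentence:
            return (start, end)
    return None
```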
Systems and methods of transitioning between video clips in interactive videos
Systems and methods described in this application are directed to creating interactive stories using static 360-degree environments with dynamic sprites displayed thereon. These techniques facilitate creation of gamified storytelling and improve on prior efforts to create an immersive experience. Some embodiments described in this application are directed to creating video clips with assistance to improve continuity between clips while other embodiments are directed to transitioning between those clips in the course of presenting an interactive story to a user.
VIDEO DATA STREAM, VIDEO ENCODER, APPARATUS AND METHODS FOR A HYPOTHETICAL REFERENCE DECODER AND FOR OUTPUT LAYER SETS
An apparatus for receiving a video data stream as an input bitstream is provided. The video data stream has a video encoded thereinto, wherein, for a playback speed modification factor (Nx), the apparatus is to determine a decoding capability requirement depending on a decoding capability requirement limit, wherein the playback speed modification factor (Nx) is a forward playback speed modification factor or is a backward playback speed modification factor.
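An illustrative (not normative) reading of the capability check: scale a nominal decoding requirement by the magnitude of the playback speed modification factor Nx, and compare the result against the signalled limit. The scaling formula is an assumption for the sketch:

```python
def meets_capability(nominal_req, nx, limit):
    """nx > 0 models forward playback, nx < 0 backward playback;
    |nx| scales the decoding load against the capability limit."""
    required = nominal_req * abs(nx)
    return required <= limit
```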