Patent classifications
G11B27/105
Audio Processing Method and Device
An audio processing method implemented by an electronic device includes entering a multi-channel video recording mode, detecting a shooting operation of a user, simultaneously recording, after detecting the shooting operation, a first video image and a second video image using a first camera and a second camera, and recording audio of a plurality of sound channels, where the audio includes panoramic audio, first audio corresponding to the first video image, and second audio corresponding to the second video image. The electronic device further records the first audio based on a feature value such as a zoom magnification corresponding to the first display area.
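The abstract leaves the signal-processing details open. As an illustrative sketch only (not the patented method), one way to record a channel "based on a feature value such as a zoom magnification" is to blend the panoramic audio with a focused (zoomed-in) signal, weighting the focused signal more heavily as the zoom increases. The function name, the linear weighting, and the `max_zoom` bound are all assumptions.

```python
def mix_for_zoom(panoramic, focused, zoom, max_zoom=10.0):
    """Blend panoramic and focused audio samples, sample by sample.

    Illustrative only: at 1x zoom the output is purely panoramic; at
    max_zoom it is purely the focused (zoomed-in) channel.
    """
    # Map zoom in [1, max_zoom] to a focus weight w in [0, 1].
    w = min(max(zoom - 1.0, 0.0) / (max_zoom - 1.0), 1.0)
    return [(1.0 - w) * p + w * f for p, f in zip(panoramic, focused)]
```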
Method and System for Exploring Similarities
A method and computer-readable medium for exploring similar users and items of a media service include generating a user interface. The user interface displays user-selectable indicia representing a similar-member function that allows a user to search a media service for at least one other user, where the other user has a degree of similarity with respect to the searching user. Another method includes facilitating the search for such a similar user within a media service.
Transferring Playback Between Devices
A network device is configured to (i) play back a media item indicated by a remote playback queue provided by a cloud-based computing system, (ii) receive an indication that a playback device is available for playback, (iii) display a now playing screen including (a) information identifying the media item, and (b) an icon that indicates that the network device is not in a connected state with any other network device, (iv) receive a first input selecting the icon, (v) in response to the first input, display a list of one or more available network devices including the playback device, (vi) receive a second input selecting the playback device from the list, (vii) after receiving the second input, update the list to indicate that the playback device is selected for playback of the remote playback queue, and (viii) transfer playback of the remote playback queue from the network device to the playback device.
Method for controlling edit user interface of moving picture for clip alignment control and apparatus for the same
Disclosed herein is a video editing UI control apparatus. A video editing UI control apparatus according to the present disclosure may include: an editing UI display unit for visually displaying, on a display device, an editing UI comprising a play head and a clip movement control UI; a user input confirmation unit for confirming user input information based on a user input that is provided through a touch input on the display device; and an editing UI processing unit for confirming an input of the clip movement control UI based on the user input information provided by the user input confirmation unit and for moving at least one clip to a reference time at which the play head is located.
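The core editing operation the abstract describes is snapping a clip to the reference time at which the play head sits. A minimal sketch of that alignment step, assuming a hypothetical `Clip` type and function name (the abstract discloses no data model):

```python
from dataclasses import dataclass

@dataclass
class Clip:
    name: str
    start: float     # position on the timeline, in seconds
    duration: float

def move_clip_to_playhead(clip: Clip, playhead_time: float) -> Clip:
    """Align the clip's start with the play head's reference time."""
    return Clip(clip.name, playhead_time, clip.duration)

clip = Clip("intro", start=12.0, duration=5.0)
moved = move_clip_to_playhead(clip, playhead_time=3.5)
```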
QUEUE IDENTIFICATION
Example techniques relate to a playback device that facilitates queue identification. In an example implementation, a playback device receives, from a first controller, instructions representing a command to populate a playback queue with one or more first media items; the instructions are associated with a first application identifier corresponding to the first controller. Based on the received instructions, the playback device populates the playback queue with the one or more first media items and forms an association between the playback queue and the first application identifier. The playback device receives, from a second controller, instructions representing one or more first commands to access the playback queue; the instructions are associated with a second application identifier corresponding to the second controller. The playback device determines that the second application identifier is different from the first application identifier and denies the one or more first commands to access the playback queue.
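The access-control behavior in this abstract (associate the queue with the populating controller's application identifier, then deny a controller carrying a different identifier) can be sketched as follows. The class and method names are hypothetical; the patent does not disclose an API.

```python
class PlaybackQueue:
    """Toy model of a queue bound to the application that populated it."""

    def __init__(self):
        self.items = []
        self.owner_app_id = None  # set when the queue is first populated

    def populate(self, media_items, app_id):
        # Form the association between the queue and the application id.
        self.items = list(media_items)
        self.owner_app_id = app_id

    def access(self, app_id):
        # Deny commands from a controller with a different application id.
        if app_id != self.owner_app_id:
            raise PermissionError(
                "queue is associated with a different application")
        return list(self.items)

queue = PlaybackQueue()
queue.populate(["track-1", "track-2"], app_id="controller-A")
```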
SYSTEMS AND METHODS FOR PROVIDING AUDIO-FILE LOOP-PLAYBACK FUNCTIONALITY
Systems and methods for providing audio-file loop-playback functionality are provided. The system includes a processor that performs a method including setting a playback loop start-point based on a first selection of a button; setting a loop end-point, associating a loop with an audio file, and entering into the loop based on a second selection of the button; and exiting the loop based on a third selection of the button. Associating the loop with the audio file includes adding metadata to the audio file. The metadata associates the loop with the button. The method includes reentering the loop based on a fourth selection of the button and exiting the loop based on a fifth selection of the button.
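The button behavior described here is a small state machine: the first press marks the loop start, the second marks the end and enters the loop, and every later press toggles in or out of the loop. A hedged sketch (class name, press semantics in seconds, and the metadata keys are all assumptions, not the patented format):

```python
class LoopController:
    """Toy model of the single-button loop behavior in the abstract."""

    def __init__(self):
        self.start = None
        self.end = None
        self.looping = False

    def press(self, position):
        """Handle a button press at the given playback position (seconds)."""
        if self.start is None:
            self.start = position            # 1st press: set loop start
        elif self.end is None:
            self.end = position              # 2nd press: set end, enter loop
            self.looping = True
        else:
            self.looping = not self.looping  # 3rd/4th/5th press: exit,
                                             # reenter, exit again

    def as_metadata(self):
        # Metadata that could be added to the audio file to associate the
        # loop with this button (key names are illustrative).
        return {"loop_start": self.start, "loop_end": self.end}
```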
Devices and Methods for Capturing and Interacting with Enhanced Digital Images
An electronic device having a camera, while displaying a live preview for the camera, detects activation of a shutter button at a first time. In response, the electronic device acquires, by the camera, a representative image that represents a first sequence of images, and a plurality of images after acquiring the representative image, and also displays an indication in the live preview that the camera is capturing images for the first sequence of images. The electronic device groups images acquired by the camera in temporal proximity to the activation of the shutter button at the first time into the first sequence of images, such that the first sequence of images includes a plurality of images acquired by the camera prior to detecting activation of the shutter button at the first time, the representative image, and the plurality of images acquired by the camera after acquiring the representative image.
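Grouping frames captured *before* the shutter press with the representative image implies a rolling buffer that the live preview keeps filled. A minimal sketch under that assumption (buffer sizes, names, and the grouping method are illustrative, not the disclosed implementation):

```python
from collections import deque

class BurstCapture:
    """Keeps a rolling buffer of recent frames so frames acquired before
    the shutter press can be grouped with the representative image."""

    def __init__(self, pre_frames=3, post_frames=3):
        self.buffer = deque(maxlen=pre_frames)  # oldest frames fall off
        self.post_frames = post_frames

    def on_frame(self, frame):
        # Called continuously while the live preview is displayed.
        self.buffer.append(frame)

    def on_shutter(self, representative, later_frames):
        # Group: pre-shutter frames + representative + post-shutter frames.
        pre = list(self.buffer)
        post = list(later_frames)[: self.post_frames]
        return pre + [representative] + post

bc = BurstCapture(pre_frames=3, post_frames=3)
for frame in range(5):          # live preview produces frames 0..4
    bc.on_frame(frame)
sequence = bc.on_shutter("R", [10, 11, 12, 13])
```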
JUKEBOX WITH CUSTOMIZABLE AVATAR
A digital downloading jukebox system including a mechanism for delivering custom services to a recognized user is provided. For example, information specific to a recognized user may be stored and optionally may include a recognized user avatar representative of the recognized user. The user avatar may be an image, video, and/or animation, which may be displayed on and/or played through the jukebox. The user avatar may be associated with transactions associated with the user. For example, an avatar may be displayed when a playlist of the recognized user is played, when a message is sent, etc. In other examples, the avatar may introduce instances of media by playing an audio and/or video message, and the avatar may sing, dance, etc. while an instance of media is playing.
SYSTEMS AND METHODS FOR PERFORMING AN ACTION BASED ON VIEWING POSITIONS OF OTHER USERS
Systems and methods for performing an action based on viewing positions of other users are provided. Viewing progress in a media asset of each of a plurality of users is retrieved. The viewing progress of each of the plurality of users is compared to identify a maximum viewing progress that is common to each of the plurality of users. A request from a user to access the media asset is received. A current viewing progress in the media asset of the user is monitored to determine when the current viewing progress of the user matches the identified maximum viewing progress that is common to each of the plurality of users. In response to determining that the current viewing progress of the user matches the identified maximum viewing progress, a message with an option to perform an action relative to the media asset is generated for display to the user.
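The "maximum viewing progress that is common to each of the plurality of users" reduces to the minimum of the individual viewing positions: every user has reached that point, and no later point is shared by all. A short sketch of that comparison and the prompt trigger (function names are hypothetical):

```python
def common_viewing_progress(progress_by_user):
    """Furthest point in the media asset that every user has reached.

    The maximum progress common to all users is the minimum of their
    individual viewing positions (e.g., seconds into the asset).
    """
    return min(progress_by_user.values())

def should_prompt(current_progress, progress_by_user):
    # Generate the action prompt once the watching user catches up to the
    # point shared by everyone.
    return current_progress >= common_viewing_progress(progress_by_user)
```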
AUDIENCE SEGMENTATION BASED ON VIEWING ANGLE OF A USER VIEWING A VIDEO OF A MULTI-ANGLE VIEWING ENVIRONMENT
Audience segmentation can be based on a viewing angle of a user viewing a video of a multi-angle viewing environment. During playback, a sequence of the user-controlled viewing angles of the video is recorded; each entry in the sequence represents the viewing angle of the user at a given point in time. Based on the sequences of several users, a predominant sequence of viewing angles of the video is determined. One or more audience segment tags are assigned to the predominant sequence of viewing angles. During subsequent playbacks of the video, the sequence(s) of user-controlled viewing angles of the video are recorded. The recorded sequence(s) of the subsequent user(s) are compared to the predominant sequence of viewing angles of the video, and the subsequent user(s) are assigned to an audience segment based on the comparison and the corresponding audience segment tags.
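The abstract does not specify how sequences are compared. One plausible sketch, assuming viewing angles are sampled in degrees at fixed intervals: score the fraction of time steps where a user's angle falls within a tolerance of the predominant sequence, and assign the segment tag when that score clears a threshold. The tolerance, threshold, and function names are all assumptions.

```python
def angle_similarity(seq_a, seq_b, tolerance=15.0):
    """Fraction of time steps where two viewing-angle sequences (in
    degrees) are within `tolerance` of each other."""
    n = min(len(seq_a), len(seq_b))
    if n == 0:
        return 0.0
    matches = sum(1 for a, b in zip(seq_a, seq_b) if abs(a - b) <= tolerance)
    return matches / n

def assign_segment(user_seq, predominant_seq, tag, threshold=0.7):
    """Return the audience-segment tag if the user's recorded sequence is
    close enough to the predominant sequence, else None."""
    if angle_similarity(user_seq, predominant_seq) >= threshold:
        return tag
    return None
```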