Patent classifications
G06F16/638
Intelligent media queue
Systems, methods, and non-transitory computer-readable storage media for intelligently managing a playlist of digital media provide an intelligent dynamic queue that is configured to manage the playback of digital media. The queue can transition between passive playback mode, active playback mode, and mixed playback mode. The queue can handle the playback of the songs in the queue according to the playback mode and/or a queue status field that is associated with each song in the queue.
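The mode transitions and per-song status field described above can be pictured as a small data structure. The sketch below is illustrative only; the class names, the rule that a user-added song shifts a passive queue into mixed mode, and the status values are assumptions, not details from the patent.

```python
from dataclasses import dataclass
from enum import Enum, auto

class PlaybackMode(Enum):
    PASSIVE = auto()   # queue populated automatically (e.g., radio-style)
    ACTIVE = auto()    # user-curated entries only
    MIXED = auto()     # both kinds interleaved

class QueueStatus(Enum):
    PENDING = auto()
    PLAYED = auto()

@dataclass
class QueueEntry:
    song_id: str
    user_added: bool = False
    status: QueueStatus = QueueStatus.PENDING

class MediaQueue:
    def __init__(self):
        self.mode = PlaybackMode.PASSIVE
        self.entries: list[QueueEntry] = []

    def add(self, song_id, user_added=False):
        self.entries.append(QueueEntry(song_id, user_added))
        if user_added and self.mode is PlaybackMode.PASSIVE:
            self.mode = PlaybackMode.MIXED  # user input shifts the mode

    def next_song(self):
        """Return the next eligible song, honoring mode and status."""
        for entry in self.entries:
            if entry.status is not QueueStatus.PENDING:
                continue
            # in ACTIVE mode, only user-added entries are eligible
            if self.mode is PlaybackMode.ACTIVE and not entry.user_added:
                continue
            entry.status = QueueStatus.PLAYED
            return entry.song_id
        return None
```

Keeping the status on each entry, rather than deleting played songs, is what lets the queue change modes mid-stream without losing history.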
Synchronizing playback by media playback devices
Example systems, apparatus, and methods receive audio information including a plurality of frames from a source device, wherein each frame of the plurality of frames includes one or more audio samples and a time stamp indicating when to play the one or more audio samples of the respective frame. In an example, the time stamp is updated for each of the plurality of frames using a time differential value determined between clock information received from the source device and clock information associated with the device. The updated time stamp is stored for each of the plurality of frames, and the audio information is output based on the plurality of frames and associated updated time stamps. A number of samples per frame to be output is adjusted based on a comparison between the updated time stamp for the frame and a predicted time value for playback of the frame.
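One way to picture the two steps above — rebasing timestamps onto the local clock and then nudging the sample count toward the predicted play time — is the sketch below. The microsecond units, the 48 kHz rate, and all function names are assumptions for illustration, not details from the abstract.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    samples: list        # raw audio samples
    timestamp_us: int    # when to play, initially on the source clock (µs)

def sync_frames(frames, source_clock_us, local_clock_us):
    """Rebase each frame's timestamp from the source clock to the local
    clock using a single time-differential value."""
    diff = local_clock_us - source_clock_us
    for frame in frames:
        frame.timestamp_us += diff
    return frames

def adjust_sample_count(frame, predicted_play_us, samples_per_us=0.048):
    """Drop or pad samples when the rebased timestamp drifts from the
    predicted playback time (48 kHz -> 0.048 samples per microsecond)."""
    drift_us = frame.timestamp_us - predicted_play_us
    adjust = round(drift_us * samples_per_us)
    if adjust < 0:                              # frame is late: drop samples
        frame.samples = frame.samples[-adjust:]
    elif adjust > 0:                            # frame is early: pad with silence
        frame.samples = frame.samples + [0] * adjust
    return len(frame.samples)
```

Adjusting the sample count, rather than skipping whole frames, lets playback converge on the shared timeline without an audible glitch.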
Systems and methods for automatic mixing of media
A first device includes one or more processors and memory storing one or more programs configured to be executed by the one or more processors. The one or more programs include instructions for receiving, from a second device, audio mix information for a first audio item and receiving, from the second device, an indication that the first audio item is to be mixed with a second audio item distinct from the first audio item. In response to the indication, the one or more programs include instructions for transmitting to the second device an audio stream including the first audio item and the second audio item mixed in accordance with the audio mix information.
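As an illustration, the "audio mix information" could be as simple as per-item gains or a crossfade curve applied over the overlap between two items. The functions below are a toy sketch operating on raw sample lists; they are not the patented method.

```python
def mix_audio(first, second, gain_first=0.5, gain_second=0.5):
    """Mix two equal-length sample sequences with per-item gains
    (a stand-in for the abstract's 'audio mix information')."""
    return [gain_first * a + gain_second * b for a, b in zip(first, second)]

def crossfade(outgoing, incoming):
    """Linearly fade the first item out while fading the second in,
    across an overlap region of at least two samples."""
    n = len(outgoing)
    return [((n - 1 - i) / (n - 1)) * outgoing[i] + (i / (n - 1)) * incoming[i]
            for i in range(n)]
```

Because the second device supplies the mix information but the first device renders the stream, parameters like these gains are exactly the kind of data that would travel between the two.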
Digital jukebox device with improved user interfaces, and associated methods
Certain exemplary embodiments relate to entertainment systems and, more particularly, to systems that incorporate digital downloading jukebox features and improved user interfaces. For instance, a smart search may be provided, e.g., where search results vary based on the popularity of songs within the venue, in dependence on songs being promoted, etc. As another example, a tile-based approach to organizing groupings of songs is provided. Groupings may involve self-populating collections of songs that combine centrally-promoted songs, songs in a given genre that are popular across an audiovisual distribution network, and songs that are locally popular and match up with the given genre (e.g., because of shared attributes such as same or similar genre, artist, etc.). Different tile visual presentations also are contemplated, as are different physical jukebox designs. In certain example embodiments, a sealed core unit with the “brains” of the jukebox is insertable into a docking station.
User-Configured Music Room Digital Assets in Virtual Environments
Managing access to digital content in a virtual environment using virtual content rights, including: providing a virtual content rights database comprising data associating a user of the virtual environment with the virtual content rights acquired with respect to the digital content; receiving, at a processor, a request from a device of the user for assignment of the virtual content rights of the digital content, wherein the user uses the device to interface with the processor; updating the virtual content rights database to indicate the assignment of the virtual content rights to the user; receiving, at the processor, data from the device of the user holding the virtual content rights to digital content including songs to create a virtual user-configured music room having at least one of the songs; and updating the virtual content rights database to indicate sharing of the virtual user-configured music room by the user within the virtual environment.
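The flow above — assign rights, then let the user build and share a music room from songs they hold rights to — can be sketched with a toy in-memory database. All names here are illustrative stand-ins, not structures from the patent.

```python
class VirtualRightsDB:
    """Toy in-memory stand-in for the virtual content rights database."""
    def __init__(self):
        self.rights = {}       # user -> set of song ids the user holds rights to
        self.music_rooms = {}  # user -> list of shared rooms (song lists)

    def assign_rights(self, user, song_id):
        """Record the assignment of virtual content rights to the user."""
        self.rights.setdefault(user, set()).add(song_id)

    def create_music_room(self, user, song_ids):
        """A room may only contain songs the user holds rights to."""
        if not set(song_ids) <= self.rights.get(user, set()):
            raise PermissionError("user lacks rights to one or more songs")
        self.music_rooms.setdefault(user, []).append(list(song_ids))
```

The rights check at room-creation time mirrors the abstract's ordering: the database is consulted and updated before the shared room becomes visible in the virtual environment.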
Transferring playback from a mobile device to a playback device
A network device is configured to (i) play back a media item indicated by a remote playback queue provided by a cloud-based computing system, (ii) receive an indication that a playback device is available for playback, (iii) display a now playing screen including (a) information identifying the media item, and (b) an icon that indicates that the network device is not in a connected state with any other network device, (iv) receive a first input selecting the icon, (v) in response to the first input, display a list of one or more available network devices including the playback device, (vi) receive a second input selecting the playback device from the list, (vii) after receiving the second input, update the list to indicate that the playback device is selected for playback of the remote playback queue, and (viii) transfer playback of the remote playback queue from the network device to the playback device.
Personal Voice-Based Information Retrieval System
The present invention relates to a system for retrieving information from a network such as the Internet. A user creates a user-defined record in a database that identifies an information source, such as a web site, containing information of interest to the user. This record identifies the location of the information source and also contains a recognition grammar based upon a speech command assigned by the user. Upon receiving the speech command from the user that is described within the recognition grammar, a network interface system accesses the information source and retrieves the information requested by the user.
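At its core this is a registry of user-defined records, each tying a recognized phrase to an information-source location. The sketch below reduces the recognition grammar to an exact normalized-phrase lookup, which is far simpler than a real grammar; the class name and example URL are hypothetical.

```python
class VoiceRetrieval:
    """Toy registry of user-defined records mapping a recognition
    phrase to an information-source location."""
    def __init__(self):
        self.records = {}  # normalized phrase -> source URL

    def define_record(self, phrase, url):
        """Create a user-defined record for a spoken command."""
        self.records[phrase.lower().strip()] = url

    def handle_command(self, recognized_text):
        """Return the source location for a recognized command, if any."""
        return self.records.get(recognized_text.lower().strip())
```

In the patented system the returned location would then be fetched by the network interface and the retrieved information delivered back to the user.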
Personalizing explainable recommendations with bandits
Methods, systems and computer program products are provided for personalizing recommendations of items with associated explanations. The example embodiments described herein use contextual bandits to personalize explainable recommendations ("recsplanations") as treatments ("Bart"). Bart learns and predicts satisfaction (e.g., click-through rate, consumption probability) for any combination of item, explanation, and context and, through logging and contextual bandit retraining, can learn from its mistakes in an online setting.
Phonetic comparison for virtual assistants
In an approach for optimizing an intelligent virtual assistant by using phonetic comparison to find a response stored in a local database, a processor receives an audio input on a computing device. A processor transcribes the audio input to text. A processor compares the text to a set of user queries and commands in a local database of the computing device using a phonetic algorithm. A processor determines whether a user query or command of the set of user queries and commands meets a pre-defined threshold of similarity. Responsive to determining that the user query or command meets the pre-defined threshold of similarity, a processor identifies an intention of a set of intentions stored in the local database corresponding to the user query or command. A processor identifies a response of a set of responses in the local database corresponding to the intention. A processor outputs the response audibly.
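The abstract does not name a specific phonetic algorithm; classic Soundex is one common choice, sketched below with an exact per-word code match standing in for the pre-defined similarity threshold. The function names are illustrative.

```python
def soundex(word):
    """Encode a word with the classic Soundex phonetic algorithm."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = "".join(c for c in word.lower() if c.isalpha())
    if not word:
        return ""
    encoded = [codes.get(c, "") for c in word]
    result, prev = [], encoded[0]
    for c, code in zip(word[1:], encoded[1:]):
        if code and code != prev:
            result.append(code)
        if c not in "hw":   # h and w do not separate duplicate codes
            prev = code
    return (word[0].upper() + "".join(result) + "000")[:4]

def phonetic_match(transcription, stored_queries):
    """Return the first stored query whose per-word Soundex codes match
    the transcription, or None if no stored query is similar enough."""
    target = [soundex(w) for w in transcription.split()]
    for query in stored_queries:
        if [soundex(w) for w in query.split()] == target:
            return query
    return None
```

Matching on phonetic codes rather than raw text is what lets a local database absorb transcription errors ("muzik" for "music") without a round trip to a server.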