H04N21/42203

DISPLAY APPARATUS, DISPLAY METHOD, AND COMPUTER PROGRAM
20180013974 · 2018-01-11

Videos are displayed in parallel without losing information or reducing the efficiency of the usable display region.

The aspect ratio of the large screen of an information processing apparatus 100 is 16:9, which is compatible with Hi-Vision video. When the large screen is used in a portrait layout and divided into three small screens stacked vertically, the aspect ratio of each small screen after the division is 9:(16/3), i.e., approximately 16:9.48. Relative to the original 16:9 video content, the diagonal size ratio is 9/16 = 56.25% (the area ratio is (9/16)^2 ≈ 31.64%). Accordingly, the usable display region can be used efficiently.
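The division arithmetic above can be checked with a short Python sketch (the function names and the normalization to a width of 16 are illustrative, not from the patent):

```python
from fractions import Fraction

def split_portrait_screen(width=16, height=9, n_splits=3):
    """Divide a 16:9 screen, used in portrait (9:16), into n vertical splits
    and report each split's aspect ratio normalized to a width of 16."""
    # Portrait orientation: width 9, height 16; each split's height is 16/n.
    portrait_w, portrait_h = Fraction(height), Fraction(width)
    split_h = portrait_h / n_splits
    # Express the split's ratio as 16:x for comparison with the 16:9 original.
    normalized_h = split_h * Fraction(16) / portrait_w
    return float(normalized_h)

def size_ratios(orig_w=16, small_w=9):
    """Linear (diagonal) and area ratios of a 9-wide split vs. the 16-wide original."""
    linear = small_w / orig_w
    return linear, linear ** 2

print(split_portrait_screen())   # ≈ 9.48, so each split is roughly 16:9.48
print(size_ratios())             # (0.5625, 0.31640625): 56.25% and ~31.64%
```

The exact value is 256/27 ≈ 9.481, which the abstract rounds to 9.48.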

EXTERNAL EXTENDED DEVICE AND AUDIO PLAYBACK METHOD
20180012581 · 2018-01-11

An external extended device, an audio processing method, and an audio playback method are provided. The external extended device is configured to receive power and signals transmitted by a television device, and includes: an integrated physical interface connecting the external extended device and the television device; and a sound-mixing processing chip electrically connected with the integrated physical interface and configured to: acquire an accompaniment audio signal transmitted by the television device via the integrated physical interface; perform sound-mixing processing on the accompaniment audio signal and a user voice signal gathered by a microphone device; and transmit the sound-mixed signal to a power amplifier circuit of the television device via the integrated physical interface.
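The mixing step performed by the chip can be illustrated with a minimal software sketch, assuming 16-bit PCM samples and simple gain-and-sum mixing (the patent's chip performs this in hardware; the function and parameter names are illustrative):

```python
def mix_signals(accompaniment, voice, accomp_gain=1.0, voice_gain=1.0):
    """Naive sound-mixing: scale and sum two equal-length lists of PCM
    samples, clipping the result to the signed 16-bit range."""
    assert len(accompaniment) == len(voice)
    mixed = []
    for a, v in zip(accompaniment, voice):
        s = int(a * accomp_gain + v * voice_gain)
        mixed.append(max(-32768, min(32767, s)))  # clip to int16
    return mixed

print(mix_signals([1000, -2000, 30000], [500, 500, 10000]))
# → [1500, -1500, 32767]  (the last sample clips)
```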

Method and apparatus for peripheral context management

The present disclosure relates to a method and system for presenting a set of control functions via an interface of a peripheral control device (PCD). A control function can include a command associated with one or more media contexts of a host media device. The method decodes a payload, from the host media device, with an encoded context identifier, where the context identifier indicates a primary media context active on the host media device. The method determines one or more control functions corresponding to the context identifier, and changes the set of control functions on the interface of the PCD to include the one or more control functions that can command the primary media context.
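The decode-then-remap flow can be sketched as follows, assuming a JSON payload carrying the context identifier; the table contents, context IDs, and function names are hypothetical:

```python
import json

# Hypothetical context-to-control-function table; IDs and names are illustrative.
CONTEXT_FUNCTIONS = {
    "music": ["play_pause", "next_track", "prev_track", "volume"],
    "tv":    ["play_pause", "channel_up", "channel_down", "volume"],
    "game":  ["pause", "volume"],
}

def decode_payload(payload_bytes):
    """Decode a payload from the host media device and extract the
    encoded context identifier."""
    payload = json.loads(payload_bytes.decode("utf-8"))
    return payload["context_id"]

def controls_for(payload_bytes):
    """Return the control functions the PCD should present for the
    primary media context indicated by the payload."""
    context_id = decode_payload(payload_bytes)
    return CONTEXT_FUNCTIONS.get(context_id, ["power", "volume"])  # fallback set

print(controls_for(b'{"context_id": "music"}'))
# → ['play_pause', 'next_track', 'prev_track', 'volume']
```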

Learning activity duration for providing content during activity
11711583 · 2023-07-25

Methods and systems are described for recognizing an activity and providing content for consumption during the activity. An activity engine learns the duration of an activity by receiving an input with a start cue indicating the start of the activity and an input with a stop cue indicating its end. The activity engine determines an average or estimated duration for the activity from the time difference between the start cue and the stop cue. When the activity engine receives a third input and identifies the start cue, a content curation engine identifies one or more content items with a total runtime substantially similar to the average or estimated duration and provides the content for consumption.
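The learn-then-curate loop can be sketched as below, assuming durations are measured in seconds, the estimate is a plain average, and curation is a simple greedy fill against a runtime tolerance (all names and the tolerance value are assumptions, not from the patent):

```python
class ActivityEngine:
    """Learn an activity's typical duration from start/stop cues."""
    def __init__(self):
        self.durations = []

    def record(self, start_time, stop_time):
        # One observed occurrence of the activity, in seconds.
        self.durations.append(stop_time - start_time)

    def estimated_duration(self):
        return sum(self.durations) / len(self.durations)

def curate(library, target, tolerance=120):
    """Greedily collect items (shortest first) until the total runtime is
    within `tolerance` seconds of the target duration."""
    picked, total = [], 0
    for title, runtime in sorted(library.items(), key=lambda kv: kv[1]):
        if total + runtime <= target + tolerance:
            picked.append(title)
            total += runtime
        if total >= target - tolerance:
            break
    return picked, total

engine = ActivityEngine()
engine.record(0, 1200)                 # a 20-minute commute
engine.record(0, 1800)                 # a 30-minute commute
target = engine.estimated_duration()   # 1500 s average
library = {"podcast_a": 900, "podcast_b": 1200, "song_mix": 600}
print(curate(library, target))
# → (['song_mix', 'podcast_a'], 1500)
```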

SYSTEM AND METHOD FOR USER MONITORING AND INTENT DETERMINATION
20230007341 · 2023-01-05

Sensing interfaces associated with a home entertainment system are used to automate a system response to events which occur in a viewing area associated with the home entertainment system. Data derived from such sensing interfaces may also be used to enhance the response readiness of one or more system components. Still further, user presence data derived from such sensing interfaces may be used to capture and report user viewing habits and/or preferences.

MIXED REALITY VIRTUAL REVERBERATION
20230007332 · 2023-01-05

A method of presenting an audio signal to a user of a mixed reality environment is disclosed, the method comprising the steps of detecting a first audio signal in the mixed reality environment, where the first audio signal is a real audio signal; identifying a virtual object intersected by the first audio signal in the mixed reality environment; identifying a listener coordinate associated with the user; determining, using the virtual object and the listener coordinate, a transfer function; applying the transfer function to the first audio signal to produce a second audio signal; and presenting, to the user, the second audio signal.
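A toy version of the determine-and-apply steps might look like the following, reducing the transfer function to a gain and a propagation delay derived from the intersected object's material and the listener's distance (the absorption table and all names are illustrative assumptions, not the patent's actual acoustic model):

```python
import math

# Illustrative material absorption coefficients; values are assumptions.
ABSORPTION = {"curtain": 0.6, "glass": 0.1, "concrete": 0.02}

def transfer_function(material, listener_pos, source_pos):
    """Build a toy transfer function (gain, delay) from the intersected
    virtual object's material and the listener-to-source distance."""
    distance = math.dist(listener_pos, source_pos)
    gain = (1.0 - ABSORPTION[material]) / max(distance, 1.0)  # distance attenuation
    delay_s = distance / 343.0  # speed of sound in air, m/s
    return gain, delay_s

def apply_transfer(samples, gain):
    """Apply the gain part of the transfer function to the first audio
    signal to produce the second audio signal."""
    return [s * gain for s in samples]

gain, delay = transfer_function("glass", (0, 0, 0), (3, 4, 0))  # distance 5 m
second_signal = apply_transfer([1.0, -0.5], gain)
print(round(gain, 3), round(delay, 4))
```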

AUDIOVISUAL COLLABORATION SYSTEM AND METHOD WITH LATENCY MANAGEMENT FOR WIDE-AREA BROADCAST AND SOCIAL MEDIA-TYPE USER INTERFACE MECHANICS

Techniques have been developed to facilitate the livestreaming of group audiovisual performances. Audiovisual performances including vocal music are captured and coordinated with performances of other users in ways that can create compelling user and listener experiences. For example, in some cases or embodiments, duets with a host performer may be supported in a sing-with-the-artist style audiovisual livestream in which aspiring vocalists request or queue particular songs for a live radio show entertainment format. The developed techniques provide a communications latency-tolerant mechanism for synchronizing vocal performances captured at geographically-separated devices (e.g., at globally-distributed, but network-connected mobile phones or tablets or at audiovisual capture devices geographically separated from a live studio).

METHOD AND APPARATUS FOR SHARED VIEWING OF MEDIA CONTENT

In systems and methods for enhancing group watch experiences, a first user's reaction is detected using multiple sensors, e.g., at least one camera and a microphone, and may be combined with context information to determine an action to perform at the user equipment devices of other users participating in the group watch to convey the first user's reaction. Images from the at least one camera can be used to determine a portion of the screen to which the user's reaction is directed and/or another user to whom the reaction is directed. The reaction may be conveyed using one or more of an audio effect, a visual effect, a haptic effect, or text, e.g., to highlight the determined portion or user, display an icon, and/or output an audio or video clip. A signal for providing haptic feedback may be transmitted to the user equipment device of the determined user.

Systems and methods for virtual interactions
11570012 · 2023-01-31

Systems and methods for virtual interactions are described. One or more users can view or listen to media, react to the media and share such media experience virtually with others. The media experience can take place synchronously, asynchronously or both.

Method for synchronizing an additional signal to a primary signal
11570506 · 2023-01-31

The present invention relates to a method for synchronizing an additional signal to a primary signal. Synchronization information for a primary signal is generated by extracting at least one signal feature sequence of the primary signal and comparing it to DB feature sequences stored in a database. If the signal feature sequence matches one of the DB feature sequences to a predetermined degree, then synchronization information of the matching DB feature sequence is allocated to the primary signal at the position specified by the signal feature sequence. The synchronization information is transmitted to a playback device, which outputs an additional signal alongside the primary signal based on the synchronization information.
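The match-and-locate step can be sketched as a sliding comparison of the extracted feature sequence against a stored DB sequence, assuming features are simple hashable values and an exact-match fraction serves as the "predetermined degree" (all names and the 0.8 threshold are illustrative):

```python
def match_position(signal_features, db_features, min_match=0.8):
    """Slide the extracted feature sequence over a DB feature sequence and
    return the offset with the best fraction of matching features, provided
    it reaches `min_match`; otherwise report no match."""
    best_pos, best_score = None, 0.0
    n = len(signal_features)
    for pos in range(len(db_features) - n + 1):
        window = db_features[pos:pos + n]
        score = sum(a == b for a, b in zip(signal_features, window)) / n
        if score > best_score:
            best_pos, best_score = pos, score
    return (best_pos, best_score) if best_score >= min_match else (None, best_score)

db = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]   # stored DB feature sequence
sig = [5, 9, 2, 6]                    # sequence extracted from the primary signal
print(match_position(sig, db))
# → (4, 1.0): synchronization info is allocated at position 4
```

The returned offset is where the playback device would anchor the additional signal relative to the primary signal.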