H04N5/60

Content presenting method, user equipment and system

A content presenting method includes: starting a 3D application in response to an instruction, the 3D application presenting, in a virtual environment, a simulated object of an end user and a virtual screen for displaying live content; receiving a content source address of the live content, the live content being provided to a content providing server by another user equipment of an anchor and being currently broadcast on the content providing server; obtaining audio data and video data of the live content from the content providing server based on the content source address; rendering the audio data and the video data to obtain audio content and video content; playing the audio content in the 3D application; obtaining content of interaction between the anchor of the video content displayed on the virtual screen and the simulated object; and displaying the video content and the content of interaction on the virtual screen.
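
The claimed sequence can be sketched in a few lines. Everything below — the function name, the dependency-injected callables standing in for the 3D application and the content providing server — is a hypothetical illustration, since the abstract specifies no concrete API:

```python
def present_live_content(fetch, render, play, get_interaction, display,
                         source_address):
    """Run the abstract's steps in order: fetch the live content by its
    source address, render it, play the audio, then show the video
    together with the anchor/viewer interaction on the virtual screen."""
    audio_data, video_data = fetch(source_address)          # from content server
    audio_content, video_content = render(audio_data, video_data)
    play(audio_content)                                     # audio in the 3D app
    interaction = get_interaction()                         # anchor <-> simulated object
    display(video_content, interaction)                     # on the virtual screen
    return video_content, interaction
```

The dependency-injection style is only for making the control flow testable; a real implementation would call into the 3D engine and streaming client directly.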

DISPLAY APPARATUS AND CONTROLLING METHOD THEREOF
20230230538 · 2023-07-20

A display apparatus is provided. The display apparatus includes a display panel comprising a plurality of pixels, a driver configured to drive the display panel, and at least one processor. The at least one processor may, based on receiving content comprising video content and audio content, obtain sound location information based on multi-channel information included in the audio content, identify one area of the video content corresponding to the obtained sound location information, and control the driver to adjust brightness of pixels included in the identified one area.
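
As a rough illustration of the idea (channel balance mapped to a screen region), the sketch below derives a horizontal position from stereo channel levels and converts it to a pixel-column range whose brightness could then be boosted. The stereo simplification and all names are assumptions, not the patented method:

```python
def sound_location_from_channels(left_level: float, right_level: float) -> float:
    """Return a horizontal position in [0, 1] weighted by channel energy."""
    total = left_level + right_level
    if total == 0:
        return 0.5  # no signal: assume center
    return right_level / total  # 0 = far left, 1 = far right

def region_to_brighten(location: float, frame_width: int, region_width: int):
    """Map the location to a pixel-column range, clamped to the frame."""
    center = int(location * frame_width)
    start = max(0, center - region_width // 2)
    end = min(frame_width, start + region_width)
    return start, end
```

A multi-channel (e.g. 5.1) stream would give a two-dimensional location instead of a single horizontal coordinate, but the mapping principle is the same.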

Media content presentation

A method of presenting media content is disclosed. A plurality of assets is received at a mobile device comprising a display and an orientation sensor. The plurality of assets comprises a first video asset associated with a first aspect ratio, and a second video asset associated with a second aspect ratio, different from the first aspect ratio. A desired aspect ratio is determined based on an output of the orientation sensor. In accordance with a determination that the desired aspect ratio is closer to the first aspect ratio than to the second aspect ratio, the first video asset is selected. In accordance with a determination that the desired aspect ratio is closer to the second aspect ratio than to the first aspect ratio, the second video asset is selected. The selected video is presented at the desired aspect ratio via the display.
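
The selection rule reduces to a nearest-aspect-ratio choice. This sketch assumes a simple (ratio, asset) pairing, which the abstract does not prescribe:

```python
def select_asset(desired_ratio: float, assets: list) -> str:
    """Return the asset whose aspect ratio is closest to the desired one.

    `assets` is a list of (aspect_ratio, asset_name) pairs; the names
    are illustrative, not from the patent."""
    ratio, name = min(assets, key=lambda a: abs(a[0] - desired_ratio))
    return name

# A landscape orientation reading yields a wide desired ratio,
# a portrait reading a tall one:
assets = [(16 / 9, "landscape.mp4"), (9 / 16, "portrait.mp4")]
```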

METHODS AND SYSTEMS FOR CONDITION MITIGATION
20230231980 · 2023-07-20

Methods and systems are described for condition mitigation. A computing device may display content. The computing device may determine that displaying and/or outputting the content may impact a person with a condition. The computing device may take an action to reduce an impact of the content on the person.

Wireless audio synchronization using a spread code

Disclosed herein are system, apparatus, article of manufacture, method and/or computer program product embodiments, and/or combinations and sub-combinations thereof, for synchronizing playback of audiovisual content among multiple speakers. In some embodiments, a first smart speaker receives a spread spectrum signal from a second smart speaker over an audio data channel. The first smart speaker despreads the spread spectrum signal based on a spreading code. The first smart speaker determines a time of receipt of the spread spectrum signal based on the despreading. The first smart speaker receives a time of transmission of the spread spectrum signal. The first smart speaker then calculates a playback delay based on the time of receipt and the time of transmission. Then the first smart speaker controls the playback of the audiovisual content based on the playback delay.
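
A toy version of the two calculations the abstract names — despreading by correlation to find the time of receipt, then the receipt-minus-transmission delay — can be written as follows. This is an illustrative sketch under simplified assumptions (integer chips, noiseless samples), not the patented implementation:

```python
def despread_time_of_receipt(samples, code):
    """Correlate received samples against the known spreading code and
    return the sample index of the best alignment (the time of receipt)."""
    best_index, best_score = 0, float("-inf")
    for i in range(len(samples) - len(code) + 1):
        score = sum(s * c for s, c in zip(samples[i:i + len(code)], code))
        if score > best_score:
            best_index, best_score = i, score
    return best_index

def playback_delay(time_of_receipt, time_of_transmission):
    """The abstract's delay calculation: receipt time minus transmission time."""
    return time_of_receipt - time_of_transmission
```

In practice the correlation would run over continuous audio samples and the result would be converted from a sample index to wall-clock time before adjusting playback.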

Method of providing sound that matches displayed image and display device using the method

A method of providing sounds matching an image displayed on a display panel includes: detecting a location of a first object in the image by analyzing digital video data corresponding to the image; calculating first gain values based on the location of the first object and applying the first gain values to a plurality of sound data; displaying the image on the display panel based on the digital video data; and outputting a plurality of sounds by vibrating the display panel, using a plurality of sound generating devices, based on the plurality of sound data to which the first gain values are applied.
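
One simple way to realize "gain values based on a location" is distance-weighted panning: each sound generating device gets a gain inversely related to its distance from the on-screen object. The weighting function and normalization below are hypothetical choices, not taken from the patent:

```python
def gains_for_object(object_x: float, generator_xs: list) -> list:
    """Gains for each sound generator given its horizontal position.

    Generators closer to the object get higher gain; the 0.1 floor
    avoids division by zero, and the result is normalized to sum to 1."""
    weights = [1.0 / (abs(object_x - gx) + 0.1) for gx in generator_xs]
    total = sum(weights)
    return [w / total for w in weights]
```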

Artificial window system

In general, the present disclosure is directed to an artificial window system that can simulate the user experience of a traditional window in environments where exterior walls are unavailable or other constraints make traditional windows impractical. In an embodiment, an artificial window consistent with the present disclosure includes a window panel, a panel driver, and a camera device. The camera device captures a plurality of image frames representative of an outdoor environment and provides the same to the panel driver. A controller of the panel driver sends the image frames as a video signal to cause the window panel to visually output the same. The window panel may further include light panels, and the controller may extract light characteristics from the captured plurality of image frames to send signals to the light panels to cause the light panels to mimic outdoor lighting conditions.
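
One plausible reading of "extract light characteristics from the captured plurality of image frames" is averaging a frame's pixel values into an ambient color for the light panels. The frame representation below (rows of RGB tuples) is an assumption for illustration:

```python
def ambient_color(frame):
    """Mean (R, G, B) over a frame given as rows of (r, g, b) pixel tuples."""
    pixels = [px for row in frame for px in row]
    n = len(pixels)
    return tuple(sum(px[i] for px in pixels) // n for i in range(3))
```

A real controller would likely sample regions near the window edges and smooth the result over time so the panel lighting tracks gradual outdoor changes rather than per-frame noise.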

SOUND SOURCE LOCALIZATION WITH CO-LOCATED SENSOR ELEMENTS

A system includes a plurality of acoustic sensor elements co-located with one another, each acoustic sensor element of the plurality of acoustic sensor elements being configured to generate a signal representative of sound incident upon the plurality of acoustic sensor elements, and a processor configured to determine data indicative of a location of a source of the sound based on the signals representative of the incident sound. The plurality of acoustic sensor elements include a directional acoustic sensor element configured to generate a signal representative of a directional component of the sound.
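
With co-located elements, a common localization trick (not necessarily the patented one) is to pair an omnidirectional pressure element with orthogonal directional elements and take the bearing from the directional components, as in an acoustic vector sensor:

```python
import math

def bearing_degrees(vx: float, vy: float) -> float:
    """Azimuth of the sound source estimated from two orthogonal
    directional-element signals (x and y components)."""
    return math.degrees(math.atan2(vy, vx))
```

Because the elements are co-located, this approach needs no time-difference-of-arrival measurement between spaced microphones, which is what makes compact single-point localization possible.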