Patent classifications
H04N2007/145
MULTIMEDIA SYSTEM AND MULTIMEDIA OPERATION METHOD
The invention relates to a multimedia system and a multimedia operation method. The multimedia system includes a first portable electronic device, a collaboration device, a camera, and an audio-visual processing device. The first portable electronic device provides a first operation instruction. The collaboration device is coupled to the first portable electronic device and receives the first operation instruction. The collaboration device provides a multimedia picture, and the multimedia picture changes with the first operation instruction. The camera provides a video image. The audio-visual processing device is coupled to the collaboration device and the camera, receives the multimedia picture and the video image, and outputs a synthesized image with an immersive audio-visual effect according to the multimedia picture and the video image.
Method for controlling video call and electronic device thereof
An electronic device includes at least one display, a communication circuit, at least one processor, and a memory. The processor is configured to obtain information on a first display aspect ratio associated with a current state of the at least one display if an input indicating initiation of a video call is received. The processor is configured to determine at least one first image ratio associated with the video call based on the information on the first display aspect ratio. The processor is configured to transmit, to an external electronic device, a first signal including information on the at least one first image ratio. The processor is configured to receive, from the external electronic device, a second signal including information on a second image ratio associated with the video call. The processor is configured to perform the video call based on the second image ratio.
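The ratio negotiation described above can be sketched as follows. This is an illustrative reconstruction, not the disclosed implementation: `candidate_ratios` and `negotiate` are hypothetical names, and the set of common ratios is an assumption.

```python
from fractions import Fraction

def candidate_ratios(display_w: int, display_h: int) -> list:
    """Derive candidate video-call image ratios from the display's current
    aspect ratio (the 'first image ratio' information), best match first."""
    native = Fraction(display_w, display_h)
    common = [Fraction(16, 9), Fraction(4, 3), Fraction(1, 1), Fraction(9, 16)]
    # Prefer the candidate closest to the display's native aspect ratio.
    return sorted(common, key=lambda r: abs(float(r) - float(native)))

def negotiate(first_ratios: list, peer_supported: set) -> Fraction:
    """Peer side: pick the first proposed ratio it supports, which becomes
    the 'second image ratio' the call is performed with."""
    for r in first_ratios:
        if r in peer_supported:
            return r
    raise ValueError("no common image ratio")
```

A foldable in an unfolded state (e.g. a roughly square panel) would thus negotiate toward 4:3 or 1:1 rather than 16:9, matching the abstract's coupling of image ratio to display state.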
Systems And Methods For Providing Real-Time Composite Video From Multiple Source Devices Featuring Augmented Reality Elements
Systems and methods for superimposing the human elements of video generated by computing devices, wherein a first user device and a second user device capture and transmit video to a central server, which analyzes the video to identify and extract human elements, superimposes these human elements upon one another, adds at least one augmented reality element, and then transmits the newly created superimposed video back to at least one of the user devices.
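The server-side compositing step can be sketched with boolean masks standing in for the extracted human elements. A real system would derive the masks from a person-segmentation model; everything here, including the function name, is illustrative only.

```python
import numpy as np

def composite(frame_a, mask_a, frame_b, mask_b, ar_element, ar_pos):
    """Superimpose the human elements of two frames and stamp in an AR element.

    frame_a/frame_b: (H, W, 3) frames from the two user devices.
    mask_a/mask_b:   (H, W) boolean masks marking each frame's human pixels.
    ar_element:      small (h, w, 3) patch; ar_pos: its (y, x) placement.
    """
    canvas = np.zeros_like(frame_a)
    canvas[mask_b] = frame_b[mask_b]      # second user's human element
    canvas[mask_a] = frame_a[mask_a]      # first user's element layered on top
    y, x = ar_pos
    eh, ew, _ = ar_element.shape
    canvas[y:y + eh, x:x + ew] = ar_element   # augmented-reality overlay
    return canvas
```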
Methods and systems for utilizing multi-pane video communications in connection with notarizing digital documents
Systems and methods are disclosed for establishing a video connection between a client device and a notary support terminal while enabling the client device and notary support terminal to notarize a digital document. In particular, the client device can display multiple panes simultaneously, the multiple panes including a first pane that displays the video chat and a second pane that displays at least one notarization interface for notarizing the digital document. While the client device displays the multiple panes and during the video chat, the disclosed systems can notarize the digital document by authenticating the user of the client device, collecting signatures from the user of the client device and the notary associated with the notary support terminal, and applying an electronic notary seal to the digital document.
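The notarization workflow that runs alongside the video pane can be sketched as a small session object. The class and method names are hypothetical, not the disclosed API; the sketch only captures the ordering constraint that sealing follows authentication and both signatures.

```python
class NotarizationSession:
    """Toy model of the second-pane notarization interface."""

    def __init__(self):
        self.steps = []

    def authenticate_user(self, user_id: str) -> None:
        self.steps.append(("authenticated", user_id))

    def collect_signature(self, signer: str) -> None:
        # signer is "client" or "notary" in this sketch
        self.steps.append(("signed", signer))

    def apply_seal(self) -> None:
        # Apply the electronic notary seal only once both parties have signed.
        signers = {s for kind, s in self.steps if kind == "signed"}
        if not {"client", "notary"} <= signers:
            raise RuntimeError("both signatures required before sealing")
        self.steps.append(("sealed", None))
```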
Device Capability Scheduling Method and Electronic Device
A first electronic device includes a first sub-function and a second sub-function, where the first sub-function is different from the second sub-function. The first electronic device detects a first operation of a user. The first electronic device displays a first interface in response to the first operation, where the first interface includes a name of the first sub-function, a device identifier of a second electronic device, a name of the second sub-function, and a device identifier of a third electronic device. The first electronic device detects a second operation of the user on the device identifier of the second electronic device. The first electronic device sends data corresponding to the first sub-function to the second electronic device, so that the second electronic device executes the first sub-function.
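The scheduling flow above (interface maps sub-functions to capable devices; a tap routes the sub-function's data to the chosen device) can be sketched as follows. All names are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class PeerDevice:
    """A second/third electronic device able to execute a sub-function."""
    identifier: str
    executed: list = field(default_factory=list)

    def execute(self, sub_function: str, data: bytes) -> None:
        self.executed.append((sub_function, data))

@dataclass
class FirstDevice:
    # The 'first interface': each sub-function name mapped to the peer
    # devices (by identifier) that can execute it.
    interface: dict

    def on_select(self, sub_function: str, device_id: str, data: bytes) -> None:
        """The 'second operation': user taps a device identifier, and the
        sub-function's data is sent to that device for execution."""
        target = self.interface[sub_function][device_id]
        target.execute(sub_function, data)
```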
Method and apparatus for video communication
A method for changing a communication network for video communication is provided. The method includes performing, by a user equipment (UE), video communication through a mobile communication network; searching for whether there is a wireless local area network (WLAN) accessible by the UE; displaying, if a WLAN accessible by the UE is found, the accessible WLAN; and when the displayed WLAN is selected by a user, changing a communication network for the video communication to perform the video communication through the selected WLAN.
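The handover decision reduces to a short control loop. This is a minimal sketch assuming hypothetical `scan_wlans` and `prompt_user` callbacks; it is not the claimed signalling procedure.

```python
def handover_network(current: str, scan_wlans, prompt_user) -> str:
    """Return the network the ongoing video call should continue on.

    scan_wlans:  callable returning the list of WLANs accessible by the UE.
    prompt_user: callable that displays the list and returns the user's
                 selection, or None if the user declines.
    """
    accessible = scan_wlans()          # search for reachable WLANs
    if not accessible:
        return current                 # none found: stay on the mobile network
    chosen = prompt_user(accessible)   # display the accessible WLANs
    return chosen if chosen is not None else current
```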
Streaming a video chat from a mobile device to a display device using a rotating base
Provided herein are system, apparatus, device, method and/or computer program product embodiments, and/or combinations and sub-combinations thereof, for streaming a video chat from a mobile device to a display device. In a given embodiment, a first mobile device and a second video device can communicate audio and video data to one another via a video chat. The incoming audio and video data for the first mobile device can be streamed to and output by an external display device. The streaming request is generated in response to the first mobile device coupling with a rotating base that is controlled by the first mobile device.
Optimized facial illumination from adaptive screen content
Aspects of the present disclosure relate to adjusting an illumination of a user depicted in one or more images when using a video conferencing application. In one example, one or more images depicting the user may be received from an image sensor. Further, an illumination of the user depicted in the one or more images may be determined to be unsatisfactory. For example, the user's face may be too bright or too dim. Accordingly, content displayed at a display device may be identified and then modified. The modified content may then be rendered to the display device, thereby changing the illumination of the user depicted in subsequent images. In examples, the modified content may include a graphical element, such as a ring of a specific color at least partially surrounding content rendered to and displayed at the display device.
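The feedback loop can be sketched with a simple proportional adjustment: estimate face brightness from the captured frame and nudge the on-screen ring's intensity toward a target. The target value, gain, and function name are assumptions for illustration, not the disclosed algorithm.

```python
import numpy as np

TARGET = 0.5   # assumed desired mean face luminance (normalized 0..1)

def ring_intensity(face_pixels: np.ndarray, current: float) -> float:
    """Brighten the screen-edge ring when the face is dim, dim it when the
    face is too bright; clamp to the displayable range."""
    luminance = float(face_pixels.mean())        # crude brightness estimate
    adjusted = current + 0.5 * (TARGET - luminance)
    return min(1.0, max(0.0, adjusted))
```

Because the adjustment is proportional to the error, repeated frames converge toward the target rather than oscillating between extremes.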
Simplified sharing of content among computing devices
In one general aspect, a method can include displaying, on a display device included in a computing device, content in an application executing on the computing device, and determining that the computing device is proximate to a videoconferencing system. The method can further include displaying, in a user interface on the display device, at least one identifier associated with a videoconference, receiving a selection of the at least one identifier, and initiating the videoconference on the videoconferencing system in response to receiving the selection of the at least one identifier. The videoconference on the videoconferencing system can be initiated such that the content is provided for display on a display device included in the videoconferencing system.
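The proximity-triggered sharing flow can be sketched as one function: detect a nearby videoconferencing system, show its joinable conference identifiers, and initiate the selected conference with the currently displayed content attached. Names and the tuple return shape are illustrative assumptions.

```python
def share_content(nearby_systems: list, conferences: dict, pick, content: str):
    """Return (system, conference_id, content) for the initiated
    videoconference, or None when no system is proximate."""
    if not nearby_systems:
        return None                      # device is not proximate to a system
    system = nearby_systems[0]
    identifiers = conferences[system]    # identifiers shown in the UI
    chosen = pick(identifiers)           # user selects one identifier
    return (system, chosen, content)     # conference starts showing the content
```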
Multi-camera device
This specification describes: using a first camera of a multi-camera device to obtain first video data of a first region; using a second camera of the multi-camera device to obtain second video data of a second region; generating a multi-camera video output from the first and second video data, using a first video mapping to map the first video data to a first portion of the multi-camera video output and a second video mapping to map the second video data to a second portion of the multi-camera video output; and generating an audio output from obtained audio data, the audio output comprising an audio output having a directional component within the first portion of the video output and an audio output having a directional component within the second portion of the video output. Generating the audio output comprises using a first audio mapping to map audio data having a directional component within the first region to the audio output having a directional component within the first portion of the video output, and using a second audio mapping to map audio data having a directional component within the second region to the audio output having a directional component within the second portion of the video output.
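The paired video and audio mappings can be sketched as tiling the two feeds side by side and panning each region's audio toward its feed's half of the frame. The gain values and function names are illustrative assumptions, not the described implementation.

```python
import numpy as np

def map_video(first: np.ndarray, second: np.ndarray) -> np.ndarray:
    """First video mapping -> left portion, second -> right portion."""
    return np.concatenate([first, second], axis=1)

def map_audio(mono: np.ndarray, region: str) -> np.ndarray:
    """Pan a mono signal from the given region ('first' or 'second') into a
    stereo pair so its direction matches that region's video portion."""
    left_gain, right_gain = (1.0, 0.2) if region == "first" else (0.2, 1.0)
    return np.stack([mono * left_gain, mono * right_gain], axis=0)
```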