Patent classifications
H04L65/403
File sharing system and method
A file sharing system and methods therefor share one or more files without requiring that the files be sent to or stored on a server. The file sharing system enables files to be shared from a user device, allowing users to maintain control of the files by storing and sharing files off the cloud. Sharing and file access are typically effectuated via a server and one or more links provided by the server. File access is limited to selected file access types.
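The mechanism this abstract describes could be sketched as follows. The class and method names are illustrative assumptions, not taken from the patent: the server only issues and resolves links that map to a file's location on the user device and to the owner-selected access types; it never stores the file itself.

```python
import uuid

class SharingServer:
    """Hypothetical link registry: maps a share link to the owning
    device, the file path on that device, and the permitted access types.
    The file content never passes through or rests on this server."""

    def __init__(self):
        self._links = {}

    def create_link(self, device_id, file_path, access_types):
        # access_types is owner-selected, e.g. {"view"} or {"view", "download"}
        token = uuid.uuid4().hex
        self._links[token] = (device_id, file_path, frozenset(access_types))
        return token

    def resolve(self, token, requested_access):
        """Return (device_id, file_path) to fetch from, or None if the
        link is unknown or the requested access type is not permitted."""
        entry = self._links.get(token)
        if entry is None:
            return None
        device_id, file_path, allowed = entry
        if requested_access not in allowed:
            return None  # file access is limited to selected access types
        return device_id, file_path
```

A recipient following the link would then fetch the file directly from the user device, keeping the content off the cloud.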
Preventing audio delay-induced miscommunication in audio/video conferences
Embodiments for delay-induced miscommunication reduction are provided. The embodiment may include capturing data streams transmitted between participants in an A/V exchange; translating, on a sender device prior to transmission to a recipient device, an audio stream within the data streams to text; timestamping, on the sender device prior to transmission to the recipient device, each word in the translated audio stream; transmitting the audio stream and the sender-side translated and timestamped audio stream to the recipient device; translating, on the recipient device, the transmitted audio stream to text; timestamping, on the recipient device, each word in the translated audio stream; determining a lag exists in the A/V exchange based on a comparison of each timestamp for corresponding words on the sender-side translated and timestamped audio stream and the recipient-side translated and timestamped audio stream; and generating a true transcript of an intended exchange between the participants based on the comparison.
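The timestamp-comparison step above can be sketched as follows. This is a minimal illustration, with an assumed lag threshold and data shape (lists of per-word `(word, timestamp)` pairs); it is not the patent's implementation.

```python
LAG_THRESHOLD_S = 1.5  # assumed tolerance before an exchange counts as lagged

def detect_lag(sender_words, recipient_words, threshold=LAG_THRESHOLD_S):
    """Compare per-word timestamps from the sender-side and recipient-side
    translated transcripts of the same utterance.

    Each argument is a list of (word, timestamp_seconds) tuples.
    Returns (lag_detected, true_transcript).
    """
    lag_detected = False
    for (s_word, s_ts), (r_word, r_ts) in zip(sender_words, recipient_words):
        # Corresponding words should match; the signal is the timestamp gap.
        if s_word == r_word and (r_ts - s_ts) > threshold:
            lag_detected = True
            break
    # The sender-side transcript reflects what was actually said, so it
    # serves as the true transcript of the intended exchange.
    true_transcript = " ".join(w for w, _ in sender_words)
    return lag_detected, true_transcript
```

For example, sender-side words stamped at 0.0 s and 0.4 s that arrive stamped at 2.0 s and 2.5 s on the recipient side would exceed the 1.5 s threshold and flag a lag.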
Integrated Assemblies Containing Ferroelectric Transistors, and Methods of Forming Integrated Assemblies
Some embodiments include a ferroelectric transistor having an active region which includes a first source/drain region, a second source/drain region vertically offset from the first source/drain region, and a channel region between the first and second source/drain regions. A first conductive gate is operatively adjacent to the channel region of the active region. Insulative material is between the first conductive gate and the channel region. A second conductive gate is adjacent to the first conductive gate. Ferroelectric material is between the first and second conductive gates. Some embodiments include integrated memory. Some embodiments include methods of forming integrated assemblies.
AUTOMATED PAUSING OF AUDIO AND/OR VIDEO DURING A CONFERENCING SESSION
Embodiments include an audio analyzer to analyze audio data received from a user computing system operating as a participant in a conference managed by a conferencing application and to detect one or more audio pause conditions; a video analyzer to analyze video data received from the user computing system and to detect one or more video pause conditions; and a conferencing manager to automatically pause distribution of the audio data to other participants of the conference when the one or more audio pause conditions are detected and to automatically pause distribution of the video data to the other participants when the one or more video pause conditions are detected.
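The analyzer/manager split described above might look like the following sketch. The specific condition names are illustrative assumptions; the abstract does not enumerate them.

```python
# Assumed example pause conditions (the patent abstract leaves these open).
AUDIO_PAUSE_CONDITIONS = {"keyboard_typing", "side_conversation"}
VIDEO_PAUSE_CONDITIONS = {"participant_left_frame", "screen_glare"}

class ConferencingManager:
    """Withholds audio/video from other participants while any
    detected pause condition is active."""

    def __init__(self):
        self.audio_paused = False
        self.video_paused = False

    def on_audio_conditions(self, detected):
        # Pause audio distribution when any audio pause condition is detected.
        self.audio_paused = bool(AUDIO_PAUSE_CONDITIONS & set(detected))

    def on_video_conditions(self, detected):
        # Pause video distribution when any video pause condition is detected.
        self.video_paused = bool(VIDEO_PAUSE_CONDITIONS & set(detected))

    def distribute(self, audio_frame, video_frame):
        # Paused streams are replaced with None (not sent to participants).
        return (None if self.audio_paused else audio_frame,
                None if self.video_paused else video_frame)
```

In this sketch the audio and video analyzers would feed `on_audio_conditions` and `on_video_conditions` independently, so either stream can be paused without affecting the other.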
Remote Assistance Method and System, and Electronic Device
A method, performed by a first terminal in a video call process between the first terminal and a second terminal, includes sending an image including a target device to a server, receiving device information corresponding to the target device sent by the server, virtualizing an operation interface based on the received device information, and displaying the operation interface on a current video call screen, where the operation interface is an operation interface of a control panel or a remote control of the target device, and displaying, on the current video call screen based on data from the second terminal, one or more operations performed on the operation interface by a user of the second terminal on the second terminal.
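The flow on the first terminal could be sketched roughly as below. The server stub, device-information shape, and function names are all assumptions for illustration only.

```python
class FakeServer:
    """Stands in for the server that recognizes the target device
    from an image sent by the first terminal."""
    def identify_device(self, image):
        # Pretend the image was recognized as an air conditioner remote.
        return {"model": "AC-100", "buttons": ["power", "temp_up", "temp_down"]}

def virtualize_interface(device_info):
    # Build a virtual operation interface (control panel / remote control)
    # from the device information returned by the server.
    return {button: False for button in device_info["buttons"]}

def apply_remote_operation(interface, operation):
    # Display an operation performed by the second terminal's user
    # on their copy of the interface.
    if operation in interface:
        interface[operation] = True
    return interface

server = FakeServer()
info = server.identify_device(image=b"frame-bytes")
ui = virtualize_interface(info)
ui = apply_remote_operation(ui, "power")
```

The key point of the flow is that the interface is virtualized locally from server-supplied device information, then mirrors operations arriving from the second terminal.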
Region of interest-based resolution normalization
Normalized resolutions are determined for first and second regions of interest of an initial video stream captured by a video capture device located within a physical space. The first region of interest is associated with a first conference participant within the physical space and the second region of interest is associated with a second conference participant within the physical space. Instructions are transmitted to the video capture device to cause the video capture device to capture, at the normalized resolutions, a first video stream associated with the first region of interest and a second video stream associated with the second region of interest. The first and second video streams conform sizes and quality levels of the first and second conference participants within separate user interface tiles of a conferencing software user interface to which the first and second video streams are output.
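One plausible reading of "normalized resolutions" is sketched below: each region of interest is captured at the scale needed to fill a common tile size, so a distant (small-in-frame) participant is captured at a higher relative resolution than a near one. The tile size and function name are assumptions, not from the patent.

```python
TARGET_TILE = (640, 360)  # assumed user-interface tile size in pixels

def normalized_resolutions(rois, target=TARGET_TILE):
    """rois: list of (width, height) pixel sizes, one per region of interest.

    Returns one (width, height) capture resolution per ROI such that each
    ROI fills the target tile at comparable size and quality, preserving
    aspect ratio.
    """
    resolutions = []
    for w, h in rois:
        # Scale factor needed for this ROI to cover the target tile.
        scale = max(target[0] / w, target[1] / h)
        resolutions.append((round(w * scale), round(h * scale)))
    return resolutions
```

The capture device would then be instructed to produce one stream per ROI at these resolutions, rather than cropping a single fixed-resolution stream.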