Patent classifications
H04N7/04
INSTRUCTION INPUT APPARATUS WITH PANORAMIC PHOTOGRAPHY FUNCTION
An instruction input apparatus is provided. The instruction input apparatus comprises a keyboard, on which a plurality of buttons are disposed; a camera module, which comprises a supporting bar and a camera, wherein a first end of the supporting bar is connected to the keyboard, the camera is disposed on a second end of the supporting bar so that the camera protrudes from the keyboard, and the camera comprises at least two lens modules for capturing a plurality of image frames; and an image processing unit, which is disposed in the keyboard and is signal-connected to the camera so that the image processing unit receives and processes the image frames for merging the image frames into a panorama image. The camera module and image processing unit thus extend the functionality of the instruction input apparatus with all-viewing-angle photography, so that all desired images can be captured together.
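The abstract's merging step can be illustrated with a minimal sketch. This is not the patented implementation: frames here are lists of pixel columns, and the overlap between adjacent lens modules is found by exact column matching, where a real stitcher would use feature matching and blending. All function names are illustrative.

```python
# Illustrative sketch: merge frames from adjacent lens modules into a panorama
# by deduplicating the widest exactly-matching column overlap between them.

def stitch_pair(left, right):
    """Concatenate two frames, removing the widest exact column overlap."""
    max_overlap = min(len(left), len(right))
    for width in range(max_overlap, 0, -1):
        if left[-width:] == right[:width]:   # overlapping region matches
            return left + right[width:]
    return left + right                      # no overlap found: butt-join

def stitch_panorama(frames):
    """Fold stitch_pair over a left-to-right sequence of frames."""
    panorama = frames[0]
    for frame in frames[1:]:
        panorama = stitch_pair(panorama, frame)
    return panorama

# Two frames sharing columns 'C' and 'D':
print(stitch_panorama([list("ABCD"), list("CDEF")]))  # ['A', 'B', 'C', 'D', 'E', 'F']
```

With more than two lens modules the fold simply continues left to right, which is why the abstract can speak of merging "a plurality of image frames" into one panorama image.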
Method and apparatus to transmit video data
A video system includes at least one video subsystem including a video source coupled to a mobile platform. The video source is configured to capture and transmit video data. A video processing system is configured to receive transmission of the video data from the video subsystem at a plurality of locations, and the video subsystem is configured to transmit the video data to the video processing system automatically whenever the video subsystem is in range of the video processing system at each of the plurality of locations.
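The transfer policy described above can be sketched as follows. This is a hypothetical model, not the patent's implementation: positions are one-dimensional numbers, "in range" is a simple distance test, and all class and parameter names are assumptions made for illustration.

```python
# Hypothetical sketch: a mobile video subsystem buffers captured segments and
# uploads them automatically whenever it is in range of a processing station.

class ProcessingStation:
    def __init__(self, position):
        self.position = position
        self.received = []          # segments delivered by subsystems

    def receive(self, segments):
        self.received.extend(segments)

class VideoSubsystem:
    def __init__(self, transmit_range):
        self.transmit_range = transmit_range
        self.buffer = []            # captured-but-untransmitted segments

    def capture(self, segment):
        self.buffer.append(segment)

    def update_position(self, position, stations):
        """Called as the mobile platform moves; flush to any station in range."""
        for station in stations:
            if abs(position - station.position) <= self.transmit_range:
                station.receive(self.buffer)
                self.buffer = []    # transmitted: clear the local buffer
                break
```

The key behavior is that no operator action triggers the upload: the transfer happens as a side effect of the platform moving within range of any one of the plurality of locations.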
PEER TO PEER COMMUNICATION SYSTEM AND METHOD
A peer-to-peer communication system and method are provided to enable interfacing with an application running on a gaming engine for an avatar simulation or video conference. The system and method establish a real-time peer-to-peer communication link between remotely located users for real-time transmission of audio, video, and data communications. The system and method capture incoming audio and video transmissions from input devices operated by the users while controlling one or more avatars, and transmit, in real time, synchronized audio, video, and data communications to the users over the communication link.
COMMUNICATION TERMINAL, APPLICATION PROGRAM FOR COMMUNICATION TERMINAL, AND COMMUNICATION METHOD
The present invention provides a communication terminal, an application program for a communication terminal, and a communication method, which can record a video during a group call and either store the moving image data in a user's communication terminal or deliver the recorded video data, with voice data added, from the user's communication terminal. When the video recording mode is switched on during a group call, the user's own voice data 50, the intended person's voice data 52, and video recording data 54 are acquired by the communication terminal 10A, and the user's own voice data 50 and the intended person's voice data 52 are added to the video recording data 54, whereby moving image data 56 is generated. A video can therefore be recorded during a group call, and the moving image data 56, which captures the user's experience, can be stored in the user's communication terminal 10A. Furthermore, the user's own voice data 50 and the intended person's voice data 52 are added to the video recording data 54, and the combined data is live-streamed to other communication terminals so that the user's experience is shared with others.
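The mixing step referenced by numerals 50, 52, 54, and 56 in the abstract can be sketched minimally. This is an assumption-laden illustration, not the patent's method: audio is modeled as 16-bit PCM sample lists mixed by clipped addition, and the "moving image data" is a plain dictionary pairing the video frames with the mixed track.

```python
# Illustrative sketch: mix the user's own voice (50) with the other party's
# voice (52) and attach the result to the video recording (54), yielding
# moving image data (56).

def mix_voices(own_voice, other_voice):
    """Sum two equal-length PCM streams sample-by-sample, clipped to 16 bits."""
    return [max(-32768, min(32767, a + b))
            for a, b in zip(own_voice, other_voice)]

def make_moving_image(video_frames, own_voice, other_voice):
    """Attach the mixed audio track to the video track (moving image data 56)."""
    return {"video": video_frames, "audio": mix_voices(own_voice, other_voice)}
```

Because the mixed result is an ordinary container of frames plus audio, the same object can either be stored locally on terminal 10A or handed to a streaming layer for live delivery to the other terminals, matching the two delivery paths the abstract describes.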
METHOD FOR REAL TIME WHITEBOARD EXTRACTION WITH FULL FOREGROUND IDENTIFICATION
A method to extract static user content on a marker board is disclosed. The method includes generating a sequence of samples from a video stream comprising a series of images of the marker board, generating at least one center of mass (COM) of estimated foreground content of each sample in the sequence of samples, detecting, based on a predetermined criterion, a stabilized change of the at least one COM in the sequence of samples, wherein the stabilized change of the at least one COM identifies, in the sequence of samples, a stable sample with new content, generating, in response to the stabilized change of the at least one COM and from the stable sample with new content, a mask of full foreground content, and extracting, based at least on the mask of full foreground content, a portion of the static user content from the video stream.
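The center-of-mass stabilization idea can be sketched as follows. The reading here is an assumption, not the patent's criterion: a "stabilized change" is taken to mean the COM jumps by more than `change_thresh` (new content appeared) and then stays within `stable_eps` for `stable_len` consecutive samples (the writer's hand has left the board). All thresholds and names are illustrative.

```python
# Hedged sketch: detect a stabilized change of the foreground center of mass
# (COM) across a sequence of samples. A sample's estimated foreground is a
# list of (row, col) pixel coordinates.

def center_of_mass(pixels):
    """Mean (row, col) of the foreground pixel coordinates."""
    n = len(pixels)
    return (sum(r for r, _ in pixels) / n, sum(c for _, c in pixels) / n)

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def find_stable_change(samples, change_thresh=2.0, stable_eps=0.5, stable_len=3):
    """Return the index of the first stable sample after a COM jump, or None."""
    coms = [center_of_mass(s) for s in samples]
    for i in range(1, len(coms)):
        if dist(coms[i], coms[i - 1]) > change_thresh:    # new content appeared
            run = coms[i:i + stable_len]
            if len(run) == stable_len and all(
                    dist(p, run[0]) <= stable_eps for p in run):
                return i                                  # COM has settled
    return None
```

In the abstract's pipeline, the sample index returned here is the "stable sample with new content" from which the full-foreground mask is generated and the static user content is extracted.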