Patent classifications
H04N5/9305
APPARATUS FOR GENERATING AND TRANSMITTING ANNOTATED VIDEO SEQUENCES IN RESPONSE TO MANUAL AND IMAGE INPUT DEVICES
In accordance with the invention, a health care information generation and communication system comprises a body part image generation device for generating body part image information representing a body part of a patient. A body part image database is coupled to receive the output of the body part image generation device and store the image information as a stored image. A stored image playback device is coupled to the body part image database and generates a recovered image from the image information. An image control device is coupled to the stored image playback device to select a desired portion of the body part image information and output the selected portion as a selected image. A video generation device is coupled to the image control device to receive the selected image from the stored image playback device, is coupled to a microphone, and combines the selected image and the microphone audio into an output video. The output video thus comprises visual and audible elements. A video database is coupled to receive the visual and audible elements of the output video from the output of the video generation device and store the visual and audible elements. A video player presents a display of at least a portion of the visual and audible elements.
MOBILE ROBOT, REMOTE TERMINAL, CONTROL PROGRAM FOR MOBILE ROBOT, CONTROL PROGRAM FOR REMOTE TERMINAL, CONTROL SYSTEM, CONTROL METHOD FOR MOBILE ROBOT, AND CONTROL METHOD FOR REMOTE TERMINAL
A mobile robot including an image pickup unit further includes an accumulation unit configured to accumulate imaging data taken in the past, a reception unit configured to receive designation of a spatial area from a remote terminal, and a processing unit configured, when the image pickup unit can shoot the spatial area received by the reception unit, to perform shooting and transmit the obtained imaging data to the remote terminal, and, when the shooting is impossible, to transmit imaging data including an image of the spatial area accumulated in the accumulation unit to the remote terminal and start a moving process so that the image pickup unit can shoot the spatial area.
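The decision the processing unit makes can be sketched as follows. This is an illustrative sketch only, not the patented implementation; the class names (`Robot`, `Terminal`) and methods (`can_shoot`, `capture`, `start_moving_toward`) are assumptions introduced for the example.

```python
class Terminal:
    """Stand-in for the remote terminal that receives imaging data."""
    def __init__(self):
        self.received = []
    def send(self, image):
        self.received.append(image)

class Robot:
    """Stand-in for the mobile robot with an image pickup unit and archive."""
    def __init__(self, visible_areas, archive):
        self.visible_areas = set(visible_areas)
        self.archive = archive           # accumulated past imaging data, keyed by area
        self.moving_toward = None
    def can_shoot(self, area):
        return area in self.visible_areas
    def capture(self, area):
        return f"live:{area}"
    def start_moving_toward(self, area):
        self.moving_toward = area

def respond_to_area_request(robot, area, terminal):
    if robot.can_shoot(area):
        terminal.send(robot.capture(area))   # shoot now and transmit fresh data
    else:
        terminal.send(robot.archive[area])   # transmit accumulated past image
        robot.start_moving_toward(area)      # start moving to shoot it live

robot = Robot(visible_areas={"kitchen"}, archive={"bedroom": "past:bedroom"})
terminal = Terminal()
respond_to_area_request(robot, "kitchen", terminal)
respond_to_area_request(robot, "bedroom", terminal)
print(terminal.received)    # ['live:kitchen', 'past:bedroom']
print(robot.moving_toward)  # 'bedroom'
```

The fallback branch both satisfies the request immediately (with archived data) and begins repositioning, which matches the two-part behavior the abstract describes.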
Time-correlated ink
Techniques for time-correlated ink are described. According to various embodiments, ink input is correlated to content. For instance, ink input received during playback of a video is timestamped. According to various embodiments, ink input displayed over content is removed after input ceases. Further, ink input is displayed during playback of the portion of content to which the ink input is time correlated.
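The behavior described above can be sketched with a simple data structure: each stroke records the playback time at which inking began and ceased, and is drawn only while playback is within the stroke's window. This is a hypothetical sketch; the `InkStroke` fields and the `linger` interval are assumptions, not the described implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class InkStroke:
    points: List[Tuple[float, float]]  # screen coordinates of the stroke
    start_time: float                  # playback time when inking began (s)
    end_time: float                    # playback time when inking ceased (s)
    linger: float = 2.0                # seconds the stroke remains visible after input ceases

def visible_strokes(strokes: List[InkStroke], playback_time: float) -> List[InkStroke]:
    """Return the strokes that should be drawn at the current playback position."""
    return [s for s in strokes
            if s.start_time <= playback_time <= s.end_time + s.linger]

strokes = [
    InkStroke(points=[(10, 10), (20, 25)], start_time=5.0, end_time=6.5),
    InkStroke(points=[(40, 40), (50, 55)], start_time=30.0, end_time=31.0),
]
print(len(visible_strokes(strokes, 6.0)))   # 1: first stroke is active
print(len(visible_strokes(strokes, 12.0)))  # 0: both strokes out of window
```

During later playback, re-running `visible_strokes` at each frame reproduces the ink at the portion of content to which it was time correlated, and removes it once the window (plus the linger interval) has passed.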
VIDEO PRODUCTION METHOD, COMPUTER DEVICE, AND STORAGE MEDIUM
A video production method and apparatus, a storage medium, and a computer device are disclosed. The method includes: receiving a follow-shot instruction in a case that reference video content is played on a video play interface, the follow-shot instruction including a reference video identifier; displaying a first video display region and a second video display region on a terminal screen; playing the reference video content in the first video display region, and recording displayed real-time video content in the second video display region; and generating a target video based on the recorded real-time video content and the reference video content.
Spatialized rendering of real-time video data to 3D space
A 360 video is presented in a three dimensional (3D) environment. Rather than simply stacking graphics in two dimensions, graphics are placed using both 3D models and textures. The 3D models may be altered so that the texture is aligned in three dimensions into the 360 video space. An instance of a 3D model combined with a key and fill texture forms a group. The group has a 3D orientation and placement, so the group, as aligned into the 360 degree video space, may not be visible from all user look directions. The inserted groups, including live video as well as static graphics, may be projected into either mono or stereo views to give the viewer a sense of space, depth, and orientation.
Media effects using predicted facial feature locations
An effects application receives a video of a face and detects a bounding box for each frame indicating the location and size of the face in each frame. In one or more reference frames, the application uses an algorithm to determine locations of facial features in the frame. The application then normalizes the feature locations relative to the bounding box and saves the normalized feature locations. In other frames (e.g., target frames), the application obtains the bounding box and then predicts the locations of the facial features based on the size and location of the bounding box and the normalized feature locations calculated in the reference frame. The predicted locations can be made available to an augmented reality function that overlays graphics in a video stream based on face tracking in order to apply a desired effect to the video.
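The normalize-then-predict step above can be sketched in a few lines. This is an illustrative sketch under assumptions: the `(x, y, w, h)` box format and the feature names are invented for the example, and a real application would run a feature detector rather than use hard-coded points.

```python
def normalize_features(features, box):
    """Express feature locations as fractions of the face bounding box."""
    x, y, w, h = box
    return {name: ((fx - x) / w, (fy - y) / h)
            for name, (fx, fy) in features.items()}

def predict_features(normalized, box):
    """Map normalized feature locations into a target frame's bounding box."""
    x, y, w, h = box
    return {name: (x + nx * w, y + ny * h)
            for name, (nx, ny) in normalized.items()}

# Reference frame: the feature detector located eyes inside box (100, 100, 200, 200).
ref_features = {"left_eye": (150.0, 170.0), "right_eye": (250.0, 170.0)}
norm = normalize_features(ref_features, (100, 100, 200, 200))

# Target frame: only the bounding box is detected (the face moved and shrank).
pred = predict_features(norm, (300, 50, 100, 100))
print(pred["left_eye"])  # (325.0, 85.0)
```

Because the expensive feature detector runs only on reference frames, target frames need just a bounding-box detection, which is the efficiency the abstract implies.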
Dynamic haptic generation based on detected video events
A method or system receives input media including at least video data, and a video event within the video data is detected. Related data that is associated with the detected video event is collected, and one or more feature parameters are configured based on the collected related data. The type of video event is determined, and a set of feature parameters is selected based on the type of video event. A haptic effect is then automatically generated based on the selected set of feature parameters.
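The select-parameters-by-event-type step might look like the following. This is a hypothetical sketch: the event types, parameter names, and the loudness-based configuration rule are all assumptions invented for illustration.

```python
# Per-event-type feature parameter sets (illustrative values only).
FEATURE_PARAMS = {
    "explosion": {"intensity": 1.0, "frequency_hz": 60, "duration_ms": 400},
    "collision": {"intensity": 0.7, "frequency_hz": 120, "duration_ms": 150},
}
DEFAULT_PARAMS = {"intensity": 0.3, "frequency_hz": 80, "duration_ms": 100}

def generate_haptic_effect(event_type, related_data):
    """Select parameters by event type, then configure them from related data."""
    params = dict(FEATURE_PARAMS.get(event_type, DEFAULT_PARAMS))
    # Configure parameters from collected related data (e.g., event loudness).
    loudness = related_data.get("loudness", 1.0)
    params["intensity"] = min(1.0, params["intensity"] * loudness)
    return params

effect = generate_haptic_effect("collision", {"loudness": 0.5})
print(effect["intensity"])  # 0.35
```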
Automatic annotation of audio-video sequences
In some examples, a facility augments an audio-video sequence playback display with respect to a current playback position of the audio-video sequence within a time index range of the sequence. For a first portion of the time index range of the sequence containing the current playback position (CPP), the facility performs automatic voice transcription against the audio component to obtain speech text for at least one speaker. For a second portion of the time index range of the sequence containing the CPP, the facility performs automatic image recognition against the video component to obtain identifying information identifying at least one person, object, or location. Simultaneously with the sequence playback display and proximate to the sequence playback display, the facility displays one or more annotations each based upon (a) at least a portion of the obtained speech text, (b) at least a portion of the obtained identifying information, or (c) both.
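Selecting which annotations to show around the current playback position could be sketched as below. The window lengths and the shapes of the transcript and recognition inputs are assumptions introduced for the example, not details from the abstract.

```python
def annotations_near(cpp, speech_segments, recognitions,
                     speech_window=10.0, image_window=5.0):
    """Return annotation strings whose time ranges contain or neighbor the CPP."""
    notes = []
    for start, end, text in speech_segments:   # (t0, t1, transcribed speech text)
        if start - speech_window <= cpp <= end + speech_window:
            notes.append(f"said: {text}")
    for t, label in recognitions:              # (timestamp, identified person/object/location)
        if abs(t - cpp) <= image_window:
            notes.append(f"shown: {label}")
    return notes

speech = [(12.0, 15.0, "welcome everyone")]
seen = [(14.0, "Alice"), (60.0, "Eiffel Tower")]
print(annotations_near(14.0, speech, seen))  # ['said: welcome everyone', 'shown: Alice']
```

Running this per frame keeps the displayed annotations tied to the portions of the time index range that contain the current playback position, as the abstract requires.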
SYSTEM AND METHOD FOR PRESENTING VIRTUAL REALITY CONTENT TO A USER
This disclosure describes a system configured to present primary and secondary, tertiary, etc., virtual reality content to a user. Primary virtual reality content may be displayed to a user, and, responsive to the user turning his view away from the primary virtual reality content, a sensory cue is provided to the user that indicates to the user that his view is no longer directed toward the primary virtual reality content, and secondary, tertiary, etc., virtual reality content may be displayed to the user. Primary virtual reality content may resume when the user returns his view to the primary virtual reality content. Primary virtual reality content may be adjusted based on a user's interaction with the secondary, tertiary, etc., virtual reality content. Secondary, tertiary, etc., virtual reality content may be adjusted based on a user's progression through the primary virtual reality content, or interaction with the primary virtual reality content.
PROVISIONING OF DIGITAL CONTENT FOR REPRODUCTION IN A MOBILE DEVICE
Technologies are provided to generate digital content for reproduction in mobile devices. The digital content can be provided in a memory card that includes a non-volatile memory device and processing circuitry. The digital content can be generated using a combination of source digital content accessed from a storage assembly and at least one of reproduction information or production configuration information. The reproduction information conveys one or multiple configurations for playback of the source digital content. The production configuration information conveys one or multiple satisfactory configurations for content reproduction at a mobile device. The digital content is formatted according to a defined video format and includes 3D content. In some instances, the digital content also can include 4D content, which includes 3D content and information for controlling haptic effects related to the 3D content. The digital content can be provided with digital rights management (DRM) information.