Patent classifications
H04N9/82
Dynamically modeling an object in an environment from different perspectives
An object can be simulated in an environment using a three-dimensional model of the object as viewed from a virtual camera at a position in the environment. The position in the environment can be determined using user input or through visual analysis of a video recording. Composite frames depicting the modeled object may be played back based on the orientation of the playback device.
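The abstract's playback step (selecting a composite frame based on the playback device's orientation) can be sketched as follows. This is an illustrative assumption, not the patent's implementation: frames are keyed by the virtual-camera yaw they were rendered from, and the frame with the smallest circular angular distance to the device's yaw is chosen.

```python
def nearest_frame(frames: dict, device_yaw: float) -> float:
    """Return the key of the composite frame whose virtual-camera yaw is
    closest to the playback device's current yaw (circular distance)."""
    def angular_distance(a: float, b: float) -> float:
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)
    return min(frames, key=lambda yaw: angular_distance(yaw, device_yaw))

# Hypothetical composite frames pre-rendered at four virtual-camera yaws.
frames = {0.0: "front", 90.0: "right", 180.0: "back", 270.0: "left"}
```

A device yaw of 350 degrees would select the 0-degree frame, since 10 degrees of circular distance beats the 80 degrees to the 270-degree frame.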
MEDIA CLIP CREATION AND DISTRIBUTION SYSTEMS, APPARATUS, AND METHODS
Various embodiments for creating media clips are disclosed. In one example, a method is performed by a server for managing the creation and distribution of media clips, where the server associates a content capture device with an event, the content capture device for recording at least a portion of the event, receives a tag notification from a content tagging device via a network interface, generates a media clip creation command, sends the media clip creation command to the content capture device via the network interface, and receives a media clip created by the content capture device in response to receiving the media clip creation command.
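The claimed server flow can be sketched as a minimal state machine. All class and field names here are assumptions for illustration; the patent does not specify an API. The sketch covers the three steps: associating capture devices with an event, turning a tag notification into clip-creation commands, and collecting the resulting clips.

```python
class MediaClipServer:
    """Minimal sketch of the described server: associate capture devices
    with events, react to tag notifications, and collect returned clips."""

    def __init__(self):
        self.event_devices = {}   # event_id -> set of capture-device ids
        self.clips = []           # media clips received back from devices

    def associate(self, event_id: str, device_id: str) -> None:
        """Associate a content capture device with an event."""
        self.event_devices.setdefault(event_id, set()).add(device_id)

    def on_tag_notification(self, event_id: str, timestamp: float) -> list:
        """On a tag notification, generate one clip-creation command per
        capture device associated with the tagged event."""
        return [{"device": d, "event": event_id, "at": timestamp}
                for d in sorted(self.event_devices.get(event_id, ()))]

    def receive_clip(self, clip: dict) -> None:
        """Store a media clip created in response to a command."""
        self.clips.append(clip)
```

In use, a tag notification for an event with two associated cameras would yield two commands, and each camera's clip would later be received and stored.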
Image capturing method and display method for recognizing a relationship among a plurality of images displayed on a display screen
An image capturing apparatus includes a capture part that acquires image data and a display that displays an image based on the image data. First information is acquired that includes at least either an azimuth angle or an elevation angle as a direction of the image, and at least either an angle of view of the image or angle-of-view related information for calculating the angle of view. When a second direction of a second image is included within a range of a first angle of view in a first direction of a first image, the second image is associated with the first image, the first image is displayed on the display, and the second image, or second information indicating the second image, is displayed overlapped on the first image within the display.
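The core association test, whether a second image's direction falls within the first image's angle of view, can be sketched with a simple rectangular field-of-view assumption (the patent does not prescribe this geometry): the azimuth difference is compared circularly against half the horizontal angle of view, and the elevation difference against half the vertical one.

```python
def within_view(first_azimuth: float, first_elevation: float,
                fov_h: float, fov_v: float,
                second_azimuth: float, second_elevation: float) -> bool:
    """Return True when the second image's direction lies inside the
    first image's angle of view (rectangular-FOV assumption)."""
    def circular_diff(a: float, b: float) -> float:
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)
    return (circular_diff(first_azimuth, second_azimuth) <= fov_h / 2.0
            and abs(first_elevation - second_elevation) <= fov_v / 2.0)
```

The circular azimuth comparison matters near north: a first image at azimuth 350 degrees with a 60-degree horizontal angle of view still contains a second image at azimuth 10 degrees.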
VIDEO PROCESSING SYSTEM
A video processing system includes: an object movement information acquiring means for detecting a moving object moving in a plurality of segment regions from video data obtained by shooting a monitoring target area, and acquiring movement segment region information as object movement information, the movement segment region information representing segment regions where the detected moving object has moved; an object movement information and video data storing means for storing the object movement information in association with the video data corresponding to the object movement information; a retrieval condition inputting means for inputting a sequence of the segment regions as a retrieval condition; and a video data retrieving means for retrieving the object movement information in accordance with the retrieval condition and outputting video data stored in association with the retrieved object movement information, the object movement information being stored by the object movement information and video data storing means.
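The retrieval step can be sketched as a contiguous-subsequence match over stored movement trajectories. The store layout and matching rule below are illustrative assumptions; the patent only requires that video data be retrievable by a sequence of segment regions.

```python
def matches(query: list, movement: list) -> bool:
    """True if the queried sequence of segment-region ids appears as a
    contiguous run in the recorded movement trajectory."""
    n = len(query)
    return any(movement[i:i + n] == query
               for i in range(len(movement) - n + 1))

def retrieve(store: list, query: list) -> list:
    """Return video data whose stored object-movement information
    satisfies the retrieval condition (store holds pairs of
    trajectory and associated video data)."""
    return [video for trajectory, video in store
            if matches(query, trajectory)]
```

For example, querying the region sequence B, C against a clip whose object moved through regions A, B, C, D would return that clip, while a clip whose object jumped from B directly to D would not match.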
Wearable Multimedia Device and Cloud Computing Platform with Application Ecosystem
Systems, methods, devices and non-transitory, computer-readable storage mediums are disclosed for a wearable multimedia device and cloud computing platform with an application ecosystem for processing multimedia data captured by the wearable multimedia device. In an embodiment, a method comprises: receiving, by one or more processors of a cloud computing platform, context data from a wearable multimedia device, the wearable multimedia device including at least one data capture device for capturing the context data; creating a data processing pipeline with one or more applications based on one or more characteristics of the context data and a user request; processing the context data through the data processing pipeline; and sending output of the data processing pipeline to the wearable multimedia device or other device for presentation of the output.
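The pipeline-creation step can be sketched as selecting registered applications by the context data's characteristics and the user request, then composing them. The registry structure and keying scheme are assumptions for illustration only.

```python
def build_pipeline(context_type: str, user_request: str, registry: dict):
    """Assemble a data processing pipeline from the applications
    registered for this (context characteristic, user request) pair."""
    apps = [app for (ctype, req), app in registry.items()
            if ctype == context_type and req == user_request]

    def pipeline(data):
        # Run the selected applications in sequence over the context data.
        for app in apps:
            data = app(data)
        return data

    return pipeline
```

Usage might register an enhancement application for photo context data and route a user's "enhance" request through it.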
Methods and apparatus for re-timing and scaling input video tracks
The techniques described herein relate to methods, apparatus, and computer readable media configured to access multimedia data comprising a hierarchical track structure comprising at least a first track at a first level of the hierarchical track structure comprising first media data, wherein the first media data comprises a first sequence of video media units, and a second track at a second level in the hierarchical track structure different than the first level of the first track, the second track comprising metadata specifying a re-timing derivation operation. Output video media units are generated according to the second track, comprising performing the re-timing derivation operation on the first sequence of video media units to modify a timing of the first sequence of video media units by removing one or more video media units associated with the re-timing derivation operation and/or shifting timing information of the first sequence of video media units.
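The re-timing derivation, removing video media units and shifting the timing of the remainder, can be sketched as below. The unit representation (dicts with `id` and `duration` fields) and the contiguous re-stamping rule are assumptions, not the specification's track syntax.

```python
def retime(units: list, remove_indices: list, shift: int = 0) -> list:
    """Sketch of a re-timing derivation: drop the video media units at
    the given indices, then re-stamp the remaining units so the output
    timeline is contiguous, with an optional starting offset."""
    removed = set(remove_indices)
    kept = [u for i, u in enumerate(units) if i not in removed]
    out, t = [], shift
    for u in kept:
        out.append({"id": u["id"], "time": t})
        t += u["duration"]
    return out
```

Removing the middle unit of three 40-ms units yields two output units at times 0 and 40, with no gap where the removed unit was.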
Systems, methods, and computer-readable storage media for controlling aspects of a robotic surgical device and viewer adaptive stereoscopic display
A system includes a robotic arm, an autostereoscopic display, a user image capture device, an image processor, and a controller. The robotic arm is coupled to a patient image capture device. The autostereoscopic display is configured to display an image of a surgical site obtained from the patient image capture device. The image processor is configured to identify a location of at least part of a user in an image obtained from the user image capture device. The controller is configured to, in a first mode, adjust a three-dimensional aspect of the image displayed on the autostereoscopic display based on the identified location, and, in a second mode, move the robotic arm or instrument based on a relationship between the identified location and the surgical site image.
Playback device, playback method, and recording medium
A decoding system decodes a video stream, which is encoded video information. The decoding system includes a decoder that acquires the video stream and generates decoded video information, and a maximum luminance information acquirer that acquires, in a case where a dynamic range of luminance of the video stream is a second dynamic range that is wider than a first dynamic range, maximum luminance information indicating the maximum luminance of the video stream from the video stream. The decoding system also includes an outputter that outputs the decoded video information and the maximum luminance information. Where the dynamic range of luminance of the video stream is expressed by the maximum luminance of all pictures in the video stream as the maximum luminance information, the outputter outputs the decoded video information, along with the maximum luminance information indicating the maximum luminance of all pictures in the video stream.
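The maximum-luminance metadata described above can be sketched as a per-stream reduction over per-picture peak luminances. The nit values and the HDR threshold below are hypothetical; the patent does not fix specific numbers, only that the metadata is emitted when the stream's dynamic range is the wider one.

```python
def stream_max_luminance(picture_luminances: list, hdr_threshold: int = 1000):
    """Return the maximum luminance across all pictures in the stream
    when the stream's peak exceeds a (hypothetical) HDR threshold in
    nits; otherwise return None, since the metadata accompanies only
    the wider dynamic range."""
    peak = max(picture_luminances)
    return peak if peak > hdr_threshold else None
```

A stream whose brightest picture reaches 4000 nits would carry a maximum-luminance value of 4000 alongside the decoded video, while a 300-nit stream would carry none.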
Breath analyzer, system, and computer program for authenticating, preserving, and presenting breath analysis data
A breath analyzer, a system, and a computer program for administering a breath analysis to a donor and recording a breath analysis result. Embodiments of the invention authenticate that the breath analysis was performed correctly, preserves the breath analysis results by communicating with other devices, and presents the breath analysis results by superimposing them on recorded video data. The breath analyzer includes a breath receptor for receiving a breath sample from the donor, an analyzing element for determining a breath analysis result, a communications element for sending information indicative of the breath analysis result to a recording device manager, and a housing for securing the components in a handheld device. The system comprises the breath analyzer, a recording device manager for synchronizing the recordings, and at least one ancillary camera for recording the breath analysis.