Patent classifications
H04N5/92
COMBINING VIDEO STREAMS HAVING DIFFERENT INFORMATION-BEARING LEVELS
A method includes recording a first video stream characterized by a first value of a first quality characteristic. The method includes determining that the first video stream satisfies a trigger criterion. The trigger criterion characterizes a threshold amount of video content change information. The method includes, in response to determining that the first video stream satisfies the trigger criterion, obtaining a second video stream characterized by a second value of a second quality characteristic. The second video stream includes scene information also included in the first video stream. The second value of the second quality characteristic is indicative of a higher quality video stream than the first value of the first quality characteristic. The method includes generating a third video stream by adding information from the second video stream to the first video stream. The third video stream corresponds to a higher quality version of the first video stream.
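The claimed flow (record low-quality stream, test a change-based trigger, fetch a higher-quality stream of the same scene, merge) can be sketched minimally in Python. All identifiers here (`change_amount`, `THRESHOLD`, `fetch_high_quality`, `enhance`) are illustrative stand-ins, not names from the patent, and frames are modeled as plain integers:

```python
THRESHOLD = 10  # assumed trigger criterion: total frame-to-frame change

def change_amount(stream):
    """Sum of absolute differences between consecutive frame values."""
    return sum(abs(b - a) for a, b in zip(stream, stream[1:]))

def fetch_high_quality(stream):
    """Stand-in for obtaining a second, higher-quality stream of the scene."""
    return [(v, v) for v in stream]  # pretend each frame carries extra detail

def enhance(first_stream):
    if change_amount(first_stream) < THRESHOLD:
        return first_stream  # trigger criterion not satisfied
    second = fetch_high_quality(first_stream)
    # third stream: the first stream augmented with information from the second
    return [{"base": f, "detail": s} for f, s in zip(first_stream, second)]

low = [0, 5, 12, 3]       # change = 5 + 7 + 9 = 21, satisfies the trigger
print(len(enhance(low)))  # -> 4
```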
METHOD FOR GENERATING FILE INCLUDING IMAGE DATA AND MOTION DATA, AND ELECTRONIC DEVICE THEREFOR
An electronic device includes a first image sensor for acquiring image data, at least one sensor for acquiring motion information about an object included in the image data, a memory, and a processor operatively connected to the first image sensor, the at least one sensor, and the memory. The processor can acquire image data at a first rate through the first image sensor and acquire, through the at least one sensor, second data on the object at a second rate higher than the first rate. The second data corresponds to the image data and is acquired while the image data is acquired. The processor can further generate a package file including the acquired image data and the second data.
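A minimal sketch of the two-rate acquisition and packaging idea, assuming the "package file" is just a bundle of the two timestamped streams; `acquire` and `make_package` are hypothetical names, not from the patent:

```python
def acquire(duration_s, rate_hz, source):
    """Sample `source(t)` at `rate_hz` over `duration_s` seconds (timestamps in ms)."""
    n = int(duration_s * rate_hz)
    return [(int(i / rate_hz * 1000), source(i / rate_hz)) for i in range(n)]

def make_package(duration_s, image_rate, motion_rate):
    assert motion_rate > image_rate  # the second rate is higher than the first
    images = acquire(duration_s, image_rate, lambda t: f"frame@{t:.3f}")
    motion = acquire(duration_s, motion_rate, lambda t: {"t": t, "dx": 0.0})
    return {"images": images, "motion": motion}  # one package file

pkg = make_package(1, image_rate=30, motion_rate=120)
print(len(pkg["images"]), len(pkg["motion"]))  # -> 30 120
```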
Video playback system, video playback method, and video playback program
The problem to be solved is realizing a moving-image playback system that can successively jump to desired scenes in a sports game on the basis of at least the score. The system comprises an input means that inputs a time stamp corresponding to play result information including at least a score; a display means that displays a plurality of display objects, each indicating the play result information and corresponding to a time stamp; and a management means that associates a moving image with the plurality of display objects. The display means plays back the moving image from the time stamp in response to designation of a display object as a turning point.
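The core idea, timestamped score events acting as seek points, can be sketched in a few lines of Python. The event data and the `play_from` function are illustrative assumptions, not from the patent:

```python
score_events = [
    (12.0, "1-0"),
    (47.5, "1-1"),
    (88.2, "2-1"),
]  # (time stamp in seconds, score after the play)

# one display object per play result, carrying its time stamp
display_objects = [{"label": s, "ts": t} for t, s in score_events]

def play_from(obj):
    """Play back the moving image starting at the designated object's time stamp."""
    return f"seek video to {obj['ts']}s ({obj['label']})"

print(play_from(display_objects[1]))  # -> seek video to 47.5s (1-1)
```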
Image processing device, image processing method, and image processing program
To implement a video processing device, a video processing method, and a video processing program capable of estimating a movement vector from video content and providing processing information based on that vector to a haptic device or other force-sense presentation device, a video processing device according to the present disclosure includes a scene identification unit that estimates scene class information, i.e., information identifying the scene class of the video content, and a plurality of movement information estimation units that estimate a movement vector from the video content. The one movement information estimation unit selected from the plurality according to the scene class identified by the scene class information estimates the movement vector.
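The selection mechanism, classify the scene, then dispatch to the estimator registered for that class, is a simple dispatch table. The toy classifier and estimators below are invented for illustration only (frames are scalars, and monotone sequences are treated as camera pans):

```python
def estimate_global_pan(frames):
    """Toy estimator: mean frame-to-frame shift (suits camera-motion scenes)."""
    diffs = [b - a for a, b in zip(frames, frames[1:])]
    return sum(diffs) / len(diffs)

def estimate_object_motion(frames):
    """Toy estimator: overall excursion (suits object-motion scenes)."""
    return max(frames) - min(frames)

ESTIMATORS = {"driving": estimate_global_pan, "sports": estimate_object_motion}

def classify_scene(frames):
    # stand-in scene identification unit
    return "driving" if frames == sorted(frames) else "sports"

def movement_vector(frames):
    scene = classify_scene(frames)           # scene class information
    return scene, ESTIMATORS[scene](frames)  # run only the selected estimator

print(movement_vector([0, 2, 4, 6]))  # -> ('driving', 2.0)
```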
SURVEILLANCE CAMERA SYSTEM AND METHOD FOR OPERATING SAME
A surveillance camera system according to an embodiment of the present disclosure comprises: multiple surveillance cameras for capturing images of different surveillance regions to surveil a protection object; an event management server which is connected to the surveillance cameras through a communication network and receives a first event signal or a second event signal from at least one of the surveillance cameras; and a manager terminal for receiving event information corresponding to the second event signal from the event management server when the second event signal is generated. The first event signal corresponds to a signal generated when the protection object is detected by at least one of the multiple surveillance cameras, and the second event signal corresponds to a signal generated when the protection object is detected by none of the multiple surveillance cameras during a preconfigured reference time.
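The two event types reduce to a timeline rule: a first event at every detection, and a second event whenever no camera detects the object for longer than the reference time. A hedged sketch, with `REFERENCE_TIME` and the `events` helper invented for illustration:

```python
REFERENCE_TIME = 5.0  # assumed reference time (seconds) with no detection

def events(detections, end_time):
    """detections: sorted times at which any camera saw the protection object."""
    out = [("first", t) for t in detections]
    last = 0.0
    for t in detections + [end_time]:
        if t - last > REFERENCE_TIME:
            # nobody saw the object for the whole reference time
            out.append(("second", last + REFERENCE_TIME))
        last = t
    return sorted(out, key=lambda e: e[1])

print(events([1.0, 2.0], end_time=10.0))
# -> [('first', 1.0), ('first', 2.0), ('second', 7.0)]
```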
Apparatus and method for video-audio processing, and program for separating an object sound corresponding to a selected video object
The present technique relates to an apparatus, a method, and a program for video-audio processing, each of which enables a desired object sound to be separated more simply and accurately. A video-audio processing apparatus includes a display control portion configured to display video objects based on a video signal; an object selecting portion configured to select a predetermined video object from the one or more displayed video objects; and an extraction portion configured to extract the audio signal of the selected video object as an audio object signal. The present technique can be applied to a video-audio processing apparatus.
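As a toy illustration of "extract the selected object's sound from the mix", the sketch below pretends each selectable video object owns one known audio track, so extraction is a lookup plus subtraction from the mixture; the real patent separates from an actual mixed signal, and all names here are hypothetical:

```python
# assumed per-object tracks (integer samples keep the arithmetic exact)
object_tracks = {
    "speaker_a": [1, 2, 1],
    "speaker_b": [5, 4, 6],
}

def extract_object_sound(mixture, tracks, selected):
    """Return (audio object signal, residual background) for the selected object."""
    obj = tracks[selected]
    background = [m - o for m, o in zip(mixture, obj)]
    return obj, background

mix = [a + b for a, b in zip(object_tracks["speaker_a"], object_tracks["speaker_b"])]
obj, bg = extract_object_sound(mix, object_tracks, "speaker_b")
print(obj, bg)  # -> [5, 4, 6] [1, 2, 1]
```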
Method and apparatus for classifying video data
A method of classifying video data representing activity within a space to be monitored. The method comprises storing video data obtained from a camera configured to monitor the space, obtaining sensor data indicative of a condition occurring within the space, and defining a plurality of programme elements within the video data. Each programme element has an associated classification code, and each classification code is selected using the sensor data.
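One plausible reading, sensor events partition the stored video into programme elements, and each element's code is derived from the active sensor condition, can be sketched as follows; the code table and `classify_elements` are assumptions, not from the patent:

```python
def classify_elements(video_len_s, sensor_events):
    """Split stored video at sensor events into programme elements;
    each element's classification code comes from the sensor condition."""
    codes = {"door_open": "ENTRY", "motion": "ACTIVITY", None: "IDLE"}
    boundaries = [0.0] + [t for t, _ in sensor_events] + [video_len_s]
    conditions = [None] + [c for _, c in sensor_events]
    return [
        {"start": a, "end": b, "code": codes[c]}
        for a, b, c in zip(boundaries, boundaries[1:], conditions)
    ]

elems = classify_elements(60.0, [(10.0, "door_open"), (25.0, "motion")])
print([e["code"] for e in elems])  # -> ['IDLE', 'ENTRY', 'ACTIVITY']
```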
System and method to verify date and location for the creation of a video
A verification system includes: a code generation server publicizing time-stamped codes; a proving device, including a video camera, that acquires a published time-stamped code from the code generation server and, while recording a video, incorporates the acquired time-stamped code into the video; and a verifying device that receives the video, extracts the time-stamped code from the content of the video, compares the time-stamped code to published time-stamped codes, and displays a verification of the published time-stamped code.
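A minimal sketch of the publish/embed/verify loop, using a per-minute hash as the published code. The secret, the minute granularity, and all function names are assumptions for illustration; the patent does not specify how codes are generated:

```python
import hashlib

SECRET = b"server-secret"  # assumed: known only to the code generation server

def published_code(minute):
    """Code the server publicizes for a given minute."""
    return hashlib.sha256(SECRET + str(minute).encode()).hexdigest()[:8]

def record_video(scene, minute):
    # the proving device incorporates the current published code into the video
    return {"content": scene, "embedded_code": published_code(minute)}

def verify(video, minute):
    # the verifying device extracts the code and compares it to the published one
    return video["embedded_code"] == published_code(minute)

clip = record_video("scene", minute=42)
print(verify(clip, 42), verify(clip, 43))  # -> True False
```

A matching code proves the recording could not have started before minute 42's code was published.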
Associating faces with voices for speaker diarization within videos
A computer-implemented method for speaker diarization is described. The method comprises determining temporal positions of separate faces in a video using face detection and clustering. Voice features are detected in the speech sections of the video. The method further includes generating a correlation between the determined separate faces and separate voices based at least on the temporal positions of the separate faces and the separate voices in the video. This correlation is stored in a content store with the video.
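The temporal-position correlation can be sketched as assigning each voice the face whose on-screen interval overlaps it most. Intervals are simplified to single (start, end) spans, and the helper names are illustrative:

```python
def overlap(a, b):
    """Length of the intersection of two (start, end) intervals."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def correlate(face_spans, voice_spans):
    """Assign each detected voice the face with maximal temporal overlap."""
    return {
        voice: max(face_spans, key=lambda f: overlap(face_spans[f], span))
        for voice, span in voice_spans.items()
    }

faces = {"face_1": (0.0, 10.0), "face_2": (10.0, 20.0)}
voices = {"voice_a": (2.0, 8.0), "voice_b": (12.0, 19.0)}
print(correlate(faces, voices))  # -> {'voice_a': 'face_1', 'voice_b': 'face_2'}
```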
Apparatus for video output and associated methods
An apparatus comprising a processor and memory including computer program code, the memory and computer program code configured to, with the processor, enable the apparatus at least to use received current-field-of-view indication data together with future-event-direction data, in respect of recorded panoramic video output provided by panoramic video content data, to provide a sensory cue for a viewer of the recorded panoramic video output indicating the direction of a future event in the recorded panoramic video output which is outside the current field of view. The recorded panoramic video output is configured to provide video content to the viewer which extends outside the viewer's field of view in at least one direction, and the future-event-direction data is supplemental to the panoramic video content data which provides the video content itself.
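The cue computation reduces to comparing the event's azimuth against the current field of view and signaling the shorter turn direction. A sketch under simplifying assumptions (a non-wrapping field of view given as (low, high) degrees; `cue_direction` is a hypothetical name):

```python
def cue_direction(current_fov_deg, event_azimuth_deg):
    """Return a sensory cue ('left'/'right'/None) toward a future event
    outside the viewer's current field of view (angles in [0, 360))."""
    lo, hi = current_fov_deg
    if lo <= event_azimuth_deg <= hi:
        return None  # event is already inside the field of view
    # shortest signed angular distance from the fov centre to the event
    centre = (lo + hi) / 2
    delta = (event_azimuth_deg - centre + 180) % 360 - 180
    return "right" if delta > 0 else "left"

print(cue_direction((150, 210), 300))  # -> right
print(cue_direction((150, 210), 60))   # -> left
```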