Patent classifications
H04N5/14
Techniques for secure video frame management
Devices, methods, and computer-readable media for secure frame management. The techniques disclosed herein provide an intelligent method for detecting triggering items in one or more frames of streaming video from an Internet Protocol camera. Upon detection, the camera transmits one or more frames of the video over a network to a computing device. Upon detecting a triggering item in a frame of the video stream, the computing device can begin a streaming session with a server and stream the one or more frames of video and accompanying metadata to the server. The frames, metadata, and associated keys can all be encrypted prior to streaming to the server. For each subsequent segment of video frames that includes the triggering item, the server can append the frames of that segment to the video clip in an encrypted container. Once the triggering item is no longer detected, the streaming session can be closed.
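The session lifecycle described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the detector, the frame format, and the `encrypt` stand-in (a toy XOR keystream, not a real cipher) are all hypothetical; a real system would use an authenticated cipher such as AES-GCM.

```python
def detect_trigger(frame: dict) -> bool:
    # Placeholder detector; the abstract leaves the detection model unspecified.
    return frame.get("has_person", False)

def encrypt(data: bytes, key: bytes) -> bytes:
    # Stand-in XOR keystream. NOT secure; it only illustrates that frames and
    # metadata are encrypted before they reach the server's container.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def stream_session(frames, key=b"demo-key"):
    """Open a session on the first triggering frame, append each encrypted
    segment while the trigger persists, and close once it disappears."""
    container = []           # encrypted clip container held by the server
    session_open = False
    for frame in frames:
        if detect_trigger(frame):
            session_open = True
            payload = frame["data"] + frame.get("metadata", b"")
            container.append(encrypt(payload, key))
        elif session_open:
            break            # trigger gone: close the streaming session
    return container

clip = stream_session([
    {"data": b"f0", "has_person": True, "metadata": b"t=0"},
    {"data": b"f1", "has_person": True, "metadata": b"t=1"},
    {"data": b"f2", "has_person": False},
])
# clip holds two encrypted segments; the third frame closed the session
```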
Methods for camera movement compensation
A method, system, apparatus, and/or device for adjusting or removing frames in a set of frames. The method, system, apparatus, and/or device may include: associating a first frame of a set of frames with motion data that is captured approximately contemporaneously with the first frame; when a sampling rate of the motion data is greater than a frame rate of the set of frames, aggregating a first sample of the motion data captured at the first frame and a second sample of the motion data captured between the first frame and a second frame of the set of frames to obtain a movement value; when the movement value does not exceed a first threshold value, accepting the first frame from the set of frames; and when the movement value exceeds the first threshold value, rejecting the first frame from the set of frames.
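The aggregate-and-threshold step above can be sketched in a few lines of Python. All names are hypothetical; the sketch assumes scalar motion samples evenly interleaved with frames and uses a simple sum of absolute values as the aggregate, which the abstract does not mandate.

```python
def filter_frames(frames, motion_samples, sample_rate, frame_rate, threshold):
    """Accept or reject each frame based on the aggregated motion data
    captured contemporaneously with it (sketch of the claimed method)."""
    # With a sampling rate above the frame rate, several motion samples
    # fall within each frame interval and are aggregated together.
    samples_per_frame = max(1, round(sample_rate / frame_rate))
    accepted = []
    for i, frame in enumerate(frames):
        window = motion_samples[i * samples_per_frame:(i + 1) * samples_per_frame]
        movement = sum(abs(s) for s in window)   # aggregate movement value
        if movement <= threshold:                # keep frames below threshold
            accepted.append(frame)
    return accepted

kept = filter_frames(["a", "b", "c"],
                     [0.1, 0.2, 5.0, 4.0, 0.0, 0.1],
                     sample_rate=60, frame_rate=30, threshold=1.0)
# "b" is rejected: its aggregated window (5.0 + 4.0) exceeds the threshold
```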
Time compressing video content
Methods and systems for compressing video content are presented. The methods and systems include analyzing a sequence of media frames stored in the memory device and calculating a displacement level of each of the media frames. The displacement level indicates how different each of the media frames is from a previous media frame. The sequence of media frames is divided into a plurality of cuts where each cut ends at a media frame having a substantially high displacement level. Frames to be removed from the sequence of media frames are identified in each cut based upon the frame's displacement level. The identified frames are then removed.
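The compression pipeline above (displacement levels, cut boundaries, per-cut removal) can be sketched as follows. The mean-absolute-difference metric, the `cut_threshold`, and the `keep_fraction` parameter are illustrative assumptions; the abstract does not fix a particular displacement measure or removal rule.

```python
def displacement(prev, cur):
    # One simple displacement metric: mean absolute difference between
    # corresponding samples of consecutive frames.
    return sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)

def time_compress(frames, cut_threshold, keep_fraction):
    """Split the sequence at high-displacement frames (cut boundaries),
    then drop the lowest-displacement frames within each cut."""
    disp = [0.0] + [displacement(p, c) for p, c in zip(frames, frames[1:])]
    cuts, start = [], 0
    for i, d in enumerate(disp):
        if d >= cut_threshold:          # each cut ends at a high-displacement frame
            cuts.append(list(range(start, i + 1)))
            start = i + 1
    if start < len(frames):
        cuts.append(list(range(start, len(frames))))
    keep = []
    for cut in cuts:
        ranked = sorted(cut, key=lambda i: disp[i], reverse=True)
        keep.extend(ranked[:max(1, int(len(cut) * keep_fraction))])
    return [frames[i] for i in sorted(keep)]

frames = [[0.0], [0.0], [10.0], [10.0], [10.2]]
kept = time_compress(frames, cut_threshold=5.0, keep_fraction=0.5)
# the near-duplicate low-displacement frames are removed
```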
Reducing judder using motion vectors
A method at a client device for mitigating motion judder in frames of an image due to display data for a particular frame being unavailable at a required time at the client device. The method involves receiving (S35) the display data for a current frame n, and generating (S3Y3) the current frame n from the received display data. Motion vectors for some elements of the image in the current frame n are obtained (S3Y1). If it is determined that display data for the next frame n+1 is not available, the next frame n+1 is generated (S3N4) from either the current frame n or a previous frame n−m, where m=1, 2, 3, etc, adjusted based on an extrapolation (S3N3) of the motion vectors for the elements of the image in either the current frame n or the previous frame n−m.
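The extrapolation step can be sketched as below, with image elements reduced to labeled points. The dictionary representation and per-frame motion vectors are illustrative assumptions; a real client would warp pixel blocks of the decoded frame rather than move abstract points.

```python
def extrapolated_next_frame(positions, vectors):
    """When display data for frame n+1 is unavailable, predict each image
    element's position in frame n+1 by extrapolating its motion vector
    from frame n (sketch of the judder-mitigation step)."""
    # positions: element id -> (x, y) in the current frame n
    # vectors:   element id -> (dx, dy) motion per frame
    return {e: (x + vectors[e][0], y + vectors[e][1])
            for e, (x, y) in positions.items()}

frame_n = {"ball": (10.0, 5.0), "background": (0.0, 0.0)}
motion = {"ball": (2.0, -1.0), "background": (0.0, 0.0)}
# Used only if the real display data for frame n+1 has not arrived in time.
frame_n1 = extrapolated_next_frame(frame_n, motion)
```

Extrapolating from an older frame n−m works the same way, with the vectors scaled by the number of frame intervals being bridged.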
Image capturing apparatus, image processing apparatus, image processing method, image capturing apparatus calibration method, robot apparatus, method for manufacturing article using robot apparatus, and recording medium
An image capturing apparatus including a lens and a processing unit, wherein the lens includes a first region through which a first light ray passes and a second region through which a second light ray passes, wherein the first region and the second region are arranged in a predetermined direction, and wherein the processing unit sets a component of the predetermined direction as a degree of freedom when a first relative positional relationship between a predetermined position in the first region and a predetermined position in the second region is employed.
Measurement of vital signs based on images recorded by an egocentric camera
A method for determining one or more vital signs of a person includes recording video images of a scene with an egocentric camera coupled to the person's body, detecting and magnifying image frame-to-image frame movements in the video images of the scene, representing the magnified image frame-to-image frame movements in the video images of the scene by a one-dimensional (1D) amplitude-versus-time series, and transforming the 1D amplitude-versus-time series representation into a frequency spectrum. The method further includes identifying one or more local frequency maxima in the frequency spectrum as corresponding to one or more vital signs of the person.
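The spectral step above can be sketched with a naive DFT over a 1D amplitude-versus-time series. The simulated 72 bpm signal and the `dominant_frequency` helper are illustrative assumptions; the abstract does not specify the transform or peak-picking method, and real pipelines would first magnify frame-to-frame motion to obtain the series.

```python
import cmath
import math

def dominant_frequency(series, fs):
    """Naive DFT of the 1D amplitude-vs-time series; returns the frequency
    (Hz) of the largest non-DC spectral peak, standing in for a local
    maximum identified as a vital sign."""
    n = len(series)
    mags = []
    for k in range(1, n // 2):
        s = sum(x * cmath.exp(-2j * math.pi * k * t / n)
                for t, x in enumerate(series))
        mags.append((abs(s), k))
    peak_bin = max(mags)[1]
    return peak_bin * fs / n

fs = 30.0                      # camera frame rate (frames per second)
heart_hz = 1.2                 # simulated chest motion at 72 beats per minute
series = [math.sin(2 * math.pi * heart_hz * t / fs) for t in range(300)]
bpm = dominant_frequency(series, fs) * 60
```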
Apparatus and methods for providing precise motion estimation learning model
The present disclosure is an apparatus and a method for providing a precise motion estimation learning model, including: a database unit that stores a standard dataset labeled according to a first number of key points, an animation dataset labeled according to a second number of key points that is larger than the first number, and a photorealistic dataset having the second number of key points; a standard learning unit that trains on the standard dataset for motion estimation to generate a standard learning model; an animation learning unit that retrains on the animation dataset based on the weights of the standard learning model to generate an animation learning model; and a motion estimation learning unit that fine-tunes on the photorealistic dataset based on the weights of the animation learning model to generate a precise motion estimation learning model.
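The three-stage weight handoff (standard → animation → photorealistic) can be sketched with a deliberately toy `train` function. Everything here is illustrative: real units would train neural pose estimators, whereas this stand-in just averages labels and blends with inherited weights to mimic fine-tuning from a prior model.

```python
def train(dataset, init_weights=None):
    """Toy 'training': average the dataset, optionally blending with the
    inherited weights to mimic retraining from a prior model's weights."""
    w = sum(dataset) / len(dataset)
    return w if init_weights is None else (w + init_weights) / 2

# Stage 1: learn from the coarsely labeled standard dataset.
standard_w = train([1.0, 3.0])
# Stage 2: retrain on the animation dataset (more key points) from stage-1 weights.
animation_w = train([4.0, 6.0], standard_w)
# Stage 3: fine-tune on the photorealistic dataset from stage-2 weights.
precise_w = train([3.0, 4.0], animation_w)
```

The point of the staging is that each model initializes from the previous one's weights rather than from scratch, so the final photorealistic fine-tune starts from a motion prior already shaped by the larger, cheaper datasets.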
Apparatus and method for visualizing periodic motions in mechanical components
An apparatus for visualizing physical movements includes: a device for acquiring video image files; a data analysis system including processor and memory; a computer program operating in the processor to identify an area in the images where periodic motions associated with physical movement of an object may be detected and quantified, and compute a new image sequence in which the motions are visually amplified; and a user interface that displays the motion-amplified video image of the mechanical component. An associated method for using the apparatus is also disclosed.
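The amplification stage can be sketched on a single per-pixel intensity trace. The moving-average baseline and gain `alpha` are illustrative assumptions; the apparatus's actual per-area processing over full video frames is not specified in the abstract.

```python
def amplify_motion(signal, alpha, window=3):
    """Amplify periodic deviations of a per-pixel intensity trace around a
    moving-average baseline: a simplified 1-D sketch of computing a new
    sequence in which small periodic motions are visually amplified."""
    out = []
    for t, x in enumerate(signal):
        lo = max(0, t - window + 1)
        baseline = sum(signal[lo:t + 1]) / (t + 1 - lo)
        out.append(baseline + alpha * (x - baseline))
    return out

# A static pixel is unchanged; a vibrating one swings much more widely.
static = amplify_motion([5.0, 5.0, 5.0], alpha=10.0)
vibrating = amplify_motion([0.0, 1.0, 0.0, 1.0], alpha=3.0)
```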
Synchronization and presentation of multiple 3D content streams
Systems, methods, and computer-readable media are disclosed for synchronization and presentation of multiple 3D content streams. Example methods may include determining a first content stream of 3D content to send to a user device, where movement of the user device causes presentation of different portions of the 3D content at the user device, and determining a first position of the user device. Some methods may include causing presentation of a first portion of the first content stream at the user device, where the first portion corresponds to the first position, determining a second content stream of 3D content, where movement of the user device causes presentation of different portions of the 3D content at the user device, and causing presentation of a second portion of the second content stream at the user device, where the second portion corresponds to the first position of the user device.
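The position-to-portion mapping, where both streams present the portion corresponding to the same device position, can be sketched as below. The equirectangular one-column-per-degree layout, the yaw-only position, and the field-of-view parameter are all illustrative assumptions not stated in the abstract.

```python
def portion_for_position(stream_frame, yaw_deg, fov_deg=90.0):
    """Map a device orientation to the slice of a panoramic 3D frame that
    should be presented, so two synchronized streams queried with the same
    position yield corresponding portions."""
    n = len(stream_frame)                      # columns of the panoramic frame
    start = int((yaw_deg % 360.0) / 360.0 * n) # leftmost visible column
    width = int(fov_deg / 360.0 * n)           # visible span in columns
    return [stream_frame[(start + i) % n] for i in range(width)]

frame = list(range(360))                       # toy panorama, one column per degree
view = portion_for_position(frame, yaw_deg=90.0)
wrapped = portion_for_position(frame, yaw_deg=350.0)  # wraps past the seam
```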