Patent classifications
H04N5/265
IMAGING SYSTEM AND METHOD OF CREATING COMPOSITE IMAGES
An imaging system and a method of creating composite images are provided. The imaging system includes one or more lens assemblies coupled to a sensor. When reflected light from an object enters the imaging system, light incident on the metalens filter systems is filtered, and the corresponding sensors convert the filtered light into metalens images. Each metalens filter system focuses light of a specific wavelength, creating the metalens images. The metalens images are sent to a processor, which combines them into one or more composite images. Because each wavelength is focused independently, the resulting composite image has reduced chromatic aberrations.
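The per-wavelength combination described in the abstract can be sketched as follows. The band names, the normalization step, and the function name are illustrative assumptions, not details from the patent.

```python
import numpy as np

def combine_metalens_images(channel_images):
    """Stack per-wavelength metalens images into one composite image.

    channel_images: dict mapping a wavelength band name (e.g. "red",
    "green", "blue") to a 2-D array captured through the metalens
    filter tuned to that band.
    """
    order = ("red", "green", "blue")
    planes = [np.asarray(channel_images[band], dtype=np.float64) for band in order]
    # Each plane was focused independently at its own wavelength, so
    # stacking them avoids the chromatic focus error of a single lens.
    composite = np.stack(planes, axis=-1)
    # Normalize to [0, 1] for display.
    composite -= composite.min()
    peak = composite.max()
    return composite / peak if peak > 0 else composite
```

In practice the per-band images would also need registration before stacking; that step is omitted here for brevity.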
Depth-based image stabilization
Depth information can be used to assist with image processing functionality, such as image stabilization and blur reduction. In at least some embodiments, depth information obtained from stereo imaging or distance sensing, for example, can be used to determine a foreground object and background object(s) for an image or frame of video. The foreground object then can be located in later frames of video or subsequent images. Small offsets of the foreground object can be determined, and the offset accounted for by adjusting the subsequent frames or images. Such an approach provides image stabilization for at least a foreground object, while providing simplified processing and reduced power consumption. Similar processes can be used to reduce blur for an identified foreground object in a series of images, where the blur of the identified object is analyzed.
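The offset-and-adjust step above can be sketched as follows. The centroid-based offset estimate, the boolean foreground masks (e.g. from a depth threshold), and all names are illustrative assumptions rather than details from the abstract.

```python
import numpy as np

def stabilize_on_foreground(prev_mask, curr_mask, curr_frame):
    """Shift curr_frame so its foreground object realigns with the previous frame.

    prev_mask / curr_mask: boolean arrays marking the foreground object,
    obtained for example by thresholding a depth map. The centroid offset
    between the two masks is the small shift to undo.
    """
    dy, dx = (np.argwhere(prev_mask).mean(axis=0)
              - np.argwhere(curr_mask).mean(axis=0))
    dy, dx = int(round(dy)), int(round(dx))
    # np.roll implements the shift; a real pipeline would crop or pad
    # the wrapped edges instead of letting them wrap around.
    shifted = np.roll(np.roll(curr_frame, dy, axis=0), dx, axis=1)
    return shifted, (dy, dx)
```

Tracking only the foreground centroid, rather than estimating full-frame motion, is what keeps the processing simple and the power draw low.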
Video synthesis method, apparatus, computer device and readable storage medium
The present disclosure provides a video synthesis method, apparatus, computer device and computer-readable storage medium, wherein the method includes: acquiring a first video; capturing second video data photographed in real time; performing first encoding on the second video data to obtain an encoded video; synthesizing the first video and the encoded video to obtain synthesized video data; and performing second encoding on the synthesized video data to obtain a target video. The method reduces loss in the frames of the resulting target video and yields relatively high-definition video frames.
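The two-stage pipeline enumerated above can be sketched as a small driver function. The codec is abstracted into caller-supplied `encode`, `decode`, and `compose` callables, and the frame-wise composition is an assumption; the abstract does not specify how synthesis is performed.

```python
def synthesize_video(first_video_frames, capture_frames, encode, decode, compose):
    """Sketch of the claimed pipeline (all names are illustrative stand-ins).

    1. First encoding of the live-captured frames.
    2. Synthesis of the first video with the encoded capture.
    3. Second encoding of the synthesized result into the target video.
    """
    encoded = [encode(f) for f in capture_frames]            # first encoding
    synthesized = [compose(a, decode(b))                     # frame-wise synthesis
                   for a, b in zip(first_video_frames, encoded)]
    return [encode(f) for f in synthesized]                  # second encoding
```

Separating the first encoding (of the raw capture) from the second encoding (of the synthesized result) is what the abstract credits for the reduced loss in the target frames.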
Personalized videos using selfies and stock videos
A method is provided that includes displaying, by a computing device, representations of a plurality of stock videos to a user. The representations may be a still image, a partial clip, and/or a full play of the stock video. Each of the representations includes a face outline for insertion of a facial image of a user. When the user has provided a self-image to the computing device, the facial image of the user is inserted in the face outline of the representations. The facial image is extracted from the self-image. The method may include receiving a selection of one of the representations of the plurality of stock videos, and displaying a personalized video including a selected stock video with the facial image positioned within a further face outline corresponding to the face outline of the selected representation.
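The face-insertion step can be sketched as a simple paste into the outline region of a frame. Representing frames as nested lists, describing the outline as a top-left corner, and the function name are all illustrative assumptions; real insertion would also blend and scale the face to the outline.

```python
def insert_face(frame, face_image, outline_box):
    """Paste an extracted facial image into a frame at its face outline.

    frame and face_image are nested lists of pixel values; outline_box is
    the (row, col) of the outline's top-left corner.
    """
    r0, c0 = outline_box
    out = [row[:] for row in frame]  # copy so the stock frame is untouched
    for dr, face_row in enumerate(face_image):
        for dc, pixel in enumerate(face_row):
            out[r0 + dr][c0 + dc] = pixel
    return out
```

Running this per frame, over the face outline tracked through the selected stock video, yields the personalized video the abstract describes.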
Methods, devices, and systems for video segmentation and annotation
Methods, devices, and systems for segmenting and annotating videos for analysis are disclosed. A user identifies specific moments in a video that provide a teachable moment. A pre-context and a post-context portion of the video surrounding the identified moment are used to create a tile video. One or more tile videos are compiled in a user-defined order to generate a weave video with a specific focus or theme. The generated weave video is shared with one or more users and can be annotated to facilitate teaching and/or discussion.
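The tile-and-weave construction can be sketched as slicing and concatenation over a frame list. Measuring context in frame counts and the function names are illustrative assumptions; the abstract does not say how context lengths are expressed.

```python
def make_tile(frames, moment_index, pre_context, post_context):
    """Cut a tile video around an identified teachable moment.

    frames: the full video as a list of frames; pre_context/post_context:
    number of frames to keep before and after the moment.
    """
    start = max(0, moment_index - pre_context)
    end = min(len(frames), moment_index + post_context + 1)
    return frames[start:end]

def make_weave(tiles):
    """Concatenate tile videos, in the user-defined order, into one weave."""
    return [frame for tile in tiles for frame in tile]
```

For example, a moment at frame 5 with two frames of context on each side yields a five-frame tile, and a weave is just those tiles played back to back.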
Systems, apparatus, and methods for generating enhanced images
Described examples relate to an apparatus comprising a first sensor configured to scan an area of interest during a first time period and a second sensor configured to capture a plurality of images of a field of view. The apparatus may include at least one controller configured to receive the plurality of images captured by the second sensor, compare timestamp information associated with at least one image of the plurality of images to the first time period, and select a base image from the plurality of images based on the comparison.
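The controller's selection step can be sketched as a timestamp comparison against the scan window. The specific rule used here (closest to the window midpoint) and the data layout are illustrative assumptions; the abstract leaves the comparison criterion open.

```python
def select_base_image(images, scan_start, scan_end):
    """Pick the base image whose timestamp best matches the scan period.

    images: list of (timestamp, image) pairs from the second sensor;
    scan_start/scan_end: the first sensor's scan window.
    """
    midpoint = (scan_start + scan_end) / 2.0
    # Choose the image captured closest to the middle of the scan, so the
    # base image best overlaps the first sensor's observation in time.
    return min(images, key=lambda pair: abs(pair[0] - midpoint))
```

The selected base image can then be enhanced using the remaining captures, which is consistent with the "enhanced images" framing of the title.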