Patent classifications
H04N23/80
METHOD AND A SYSTEM FOR SPATIO-TEMPORAL POLARIZATION VIDEO ANALYSIS
This relates generally to a method and a system for spatio-temporal polarization video analysis. The spatio-temporal polarization data is analyzed for computer vision applications such as object detection, image classification, image captioning, image reconstruction or inpainting, face recognition, and action recognition. Numerous classical and deep-learning methods have been applied to polarimetric data for polarimetric imaging analysis; however, available pre-trained models may not be directly suitable for polarization data, as polarimetric data is more complex. Further, compared to analysis of polarimetric images, a significantly larger range of actions can be detected from polarimetric videos, making video-level analysis more effective. The disclosure describes a spatio-temporal analysis of polarization video. The disclosed techniques include configuring a set of parameters from the polarization video to train a spatio-temporal deep network architecture for analyzing polarimetric videos for computer vision applications.
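The abstract does not specify which polarization parameters are configured per frame. One common choice in polarimetric imaging, shown here purely as an illustrative sketch (not the patent's actual method), is to derive per-pixel Stokes parameters, degree of linear polarization (DoLP), and angle of linear polarization (AoLP) from four polarizer-angle captures:

```python
import numpy as np

def stokes_from_angles(i0, i45, i90, i135):
    """Standard per-pixel linear-polarization parameters from four
    captures at polarizer angles 0, 45, 90, and 135 degrees.
    Stacking DoLP/AoLP maps per video frame yields the kind of
    spatio-temporal tensor a deep network could consume."""
    s0 = i0 + i90                       # total intensity
    s1 = i0 - i90                       # 0/90 degree difference
    s2 = i45 - i135                     # 45/135 degree difference
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-8)
    aolp = 0.5 * np.arctan2(s2, s1)     # angle of linear polarization
    return s0, s1, s2, dolp, aolp
```

For fully polarized light aligned with the 0-degree filter, DoLP is 1 and AoLP is 0, which makes these maps easy to sanity-check on synthetic inputs.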
Camera orchestration technology to improve the automated identification of individuals
Systems, apparatuses and methods may provide for technology that detects an unidentified individual at a first location along a trajectory in a scene based on a video feed of the scene, wherein the video feed is to be associated with a stationary camera, and selects a non-stationary camera from a plurality of non-stationary cameras based on the trajectory and one or more settings of the selected non-stationary camera. The technology may also automatically instruct the selected non-stationary camera to adjust at least one of the one or more settings, capture a face of the individual at a second location along the trajectory, and identify the unidentified individual based on the captured face of the unidentified individual.
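The patent does not disclose its selection criterion in the abstract; as a hedged sketch, one simple stand-in is to linearly extrapolate the trajectory to a predicted next position and choose the nearest non-stationary camera whose usable range covers it (the `fov_reach` field is an invented parameter for this illustration):

```python
import math

def select_camera(trajectory, cameras):
    """Pick the non-stationary camera best placed to capture the face.

    trajectory: list of (x, y) positions observed in the stationary feed.
    cameras: dicts with 'id', 'pos' (x, y), and 'fov_reach' (max usable
    distance) -- hypothetical settings standing in for the patent's
    unspecified camera settings.
    """
    (x0, y0), (x1, y1) = trajectory[-2], trajectory[-1]
    predicted = (2 * x1 - x0, 2 * y1 - y0)   # linear extrapolation one step ahead
    best, best_d = None, float("inf")
    for cam in cameras:
        d = math.dist(cam["pos"], predicted)
        if d <= cam["fov_reach"] and d < best_d:
            best, best_d = cam, d
    return best, predicted
```

A production system would replace the linear extrapolation with a tracking filter and fold pan/tilt/zoom limits into the feasibility check.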
Stereo correspondence search
Methods, systems, devices and computer software/program code products enable efficiently finding stereo correspondence between a feature or set of features in a first image or signal, and a search domain in a second image or signal.
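The abstract leaves the search method unspecified. A textbook baseline for finding such a correspondence, offered here only as a stand-in for the patented technique, is block matching along a horizontal search domain using the sum of absolute differences (SAD):

```python
import numpy as np

def match_along_scanline(left, right, y, x, patch=3, max_disp=16):
    """Find the disparity for pixel (y, x) of the left image by
    minimizing SAD over a horizontal search domain in the right image
    (assumes rectified images, so the search is along one scanline)."""
    h = patch // 2
    ref = left[y - h:y + h + 1, x - h:x + h + 1].astype(np.float32)
    best_d, best_cost = 0, np.inf
    for d in range(max_disp + 1):
        if x - d - h < 0:               # candidate patch would leave the image
            break
        cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1].astype(np.float32)
        cost = np.abs(ref - cand).sum() # sum of absolute differences
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```

Efficient variants precompute costs for all pixels at once or restrict the domain using prior disparity estimates, which is presumably the kind of efficiency the patent targets.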
Selection of a preferred image from multiple captured images
Systems, methods, and computer-readable media are disclosed for selection of a preferred image from multiple captured images. An image corresponding to a photograph time t=0 may be retrieved from a circular buffer and stored as a preferred image. Alternative images captured before and after the t=0 image may be retrieved and stored in an alternative image location. The t=0 image and the alternative images may be presented to a user in a user interface. The user may select a preferred image for the photograph from among the t=0 image and the alternative images.
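The circular-buffer retrieval described above can be sketched minimally as follows; the class and method names are invented for illustration and the sketch assumes frames are indexed by a running capture count:

```python
class FrameRing:
    """Fixed-size circular buffer of recent frames, so images captured
    just before and after the shutter press (t=0) remain retrievable
    as alternatives."""

    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.count = 0                      # total frames ever pushed

    def push(self, frame):
        self.buf[self.count % self.capacity] = frame  # overwrite oldest slot
        self.count += 1

    def around(self, t0_index, n_before, n_after):
        """Frames in [t0 - n_before, t0 + n_after] still held in the buffer."""
        lo = max(t0_index - n_before, self.count - self.capacity, 0)
        hi = min(t0_index + n_after, self.count - 1)
        return [self.buf[i % self.capacity] for i in range(lo, hi + 1)]
```

The `around` window is clamped to what the buffer still holds, since older frames are overwritten once the capture count exceeds the capacity.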
Determining X,Y,Z,T biomechanics of moving actor with multiple cameras
A plurality of tracking cameras is pointed towards a routine hovering area of an in-the-field sports participant who routinely hovers about that area. Spots within the hovering area are registered relative to a predetermined multi-dimensional coordinates reference frame (e.g., Xw, Yw, Zw, Tw) such that two-dimensional coordinates of 2D images captured by the tracking cameras can be converted to multi-dimensional coordinates of the reference frame. A body part recognizing unit recognizes 2D locations of a specific body part in the 2D captured images and a mapping unit maps them into the multi-dimensional coordinates of the reference frame. A multi-dimensional curve generator then generates a multi-dimensional motion curve describing motion of the body part based on the mapped coordinates (e.g., Xw, Yw, Zw, Tw). The generated multi-dimensional motion curve is used to discover cross correlations between play action motions of the in-the-field sports participant and real-world sports results.
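The 2D-to-world mapping step can be illustrated with a planar special case, offered as a sketch rather than the patent's actual mapping unit: for body-part spots registered on a known plane (e.g., the ground of the hovering area), a 3x3 homography maps pixel coordinates into plane coordinates of the world frame. Full (Xw, Yw, Zw) recovery would additionally require triangulation across the plurality of cameras.

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 pixel coordinates to world-plane coordinates.
    H is assumed precomputed from the registered spots (e.g. via
    cv2.findHomography on pixel/world point pairs)."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # back to Euclidean
```

Fitting a smooth curve through the mapped (Xw, Yw, Zw, Tw) samples then gives the multi-dimensional motion curve described above.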
Method for displaying image in photographing scenario and electronic device
Disclosed herein is a method for generating an image using an electronic device having a color camera, comprising: activating the color camera and a camera application on the electronic device; displaying, through the camera application, a preview image generated by the color camera; determining, automatically, whether the preview image includes an image of a first object; displaying, through the camera application in response to a determination that the preview image includes the image of the first object, a first image generated by the color camera, the first image including a color region corresponding to the first object and a grayscale region corresponding to objects that are not the first object; and displaying, through the camera application in response to a determination that the preview image does not include any image of the first object, a second image generated by the color camera, the second image being a grayscale image.
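The color-region/grayscale-region compositing can be sketched as a mask operation (a minimal illustration, assuming object detection has already produced a boolean mask for the first object):

```python
import numpy as np

def selective_color(frame, mask):
    """Keep color where mask is True (the detected first object),
    grayscale everywhere else.

    frame: H x W x 3 uint8 RGB image; mask: H x W boolean array.
    Uses the standard ITU-R BT.601 luma weights for the grayscale part.
    """
    gray = (0.299 * frame[..., 0] + 0.587 * frame[..., 1]
            + 0.114 * frame[..., 2]).astype(np.uint8)
    out = np.repeat(gray[..., None], 3, axis=2)  # grayscale everywhere
    out[mask] = frame[mask]                      # restore color on the object
    return out
```

With an all-False mask this degenerates to the second (fully grayscale) image the method displays when no first object is detected.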