Patent classifications
G06T7/215
Adaptive video streaming
A method, system and apparatus for image capture, analysis and transmission are provided. A link aggregation method involves identifying controller network ports to a source connected to the same subnetwork; producing packets associating corresponding controller network ports selected by the source CPU for substantially uniform selection; and transmitting the packets to their corresponding network ports. An image analysis method involves producing by a camera an indication whether a region of an image differs by a threshold extent from a corresponding region of a reference image; transmitting the indication and image data to a controller via a communications network; and storing at the controller the image data and the indication in association therewith. The controller may perform operations according to positive indications. A transmission method involves receiving user input in respect of a video stream and transmitting, in accordance with the user input, selected data packets of selected image frames thereof.
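The camera-side indication described above can be sketched as a simple region comparison. This is an illustrative assumption, not the patent's actual implementation: the function name, the rectangular region encoding, and the mean-absolute-difference metric are all hypothetical choices standing in for "differs by a threshold extent".

```python
import numpy as np

def region_differs(image, reference, region, threshold=10.0):
    """Return a positive indication (True) when the mean absolute
    per-pixel difference within `region` exceeds `threshold`.
    `region` is (y0, y1, x0, x1); the metric is an assumption."""
    y0, y1, x0, x1 = region
    diff = np.abs(image[y0:y1, x0:x1].astype(float)
                  - reference[y0:y1, x0:x1].astype(float))
    return bool(diff.mean() > threshold)

reference = np.zeros((64, 64))
image = reference.copy()
image[8:24, 8:24] = 255.0  # simulate a change in one region

print(region_differs(image, reference, (0, 32, 0, 32)))    # True
print(region_differs(image, reference, (32, 64, 32, 64)))  # False
```

Under this sketch, the camera would transmit the boolean indication alongside the image data, letting the controller restrict its operations to regions with positive indications.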
Recognition of activity in a video image sequence using depth information
Techniques are provided for recognition of activity in a sequence of video image frames that include depth information. A methodology embodying the techniques includes segmenting each of the received image frames into multiple windows and generating spatio-temporal image cells from groupings of windows from a selected sub-sequence of the frames. The method also includes calculating a four dimensional (4D) optical flow vector for each of the pixels of each of the image cells and calculating a three dimensional (3D) angular representation from each of the optical flow vectors. The method further includes generating a classification feature for each of the image cells based on a histogram of the 3D angular representations of the pixels in that image cell. The classification features are then provided to a recognition classifier configured to recognize the type of activity depicted in the video sequence, based on the generated classification features.
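The angle-histogram feature described above can be sketched as follows. This is a hedged illustration: the specific hyperspherical-angle decomposition of the 4D flow vector, the bin count, and the function names are assumptions, chosen only to show the shape of the computation (three angles per 4D vector, histogrammed per cell).

```python
import numpy as np

def flow_to_angles(v):
    """Map a 4D flow vector (dx, dy, dz, dt) to three hyperspherical
    angles — one possible '3D angular representation' (an assumption)."""
    x, y, z, t = v
    r = np.linalg.norm(v)
    theta1 = np.arccos(x / r) if r else 0.0    # angle against the x-axis
    theta2 = np.arctan2(np.hypot(z, t), y)     # angle within the (y, z, t) part
    theta3 = np.arctan2(t, z)                  # azimuth in the (z, t) plane
    return theta1, theta2, theta3

def cell_feature(flows, bins=8):
    """Histogram the per-pixel angles of one spatio-temporal image cell
    into a normalized classification feature (bin count is illustrative)."""
    angles = np.array([flow_to_angles(v) for v in flows])
    hists = [np.histogram(angles[:, i], bins=bins,
                          range=(-np.pi, np.pi))[0] for i in range(3)]
    feat = np.concatenate(hists).astype(float)
    return feat / feat.sum()

rng = np.random.default_rng(0)
flows = rng.normal(size=(100, 4))  # stand-in per-pixel 4D flow vectors
feat = cell_feature(flows)
print(feat.shape)  # (24,)
```

Concatenating one such feature per cell would yield the fixed-length vector handed to the recognition classifier.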
Deposit detection device and deposit detection method
A deposit detection device according to an embodiment includes an adhesion detection module, a moving adhesion detection module, and a determination module. The adhesion detection module detects a deposit region corresponding to a deposit adhering to an imaging device mounted on a vehicle, based on brightness information of an image captured by the imaging device. The moving adhesion detection module detects, from among the deposit regions detected by the adhesion detection module, any deposit region detected while the vehicle is moving as a moving deposit region. When the area of the moving deposit region detected by the moving adhesion detection module is equal to or larger than a first threshold value, the determination module determines that there is a deposit.
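The determination step above reduces to an area test over the detected moving deposit regions. A minimal sketch, assuming the regions arrive as a boolean pixel mask and an arbitrary threshold value (both assumptions for illustration):

```python
import numpy as np

def determine_deposit(moving_deposit_mask, first_threshold):
    """Declare a deposit when the total moving-deposit area (pixel
    count of the mask) is equal to or larger than the first threshold."""
    area = int(np.count_nonzero(moving_deposit_mask))
    return area >= first_threshold

mask = np.zeros((120, 160), dtype=bool)
mask[40:60, 50:90] = True            # a 20 x 40 px moving deposit region

print(determine_deposit(mask, 500))   # True  (area 800 >= 500)
print(determine_deposit(mask, 1000))  # False (area 800 < 1000)
```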
Heatmap and atlas
A dynamic anatomic atlas is disclosed, comprising static atlas data describing atlas segments and dynamic atlas data comprising information on a dynamic property, with that information respectively linked to the atlas segments.
METHOD AND APPARATUS FOR GENERATING VIDEO WITH 3D EFFECT, METHOD AND APPARATUS FOR PLAYING VIDEO WITH 3D EFFECT, AND DEVICE
A method and an apparatus for generating a video with a three-dimensional (3D) effect, a method and an apparatus for playing a video with a 3D effect, and a device are provided. The method includes: obtaining an original video; segmenting at least one frame of raw image of the original video to obtain a foreground image sequence including a moving object, the foreground image sequence including at least one frame of foreground image; determining, based on the foreground image sequence, a target raw image in which a target occlusion image is to be placed and an occlusion method of the target occlusion image in the target raw image; adding the target occlusion image to the target raw image based on the occlusion method to obtain a final image; and generating a target video with a 3D effect based on the final image and the original video.
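The "adding the target occlusion image to the target raw image" step can be sketched as a compositing operation in which the segmented moving object stays in front of the overlay, producing the pop-out 3D effect. All names here are assumptions; the patent does not specify the blend, and a simple alpha composite stands in for the unspecified occlusion method.

```python
import numpy as np

def composite(raw, occlusion, occlusion_alpha, foreground_mask):
    """Overlay `occlusion` on `raw` wherever `occlusion_alpha` is set,
    except on `foreground_mask` pixels, so the segmented moving object
    appears in front of the occlusion image."""
    alpha = occlusion_alpha.astype(float)[..., None]
    alpha[foreground_mask] = 0.0  # moving object occludes the overlay
    return (raw * (1.0 - alpha) + occlusion * alpha).astype(raw.dtype)

raw = np.full((4, 4, 3), 100, dtype=np.uint8)   # stand-in raw frame
occ = np.full((4, 4, 3), 255, dtype=np.uint8)   # stand-in occlusion image
occ_alpha = np.zeros((4, 4)); occ_alpha[:, 1] = 1.0  # a vertical bar overlay
fg = np.zeros((4, 4), dtype=bool); fg[2, 1] = True   # object pixel on the bar

out = composite(raw, occ, occ_alpha, fg)
print(out[0, 1], out[2, 1])  # [255 255 255] [100 100 100]
```

Repeating this per target raw image and re-encoding with the untouched original frames would yield the target video with the 3D effect.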