Patent classifications
H04N13/10
Video frame processing
An apparatus for processing video frames is presented. The apparatus comprises at least one processing unit and at least one memory. The at least one memory stores program instructions that, when executed by the at least one processing unit, cause the apparatus to segment objects appearing in video frames, model movement of a camera recording the video frames, and individually compensate motion artefacts in the video frames for at least one segmented object based on relative movement of the camera and the at least one segmented object.
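As a minimal sketch of the idea in this abstract, the snippet below models camera movement as a global 2-D translation estimated from background feature points and then derives a segmented object's motion relative to the camera. The point sets and the pure-translation motion model are illustrative assumptions, not the patent's actual method (which would typically use a richer camera model and per-pixel compensation).

```python
import numpy as np

def estimate_camera_motion(prev_pts, curr_pts):
    """Estimate global camera motion as the mean 2-D translation of
    matched background feature points (a simple stand-in for a full
    affine or homography camera model)."""
    return np.mean(curr_pts - prev_pts, axis=0)

def relative_object_motion(obj_pts_prev, obj_pts_curr, camera_motion):
    """Return the object's motion relative to the camera: the part of
    its apparent motion not explained by camera movement, which is the
    quantity an individual compensation step would target."""
    apparent = np.mean(obj_pts_curr - obj_pts_prev, axis=0)
    return apparent - camera_motion

# Toy frames: the camera pans +2 px in x; the object additionally
# moves +1 px in y on its own.
prev_bg = np.array([[10.0, 10.0], [50.0, 40.0]])
curr_bg = prev_bg + np.array([2.0, 0.0])
cam = estimate_camera_motion(prev_bg, curr_bg)

prev_obj = np.array([[20.0, 20.0], [22.0, 24.0]])
curr_obj = prev_obj + np.array([2.0, 1.0])
rel = relative_object_motion(prev_obj, curr_obj, cam)
# cam is (2, 0); rel isolates the object's own motion, (0, 1)
```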
Augmented Three Dimensional Point Collection of Vertical Structures
Automated methods and systems are disclosed, including a method comprising: obtaining a first three-dimensional-data point cloud of a horizontal surface of an object of interest, the first three-dimensional-data point cloud having a first resolution and having a three-dimensional location associated with each point in the first three-dimensional-data point cloud; capturing one or more aerial image, at one or more oblique angle, depicting at least a vertical surface of the object of interest; analyzing the one or more aerial image with a computer system to determine three-dimensional locations of additional points on the object of interest; and updating the first three-dimensional-data point cloud with the three-dimensional locations of the additional points on the object of interest to create a second three-dimensional-data point cloud having a second resolution greater than the first resolution of the first three-dimensional-data point cloud.
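A hedged sketch of the final claim step (updating the first cloud with newly determined points) is shown below. The triangulation of vertical-surface points from oblique aerial images is assumed to have happened already; the arrays are hypothetical roof (horizontal) and wall (vertical) points, and "resolution" is approximated here simply by point count.

```python
import numpy as np

def update_point_cloud(first_cloud, additional_points):
    """Append newly determined vertical-surface points to the
    horizontal-surface cloud, producing a denser second cloud."""
    return np.vstack([first_cloud, additional_points])

# Hypothetical data: a sparse roof-top cloud (x, y, z) plus wall
# points recovered from oblique imagery.
roof = np.array([[0.0, 0.0, 10.0],
                 [5.0, 0.0, 10.0],
                 [0.0, 5.0, 10.0]])
wall = np.array([[0.0, 0.0, 5.0],
                 [0.0, 0.0, 0.0]])
second = update_point_cloud(roof, wall)
# The second cloud has more points than the first, i.e. a higher
# resolution in the sense of the abstract.
```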
Image processing device, display device, control method for image processing device, and control program
An image that provides a three-dimensional sensation close to actual viewing in the natural world is generated with simple processing. A controller of an image processing device includes: an attention area specifying unit that specifies an attention area out of a plurality of areas formed by dividing an image to be displayed by a display in the vertical direction thereof, on the basis of a fixation point detected by a fixation point detecting sensor; and an image processor that generates a post-processing image by performing emphasis processing on at least a part of the attention area of the image.
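The two units above can be sketched as follows: the image is divided into strips, the strip containing the fixation point becomes the attention area, and only that strip is emphasized. A simple contrast boost stands in for the patent's unspecified emphasis processing, and the strip count and gain are invented parameters.

```python
import numpy as np

def emphasize_attention_area(image, fixation_x, n_areas=4, gain=1.5):
    """Split the image into n_areas vertical strips, select the strip
    containing the fixation point, and boost its contrast while
    leaving the other strips untouched."""
    h, w = image.shape[:2]
    strip_w = w // n_areas
    idx = min(fixation_x // strip_w, n_areas - 1)
    out = image.astype(float).copy()
    x0, x1 = idx * strip_w, (idx + 1) * strip_w
    strip = out[:, x0:x1]
    # Contrast boost around the strip's own mean, clipped to 8-bit range.
    out[:, x0:x1] = np.clip((strip - strip.mean()) * gain + strip.mean(),
                            0, 255)
    return out, idx

# Toy 8x16 gradient image; fixation at x=9 falls in strip index 2.
img = np.tile(np.arange(16, dtype=float), (8, 1))
out, idx = emphasize_attention_area(img, fixation_x=9)
```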
IMAGING APPARATUS CAPABLE OF SWITCHING DISPLAY METHODS
An imaging apparatus comprises: an image pickup unit; a cutout image generation unit for cutting out a specified area of a pickup image taken by the image pickup unit to generate a cutout image enlarged at a specified magnification; an image display unit for displaying one or both of the pickup image taken by the image pickup unit and the cutout image generated by the cutout image generation unit; a display image control unit for controlling the method by which the image display unit displays an image; a manual focus operation unit for the user to control, through manual operation, the focus position of the image pickup unit; and a manual zoom operation unit for the user to control the zoom magnification of the image pickup unit.
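The cutout image generation unit can be sketched as below: cut a specified area out of the pickup image and enlarge it by an integer magnification. Nearest-neighbour repetition is an illustrative assumption; a real apparatus would likely use a higher-quality interpolator.

```python
import numpy as np

def cutout_enlarged(image, x, y, w, h, magnification):
    """Cut out the (x, y, w, h) area of the pickup image and enlarge
    it by an integer magnification via nearest-neighbour repetition."""
    region = image[y:y + h, x:x + w]
    return np.repeat(np.repeat(region, magnification, axis=0),
                     magnification, axis=1)

# Toy 4x6 pickup image; a 3x2 area at (1, 1) enlarged 2x gives 6x4.
img = np.arange(24).reshape(4, 6)
cut = cutout_enlarged(img, x=1, y=1, w=3, h=2, magnification=2)
```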
MAPPING OF SPHERICAL IMAGE DATA INTO RECTANGULAR FACES FOR TRANSPORT AND DECODING ACROSS NETWORKS
A system captures a first hemispherical image and a second hemispherical image, each hemispherical image including an overlap portion, the overlap portions capturing a same field of view, the two hemispherical images collectively comprising a spherical field of view (FOV) and separated along a longitudinal plane. The system maps a modified first hemispherical image to a first portion of the 2D projection of a cubic image, the modified first hemispherical image including a non-overlap portion of the first hemispherical image, and maps a modified second hemispherical image to a second portion of the 2D projection of the cubic image, the modified second hemispherical image also including a non-overlap portion. The system maps the overlap portions of the first hemispherical image and the second hemispherical image to the 2D projection of the cubic image, and encodes the 2D projection of the cubic image to generate an encoded image representative of the spherical FOV.
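A heavily simplified sketch of the packing step is shown below: each hemisphere contributes its non-overlap portion to its own region of a single 2-D atlas, and the shared overlap columns are appended as a third region. Treating the overlap as trailing image columns, and the atlas as a flat concatenation rather than a true cube-map layout, are illustrative assumptions.

```python
import numpy as np

def pack_atlas(front_hemi, back_hemi, overlap_cols):
    """Pack two hemispherical captures into one 2-D atlas: the
    non-overlap portion of each hemisphere fills its own region, and
    the front hemisphere's overlap columns are appended at the end,
    ready to be handed to a standard 2-D encoder."""
    non_front = front_hemi[:, :-overlap_cols]
    non_back = back_hemi[:, :-overlap_cols]
    overlap = front_hemi[:, -overlap_cols:]
    return np.hstack([non_front, non_back, overlap])

# Toy 6x8 hemispherical captures with a 2-column overlap.
front = np.arange(48, dtype=float).reshape(6, 8)
back = front + 100.0
atlas = pack_atlas(front, back, overlap_cols=2)
```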
METHOD FOR DETERMINING DISPARITY OF IMAGES CAPTURED BY MULTI-BASELINE STEREO CAMERA AND APPARATUS FOR THE SAME
Disclosed is a method of determining the disparity of an image generated by using a multi-baseline stereo camera system. The method includes: determining a reference disparity between a reference image and a target image among multiple images generated by the multi-baseline stereo camera system; determining an ambiguity region in each of the multiple images on the basis of a positional relationship among the multiple images or among cameras in the multi-baseline stereo camera system; and determining a disparity for each of the multiple images by determining a matching point in each of the ambiguity regions of the respective images.
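The reference-disparity step can be sketched with classic block matching along a rectified scanline, as below. Sum-of-absolute-differences matching, the window size, and the disparity search range are illustrative choices; the patent's ambiguity-region handling is not modeled here.

```python
import numpy as np

def match_disparity(ref_row, tgt_row, x, window=2, max_disp=8):
    """Find the disparity of pixel x on the reference scanline by
    sum-of-absolute-differences block matching against the target
    scanline: the candidate at x - d with the lowest cost wins."""
    patch = ref_row[x - window:x + window + 1].astype(float)
    best_d, best_cost = 0, np.inf
    for d in range(max_disp + 1):
        xs = x - d
        if xs - window < 0:
            break
        cand = tgt_row[xs - window:xs + window + 1].astype(float)
        cost = np.abs(patch - cand).sum()
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# Synthetic rectified pair: the target scanline is the reference
# shifted so that tgt[x - 3] == ref[x], i.e. true disparity is 3.
rng = np.random.default_rng(0)
ref_row = rng.random(40)
tgt_row = np.roll(ref_row, -3)
disp = match_disparity(ref_row, tgt_row, x=20)
```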