Patent classifications
G06T3/10
Image matching apparatus
An image matching apparatus matching a first image against a second image includes an acquiring unit, a generating unit, and a determining unit. The acquiring unit acquires a frequency feature of the first image and a frequency feature of the second image. The generating unit synthesizes the frequency feature of the first image and the frequency feature of the second image, and generates a quantized synthesized frequency feature in which a value of an element is represented by a binary value or a ternary value. The determining unit calculates a score indicating a degree to which the quantized synthesized frequency feature is a square wave having a single period, and matches the first image against the second image based on the score.
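The matching mechanism in this abstract can be sketched in code. The synthesis and scoring details below are assumptions (the abstract does not fix them): the two frequency features are combined as a cross-power spectrum, quantized to binary values by sign, and scored by how strongly the result's spectrum concentrates in a single peak, which is high exactly when the quantized feature resembles a single-period square wave.

```python
import numpy as np

def match_score(img1, img2):
    """Hedged sketch: synthesize the two frequency features via a
    cross-power spectrum, quantize each element to a binary value,
    and score how close the result is to a single-period square wave
    (a strong single peak in its spectrum)."""
    F1 = np.fft.fft2(img1)
    F2 = np.fft.fft2(img2)
    synth = F1 * np.conj(F2)           # synthesized frequency feature
    quantized = np.sign(synth.real)    # element values in {-1, +1}
    quantized[quantized == 0] = 1
    # If the images match up to a translation, `quantized` becomes a
    # 2-D square wave; its spectrum then concentrates in one peak.
    spec = np.abs(np.fft.fft2(quantized))
    spec[0, 0] = 0.0                   # ignore the DC component
    return spec.max() / (spec.sum() + 1e-12)
```

A shifted copy of an image yields a far higher score than an unrelated image, which is what the determining unit uses to decide a match.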
Systems and methods for automatically annotating images
In some embodiments, apparatuses and methods are provided herein useful for automatically annotating images. In some embodiments, a system for automatically annotating images comprises a database, wherein the database is configured to store images and annotations for the images, and a control circuit, wherein the control circuit is communicatively coupled to the database, and wherein the control circuit is configured to retrieve, from the database, an image, generate, based on the image, a collection of augmented images, generate segmentation maps for each image in the collection of augmented images, wherein each of the segmentation maps includes segments, select ones of the segments above a threshold, merge the ones of the segments above the threshold to create a segmented image, and generate, for each segment of the segmented image, classifications, wherein an annotation for the image includes the segmented image and the classifications.
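The control-circuit pipeline in this abstract can be sketched as follows. The callables `augmenters`, `segmenter`, and `classifier` are hypothetical stand-ins for the operations the abstract describes; the merge-by-union step is likewise an assumption.

```python
import numpy as np

def annotate(image, augmenters, segmenter, classifier, threshold):
    """Hedged sketch of the annotation pipeline: augment, segment,
    threshold-select, merge, then classify each kept segment."""
    augmented = [aug(image) for aug in augmenters]   # augmented images
    maps = [segmenter(img) for img in augmented]     # one map per image
    # Keep only segments whose score clears the threshold.
    kept = [(mask, score) for m in maps for mask, score in m
            if score > threshold]
    # Merge the kept segments into a single segmented image (mask union).
    segmented = np.zeros(image.shape[:2], dtype=bool)
    for mask, _ in kept:
        segmented |= mask
    classifications = [classifier(image, mask) for mask, _ in kept]
    return {"segments": segmented, "classifications": classifications}
```

The annotation stored back to the database would then bundle the segmented image with its per-segment classifications.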
Panoramic stitching method, apparatus, and storage medium
The present disclosure discloses a panoramic stitching method, an apparatus, and a storage medium. A transformation matrix obtaining method includes: obtaining motion data detected by sensors, wherein the sensors are disposed on a probe used to collect images, and the motion data is used to represent a moving trend of the probe during image collection; inputting the motion data into a pre-trained neural network, to calculate matrix parameters by using the neural network; calculating a transformation matrix by using the matrix parameters, wherein the transformation matrix is used to stitch images collected by the probe, to obtain a panoramic image. In the present disclosure, the transformation matrix can be calculated and the images can be stitched without using characteristics of the images, so factors such as brightness and the characteristics of the images have no impact on the result, thereby improving the accuracy of the transformation matrix calculation and the quality of the stitched image.
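The sensor-to-matrix step described above can be sketched as follows. The six-parameter affine parameterization and the `network` callable are assumptions for illustration; the abstract only says the pre-trained network maps motion data to matrix parameters.

```python
import numpy as np

def matrix_from_params(p):
    """Build a 3x3 homogeneous transform from six affine parameters
    (a hypothetical parameterization; the patent does not fix one)."""
    a, b, c, d, tx, ty = p
    return np.array([[a, b, tx],
                     [c, d, ty],
                     [0.0, 0.0, 1.0]])

def transform_from_motion(motion, network):
    """`network` stands in for the pre-trained model: it maps the
    probe's motion data to matrix parameters, so the transform is
    computed without using image characteristics such as brightness."""
    params = network(np.asarray(motion, dtype=float))
    return matrix_from_params(params)
```

Stitching then warps each collected image by its transformation matrix before compositing the panorama.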
METHODS AND SYSTEMS FOR PROCESSING IMAGES TO PERFORM AUTOMATIC ALIGNMENT OF ELECTRONIC IMAGES
Systems and methods are disclosed for aligning a two-dimensional (2D) design image to a 2D projection image of a three-dimensional (3D) design model. One method comprises receiving a 2D design document, the 2D design document comprising a 2D design image, and receiving a 3D design file comprising a 3D design model, the 3D design model comprising one or more design elements. The method further comprises generating a 2D projection image based on the 3D design model, the 2D projection image comprising a representation of at least a portion of the one or more design elements, generating a projection barcode based on the 2D projection image, and generating a drawing barcode based on the 2D design image. The method further comprises aligning the 2D projection image and the 2D design image by comparing the projection barcode and the drawing barcode.
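One plausible reading of the "barcode" comparison above is a 1-D projection profile aligned by correlation; the sketch below makes that assumption explicit (the abstract does not define the barcode's construction).

```python
import numpy as np

def barcode(image, axis=0):
    """Hypothetical 'barcode': a 1-D projection profile of ink density
    along one axis of a drawing or projection image."""
    return image.sum(axis=axis).astype(float)

def best_offset(code_a, code_b):
    """Slide one barcode over the other and return the circular shift
    with the highest correlation -- the alignment offset on that axis."""
    n = len(code_a)
    a = code_a - code_a.mean()
    b = code_b - code_b.mean()
    corr = [np.dot(np.roll(a, s), b) for s in range(n)]
    s = int(np.argmax(corr))
    return s if s <= n // 2 else s - n   # report a signed shift
```

Comparing the projection barcode against the drawing barcode on each axis then yields the offsets needed to align the 2D projection image with the 2D design image.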
IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
The image processing device 1X includes an acquisition means 31X, a selection means 32X, and a determination means 33X. The acquisition means 31X is configured to acquire data obtained by applying Fourier transform to an endoscopic image of an examination target photographed by a photographing unit provided in an endoscope. The selection means 32X is configured to select partial data that is a part of the data. The determination means 33X is configured to make a determination regarding an attention point to be noticed in the examination target based on the partial data. It can be used to assist the user's decision making.
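The acquire-select-determine flow might look like the sketch below. The choice of partial data (a mid-frequency band of the spectrum) and the scoring rule (energy fraction in that band) are assumptions; the abstract only states that the determination is made from partial data of the Fourier-transformed image.

```python
import numpy as np

def attention_score(image, band=(0.25, 0.5)):
    """Hedged sketch: Fourier-transform the image, select a partial
    band of the spectrum, and derive a score for an attention point
    as the fraction of spectral energy in that band."""
    F = np.fft.fftshift(np.fft.fft2(image))
    h, w = F.shape
    yy, xx = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    r = np.hypot(yy / (h / 2), xx / (w / 2))       # normalized radius
    partial = np.abs(F[(r >= band[0]) & (r < band[1])])  # partial data
    total = np.abs(F).sum()
    return partial.sum() / (total + 1e-12)
```

A textured patch (candidate attention point) scores higher than a flat one, which is one way such a determination could assist the user's decision making.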
Method for converting landscape video to portrait mobile layout using a selection interface
Described herein are systems and methods of converting media dimensions. A device may identify a set of frames from a video in a first orientation as belonging to a scene. The device may receive a selected coordinate on a frame of the set of frames for the scene. The device may identify a first region within the frame including a first feature corresponding to the selected coordinate and a second region within the frame including a second feature. The device may generate a first score for the first feature and a second score for the second feature. The first score may be greater than the second score based on the first feature corresponding to the selected coordinate. The device may crop the frame to include the first region and the second region within a predetermined display area comprising a subset of regions of the frame in a second orientation.
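The scoring-and-cropping step can be sketched as below. All names are illustrative, and the scoring rule (the selected feature outranks the rest) plus the greedy fit are assumptions consistent with, but not specified by, the abstract.

```python
def crop_window(regions, selected, frame_width, out_width):
    """Hedged sketch of the cropping step: `regions` maps feature ids
    to (x_start, x_end) spans, scores favor the feature at the user's
    selected coordinate, and the crop window keeps the top-ranked
    regions that fit inside the portrait display area."""
    def score(rid):
        x0, x1 = regions[rid]
        return 2.0 if x0 <= selected <= x1 else 1.0   # selection wins
    ranked = sorted(regions, key=score, reverse=True)
    # Greedily keep regions while they still fit in the output width.
    kept, lo, hi = [], None, None
    for rid in ranked:
        x0, x1 = regions[rid]
        new_lo = x0 if lo is None else min(lo, x0)
        new_hi = x1 if hi is None else max(hi, x1)
        if new_hi - new_lo <= out_width:
            kept, lo, hi = kept + [rid], new_lo, new_hi
    # Center the window on the kept regions, clamped to the frame.
    center = (lo + hi) / 2
    start = min(max(center - out_width / 2, 0), frame_width - out_width)
    return start, start + out_width, kept
```

Applying the same window across the scene's frames produces the portrait-orientation crop centered on the user-selected feature.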