Patent classifications
G06V10/761
Image Processing Method and Apparatus, Computer Device, Storage Medium, and Program Product
Methods and apparatuses for image processing are provided. A first image belonging to a first image domain is acquired and input to an image processing model to be trained to obtain a second image belonging to a second image domain. A first correlation degree between an image feature of the first image and an image feature of the second image is calculated to obtain a target feature correlation degree. A second correlation degree between a feature value distribution of the image feature of the first image and a feature value distribution of the image feature of the second image is calculated to obtain a distribution correlation degree. Model parameters of the image processing model are adjusted in a direction in which the target feature correlation degree is increased and a direction in which the distribution correlation degree is increased, to obtain a trained image processing model.
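The two training signals above can be sketched in a few lines. This is a minimal illustration, not the patented method: cosine similarity stands in for the "target feature correlation degree" and a Pearson correlation of value histograms stands in for the "distribution correlation degree"; both readings are assumptions.

```python
import numpy as np

def feature_correlation(f1, f2):
    """Cosine similarity between two flattened feature maps
    (one plausible reading of the target feature correlation degree)."""
    f1, f2 = f1.ravel(), f2.ravel()
    return float(f1 @ f2 / (np.linalg.norm(f1) * np.linalg.norm(f2) + 1e-8))

def distribution_correlation(f1, f2, bins=32):
    """Pearson correlation between histograms of feature values
    (one plausible reading of the distribution correlation degree)."""
    lo = min(f1.min(), f2.min())
    hi = max(f1.max(), f2.max())
    h1, _ = np.histogram(f1, bins=bins, range=(lo, hi), density=True)
    h2, _ = np.histogram(f2, bins=bins, range=(lo, hi), density=True)
    return float(np.corrcoef(h1, h2)[0, 1])
```

A training step would then minimize the negative of both degrees, e.g. `loss = -(feature_correlation(a, b) + distribution_correlation(a, b))`, so that gradient descent pushes both correlations upward.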
IMAGE REGISTRATION METHOD AND ELECTRONIC DEVICE
An image registration method includes: acquiring a target image comprising a target object; inputting the target image to a preset network model, and outputting position information and rotation angle information of the target object; obtaining a reference image comprising the target object by querying a preset image database according to the position information and the rotation angle information; and performing image registration on the target image and the reference image to obtain a corresponding position of the target object of the target image in the reference image.
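The database-query step can be sketched as a nearest-pose lookup. The `(position, angle, image)` record structure and the combined distance metric below are illustrative assumptions, not details from the patent.

```python
import math

def query_reference(db, position, angle):
    """Pick the reference image whose stored pose is closest to the
    position and rotation angle predicted by the network model.
    db entries are assumed to be ((x, y), angle_deg, image) tuples."""
    def pose_dist(entry):
        (px, py), a, _img = entry
        dpos = math.hypot(px - position[0], py - position[1])
        dang = abs((a - angle + 180) % 360 - 180)  # wrap-around angle difference
        return dpos + dang
    return min(db, key=pose_dist)[2]
```

The angle term wraps modulo 360 so that, e.g., 350° and 10° are treated as 20° apart rather than 340°.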
METHOD AND APPARATUS FOR DETECTION AND TRACKING, AND STORAGE MEDIUM
In the field of video processing, a detection and tracking method and apparatus, and a storage medium, are provided. The method includes: performing feature point analysis on a video frame sequence, to obtain feature points on each video frame thereof; performing target detection on an extracted frame through a first thread based on the feature points, to obtain a target box in the extracted frame; performing target box tracking in a current frame through a second thread based on the feature points and the target box in the extracted frame, to obtain a result target box in the current frame; and outputting the result target box. As the target detection and the target tracking are divided into two threads, a tracking frame rate is unaffected by a detection algorithm, and the target box of the video frame can be outputted in real time, improving real-time performance and stability.
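The two-thread split can be sketched with a shared, lock-protected "latest box". Everything below is illustrative scaffolding (the sleep stands in for a heavy detector, the +1 shift for feature-point tracking); none of the names come from the patent.

```python
import threading
import time

class DetectTrackPipeline:
    """Minimal sketch: a slow detection thread refreshes the latest
    target box while a fast tracking loop keeps producing per-frame
    boxes, so the tracking frame rate is decoupled from detection."""

    def __init__(self):
        self._lock = threading.Lock()
        self._latest_box = None

    def detector(self, extracted_frames):
        # Slow thread: detection on extracted frames only.
        for f in extracted_frames:
            time.sleep(0.01)  # stand-in for a heavy detection algorithm
            with self._lock:
                self._latest_box = (f, f, f + 10, f + 10)

    def track(self, frame_id):
        # Fast thread: propagate the most recent detection to this frame.
        with self._lock:
            box = self._latest_box
        if box is None:
            return None
        x0, y0, x1, y1 = box
        return (x0 + 1, y0 + 1, x1 + 1, y1 + 1)  # stand-in for feature tracking
```

Because `track` only takes the lock briefly to read the latest box, its rate is unaffected by how long `detector` spends per extracted frame.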
POINT CLOUD DATA ENCODING METHOD AND DECODING METHOD, DEVICE, MEDIUM, AND PROGRAM PRODUCT
A point cloud data encoding method and decoding method, a device, a medium, and a program product are provided, and relate to the field of point cloud application technologies. One method includes obtaining point cloud data, the point cloud data comprising at least two data points; and sequentially encoding data points in the point cloud data according to encoding orders of the data points, to obtain encoded point cloud data corresponding to the point cloud data, wherein the encoding orders of the data points are determined based on distances among the data points. Another method includes obtaining encoded point cloud data, and obtaining reference information, the reference information being used for indicating a start reference data point of an encoding queue; and sequentially decoding, based on the reference information and the encoded point cloud data, data points according to the encoding orders of the data points.
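One simple way to derive an encoding order "based on distances among the data points" is a greedy nearest-neighbour chain from the start reference point. The patent does not specify this exact rule; the sketch below is an illustration only.

```python
import numpy as np

def encoding_order(points, start=0):
    """Greedy nearest-neighbour ordering: starting from the reference
    data point, repeatedly append the closest not-yet-ordered point.
    Points close together end up adjacent in the encoding queue, which
    keeps the deltas between consecutive encoded points small."""
    points = np.asarray(points, dtype=float)
    order = [start]
    remaining = set(range(len(points))) - {start}
    while remaining:
        last = points[order[-1]]
        nxt = min(remaining, key=lambda i: np.linalg.norm(points[i] - last))
        order.append(nxt)
        remaining.remove(nxt)
    return order
```

A decoder given the same start reference point and the encoded deltas can replay the queue in the same order, which is why the reference information in the second method suffices to reconstruct the sequence.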
METHOD AND APPARATUS FOR MEASURING MOTILITY OF CILIATED CELLS IN RESPIRATORY TRACT
The present disclosure relates to a method and an apparatus for measuring motility of ciliated cells in a respiratory tract. The method includes the operations of: acquiring image data including a plurality of frames of respiratory tract organoids; identifying positions of ciliated cells by performing motion-contrast imaging on the image data; when a region of interest (ROI) related to the positions of the ciliated cells is selected, measuring a ciliary beat frequency (CBF) related to motility of cilia included in the selected region of interest using cross-correlation between the plurality of frames; and displaying the cilia included in the region of interest in a preset display manner based on the range of the measured ciliary beat frequency.
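The frequency-measurement step can be sketched from an ROI intensity trace. The patent measures CBF via inter-frame cross-correlation; the sketch below substitutes a simpler FFT-peak estimate over the mean ROI intensity, so it is an assumption-laden stand-in rather than the patented procedure.

```python
import numpy as np

def ciliary_beat_frequency(roi_trace, fps):
    """Estimate the dominant beat frequency (Hz) of a per-frame ROI
    intensity trace by locating the peak of its amplitude spectrum."""
    trace = roi_trace - roi_trace.mean()      # remove DC offset
    spectrum = np.abs(np.fft.rfft(trace))
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fps)
    spectrum[0] = 0.0                         # ignore any residual DC term
    return float(freqs[np.argmax(spectrum)])
```

The frequency resolution is `fps / len(trace)`, so longer acquisitions give finer CBF estimates.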
CALCULATING A DISTANCE BETWEEN A VEHICLE AND OBJECTS
A method for calculating a distance between a vehicle camera and an object, the method may include: (a) obtaining an image that was acquired by the vehicle camera of a vehicle; the image captures the horizon, the object, and road lane boundaries; (b) determining an initial row-location horizon estimate and a row-location contact point estimate, the contact point being between the object and a road on which the vehicle is positioned; (c) determining a vehicle camera roll angle correction that, once applied, will cause the lane boundaries to be parallel to each other in the real world; (d) calculating a new row-location horizon estimate, wherein the calculating comprises updating the row-location horizon estimate based on the vehicle camera roll angle correction; and (e) calculating the distance between the vehicle camera and the object based on a difference between the new row-location horizon estimate and the row-location contact point estimate.
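Step (e) follows the standard flat-road pinhole relation: the distance is inversely proportional to the pixel gap between the contact-point row and the horizon row. The sketch below assumes a known focal length in pixels and camera mounting height; both parameter names are illustrative.

```python
def distance_from_horizon(focal_px, camera_height_m, y_horizon, y_contact):
    """Flat-road pinhole-model range estimate:
        Z = f * H / (y_contact - y_horizon)
    where f is the focal length in pixels, H the camera height in metres,
    and the y values are image rows (contact point below the horizon)."""
    gap = y_contact - y_horizon
    if gap <= 0:
        raise ValueError("contact point must lie below the horizon row")
    return focal_px * camera_height_m / gap
```

This relation is why correcting the horizon row for camera roll in step (d) matters: a few rows of horizon error translate directly into a multiplicative distance error, and the error grows with range as the gap shrinks.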
DATA OBTAINING METHOD AND APPARATUS
A first frame of time of flight (TOF) data including projection-off data and infrared data is obtained. After it is determined that the infrared data contains a data block in which the number of data points with values greater than a first threshold exceeds a second threshold, TOF data for generating a first frame of a TOF image is obtained based on a difference between the infrared data and the projection-off data. Because such a data block is an overexposed data block, and the projection-off data is TOF data acquired by a TOF camera with the TOF light source off, the difference between the infrared data and the projection-off data can correct the overexposure, improving the quality of the first frame of the TOF image.
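The check-and-correct step can be sketched as follows. This is a minimal illustration of the described logic on a single data block; the threshold names and the clamp to zero are assumptions.

```python
import numpy as np

def correct_overexposure(infrared, projection_off, value_thresh, count_thresh):
    """If the number of infrared data points above `value_thresh` exceeds
    `count_thresh`, treat the block as overexposed and subtract the
    projection-off frame (ambient signal captured with the light source
    off); otherwise return the infrared data unchanged."""
    overexposed = int((infrared > value_thresh).sum()) > count_thresh
    if overexposed:
        return np.clip(infrared.astype(int) - projection_off, 0, None)
    return infrared
```

Subtracting the projection-off frame removes the ambient/background contribution that pushed the block over the exposure threshold, while non-overexposed blocks pass through untouched.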
Workpiece image search apparatus and workpiece image search method
A workpiece image search apparatus includes: a workpiece image deformation unit that generates a third workpiece image by deforming a second workpiece image so that a difference in workpiece shape between a first workpiece image and the second workpiece image becomes smaller, wherein the first workpiece image is obtained by projecting a first workpiece shape of a first workpiece on a two-dimensional plane, and the second workpiece image is obtained by projecting a second workpiece shape of a second workpiece on a two-dimensional plane; and a similarity calculation unit that calculates a similarity between the first workpiece shape and the second workpiece shape by comparing the third workpiece image with the first workpiece image.
Method for generating web code for UI based on a generative adversarial network and a convolutional neural network
Provided is a method for generating web codes for a user interface (UI) based on a generative adversarial network (GAN) and a convolutional neural network (CNN). The method includes steps described below. A mapping relationship between display effects of a HyperText Markup Language (HTML) element and source codes of the HTML element is constructed. A location of an HTML element in an image I is recognized. Complete HTML codes of the image I are generated. The similarity between manually written HTML codes and the generated complete HTML codes, and the similarity between the image I and an image I₁ rendered from the generated complete HTML codes, are obtained. After training, an image-to-HTML-code generation model M is obtained. A to-be-processed UI image is input into the model M so as to obtain corresponding HTML codes. According to the method of the present disclosure, an image-to-HTML-code generation model M can be obtained.
Method, apparatus, and system for determining polyline homogeneity
An approach is provided for an asymmetric evaluation of polygon similarity. The approach, for instance, involves receiving a first polygon representing an object depicted in an image. The approach also involves generating a transformation of the image comprising a plurality of image elements whose values are based on a respective distance of each image element from a nearest image element located on a first boundary of the first polygon. The approach further involves determining a subset of the plurality of image elements of the transformation that intersect with a second boundary of a second polygon. The approach further involves calculating a polygon similarity of the second polygon with respect to the first polygon based on the values of the subset of image elements normalized to a length of the second boundary of the second polygon.
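The asymmetry of the measure can be sketched without rasterizing: average the distance from each point on the second boundary to the nearest point on the first, normalized by the second boundary's size. This brute-force point-set version is a stand-in for the image-space distance transform described; the function name and normalization by point count are assumptions.

```python
import numpy as np

def asymmetric_boundary_distance(boundary1, boundary2):
    """Mean distance from each sample point on boundary2 to the nearest
    sample point on boundary1, normalized by the number of boundary2
    points (a stand-in for its length). Note the measure is asymmetric:
    swapping the arguments generally gives a different value."""
    b1 = np.asarray(boundary1, dtype=float)
    b2 = np.asarray(boundary2, dtype=float)
    # pairwise distance matrix of shape (len(b2), len(b1))
    d = np.linalg.norm(b2[:, None, :] - b1[None, :, :], axis=2)
    return float(d.min(axis=1).sum() / len(b2))
```

A value of 0 means every point of the second boundary lies on the first; larger values mean the second polygon strays further from the first. The image-space version in the approach precomputes the nearest-boundary distances once as a transformation, so many candidate polygons can be scored by a cheap lookup along their boundaries.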