Patent classifications
G06V10/74
SYSTEMS AND METHODS FOR DETERMINING ROAD TRAVERSABILITY USING REAL TIME DATA AND A TRAINED MODEL
Embodiments of the disclosed systems and methods provide for determination of roadway traversability by an autonomous vehicle using real-time data and a trained traversability determination machine learning model. Consistent with aspects of the disclosed embodiments, the model may be trained using annotated bird's-eye view perspective data obtained using vehicle vision sensor systems (e.g., LiDAR and/or camera systems). During operation of a vehicle, vision sensor data may be used to construct bird's-eye view perspective data, which may be provided to the trained model. The model may label and/or otherwise annotate the vision sensor data based on relationships identified in the model training process to identify associated road boundary and/or lane information. Local vehicle control systems may compute control actions and issue commands to associated vehicle control systems to ensure the vehicle travels within a desired path.
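The bird's-eye view construction step can be illustrated with a minimal sketch: projecting LiDAR returns onto a 2-D occupancy grid. The grid extents, cell size, and function name are assumptions for illustration; the abstract does not specify the BEV encoding or the model itself.

```python
import numpy as np

def lidar_to_bev(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), cell=0.5):
    """Project 3-D LiDAR points (N, 3) onto a bird's-eye-view occupancy grid."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((nx, ny), dtype=np.uint8)
    # discretize forward (x) and lateral (y) coordinates into cell indices
    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    keep = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    grid[ix[keep], iy[keep]] = 1  # mark cells containing at least one return
    return grid
```

In the described pipeline, a grid like this (or a richer multi-channel encoding) would be the input the trained model labels with road boundary and lane information.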
METHOD AND APPARATUS FOR MEASURING MOTILITY OF CILIATED CELLS IN RESPIRATORY TRACT
The present disclosure relates to a method and an apparatus for measuring the motility of ciliated cells in a respiratory tract. The method includes the operations of: acquiring image data including a plurality of frames of respiratory tract organoids; identifying the positions of ciliated cells by performing motion-contrast imaging on the image data; when a region of interest (ROI) related to the positions of the ciliated cells is selected, measuring a ciliary beat frequency (CBF) related to the motility of cilia in the selected region of interest using cross-correlation between the plurality of frames; and displaying the cilia in the region of interest using a preset display method based on the range of the measured ciliary beat frequency.
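As an illustration of the frequency measurement, here is a minimal sketch that estimates CBF from the autocorrelation of the mean ROI intensity over time, one simple realization of the cross-correlation idea in the abstract. The frame rate, array shapes, and peak-picking rule are assumptions.

```python
import numpy as np

def ciliary_beat_frequency(roi_frames, fps):
    """Estimate CBF (Hz) from an ROI frame stack of shape (T, H, W).

    The mean ROI intensity oscillates at the beat frequency; the lag of
    the first autocorrelation peak gives the beat period in frames.
    """
    sig = roi_frames.reshape(roi_frames.shape[0], -1).mean(axis=1)
    sig = sig - sig.mean()
    # full autocorrelation, keep non-negative lags 0..T-1
    ac = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    # first local maximum after the zero-lag peak marks one beat period
    for lag in range(1, len(ac) - 1):
        if ac[lag] >= ac[lag - 1] and ac[lag] >= ac[lag + 1]:
            return fps / lag
    return 0.0
```

A real pipeline would apply this per pixel or per sub-window and color-code the ROI by the resulting frequency range, as the abstract's display step suggests.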
CALCULATING A DISTANCE BETWEEN A VEHICLE AND OBJECTS
A method for calculating a distance between a vehicle camera and an object, the method may include: (a) obtaining an image that was acquired by the vehicle camera of a vehicle; the image captures the horizon, the object, and road lane boundaries; (b) determining an initial row-location horizon estimate and a row-location contact point estimate, where the contact point is between the object and a road on which the vehicle is positioned; (c) determining a vehicle camera roll angle correction that, once applied, will cause the lane boundaries to be parallel to each other in the real world; (d) calculating a new row-location horizon estimate, wherein the calculating comprises updating the row-location horizon estimate based on the vehicle camera roll angle correction; and (e) calculating the distance between the vehicle camera and the object based on a difference between the new row-location horizon estimate and the row-location contact point estimate.
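Under a flat-ground pinhole model, the distance in step (e) follows from the row difference: the contact point of an object at distance d projects f·h/d pixels below the horizon, where f is the focal length in pixels and h the camera height. A hedged sketch (the exact formula used by the patent is not stated):

```python
def distance_from_rows(horizon_row, contact_row, focal_px, camera_height_m):
    """Flat-ground pinhole model: distance = f * h / (y_contact - y_horizon)."""
    dy = contact_row - horizon_row  # pixels the contact point lies below the horizon
    if dy <= 0:
        raise ValueError("contact point must lie below the horizon")
    return focal_px * camera_height_m / dy
```

Steps (c)-(d) matter because `horizon_row` here would be the roll-corrected estimate; a small horizon error shifts `dy` and therefore the computed distance.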
DATA OBTAINING METHOD AND APPARATUS
A first frame of time-of-flight (TOF) data including projection-off data and infrared data is obtained. After determining that the infrared data contains a data block in which the number of data points with values greater than a first threshold exceeds a second threshold, TOF data for generating a first frame of a TOF image is obtained based on a difference between the infrared data and the projection-off data. Because such a data block is an overexposed data block, and the projection-off data is TOF data acquired by a TOF camera with the TOF light source off, the difference between the infrared data and the projection-off data can correct the overexposure, improving the quality of the first frame of the TOF image.
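The block test and the correction can be sketched as follows. The tile size and both thresholds are placeholder values, and clipping/normalization details are omitted; the abstract fixes none of them.

```python
import numpy as np

def correct_overexposure(infrared, projection_off, block=8, t1=200, t2=10):
    """Subtract the projection-off frame if the IR frame is overexposed.

    A tile counts as overexposed when more than t2 of its pixels exceed t1.
    The projection-off frame captures ambient light with the TOF light
    source off, so the difference removes the ambient contribution.
    """
    h, w = infrared.shape
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            tile = infrared[r:r + block, c:c + block]
            if np.count_nonzero(tile > t1) > t2:
                return infrared.astype(np.int32) - projection_off.astype(np.int32)
    return infrared.astype(np.int32)  # no overexposed block found
```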
Workpiece image search apparatus and workpiece image search method
A workpiece image search apparatus includes: a workpiece image deformation unit that generates a third workpiece image by deforming a second workpiece image so that a difference in workpiece shape between a first workpiece image and the second workpiece image becomes smaller, wherein the first workpiece image is obtained by projecting a first workpiece shape of a first workpiece on a two-dimensional plane, and the second workpiece image is obtained by projecting a second workpiece shape of a second workpiece on a two-dimensional plane; and a similarity calculation unit that calculates a similarity between the first workpiece shape and the second workpiece shape by comparing the third workpiece image with the first workpiece image.
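A toy version of the deform-then-compare idea, using integer translation as a stand-in for the patent's shape-reducing deformation and normalized cross-correlation as the similarity measure (both choices are assumptions; the abstract fixes neither):

```python
import numpy as np

def best_shift_similarity(img1, img2, max_shift=3):
    """Shift img2 to best match img1, then return their similarity.

    The shifted image plays the role of the 'third workpiece image';
    similarity is normalized cross-correlation in [-1, 1].
    """
    best = -1.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(img2, dy, axis=0), dx, axis=1)
            a = img1 - img1.mean()
            b = shifted - shifted.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            if denom > 0:
                best = max(best, float((a * b).sum() / denom))
    return best
```

A production version would use a richer deformation model (affine or non-rigid registration) on the projected workpiece silhouettes.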
Method for generating web code for UI based on a generative adversarial network and a convolutional neural network
Provided is a method for generating web code for a user interface (UI) based on a generative adversarial network (GAN) and a convolutional neural network (CNN). The method includes the steps described below. A mapping relationship between the display effects of a HyperText Markup Language (HTML) element and the source code of that element is constructed. The location of an HTML element in an image I is recognized. Complete HTML code for the image I is generated. The similarity between manually written HTML code and the generated HTML code, and the similarity between the image I and an image I.sub.1 rendered from the generated HTML code, are computed. After training, an image-to-HTML-code generation model M is obtained. A to-be-processed UI image is input into the model M to obtain the corresponding HTML code.
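The two similarities used during training can be sketched with simple stand-ins: sequence similarity for the code pair and an MSE-based score for the image pair. These particular metrics and the equal weighting are assumptions; the abstract does not name them.

```python
import difflib
import numpy as np

def code_similarity(html_a, html_b):
    """Sequence similarity in [0, 1] between two HTML strings."""
    return difflib.SequenceMatcher(None, html_a, html_b).ratio()

def image_similarity(img_a, img_b):
    """1 / (1 + MSE) similarity in (0, 1] between two rendered screenshots."""
    mse = float(((img_a.astype(float) - img_b.astype(float)) ** 2).mean())
    return 1.0 / (1.0 + mse)

def training_score(html_ref, html_gen, img_ref, img_gen, w=0.5):
    """Weighted combination of the two similarities from the abstract."""
    return w * code_similarity(html_ref, html_gen) + (1 - w) * image_similarity(img_ref, img_gen)
```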
IMAGE PROCESSING SYSTEM AND METHOD
There is provided an image processing system and method for identifying a user. The system comprises a processor configured to identify a first user in an image, determine a plurality of characteristic vectors associated with the first user, compare the characteristic vectors associated with the first user with a plurality of predetermined characteristic vectors associated with a plurality of users including the first user, and identify the first user based on the comparison.
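A minimal sketch of the comparison step, assuming cosine similarity over the characteristic vectors (the abstract does not specify the metric or the vector extractor):

```python
import numpy as np

def identify_user(query_vectors, gallery):
    """Return the user id whose stored vectors best match the query.

    query_vectors: (K, D) characteristic vectors extracted from the image.
    gallery: dict mapping user id -> (K, D) predetermined vectors.
    """
    def score(a, b):
        a = a / np.linalg.norm(a, axis=1, keepdims=True)
        b = b / np.linalg.norm(b, axis=1, keepdims=True)
        return float((a * b).sum(axis=1).mean())  # mean cosine similarity
    return max(gallery, key=lambda uid: score(query_vectors, gallery[uid]))
```

A deployed system would additionally threshold the best score so that unknown faces are rejected rather than mapped to the nearest enrolled user.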
INFORMATION PROCESSING METHOD AND ELECTRONIC DEVICE
An information processing method and an electronic device are provided. The method is performed by a first wearable device that includes a first image collector, and includes: obtaining a second face image via the first image collector and receiving a first face image from a second wearable device when the first wearable device and the second wearable device are in a preset positional relationship; and processing first target information from the second wearable device when the first face image matches the second face image.
VIDEO PROCESSING METHOD, VIDEO SEARCHING METHOD, TERMINAL DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
A video processing method, comprising: editing a video to be edited according to a scenario to obtain a target video (S100); acquiring feature parameters of the target video (S200); generating a keyword for the target video according to the feature parameters (S300); and storing the keyword in association with the target video (S400).
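Step S400 (associative storage) and the search side named in the title can be sketched as a keyword index; the class and method names here are placeholders, and the keyword generation (S300) itself is not shown.

```python
class VideoIndex:
    """Toy associative store: keyword -> list of video ids (step S400)."""

    def __init__(self):
        self._by_keyword = {}

    def store(self, keyword, video_id):
        """Associatively store a keyword with a target video."""
        self._by_keyword.setdefault(keyword, []).append(video_id)

    def search(self, keyword):
        """Return all videos whose generated keyword matches the query."""
        return self._by_keyword.get(keyword, [])
```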
IMAGE DISPLAY METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM
An image display method includes: obtaining a homography matrix between a first target plane and a second target plane according to the first target plane and the second target plane; obtaining a target displacement according to the homography matrix and an attitude; obtaining a target pose according to the target displacement, the target pose including a position and an attitude of a camera coordinate system of a current frame image in a world coordinate system; and displaying an AR image according to the target pose.
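One way to read "obtaining a target displacement according to the homography matrix and an attitude" is the standard plane-induced homography model H = K (R + t nᵀ/d) K⁻¹: with the attitude R known (e.g., from an IMU), the translation t follows. A sketch under the assumptions that H is given at its true scale and the plane normal n is known; the patent's actual decomposition is not stated.

```python
import numpy as np

def translation_from_homography(H, K, R, n, d=1.0):
    """Recover t from a plane-induced homography H = K (R + t n^T / d) K^{-1}.

    R is the known attitude, n a unit normal of the plane n^T X = d in the
    first camera frame, K the camera intrinsics.
    """
    Hn = np.linalg.inv(K) @ H @ K   # normalized homography: R + t n^T / d
    return d * (Hn - R) @ n          # (t n^T / d) n = t / d for unit n
```

In practice H is estimated only up to scale, so a real implementation would first fix the scale (e.g., via the singular values of the normalized homography) before extracting t.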