G06V10/457

SYSTEM AND METHODS FOR GENERATING A 3D MODEL OF A PATHOLOGY SAMPLE
20220383584 · 2022-12-01 ·

A system and a method for generating a combined 3D model (95) of a sample, the system comprising a sample imaging system (1) configured to generate a first 3D model (25) of the sample, a slice imaging system (2) configured to generate a second 3D model (615) of the sample, and a combiner engine (90) configured to generate the combined 3D model (95) based on the first 3D model and the second 3D model of the sample.

DEVICE AND METHOD FOR DETECTING GUIDEWIRE BASED ON CURVE SIMILARITY
20220378512 · 2022-12-01 ·

A method for determining a similarity between curves performed by an electronic device includes extracting a candidate curve corresponding to at least a part of a blood vessel and a source curve corresponding to a guidewire from a blood vessel image, sampling the same number of points from each of the candidate curve and the source curve, calculating a similarity level between the candidate curve and the source curve based on the points sampled from the candidate curve and the points sampled from the source curve, and determining whether the candidate curve and the source curve are similar, based on the calculated similarity level.
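A minimal sketch of the sampling-and-comparison step described above, assuming curves are polylines of (x, y) points; the arc-length resampling, the mean-distance metric, and the threshold are illustrative assumptions, not the patent's own definitions:

```python
import math

def resample(curve, n):
    # Resample a polyline (list of (x, y) points) to n points
    # spaced evenly by arc length.
    seg = [math.dist(curve[i], curve[i + 1]) for i in range(len(curve) - 1)]
    total = sum(seg)
    if total == 0:
        return [curve[0]] * n
    targets = [total * i / (n - 1) for i in range(n)]
    out, acc, j = [], 0.0, 0
    for t in targets:
        while j < len(seg) - 1 and acc + seg[j] < t:
            acc += seg[j]
            j += 1
        r = 0.0 if seg[j] == 0 else (t - acc) / seg[j]
        (x0, y0), (x1, y1) = curve[j], curve[j + 1]
        out.append((x0 + r * (x1 - x0), y0 + r * (y1 - y0)))
    return out

def similarity_level(candidate, source, n=32):
    # Sample the same number of points from each curve, then take
    # the mean point-to-point distance; lower means more similar.
    a, b = resample(candidate, n), resample(source, n)
    return sum(math.dist(p, q) for p, q in zip(a, b)) / n

def is_similar(candidate, source, threshold=5.0, n=32):
    # Decide similarity from the calculated similarity level.
    return similarity_level(candidate, source, n) <= threshold
```

In practice the candidate (vessel) and source (guidewire) curves would come from a segmentation of the blood vessel image; here they are passed in directly.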

OBJECT ASSOCIATION METHOD AND APPARATUS AND ELECTRONIC DEVICE

The present disclosure provides an object association method and apparatus, and an electronic device, which relate to the technical field of maps. A specific implementation solution is: when performing object association, extracting first description information of each of a plurality of first objects from real data, and extracting second description information of each of a plurality of second objects from high-definition map data; determining, according to the first description information and the second description information, association probabilities between the first objects and the second objects; and then determining, according to the association probabilities between the first objects and the second objects, an association result of the first objects and the second objects, thus realizing automatic association between objects in the real world and objects in a high-definition map, and improving the efficiency of object association.
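The final matching step can be sketched as follows, assuming the association probabilities have already been computed into a matrix; the greedy one-to-one strategy and the probability threshold are illustrative assumptions (the disclosure does not specify the matching rule):

```python
def associate(probs, min_prob=0.5):
    # probs[i][j]: association probability between first object i
    # (from real data) and second object j (from high-definition
    # map data). Greedily pick the highest remaining probability
    # above the threshold, enforcing one-to-one association.
    pairs, used_i, used_j = [], set(), set()
    flat = sorted(
        ((p, i, j) for i, row in enumerate(probs) for j, p in enumerate(row)),
        reverse=True,
    )
    for p, i, j in flat:
        if p < min_prob:
            break
        if i not in used_i and j not in used_j:
            pairs.append((i, j, p))
            used_i.add(i)
            used_j.add(j)
    return pairs
```

An optimal assignment (e.g. the Hungarian algorithm) could replace the greedy pass without changing the surrounding interface.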

METHOD AND APPARATUS FOR ANALYZING AN IMAGE OF A MICROLITHOGRAPHIC MICROSTRUCTURED COMPONENT
20220383485 · 2022-12-01 ·

The invention relates to a method and to an apparatus for analyzing an image of a microlithographic microstructured component, wherein each of a multiplicity of pixels in the image is assigned an intensity value. A method according to the invention comprises the following steps: isolating a plurality of edge fragments in the image; classifying each of the isolated edge fragments either as a relevant edge fragment or as an irrelevant edge fragment; and ascertaining contiguous segments in the image based on the relevant edge fragments.
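The classify-then-group steps above can be sketched as follows, assuming an edge fragment is a list of pixel coordinates; the length-based relevance criterion and the endpoint-gap merging rule are illustrative assumptions, not the patent's classification method:

```python
def classify_fragments(fragments, min_length=5):
    # A fragment is a list of (row, col) pixel coordinates.
    # Fragments shorter than min_length pixels are classified as
    # irrelevant (e.g. noise); the threshold is an assumption.
    relevant = [f for f in fragments if len(f) >= min_length]
    irrelevant = [f for f in fragments if len(f) < min_length]
    return relevant, irrelevant

def contiguous_segments(relevant, gap=1):
    # Ascertain contiguous segments by merging relevant fragments
    # whose endpoints lie within `gap` pixels (Chebyshev distance).
    segments = []
    for frag in relevant:
        for seg in segments:
            if any(
                max(abs(a[0] - b[0]), abs(a[1] - b[1])) <= gap
                for a in (frag[0], frag[-1])
                for b in (seg[0], seg[-1])
            ):
                seg.extend(frag)
                break
        else:
            segments.append(list(frag))
    return segments
```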

MEASUREMENT METHOD AND MEASUREMENT DEVICE
20220375066 · 2022-11-24 ·

A measurement method, which is performed by a measurement device that measures a displacement of a target object, includes: receiving designation of a designation point on a first image that includes the target object; setting a plurality of set points, based on the designation point; identifying a direction of a line that connects two of the plurality of set points; generating a second image by rotating the first image, the second image being an image in which the identified direction of the line is horizontal or vertical; setting, in the second image, a measurement region that partially includes the line; and measuring the displacement of the target object in the measurement region, the displacement being a displacement in a direction orthogonal to the line.
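The rotation step can be illustrated with plain geometry, assuming two of the set points are given as (x, y) coordinates; the choice of snapping to the nearest multiple of 90 degrees is an assumption about how "horizontal or vertical" is selected:

```python
import math

def rotation_to_align(p1, p2):
    # Angle (in degrees) by which to rotate the first image so the
    # line through the two set points becomes horizontal or
    # vertical, whichever requires the smaller rotation.
    angle = math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))
    target = 90 * round(angle / 90)  # nearest multiple of 90 deg
    return target - angle

def rotate_point(p, center, deg):
    # Apply the same rotation to any image point, e.g. to carry the
    # designation point into the second (rotated) image.
    t = math.radians(deg)
    dx, dy = p[0] - center[0], p[1] - center[1]
    return (
        center[0] + dx * math.cos(t) - dy * math.sin(t),
        center[1] + dx * math.sin(t) + dy * math.cos(t),
    )
```

The measurement region would then be a rectangle in the second image that partially includes the now axis-aligned line, with displacement measured orthogonally to it.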

Parking space detection method and apparatus, electronic device, vehicle and storage medium

Embodiments of the present disclosure disclose a parking space detection method and apparatus, an electronic device, a vehicle, and a storage medium, which relate to the field of automatic driving technologies and, in particular, to the field of autonomous parking, including: collecting ultrasonic information during a moving process of a vehicle, generating a target grid map, performing feature recognition on the target grid map to obtain a line segment, and generating a parking space according to the line segment and the target grid map. By implementing the present disclosure, the prior-art limitation that parking space detection requires the travelling direction of a vehicle to be parallel with a side of an obstacle and requires the vehicle to be close to the obstacle is avoided, thereby achieving a relatively wide range of application and improving detection accuracy.
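A minimal sketch of the final step, assuming the ultrasonic echoes have already been accumulated into an occupancy grid and a line segment (an obstacle edge) has been recognized along one grid row; the free-gap rule and the minimum-gap size are illustrative assumptions:

```python
def find_parking_spaces(grid, row, min_gap=4):
    # grid: 2D occupancy grid (1 = obstacle cell from ultrasonic
    # echoes, 0 = free); `row` indexes the row along the recognized
    # line segment. A run of free cells at least min_gap cells long
    # is reported as a candidate parking space (start, end) span.
    spaces, start = [], None
    cells = list(grid[row]) + [1]  # sentinel flushes the last run
    for i, c in enumerate(cells):
        if c == 0 and start is None:
            start = i
        elif c == 1 and start is not None:
            if i - start >= min_gap:
                spaces.append((start, i - 1))
            start = None
    return spaces
```

Converting a span of cells back to metric coordinates would use the grid resolution and the vehicle's pose at collection time.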

Identifying location of shreds on an imaged form

Disclosed herein is a machine learning application for automatically reading filled-in forms. There are multiple steps involved in using a computer to accurately read a handwritten form. First, the system identifies the form. Second, the system identifies what parts of the form are important. Third, the important parts are extracted as image data (known as shreds). Finally, fourth, the system interprets the shreds. This application is focused on steps two and three of that overall process. The disclosed techniques relate to training a machine learning system on a given series of forms such that when provided future filled-in forms within that series, the system is able to extract the portions of the filled-in form that are important/relevant.
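Step three above reduces to cropping the learned regions out of the form image once step two has located them. A minimal sketch, where the image is a 2D array of pixel values and the bounding-box format is an illustrative assumption, not the patent's own representation:

```python
def extract_shreds(image, regions):
    # image: 2D list of pixel values for the identified form.
    # regions: bounding boxes (top, left, bottom, right) predicted
    # by the trained model as the important parts of the form.
    # Each crop is returned as a shred for later interpretation.
    shreds = []
    for top, left, bottom, right in regions:
        shreds.append([row[left:right] for row in image[top:bottom]])
    return shreds
```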

Method, apparatus, system, and storage medium for calibrating exterior parameter of on-board camera

The present application discloses a method, an apparatus, a system, and a storage medium for calibrating an exterior parameter of an on-board camera, relating to the field of autonomous driving technologies. A specific implementation scheme of the method in the application is: preprocessing two frames of images, a former frame and a latter frame, collected by the on-board camera; performing feature point matching on the two preprocessed frames of images to obtain matched feature points; determining a moving posture of the on-board camera according to the matched feature points; and determining a conversion relationship between a vehicle coordinate system and an on-board camera coordinate system according to the moving posture, thereby obtaining the exterior parameter of the on-board camera relative to a vehicle body.
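The "moving posture from matched feature points" step can be illustrated in the plane with a 2D Procrustes (Kabsch-style) rotation estimate; a full calibration would recover the 3D pose (e.g. via the essential matrix) and chain it with vehicle motion, so this sketch shows only the idea and all names are illustrative:

```python
import math

def estimate_yaw(points_prev, points_curr):
    # Estimate the in-plane rotation between the former and latter
    # frames from matched feature points: center both point sets,
    # then solve for the rotation minimizing squared error.
    n = len(points_prev)
    cx0 = sum(p[0] for p in points_prev) / n
    cy0 = sum(p[1] for p in points_prev) / n
    cx1 = sum(p[0] for p in points_curr) / n
    cy1 = sum(p[1] for p in points_curr) / n
    num = den = 0.0
    for (x0, y0), (x1, y1) in zip(points_prev, points_curr):
        a, b = x0 - cx0, y0 - cy0
        c, d = x1 - cx1, y1 - cy1
        num += a * d - b * c  # accumulated cross products
        den += a * c + b * d  # accumulated dot products
    return math.atan2(num, den)  # rotation angle in radians
```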

ADAPTIVE AUTO-SEGMENTATION IN COMPUTED TOMOGRAPHY

A computer-implemented method of segmenting a reconstructed volume of a region of patient anatomy includes: determining an anatomical region associated with the reconstructed volume; detecting one or more metal objects disposed in an initial 3D metal object mask associated with the reconstructed volume; for each of the one or more metal objects disposed in the initial 3D metal object mask, determining a volume associated with the metal object; determining a value for at least one segmentation parameter based on the anatomical region and on the volume associated with the one or more metal objects; and generating a final 3D metal object mask associated with the reconstructed volume using the value for the segmentation parameter.
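The adaptive parameter-selection step might look like the following lookup; the region names, threshold values, and volume cutoffs are purely illustrative assumptions, not clinical or patent-specified values:

```python
def segmentation_threshold(region, metal_volume_mm3):
    # Hypothetical rule: choose an intensity threshold for the
    # final 3D metal object mask from the anatomical region and
    # the volume of metal found in the initial mask.
    base = {"head": 3000, "pelvis": 2500, "dental": 3500}.get(region, 2800)
    if metal_volume_mm3 > 1000:  # large implants: relax threshold
        return base - 500
    if metal_volume_mm3 < 10:    # tiny objects: stricter threshold
        return base + 300
    return base
```

The final mask would then be produced by re-thresholding the reconstructed volume with the adapted value.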

SCREEN RESPONSE VALIDATION OF ROBOT EXECUTION FOR ROBOTIC PROCESS AUTOMATION
20230032195 · 2023-02-02 ·

Screen response validation of robot execution for robotic process automation (RPA) is disclosed. Whether text, screen changes, images, and/or other expected visual actions occur in an application executing on a computing system that an RPA robot is interacting with may be recognized. The system may determine where the robot has been typing and provide the physical position on the screen, based on the current resolution, where one or more characters, images, windows, etc. appeared. The physical position of these elements, or the lack thereof, may allow determination of which field(s) the robot is typing in and what the associated application is, for the purpose of validating that the application and computing system are responding as intended. When the expected screen changes do not occur, the robot can stop and throw an exception, go back and attempt the intended interaction again, restart the workflow, or take another suitable action.