Patent classifications
G06T2207/20061
SYSTEMS AND METHODS FOR UNIVERSAL TIRE DRESSING APPLICATION
Systems and methods disclosed are directed to receiving an image; performing an analysis on the image via one or more algorithms; generating tire and rim data based on an outcome of the analysis; transmitting one or more instructions, including an instruction to actuate an end effector that is configured to apply tire dressing to a tire; and actuating the end effector in accordance with the one or more instructions.
DISTANCE MEASUREMENT METHOD, DISTANCE MEASUREMENT APPARATUS, AND COMPUTER-PROGRAM PRODUCT
A distance measurement method is provided. The distance measurement method includes obtaining an image of a target device, the image of the target device including a target object of a first color and a background object of a second color, the first color being different from the second color; processing the image of the target device to obtain a processed image, the processed image including a processed target object of a third color and a processed background object of a fourth color; detecting a contour of the processed target object of the third color; calculating an area encircled by the contour; and calculating a distance between a camera and the target device upon determination of the area encircled by the contour.
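The final step of this abstract — recovering distance from the area enclosed by the contour — follows from the pinhole-camera model: the projected pixel area scales with the inverse square of the distance. A minimal sketch, assuming a pinhole camera with a known focal length in pixels and a target of known physical area (neither value is specified in the abstract; both are assumptions here):

```python
import math

def distance_from_contour_area(pixel_area, real_area, focal_px):
    """Estimate camera-to-target distance Z from the area enclosed by
    the target's contour, using the pinhole-camera relation
    pixel_area ~ focal_px**2 * real_area / Z**2, so
    Z = focal_px * sqrt(real_area / pixel_area)."""
    return focal_px * math.sqrt(real_area / pixel_area)

# Synthetic check: a 0.01 m^2 target viewed at 2 m with a 1000 px focal
# length projects to 1000**2 * 0.01 / 2**2 = 2500 px^2 of contour area.
print(distance_from_contour_area(2500.0, 0.01, 1000.0))  # → 2.0
```

In practice the pixel area would come from contour extraction on the processed (e.g. color-thresholded) image, as the earlier steps of the abstract describe.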
SYSTEMS, METHODS, AND DEVICES FOR AUTOMATED METER READING FOR SMART FIELD PATROL
Methods, systems, and devices for equipment reading in a factory or plant environment are described, including: capturing an image of an environment including a measurement device; detecting a target region included in the image, the target region including at least a portion of the measurement device; determining identification information associated with the measurement device based on detecting the target region; and extracting measurement information associated with the measurement device based on detecting the target region. In some aspects, detecting the target region may include: providing the image to a machine learning network; and receiving an output from the machine learning network in response to the machine learning network processing the image based on a detection model, the output including the target region.
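The abstract's pipeline — detect a target region, then derive identification and measurement information from it — can be sketched as follows. The detector and reader below are stand-ins for the machine learning detection model and the extraction step; the region coordinates, device ID, and reading are hypothetical values for illustration only:

```python
from dataclasses import dataclass

@dataclass
class TargetRegion:
    """Bounding box of the detected meter face plus its identity."""
    x: int
    y: int
    w: int
    h: int
    device_id: str

def detect_target_region(image):
    """Stand-in for the detection model described in the abstract; a real
    system would run the image through a trained network and return the
    region containing at least a portion of the measurement device."""
    return TargetRegion(x=40, y=30, w=120, h=120, device_id="PT-101")

def extract_measurement(image, region):
    """Stand-in for gauge/digit reading inside the target region."""
    return 7.3  # hypothetical reading

def read_meter(image):
    """Full patrol step: detect the region, then determine identification
    and measurement information based on that detection."""
    region = detect_target_region(image)
    return region.device_id, extract_measurement(image, region)

print(read_meter(None))  # → ('PT-101', 7.3)
```

The structure mirrors the claim language: both identification and measurement extraction are conditioned on the detected target region, not on the raw image.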
Method and device for the characterization of living specimens from a distance
A method and a device for the characterization of living specimens from a distance are disclosed. The method comprises: acquiring an image of a living specimen and segmenting the image, providing a segmented image; measuring a distance to several parts of said image, providing several distance measurements, and selecting a subset of those measurements contained in the segmented image. The method also processes the segmented image and the distance measurements referred to different positions contained within the segmented image by characterizing the shape and the depth of the living specimen and by comparing a shape analysis map and a depth profile analysis map. If a result of the comparison falls within a given range, parameters of the living specimen are further determined, including posture parameters, location or correction of anatomical reference points and/or body size parameters, and/or a body map of the living specimen is represented.
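The subset-selection step — keeping only the distance measurements whose positions fall inside the segmented specimen — can be sketched in a few lines. The mask and sample formats are assumptions, not specified in the abstract:

```python
def select_distances_in_mask(mask, samples):
    """From (row, col, distance) measurements, keep only those whose
    position falls on a segmented specimen pixel (mask value True),
    as the abstract's subset-selection step describes."""
    return [(r, c, d) for (r, c, d) in samples if mask[r][c]]

# Toy 2x2 segmentation mask and four range samples.
mask = [[False, True],
        [True,  False]]
samples = [(0, 0, 1.4), (0, 1, 2.1), (1, 0, 2.0), (1, 1, 3.3)]
print(select_distances_in_mask(mask, samples))  # → [(0, 1, 2.1), (1, 0, 2.0)]
```

The retained samples are what the method would then feed into the depth profile analysis alongside the shape analysis of the segmented image.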
TECHNOLOGY CONFIGURED TO ENABLE FAULT DETECTION AND CONDITION ASSESSMENT OF UNDERGROUND STORMWATER AND SEWER PIPES
The present disclosure relates to technology configured to enable fault detection and condition assessment of underground stormwater and sewer pipes. Embodiments of the present disclosure have been developed to allow automated processing of video captured by pipe inspection robots and the like, thereby identifying and categorizing artefacts in pipes.
DETECTOR FOR OBJECT RECOGNITION
A detector for object recognition includes an illumination source for projecting an illumination pattern on an area including at least one object; an optical sensor having a light-sensitive area and configured for determining a first image including a two-dimensional image of the area, and a second image including a plurality of reflection features generated in response to illumination, each reflection feature including a beam profile; an evaluation device for determining beam profile information for each reflection feature by analyzing their beam profiles, determining a three-dimensional image using the determined beam profile information, identifying the reflection features located inside and/or outside an image region, determining a depth level from the beam profile information of the reflection features located inside and/or outside of the image region, determining a material property of the object from the beam profile information, and determining a position and/or orientation of the object.
Edge detection method and device, electronic equipment, and computer-readable storage medium
The invention provides a method and a device for detecting the edge of an object in an image, an electronic device, and a computer-readable storage medium. The method includes: obtaining a line drawing of grayscale contours in the image; merging similar lines in the line drawing to obtain initial merged lines, and determining a boundary matrix according to the initial merged lines; merging similar lines among the initial merged lines to obtain target lines, with unmerged initial merged lines also used as target lines; determining reference boundary lines from the target lines according to the boundary matrix; obtaining boundary line regions of the object in the image; determining, from the reference boundary lines, a target boundary line corresponding to each boundary line region; and determining an edge of the object in the image according to the determined target boundary lines.
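The repeated "merge similar lines" step can be sketched with a greedy single-pass merge. Representing each line by an (angle, offset) pair and averaging merged members are illustrative simplifications; the abstract does not specify the line representation or the similarity criterion:

```python
def merge_similar_lines(lines, angle_tol=5.0, dist_tol=10.0):
    """Greedy merge: two lines are 'similar' when both their orientation
    (degrees) and offset differ by less than the tolerances; a merged
    line is the running average of its members. Lines that merge with
    nothing pass through unchanged, matching the abstract's handling of
    unmerged lines."""
    merged = []  # list of (angle, offset, member_count)
    for angle, offset in lines:
        for i, (a, o, n) in enumerate(merged):
            if abs(angle - a) <= angle_tol and abs(offset - o) <= dist_tol:
                merged[i] = ((a * n + angle) / (n + 1),
                             (o * n + offset) / (n + 1), n + 1)
                break
        else:
            merged.append((angle, offset, 1))
    return [(a, o) for a, o, _ in merged]

# Two near-duplicate lines collapse; the perpendicular one survives alone.
print(merge_similar_lines([(0.0, 0.0), (2.0, 4.0), (90.0, 50.0)]))
# → [(1.0, 2.0), (90.0, 50.0)]
```

Applying the same routine twice — first to the raw line drawing, then to the initial merged lines — reproduces the two-stage merging the abstract describes.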
Image-based inventory estimation
In an approach to estimating product inventory count, one or more computer processors receive one or more images of one or more products residing on a product storage location from an image capturing device. Based on the received images, one or more computer processors determine a count of the one or more products. One or more computer processors determine a confidence in the count of the one or more products. In response to determining the confidence is below a threshold, one or more computer processors calculate a recommended position of the image capturing device to produce an improved image of the one or more products. One or more computer processors transmit instructions to the image capturing device to move to the recommended position. One or more computer processors determine whether the image capturing device is in the recommended position.
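The confidence-gated repositioning step can be sketched as a simple policy. The abstract leaves the repositioning strategy open, so the "move closer along the viewing axis" rule, the threshold, and the minimum standoff below are all assumptions:

```python
def plan_camera_move(confidence, current_pos, threshold=0.8, step=0.5):
    """If the counting confidence is below the threshold, recommend a new
    camera position (here: closer to the shelf along z) expected to
    produce an improved image; otherwise keep the current position."""
    if confidence >= threshold:
        return current_pos  # count is trusted; no move needed
    x, y, z = current_pos
    return (x, y, max(z - step, 0.5))  # move closer, keep a minimum standoff

print(plan_camera_move(0.62, (0.0, 1.0, 2.5)))  # → (0.0, 1.0, 2.0)
print(plan_camera_move(0.90, (0.0, 1.0, 2.5)))  # → (0.0, 1.0, 2.5)
```

A real system would then transmit the recommended position to the capture device and verify arrival before re-imaging, as the abstract's final steps describe.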
Artificial intelligence using convolutional neural network with Hough transform
Artificial intelligence using convolutional neural network with Hough Transform. In an embodiment, a convolutional neural network (CNN) comprises convolution layers, a Hough Transform (HT) layer, and a Transposed Hough Transform (THT) layer, arranged such that at least one convolution layer precedes the HT layer, at least one convolution layer is between the HT and THT layers, and at least one convolution layer follows the THT layer. The HT layer converts its input from a first space into a second space, and the THT layer converts its input from the second space into the first space. The CNN may be applied to an input image to perform semantic image segmentation, so as to produce an output image representing a result of the semantic image segmentation.
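The key property making HT and THT usable as network layers is that the discrete Hough Transform is linear, so the Transposed Hough Transform is literally the matrix transpose and both admit ordinary backpropagation. A minimal NumPy sketch of that pair of operators (bin counts and the dense-matrix construction are illustrative choices; a real layer would be implemented in the framework's autograd):

```python
import numpy as np

def hough_operator(h, w, n_theta=8, n_rho=16):
    """Build a dense linear operator M for a discrete Hough Transform on
    an h-by-w image: row (t * n_rho + r) of M sums the pixels nearest the
    line x*cos(theta_t) + y*sin(theta_t) = rho_r. The Transposed Hough
    Transform is then simply M.T."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = np.hypot(h - 1, w - 1)
    rhos = np.linspace(-diag, diag, n_rho)
    ys, xs = np.mgrid[0:h, 0:w]
    M = np.zeros((n_theta * n_rho, h * w))
    for t, th in enumerate(thetas):
        pix_rho = (xs * np.cos(th) + ys * np.sin(th)).ravel()
        bins = np.abs(pix_rho[:, None] - rhos[None, :]).argmin(axis=1)
        M[t * n_rho + bins, np.arange(h * w)] = 1.0  # each pixel votes once per theta
    return M

# A horizontal line of ones concentrates all its votes in one cell.
img = np.zeros((8, 8))
img[3, :] = 1.0
M = hough_operator(8, 8)
acc = M @ img.ravel()   # HT layer: image space -> (theta, rho) space
back = M.T @ acc        # THT layer: back to image space
print(acc.max())        # → 8.0 (all 8 line pixels land in the same cell)
```

Sandwiching convolution layers before the HT, between HT and THT, and after the THT — as the claimed architecture does — lets the network filter in both the image domain and the line-parameter domain.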
CAMERA MIRROR SYSTEM DISPLAY FOR COMMERCIAL VEHICLES INCLUDING SYSTEM FOR IDENTIFYING ROAD MARKINGS
A process for identifying a road feature in an image includes receiving an image at a controller, identifying a region of interest within the image, and converting the region of interest from red-green-blue (RGB) to a single color using the controller. A set of edges is detected within the region of interest, and at least one line within the set of edges is identified using the controller. The at least one line is compared with a set of known and expected road marking features, and at least one first line of the at least one line is identified as corresponding to a road feature in response to the at least one first line matching the set of known and expected road marking features.
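The final comparison step — keeping only detected lines that match known and expected road marking features — can be sketched with an orientation check. Comparing on angle alone, with a fixed tolerance, is an illustrative simplification of the patented matching:

```python
def match_road_markings(line_angles_deg, expected_deg, tol=5.0):
    """Return the detected line orientations (degrees) that fall within
    tol of any known/expected road-marking orientation; non-matching
    lines (e.g. shadow or curb edges) are rejected."""
    return [a for a in line_angles_deg
            if any(abs(a - e) <= tol for e in expected_deg)]

# Lane markings in a forward-facing mirror view run near-vertical (~90 deg);
# a 3-degree edge is discarded as not a road feature.
print(match_road_markings([3.0, 88.0, 92.0], [90.0]))  # → [88.0, 92.0]
```

The inputs here would come from the earlier steps of the process: edge detection on the single-color region of interest followed by line identification.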