G06V10/426

IDENTIFICATION OF A VEHICLE HAVING VARIOUS DISASSEMBLY STATES

Aspects of the present disclosure relate to a method of identifying a vehicle, and a system thereof. The method can include receiving a first image of a vehicle from a first camera and classifying the vehicle in the first image with a vehicle class label. The method can also include determining a first vehicle fingerprint for the vehicle. The method can also include detecting any changes in the first vehicle fingerprint and the vehicle class label after a first time period. The detected changes in the first vehicle fingerprint can correspond to a disassembly state of the vehicle. The method can also include performing, if the vehicle class label is unchanged, at least one action in response to detected changes in the first vehicle fingerprint.
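The claimed flow can be sketched in a few lines: fingerprint the vehicle, and after the time period flag a disassembly state only when the fingerprint changed while the class label did not. The part names and the set-based fingerprint below are illustrative assumptions, not the disclosure's actual representation.

```python
# Hypothetical sketch: a vehicle fingerprint as the set of detected parts,
# compared across a time period when the class label is unchanged.

def fingerprint(detected_parts):
    """A vehicle fingerprint as the frozen set of parts visible in an image."""
    return frozenset(detected_parts)

def check_disassembly(label_t0, parts_t0, label_t1, parts_t1):
    """Return the parts missing after the time period, or None if no action."""
    if label_t0 != label_t1:          # class label changed: out of scope
        return None
    missing = fingerprint(parts_t0) - fingerprint(parts_t1)
    return missing if missing else None  # detected change -> disassembly state

# e.g. a sedan photographed twice, now missing its hood and front wheels
removed = check_disassembly(
    "sedan", ["hood", "wheel_fl", "wheel_fr", "door_l"],
    "sedan", ["door_l"],
)
```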

Intelligent recognition and extraction of numerical data from non-numerical graphical representations

Embodiments of the invention are directed to systems, methods, and computer program products for a unique platform for analyzing, classifying, extracting, and processing information from graphical representations. Embodiments of the invention are configured to provide an end-to-end automated solution for extracting data from graphical representations and creating a centralized database for providing graphical attributes, image skeletons, and other metadata information integrated with a graphical representation classification training layer. The invention is designed to receive a graphical representation for analysis, intelligently identify and extract objects and data in the graphical representation, and store the data attributes of the graphical representation in an accessible format in an automated fashion.
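One small step of such an end-to-end extraction can be sketched concretely: once two axis points of a bar chart are calibrated, detected bar-top pixel positions map back to numeric values. The pixel coordinates and axis values below are made-up examples, not taken from the disclosure.

```python
# Illustrative sketch: recover numeric values from bar-top pixel positions
# after calibrating the y-axis from two known reference points.

def calibrate(y_pixel_zero, y_pixel_ref, ref_value):
    """Map a pixel height to a data value from two known axis points."""
    scale = ref_value / (y_pixel_zero - y_pixel_ref)
    return lambda y: (y_pixel_zero - y) * scale

# axis origin at pixel row 400; value 300.0 sits at pixel row 100
to_value = calibrate(y_pixel_zero=400, y_pixel_ref=100, ref_value=300.0)
bar_tops = {"Q1": 300, "Q2": 200, "Q3": 150}   # detected bar-top y pixels
extracted = {label: to_value(y) for label, y in bar_tops.items()}
```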

Determining an item that has confirmed characteristics

In various example embodiments, a system and method for determining an item that has confirmed characteristics are described herein. An image that depicts an object is received from a client device. Structured data that corresponds to characteristics of one or more items is retrieved. A set of characteristics is determined, the set of characteristics being predicted to match with the object. An interface that includes a request for confirmation of the set of characteristics is generated. The interface is displayed on the client device. Confirmation that at least one characteristic from the set of characteristics matches with the object depicted in the image is received from the client device.
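The confirmation loop described above reduces to a simple filter: predict characteristics for the object, ask the client to confirm each, and keep only the confirmed ones. The attribute names and the callback standing in for the generated interface are assumptions for illustration.

```python
# Minimal sketch of the confirmation flow: keep only the predicted
# characteristics the client device confirms.

def confirm_characteristics(predicted, confirm_fn):
    """Return the subset of predicted characteristics the client confirms."""
    return {k: v for k, v in predicted.items() if confirm_fn(k, v)}

predicted = {"brand": "Acme", "color": "red", "size": "M"}
# stand-in for the generated interface: the client rejects the size guess
client_response = lambda key, value: key != "size"
confirmed = confirm_characteristics(predicted, client_response)
```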

Identifying image aesthetics using region composition graphs

The disclosed computer-implemented method may include generating a three-dimensional (3D) feature map for a digital image using a fully convolutional network (FCN). The 3D feature map may be configured to identify features of the digital image and identify an image region for each identified feature. The method may also include generating a region composition graph that includes the identified features and image regions. The region composition graph may be configured to model mutual dependencies between features of the 3D feature map. The method may further include performing a graph convolution on the region composition graph to determine a feature aesthetic value for each node according to the weightings in the node's weighted connecting segments, and calculating a weighted average for each node's feature aesthetic value to provide a combined level of aesthetic appeal for the digital image. Various other methods, systems, and computer-readable media are also disclosed.
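The final aggregation step can be sketched as plain arithmetic: each node's aesthetic value is a weighted combination over its connecting segments, and the node values are then averaged into one image-level score. The weights and feature values below are illustrative, and the sketch omits the FCN and graph-convolution stages entirely.

```python
# Hedged sketch of the aggregation: per-node weighted averages over
# connecting-segment weights, then a mean across nodes for the image score.

def node_aesthetic(values, weights):
    """Weighted average of connected feature values for one graph node."""
    total = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total

def image_score(nodes):
    """Combine per-node aesthetic values into a single image-level score."""
    scores = [node_aesthetic(values, weights) for values, weights in nodes]
    return sum(scores) / len(scores)

# two nodes, each with connected feature values and segment weights
score = image_score([
    ([0.8, 0.4], [1.0, 1.0]),   # node A -> 0.6
    ([0.9, 0.5], [3.0, 1.0]),   # node B -> 0.8
])
```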

METHOD AND APPARATUS FOR MATCHING 3D POINT CLOUD USING A LOCAL GRAPH

A device is provided that executes a program stored in memory to: generate a local graph for a 3D point cloud based on distances and angles between 3D points in the 3D point cloud; match the local graph using a similarity function and determine a feature matching pair in the matched local graph; and estimate a rigid body transformation matrix using the feature matching pair.
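The first two steps can be sketched with a distance-only descriptor: build a local graph from pairwise distances between 3D points and compare two such graphs with a simple similarity function. Dropping the angles and using a sum-of-differences similarity are simplifying assumptions; the rigid-body estimation step is omitted.

```python
# Sketch: sorted pairwise distances as a rotation- and translation-invariant
# local descriptor, compared with a toy similarity function in [0, 1].
import math

def local_graph(points):
    """Sorted pairwise distances between the 3D points of a neighborhood."""
    dists = [math.dist(p, q)
             for i, p in enumerate(points) for q in points[i + 1:]]
    return sorted(dists)

def similarity(g1, g2):
    """Similarity from the summed absolute edge-length differences."""
    diff = sum(abs(a - b) for a, b in zip(g1, g2))
    return 1.0 / (1.0 + diff)

a = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
b = [(5, 5, 0), (6, 5, 0), (5, 6, 0)]   # same shape, translated
s = similarity(local_graph(a), local_graph(b))
```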

METHOD AND SYSTEM FOR CLASSIFYING FACES OF BOUNDARY REPRESENTATION (B-REP) MODELS USING ARTIFICIAL INTELLIGENCE

The invention relates to a method and system for classifying faces of a Boundary Representation (B-Rep) model using Artificial Intelligence (AI). The method includes extracting topological information corresponding to each of a plurality of data points of a B-Rep model of a product; determining a set of parameters based on the topological information corresponding to each of the plurality of data points; transforming the set of parameters corresponding to each of the plurality of data points of the B-Rep model into a tabular format to obtain a parametric data table; and assigning each of a plurality of faces of the B-Rep model a category from a plurality of categories based on the parametric data table using an AI model.
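The pipeline shape described above can be sketched as: flatten per-face topological parameters into tabular rows, then classify each row. The parameter names and the rule-based stand-in for the AI model are assumptions; the abstract does not specify the model or features.

```python
# Illustrative sketch: per-face parameters -> parametric data table -> label.

def to_row(face):
    """Flatten one face's topological parameters into a tabular row."""
    return (face["edge_count"], face["is_planar"], face["area"])

def classify(row):
    """Toy stand-in for the AI model: label a face from its parameter row."""
    edge_count, is_planar, _area = row
    if not is_planar:
        return "curved"
    return "quad" if edge_count == 4 else "polygonal"

faces = [
    {"edge_count": 4, "is_planar": True,  "area": 2.0},
    {"edge_count": 1, "is_planar": False, "area": 6.3},
]
table = [to_row(f) for f in faces]          # the parametric data table
labels = [classify(row) for row in table]   # category per face
```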

Lane detection and tracking techniques for imaging systems

A method for tracking a lane on a road is presented. The method comprises receiving, by one or more processors from an imaging system, a set of pixels associated with lane markings. The method further includes generating, by the one or more processors, a predicted spline comprising (i) a first spline and (ii) a predicted extension of the first spline in a direction in which the imaging system is moving. The first spline describes a boundary of a lane and is generated based on the set of pixels. The predicted extension of the first spline is generated based at least in part on a curvature of at least a portion of the first spline.
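The extension step can be sketched as dead reckoning along the spline's tail: continue from the last segment's heading, bending each step by the curvature estimated from the fitted spline. The uniform point spacing and constant-curvature assumption are illustrative simplifications of the claimed method.

```python
# Minimal sketch: extend a fitted lane polyline ahead of the imaging system,
# rotating the heading by the estimated curvature at each step.
import math

def extend_spline(points, curvature, steps, step_len=1.0):
    """Append `steps` predicted points continuing the spline's heading."""
    (x0, y0), (x1, y1) = points[-2], points[-1]
    heading = math.atan2(y1 - y0, x1 - x0)
    out = list(points)
    x, y = x1, y1
    for _ in range(steps):
        heading += curvature * step_len   # bend by the estimated curvature
        x += step_len * math.cos(heading)
        y += step_len * math.sin(heading)
        out.append((x, y))
    return out

# straight spline with zero curvature: the prediction continues straight
pred = extend_spline([(0.0, 0.0), (1.0, 0.0)], curvature=0.0, steps=3)
```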