G06T7/70

PART INSPECTION SYSTEM HAVING GENERATIVE TRAINING MODEL

A part inspection system includes a vision device configured to image a part being inspected and generate a digital image of the part. The system includes a part inspection module communicatively coupled to the vision device that receives the digital image of the part as an input image. The part inspection module includes a defect detection model. The defect detection model includes a template image and compares the input image to the template image to identify defects. The defect detection model generates an output image and is configured to overlay defect identifiers on the output image at the identified defect locations, if any.
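
The template-comparison step described above can be sketched in a few lines. The following is a hypothetical NumPy illustration (not the patented implementation): it diffs the input image against the template, groups out-of-tolerance pixels into connected regions, and overlays box-shaped defect identifiers on the output image. The threshold and minimum-area parameters are assumptions for the sketch.

```python
import numpy as np
from collections import deque

def find_defect_boxes(input_img, template_img, threshold=30, min_area=2):
    """Compare input to template; return bounding boxes of defect regions."""
    diff = np.abs(input_img.astype(np.int16) - template_img.astype(np.int16))
    mask = diff > threshold
    visited = np.zeros_like(mask, dtype=bool)
    boxes = []
    h, w = mask.shape
    for r in range(h):
        for c in range(w):
            if mask[r, c] and not visited[r, c]:
                # Flood-fill one connected defect region.
                queue, pixels = deque([(r, c)]), []
                visited[r, c] = True
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            queue.append((ny, nx))
                if len(pixels) >= min_area:
                    ys, xs = [p[0] for p in pixels], [p[1] for p in pixels]
                    boxes.append((min(ys), min(xs), max(ys), max(xs)))
    return boxes

def overlay_defects(input_img, boxes, value=255):
    """Draw a rectangular defect identifier around each detected region."""
    out = input_img.copy()
    for y0, x0, y1, x1 in boxes:
        out[y0, x0:x1 + 1] = value
        out[y1, x0:x1 + 1] = value
        out[y0:y1 + 1, x0] = value
        out[y0:y1 + 1, x1] = value
    return out
```

A production system would typically align the images first and use morphological filtering rather than a raw per-pixel threshold.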

TECHNIQUES FOR THREE-DIMENSIONAL ANALYSIS OF SPACES

An example method includes receiving a 2D image of a 3D space from an optical camera and identifying, in the 2D image, a virtual image generated by an optical instrument refracting and/or reflecting light. The example method further includes identifying, in the 2D image, a first object depicting a subject disposed in the 3D space from a first direction extending from the optical camera to the subject, and identifying, in the virtual image, a second object depicting the subject disposed in the 3D space from a second direction extending from the optical camera to the subject via the optical instrument, the second direction being different from the first direction. A 3D image depicting the subject is generated based on the first object and the second object. Alternatively, a location of the subject in the 3D space is determined based on the first object and the second object.
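
Because the two directions differ, the direct view and the virtual image form a stereo pair, and the subject's 3D location can be recovered by triangulation: finding the point closest to both viewing rays. A minimal sketch under the simplifying assumption that the virtual-image ray is cast from a mirrored camera position (the names and setup are illustrative, not from the patent):

```python
import numpy as np

def triangulate_rays(o1, d1, o2, d2):
    """Return the midpoint of the shortest segment between two rays.

    Each ray is given by an origin o and direction d. Solves the least-squares
    system for the parameters t1, t2 minimizing |(o1 + t1*d1) - (o2 + t2*d2)|^2.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = o1 - o2
    denom = a * c - b * b          # zero only if the rays are parallel
    t1 = (b * (d2 @ w) - c * (d1 @ w)) / denom
    t2 = (a * (d2 @ w) - b * (d1 @ w)) / denom
    p1 = o1 + t1 * d1
    p2 = o2 + t2 * d2
    return (p1 + p2) / 2.0
```

In practice the directions come from back-projecting the first and second objects through the camera model, and the virtual camera pose is derived from the optical instrument's geometry.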

IDENTIFICATION OF SPURIOUS RADAR DETECTIONS IN AUTONOMOUS VEHICLE APPLICATIONS
20230046274 · 2023-02-16

The described aspects and implementations enable fast and accurate verification of radar detections of objects in autonomous vehicle (AV) applications using combined processing of radar data and camera images. In one implementation, disclosed is a method, and a system to perform the method, that includes obtaining radar data characterizing the intensity of radar reflections from an environment of the AV, identifying, based on the radar data, a candidate object, obtaining a camera image depicting a region where the candidate object is located, and processing the radar data and the camera image using one or more machine-learning models to obtain a classification measure representing a likelihood that the candidate object is a real object.
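
The fusion step can be illustrated with a toy model. The sketch below is an assumption-laden stand-in for the patent's machine-learning models: it extracts hand-crafted statistics from a radar patch and the corresponding camera crop and feeds them through a logistic head to produce the classification measure. A real system would use learned (e.g., convolutional) features instead.

```python
import numpy as np

def extract_features(radar_patch, camera_patch):
    """Toy per-modality features; a trained model would learn these instead."""
    return np.array([radar_patch.mean(), radar_patch.max(),
                     camera_patch.mean(), camera_patch.std()])

def classification_measure(radar_patch, camera_patch, weights, bias):
    """Logistic head: likelihood in (0, 1) that the candidate is a real object."""
    z = float(extract_features(radar_patch, camera_patch) @ weights + bias)
    return 1.0 / (1.0 + np.exp(-z))
```

With positive weights on radar intensity, strong reflections yield a higher measure than weak ones, which is the behavior a spurious-detection filter needs.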

METHOD AND SYSTEM FOR ANALYZING VIEWING DIRECTION OF ELECTRONIC COMPONENT, COMPUTER PROGRAM PRODUCT WITH STORED PROGRAM, AND COMPUTER READABLE MEDIUM WITH STORED PROGRAM

A method for analyzing a viewing direction of an electronic component includes inputting a package type and a file image of an electronic component, the file image having at least one engineering drawing image, and the at least one engineering drawing image being a view of the electronic component in at least one viewing direction; querying and acquiring, from a database, a viewing direction detection model matching the package type, the database storing respective viewing direction detection models for different package types of electronic components; inputting the file image into the viewing direction detection model of the package type to identify the viewing direction of the at least one engineering drawing image; and outputting the viewing direction of the at least one engineering drawing image of the electronic component.
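
The query-then-infer flow can be sketched as a registry of per-package-type detectors. Everything below is hypothetical: the package types, the labels, and the aspect-ratio heuristic standing in for a trained viewing direction detection model.

```python
import numpy as np

def make_detector(label_by_aspect):
    """Stand-in for a trained model: classify a drawing by its aspect ratio."""
    def detect(image):
        h, w = image.shape[:2]
        return label_by_aspect["wide" if w >= h else "tall"]
    return detect

# The "database" of viewing direction detection models, keyed by package type.
MODEL_DB = {
    "QFP": make_detector({"wide": "top view", "tall": "side view"}),
    "SOP": make_detector({"wide": "side view", "tall": "front view"}),
}

def analyze_viewing_direction(package_type, drawing_images):
    """Query the model matching the package type and run it on each drawing."""
    model = MODEL_DB[package_type]
    return [model(img) for img in drawing_images]
```

The point of keying the database by package type is that the same drawing geometry can mean different viewing directions for different packages, so one generic model would conflate them.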

INFORMATION PROCESSING APPARATUS, NON-TRANSITORY COMPUTER READABLE MEDIUM, AND INFORMATION PROCESSING METHOD

An information processing apparatus includes a processor configured to: acquire a captured image of an object; specify a first area of the object in the captured image, the first area being an area occupied by a work target, i.e., a target to be worked on; process the captured image to make a second area, other than the first area, invisible to generate a processed image; in response to a change in the first area caused by a deformation of the work target, apply a deformation area in place of the first area and make the resulting second area invisible to generate the processed image, the deformation area being an area defined by a pre-registered shape of the work target after deformation; and transmit the processed image.
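
The masking behavior can be sketched with boolean masks: everything outside the work-target area is blanked, and when the target deforms, a pre-registered deformation mask is substituted for the original first area. The function names and fill convention below are assumptions for illustration.

```python
import numpy as np

def mask_outside(image, region_mask, fill=0):
    """Make everything outside the given area invisible (filled with `fill`)."""
    out = np.full_like(image, fill)
    out[region_mask] = image[region_mask]
    return out

def process_frame(image, first_area_mask, deformation_mask=None):
    """Apply the deformation area in place of the first area when provided."""
    mask = deformation_mask if deformation_mask is not None else first_area_mask
    return mask_outside(image, mask)
```

Keeping only the work target visible lets the processed image be transmitted (e.g., to a remote supervisor) without exposing the surrounding scene.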

Graphical element rooftop reconstruction in digital map
11580649 · 2023-02-14

A client device receives a first map tile, a second map tile, and map terrain data from a mapping system, the first and second map tiles together including a map feature having a geometric base with a height value, the geometric base represented by a set of vertices split across the first and second map tiles. The client device identifies edges of the geometric base that intersect a tile border between the first and second map tiles. The client device determines a set of sample points based on the identified edges and determines a particular sample elevation value corresponding to a sample point in the set. The client device renders the map feature based on the particular sample elevation value and displays the rendering of the map feature.
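
The two geometric steps (finding where base edges cross the tile border, then sampling terrain elevation at those points) can be sketched as follows. This is an illustrative reconstruction with assumed conventions: a vertical border at x = border_x, polygon vertices in order, and a terrain grid with unit spacing sampled bilinearly.

```python
import math

def border_sample_points(vertices, border_x):
    """Find points where polygon edges cross the vertical border x = border_x."""
    points = []
    n = len(vertices)
    for i in range(n):
        (x0, y0), (x1, y1) = vertices[i], vertices[(i + 1) % n]
        if (x0 - border_x) * (x1 - border_x) < 0:   # edge straddles the border
            t = (border_x - x0) / (x1 - x0)
            points.append((border_x, y0 + t * (y1 - y0)))
    return points

def sample_elevation(point, terrain):
    """Bilinearly sample a row-major terrain grid (unit spacing) at a point."""
    x, y = point
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    fx, fy = x - x0, y - y0
    z00, z10 = terrain[y0][x0], terrain[y0][x0 + 1]
    z01, z11 = terrain[y0 + 1][x0], terrain[y0 + 1][x0 + 1]
    return (z00 * (1 - fx) * (1 - fy) + z10 * fx * (1 - fy)
            + z01 * (1 - fx) * fy + z11 * fx * fy)
```

Sampling elevation at the border crossings lets both tiles agree on the rooftop height where the geometric base is split, avoiding seams in the rendered feature.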