G06V20/647

ASSEMBLY BODY CHANGE DETECTION METHOD, DEVICE AND MEDIUM BASED ON ATTENTION MECHANISM
20220358334 · 2022-11-10 ·

An assembly body change detection method based on an attention mechanism, including: establishing a three-dimensional model of an assembly body; adding a tag to each part in the three-dimensional model; setting several assembly nodes; obtaining depth images of the three-dimensional model at each assembly node from different viewing angles, and obtaining a change tag image of an added part at each assembly node; selecting two depth images at front and back moments from different viewing angles as training samples; performing semantic fusion, feature extraction, attention mechanism processing and metric learning sequentially on the training samples to train a detection model, continuously selecting training samples to train the detection model, and saving the model parameters with the optimal similarity during training to complete training; and obtaining depth images of successive assembly nodes while assembling the assembly body, inputting the depth images into the trained detection model, and outputting a change image of the added part of the assembly body during assembly.
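The metric-learning step compares feature maps extracted from the depth images of two successive assembly nodes. A minimal sketch of such a per-pixel comparison, assuming cosine distance over hypothetical (C, H, W) feature maps and a fixed threshold (not the patented network architecture):

```python
import numpy as np

def change_map(feat_a, feat_b, threshold=0.5):
    """Per-pixel change map via feature distance (metric-learning step).

    feat_a, feat_b: (C, H, W) feature maps from depth images of two
    successive assembly nodes. The cosine metric and threshold are
    illustrative assumptions, not the patented architecture.
    """
    # L2-normalise the channel vector at each pixel
    a = feat_a / (np.linalg.norm(feat_a, axis=0, keepdims=True) + 1e-8)
    b = feat_b / (np.linalg.norm(feat_b, axis=0, keepdims=True) + 1e-8)
    # cosine distance; a large distance marks a changed (added) part
    dist = 1.0 - np.sum(a * b, axis=0)
    return (dist > threshold).astype(np.uint8)

# toy features: one 3x3 region differs between the two "assembly nodes"
f1 = np.zeros((4, 8, 8)); f1[0] = 1.0
f2 = f1.copy(); f2[:, 2:5, 2:5] = 0.0; f2[1, 2:5, 2:5] = 1.0
mask = change_map(f1, f2)
```

The binary mask plays the role of the change image: pixels where the feature vectors diverge between the two moments are flagged as the added part.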

AUTOMATIC ANNOTATION USING GROUND TRUTH DATA FOR MACHINE LEARNING MODELS

This disclosure describes systems, methods, and devices related to automatic annotation. A device may capture data associated with an image comprising an object. The device may acquire input data associated with the object. The device may estimate a plurality of points within a frame of the image, wherein the plurality of points constitutes a 3D bounding box around the object. The device may transform the plurality of points to two or more 2D points. The device may construct a bounding box that encapsulates the object using the two or more 2D points. The device may create a segmentation mask of the object using morphological techniques. The device may perform annotation based on the segmentation mask.
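The transformation of 3D bounding-box points to 2D and the construction of an enclosing 2D box can be sketched as follows, assuming a simple pinhole camera with illustrative intrinsic parameters (the actual projection model is not specified in the abstract):

```python
import numpy as np

def project_points(points_3d, fx, fy, cx, cy):
    """Pinhole projection of Nx3 camera-frame points to Nx2 pixels.
    The intrinsics (fx, fy, cx, cy) are assumed for illustration."""
    x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    return np.stack([fx * x / z + cx, fy * y / z + cy], axis=1)

def bbox_from_points(pts_2d):
    """Axis-aligned 2D box (xmin, ymin, xmax, ymax) enclosing the points."""
    xmin, ymin = pts_2d.min(axis=0)
    xmax, ymax = pts_2d.max(axis=0)
    return xmin, ymin, xmax, ymax

# eight corners of a unit cube roughly 4 m in front of the camera
corners = np.array([[x, y, z] for x in (-0.5, 0.5)
                    for y in (-0.5, 0.5) for z in (3.5, 4.5)])
box = bbox_from_points(project_points(corners, 500, 500, 320, 240))
```

The resulting box encapsulates all projected corners; the nearest face of the cube (z = 3.5) determines the extreme pixel coordinates.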

CHANGE DETECTION AND CHARACTERIZATION OF ASSETS

A method for determining a change in an asset and/or a characterization of an asset is provided. In an embodiment, the method can include receiving first data characterizing a target site including one or more assets. The method can also include generating a three-dimensional model of the target site based on the first data. The method can further include registering the first data with the three-dimensional model. The method can also include generating at least one three-dimensional projection onto at least one asset of the one or more assets included in the first data. The method can further include determining second data characterizing the at least one asset based on the at least one three-dimensional projection and providing the second data. In some embodiments, the method can be performed by systems or stored as instructions on computer-readable media described herein.

METHOD AND SYSTEM FOR SURGICAL INSTRUMENTATION SETUP AND USER PREFERENCES

A method of notifying personnel in an operating room of a potential hazard comprising, at a computing system: capturing at least one image of the operating room by at least one camera that can detect light outside of a visible spectrum; determining that at least one potential hazard is in the operating room based on analysis of the at least one image; and in response to determining that the at least one potential hazard is in the operating room, generating at least one notification for notifying the personnel in the operating room that there is the at least one potential hazard in the operating room.

Computer vision systems and methods for modeling three-dimensional structures using two-dimensional segments detected in digital aerial images

A system for modeling a three-dimensional structure utilizing two-dimensional segments includes a memory and a processor in communication with the memory. The processor extracts a plurality of two-dimensional segments corresponding to the three-dimensional structure from a plurality of images indicative of different views of the three-dimensional structure. The processor determines a plurality of three-dimensional candidate segments based on the extracted plurality of two-dimensional segments and adds the plurality of three-dimensional candidate segments to a three-dimensional segment cloud. The processor transforms the three-dimensional segment cloud into a wireframe indicative of the three-dimensional structure by performing a wireframe extraction process on the three-dimensional segment cloud.

SCANNING OF 3D OBJECTS FOR INSERTION INTO AN AUGMENTED REALITY ENVIRONMENT
20230098160 · 2023-03-30 ·

A method for inserting a virtual object into an augmented reality environment is provided, including: capturing one or more images of a real world object; processing the one or more images to identify a type of the real world object; based on the identified type of the real world object, generating a virtual object that resembles the real world object; using the identified type of the real world object to assign a functionality to the virtual object, the functionality defining a utility of the virtual object in an augmented reality environment, wherein assigning the functionality enables an action capable of being performed by the virtual object in the augmented reality environment that provides said utility; and deploying the virtual object in the augmented reality environment.
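The assignment of a functionality by object type can be sketched as a lookup from recognized type to an action and utility. The type names and functionalities below are assumptions for illustration, not the patent's actual set:

```python
# Illustrative mapping from recognized object type to an AR functionality.
# Both the type names and the functionalities are assumed examples.
FUNCTIONALITY = {
    "lamp":    {"action": "toggle_light", "utility": "illumination"},
    "speaker": {"action": "play_audio",   "utility": "sound"},
}

def build_virtual_object(detected_type):
    """Create a virtual object and assign functionality by identified type.
    Unrecognized types fall back to a passive, decorative object."""
    func = FUNCTIONALITY.get(detected_type,
                             {"action": "inspect", "utility": "decoration"})
    return {"type": detected_type, **func}

obj = build_virtual_object("lamp")
```

The returned record couples the virtual object's appearance (its type) with the action it can perform in the augmented reality environment.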

THREE-DIMENSIONAL-OBJECT DETECTION DEVICE, ON-VEHICLE SYSTEM, AND THREE-DIMENSIONAL-OBJECT DETECTION METHOD

The present invention improves the accuracy of detection of a three-dimensional object. A three-dimensional-object detection device generates a mask image 90 that masks regions outside a three-dimensional object candidate region in a difference image G of a first overhead image F1 and a second overhead image F2 whose imaging locations O are mutually aligned; identifies a near ground contact line L1 of the three-dimensional object based on a masked difference image Gm in which the difference image G is masked with the mask image 90; finds an end point V of the three-dimensional object based on the masked difference image Gm; identifies the width of the three-dimensional object based on a distance between a non-masking region boundary N and the end point V in the mask image 90; identifies a far ground contact line L2 of the three-dimensional object based on the width of the three-dimensional object and the near ground contact line L1; and identifies the location of the three-dimensional object in the difference image G based on the near ground contact line L1 and the far ground contact line L2.
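The first step, masking a difference image so that only the candidate region survives, can be sketched as follows. The absolute-difference choice and the array shapes are illustrative assumptions:

```python
import numpy as np

def masked_difference(f1, f2, candidate_mask):
    """Difference image of two aligned overhead images, masked so that
    only the three-dimensional object candidate region survives (the
    masked difference image Gm in the abstract's notation)."""
    g = np.abs(f1.astype(np.int16) - f2.astype(np.int16))
    gm = np.where(candidate_mask, g, 0)   # zero out non-candidate regions
    return gm

# toy aligned overhead images and a 2x2 candidate region
f1 = np.zeros((6, 6), np.uint8)
f2 = np.full((6, 6), 50, np.uint8)
mask = np.zeros((6, 6), bool); mask[2:4, 2:4] = True
gm = masked_difference(f1, f2, mask)
```

All structure outside the candidate region is suppressed, so subsequent steps (ground contact lines, end point, width) operate only on the masked region.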

APPARATUS AND METHOD OF CONVERTING DIGITAL IMAGES TO THREE-DIMENSIONAL CONSTRUCTION IMAGES
20230099352 · 2023-03-30 ·

A method implemented with instructions executed by a processor includes receiving a digital image of an interior space. At least one detected object is identified within the digital image. Dimensions of the detected object are determined. Image segmentation is applied to the digital image to produce a segmented image. Edges are detected in the segmented image to produce a combined output image. Geometric transformation, field of view and depth correction are applied to the combined output image to correct for image distortion to produce a geometrically transformed digital image. Dimensions are applied to the geometrically transformed digital image at least partially based on the dimensions of the detected object to produce a dimensionalized floorplan.
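The final dimensioning step, applying real-world dimensions to the transformed image based on a detected object of known size, can be sketched as a scale computation. The reference object (a door) and its assumed width are hypothetical:

```python
def dimensionalise(pixel_lengths, ref_pixel_len, ref_real_len_m):
    """Convert pixel measurements to metres using a detected reference
    object of known real size. All names and values are illustrative."""
    scale = ref_real_len_m / ref_pixel_len   # metres per pixel
    return [p * scale for p in pixel_lengths]

# a door detected as 80 px wide, assumed to be 0.9 m wide in reality,
# used to dimension two wall lengths measured in pixels
walls_m = dimensionalise([400, 320], 80, 0.9)
```

The same metres-per-pixel scale, derived once from the detected object, dimensionalizes every measurement in the floorplan.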

CREATION METHOD OF TRAINED MODEL, IMAGE GENERATION METHOD, AND IMAGE PROCESSING DEVICE

In a creation method of a trained model, a reconstructed image (60) obtained by reconstructing three-dimensional X-ray image data (80) is generated. A projection image (61) is generated from a three-dimensional model of an image element (50) by a simulation. The projection image is superimposed on the reconstructed image to generate a superimposed image (67). A trained model (40) is created by performing machine learning using the superimposed image, and the reconstructed image or the projection image.
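The superimposition of the simulated projection image on the reconstructed image to form a training sample can be sketched as a blend; alpha blending is one plausible choice, not necessarily the method used:

```python
import numpy as np

def superimpose(reconstructed, projection, alpha=0.5):
    """Blend a simulated projection image of an image element onto a
    reconstructed X-ray image to form a superimposed training sample.
    Alpha blending is an illustrative assumption."""
    out = (1 - alpha) * reconstructed + alpha * projection
    return np.clip(out, 0.0, 1.0)

recon = np.full((4, 4), 0.2)                    # toy reconstructed image
proj = np.zeros((4, 4)); proj[1:3, 1:3] = 1.0   # simulated image element
sample = superimpose(recon, proj)
```

Pairs of (superimposed image, projection image) then serve as input and target for the machine-learning step that creates the trained model.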

DETERMINING THE STEERING ANGLE OF A LANDING GEAR ASSEMBLY OF AN AIRCRAFT
20230094156 · 2023-03-30 ·

A method of determining the steering angle of a landing gear assembly of an aircraft is disclosed, including scanning the landing gear assembly with a lidar system to generate a set of three-dimensional position data points, each position data point including a set of three orthogonal position values. A two-dimensional image is generated from the set of three-dimensional position data points by converting a position value of each of the three-dimensional position data points to an image property value of a set of image property values. A boundary of an area of the two-dimensional image in which each position data point has the same image property value is identified, where the area corresponds to a component of the landing gear assembly. The steering angle of the landing gear assembly is then determined from the shape and/or orientation of the identified boundary.
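The conversion of lidar points to a 2D image and the orientation estimate of an equal-value region can be sketched as follows. The grid resolution, the height-to-intensity mapping, and the principal-axis orientation estimate are illustrative assumptions:

```python
import numpy as np

def points_to_image(points, res=0.1, size=32):
    """Rasterise (x, y, z) lidar points: the z position value is
    quantised to an 8-bit image property value at pixel (y/res, x/res).
    Resolution and the height-to-intensity mapping are assumptions."""
    img = np.zeros((size, size), np.uint8)
    for x, y, z in points:
        i, j = int(round(y / res)), int(round(x / res))
        if 0 <= i < size and 0 <= j < size:
            img[i, j] = np.uint8(min(255, z * 100))
    return img

def region_angle(img, value):
    """Orientation (degrees) of the pixel region sharing one image
    property value, from the principal axis of its pixel coordinates."""
    ys, xs = np.nonzero(img == value)
    coords = np.stack([xs, ys]).astype(float)
    # leading eigenvector of the 2x2 covariance gives the region's axis
    w, v = np.linalg.eigh(np.cov(coords))
    axis = v[:, np.argmax(w)]
    return np.degrees(np.arctan2(axis[1], axis[0]))
```

For a component whose points lie along a line, the recovered axis angle stands in for the orientation of the identified boundary from which the steering angle is determined.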