Patent classifications
G06T2219/012
INTEGRATION OF A TWO-DIMENSIONAL INPUT DEVICE INTO A THREE-DIMENSIONAL COMPUTING ENVIRONMENT
A workstation enables operation of a 2D input device with a 3D interface. A cursor position engine determines the 3D position of a cursor controlled by the 2D input device as the cursor moves within a 3D scene displayed on a 3D display. For a current frame of the 3D scene, the cursor position engine determines the cursor's 3D position based on a current user viewpoint, a current mouse movement, a CD gain value, a Voronoi diagram, and an interpolation algorithm, such as the Laplacian algorithm. A CD gain engine computes a CD gain optimized for the 2D input device operating with the 3D interface, based on specifications for the 2D input device and the 3D display. The techniques performed by the cursor position engine and those performed by the CD gain engine can be applied separately or in combination.
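The abstract does not disclose how the CD (control-display) gain is derived from the device and display specifications. The following Python sketch illustrates one plausible mapping, assuming the relevant specifications are the mouse's counts per inch (CPI) and the display's pixels per inch (PPI); the function names and clamping range are illustrative, not taken from the patent.

```python
# Hypothetical sketch of control-display (CD) gain selection from device
# and display specifications; the patent does not disclose its exact formula.

def cd_gain(mouse_cpi: float, display_ppi: float,
            min_gain: float = 1.0, max_gain: float = 12.0) -> float:
    """Return a CD gain that maps one mouse count to roughly one pixel,
    clamped to a usable range for the 3D interface."""
    raw = display_ppi / mouse_cpi          # pixels traversed per mouse count
    return max(min_gain, min(max_gain, raw))

def move_cursor(pos_px: tuple, counts: tuple, gain: float) -> tuple:
    """Apply the gain to a 2D mouse delta (in counts) to update the
    on-screen cursor position (in pixels)."""
    return (pos_px[0] + counts[0] * gain, pos_px[1] + counts[1] * gain)

if __name__ == "__main__":
    g = cd_gain(mouse_cpi=800, display_ppi=110)
    print(move_cursor((640.0, 360.0), (12, -3), g))
```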
PERSONALIZED GARMENT FIT COMPARISON AND EVALUATION
Method and system for facilitating personalized garment fit evaluations over a computer network, irrespective of non-uniform garment sizing options. A 2D image of a selected garment offered for purchase is received, along with a 2D image of a reference garment that has a preferred fit for the customer and is of the same garment type as the selected garment. Each image includes a view of the garment flattened against a surface and a measurement reference scale enabling measurement between any two image points. The selected garment image and/or the reference garment image is transformed into a proportional image in which distances between points are presented along a common scale. The selected garment image is compared with the reference garment image, and an indication of at least one fit deviation measurement of the selected garment relative to the reference garment is provided. A fit compatibility of the selected garment may be determined and provided based on the fit deviation measurement and a garment fitting property.
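A minimal sketch of the comparison step, assuming corresponding landmark points (e.g., armpit to armpit for chest width) have already been located in each flattened garment image, and that the measurement reference scale in each photo yields a pixels-per-centimeter factor. All names and values below are illustrative.

```python
# Sketch: per-measurement fit deviation between a selected garment and a
# reference garment with the customer's preferred fit. Landmark detection
# and scale extraction are assumed to have happened upstream.
import math

def px_distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def to_cm(px, px_per_cm):
    return px / px_per_cm

def fit_deviation(selected_pts, reference_pts, sel_scale, ref_scale):
    """Return per-measurement deviations (cm) of the selected garment
    relative to the reference garment."""
    deviations = {}
    for name in selected_pts:
        sel = to_cm(px_distance(*selected_pts[name]), sel_scale)
        ref = to_cm(px_distance(*reference_pts[name]), ref_scale)
        deviations[name] = sel - ref     # positive: selected runs larger
    return deviations

selected = {"chest": ((120, 340), (620, 348))}   # pixel coordinates
reference = {"chest": ((80, 300), (560, 306))}
print(fit_deviation(selected, reference, sel_scale=9.8, ref_scale=9.4))
```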
ASSESSING PROPERTY DAMAGE USING A 3D POINT CLOUD OF A SCANNED PROPERTY
A damage assessment module operating on a computer system automatically evaluates a property, estimating damage by analyzing a point cloud of the property. The damage assessment module identifies individual point clusters or segments from the point cloud and detects potentially damaged areas of the property's surface by identifying outlier points in the point clusters. The module may be used to determine the financial cost of the damage and/or to determine whether the property should be replaced or repaired. In addition to eliminating the need for an estimator to visit the property in person, the damage assessment module improves the consistency and accuracy of damage estimates.
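The abstract does not specify the outlier test. One plausible reading, sketched below, treats each point cluster as an approximately planar surface patch (e.g., a roof face), so damage shows up as points lying far from the cluster's best-fit plane.

```python
# Sketch of one possible outlier test, assuming each cluster approximates
# a planar surface patch; the patent does not commit to this method.
import numpy as np

def plane_outliers(cluster: np.ndarray, k: float = 2.5) -> np.ndarray:
    """cluster: (N, 3) points. Returns a boolean mask of suspected damage
    points lying more than k standard deviations off the least-squares
    plane fitted to the cluster."""
    centroid = cluster.mean(axis=0)
    # Smallest right singular vector of the centered points = plane normal.
    _, _, vt = np.linalg.svd(cluster - centroid)
    normal = vt[-1]
    dist = np.abs((cluster - centroid) @ normal)   # point-to-plane distance
    return dist > k * dist.std()

rng = np.random.default_rng(0)
roof = rng.uniform(0, 5, size=(200, 3))
roof[:, 2] = 0.01 * rng.standard_normal(200)       # nearly flat patch
roof[10, 2] = 0.4                                  # a dent
print(np.nonzero(plane_outliers(roof))[0])         # -> [10]
```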
Systems and methods for automated teller machine repair
One embodiment of the disclosure relates to a system for servicing an automated teller machine (ATM) using an electronic device with a display configured to present augmented reality images. The electronic device may be configured to receive low-level diagnostic information from the malfunctioning ATM. The data may be transferred to a central location, where it is analyzed and a solution to the malfunction is determined and transferred back to the electronic device. In some embodiments, the electronic device displays components of the ATM using augmented reality so that a technician viewing the display can visualize the components of the ATM and receive instructions regarding the repair or care of the ATM.
Systems and methods for automated teller machine repair
An automated teller machine (ATM) diagnostic and repair system includes an image capture device, a display, a processor, and a memory. The image capture device is configured to capture at least one of images or videos. The memory includes instructions stored thereon that, when executed by the processor, cause the processor to receive diagnostic data from an ATM. The instructions, when executed by the processor, further cause the processor to capture at least one of an image or a video of the ATM using the image capture device. The instructions, when executed by the processor, further cause the processor to receive a selection of a particular component of the ATM from a user and to provide at least one of an augmented image or an augmented video of the ATM including a modified view of the particular component.
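Neither of the two ATM abstracts pins down code-level details, but the flow they describe (receive diagnostic data, map the fault to a component, and highlight that component in an augmented view) can be sketched as below. The error codes, component names, and rendering placeholder are all invented for illustration.

```python
# Hypothetical sketch of the diagnostic-to-overlay flow; a real system
# would register a 3D component model to the captured camera frame.
from dataclasses import dataclass

FAULT_TO_COMPONENT = {          # assumed mapping from ATM error codes
    "E101": "cash_dispenser",
    "E205": "card_reader",
}

@dataclass
class Overlay:
    component: str
    instruction: str

def diagnose(fault_code: str) -> Overlay:
    component = FAULT_TO_COMPONENT.get(fault_code, "unknown")
    return Overlay(component, f"Inspect and reseat the {component}.")

def augment_frame(frame, overlay: Overlay):
    """Placeholder for rendering the modified component view."""
    print(f"Highlighting {overlay.component}: {overlay.instruction}")
    return frame

augment_frame(frame=None, overlay=diagnose("E101"))
```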
Automated measurement of interior spaces through guided modeling of dimensions
Introduced here are computer programs and associated computer-implemented techniques for establishing the dimensions of interior spaces. These computer programs accomplish this by combining knowledge of the interior spaces with spatial information output by an augmented reality (AR) framework. Such an approach allows two-dimensional (2D) layouts to be created seamlessly through guided corner-to-corner measurement of interior spaces.
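A minimal sketch of the measurement step, assuming the AR framework (e.g., ARKit or ARCore) reports each guided corner as a 3D point in a y-up world coordinate frame; projecting the corners onto the floor plane and chaining corner-to-corner distances yields the 2D layout.

```python
# Sketch: 2D layout from AR-reported corner points (y-up convention assumed).
import math

def to_floor_plan(corners_3d):
    """Drop the vertical (y) axis to get 2D floor-plane coordinates."""
    return [(x, z) for x, _, z in corners_3d]

def wall_lengths(layout_2d):
    """Distances between consecutive corners, closing the loop."""
    n = len(layout_2d)
    return [math.dist(layout_2d[i], layout_2d[(i + 1) % n]) for i in range(n)]

corners = [(0.0, 0.0, 0.0), (4.1, 0.0, 0.0), (4.1, 0.0, 3.2), (0.0, 0.0, 3.2)]
print(wall_lengths(to_floor_plan(corners)))   # -> [4.1, 3.2, 4.1, 3.2]
```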
ESTIMATING DIMENSIONS OF GEO-REFERENCED GROUND-LEVEL IMAGERY USING ORTHOGONAL IMAGERY
A system and method are provided for measuring building facade elements by combining ground-level and orthogonal imagery. The dimensions of the facade elements are measured from ground-level imagery that is scaled and geo-referenced using the orthogonal imagery. The method continues by creating a tabular dataset of measurements for one or more architectural elements, such as siding (e.g., aluminum, vinyl, wood, brick, and/or paint), windows, or doors. The tabular dataset can be part of an estimate report.
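A sketch of the scaling step under one assumption: the facade width measured in the geo-referenced orthogonal (top-down) imagery calibrates a meters-per-pixel factor for the ground-level photo, after which any facade element can be measured and added to the tabular dataset. All names and numbers are illustrative.

```python
# Sketch: scale a ground-level photo from a width known via orthoimagery,
# then measure an element and emit a tabular-dataset row.

def meters_per_pixel(facade_width_m: float, facade_width_px: float) -> float:
    return facade_width_m / facade_width_px

def element_size_m(px_w, px_h, m_per_px):
    return px_w * m_per_px, px_h * m_per_px

m_per_px = meters_per_pixel(facade_width_m=12.2, facade_width_px=1880)
window = element_size_m(px_w=145, px_h=230, m_per_px=m_per_px)
rows = [("window", *window)]
print("element  width_m  height_m")
for name, w, h in rows:
    print(f"{name:8} {w:7.2f} {h:9.2f}")
```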
Dual mode control of virtual objects in 3D space
Systems, methods, and non-transitory computer readable media containing instructions for selectively controlling the display of virtual objects are provided. In one implementation, virtual objects may be virtually presented in an environment via a wearable extended reality appliance operable in first and second display modes: in the first display mode, the positions of the virtual objects are maintained in the environment regardless of detected movements of the wearable extended reality appliance, while in the second display mode, the virtual objects move in the environment in response to detected movements of the appliance. Movement of the wearable extended reality appliance may be detected; a selection of the first or second display mode may be received; and display signals configured to present the virtual objects in a manner consistent with the selected display mode may be output for presentation via the appliance.
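A simplified sketch of the two display modes, assuming positions are 3-vectors and the appliance pose is reduced to a position plus a yaw angle; a real appliance would use full 6-DoF transforms. In the first (world-locked) mode the object's position is returned unchanged; in the second (head-locked) mode it is recomputed from the appliance pose each frame.

```python
# Sketch: per-frame object placement under the two display modes.
import math

def render_position(obj_world, head_pos, head_yaw, mode, obj_head_offset):
    if mode == "world_locked":        # first mode: object stays put in the room
        return obj_world
    # Second mode: the object rides along with the appliance.
    dx, dz = obj_head_offset          # offset in the appliance's local frame
    return (head_pos[0] + dx * math.cos(head_yaw) - dz * math.sin(head_yaw),
            head_pos[1],
            head_pos[2] + dx * math.sin(head_yaw) + dz * math.cos(head_yaw))

print(render_position((1, 0, 2), (0, 0, 0), 0.0, "world_locked", (0, 1)))
print(render_position((1, 0, 2), (0.5, 0, 0), math.pi / 2, "head_locked", (0, 1)))
```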
Generation of Product Mesh and Product Dimensions from User Image Data using Deep Learning Networks
The present invention provides systems and methods for generating a 3D product mesh model and product dimensions from user images. The system is configured to receive one or more images of a user's body part, extract a body part mesh having a plurality of body part key points, generate a product mesh from an identified subset of the body part mesh, and generate one or more product dimensions in response to the selection of one or more key points from the product mesh. The system may output the product mesh, the product dimensions, or a manufacturing template of the product. In some embodiments, the system uses one or more machine learning modules to generate the body part mesh, identify the subset of the body part mesh, generate the product mesh, select the one or more key points, and/or generate the one or more product dimensions.
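The deep-learning stages cannot be reproduced from the abstract, but the final step (deriving a product dimension from selected key points on the product mesh) reduces to a distance computation, sketched here with illustrative data.

```python
# Sketch: a product dimension as the distance between two selected
# key-point vertices of the generated product mesh. The mesh and key
# points are assumed outputs of the upstream machine learning modules.
import numpy as np

def dimension_mm(mesh_vertices: np.ndarray, kp_a: int, kp_b: int) -> float:
    """Euclidean distance (mm) between two key-point vertices."""
    return float(np.linalg.norm(mesh_vertices[kp_a] - mesh_vertices[kp_b]))

verts = np.array([[0.0, 0.0, 0.0], [52.0, 3.0, 0.0], [20.0, 40.0, 5.0]])
print(dimension_mm(verts, 0, 1))   # e.g., an illustrative band width in mm
```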
Dynamic Adjustment of Cross-Sectional Views
An example computing system is configured to (i) receive a request to generate a cross-sectional view of a three-dimensional drawing file, where the cross-sectional view is based on a location of a cross-section line within the three-dimensional drawing file and includes an intersection of two meshes within the three-dimensional drawing file; (ii) generate the cross-sectional view of the three-dimensional drawing file; (iii) add, to the generated cross-sectional view, dimensioning information involving at least one of the two meshes; (iv) generate one or more controls for adjusting a location of the cross-section line within the three-dimensional drawing file; and (v) based on an input indicating a selection of the one or more controls, adjust the location of the cross-section line within the three-dimensional drawing file, update the cross-sectional view based on the adjusted location of the cross-section line, and update the dimensioning information to correspond to the updated cross-sectional view.
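A sketch of the geometric core of step (i): intersecting a mesh's triangles with the cutting plane implied by the cross-section line (here a plane x = c, run once per mesh); adjusting the controls in step (v) amounts to recomputing the section with a new c. The plane orientation and data are assumptions for illustration.

```python
# Sketch: mesh/plane cross-section as line segments where triangle edges
# straddle the plane x = c (edges with a vertex exactly on the plane are
# ignored in this simplified version).
import numpy as np

def cross_section(vertices, triangles, c):
    """Return line segments where triangles cross the plane x = c."""
    segments = []
    for tri in triangles:
        pts = []
        for i in range(3):
            p, q = vertices[tri[i]], vertices[tri[(i + 1) % 3]]
            da, db = p[0] - c, q[0] - c
            if da * db < 0:                      # edge straddles the plane
                t = da / (da - db)
                pts.append(p + t * (q - p))
        if len(pts) == 2:
            segments.append((pts[0], pts[1]))
    return segments

v = np.array([[0.0, 0, 0], [2, 0, 0], [2, 2, 0], [0, 2, 0]])
t = [(0, 1, 2), (0, 2, 3)]
for a, b in cross_section(v, t, c=1.0):          # move c to slide the line
    print(np.round(a, 2), "->", np.round(b, 2))
```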