Patent classifications
G06V20/653
Systems and methods for generating instructions for adjusting stock eyewear frames using a 3D scan of facial features
Systems and methods are disclosed for generating a 3D computer model of an eyewear product using a computer system, the method including obtaining an inventory comprising a plurality of product frames; scanning a user's anatomy; extracting measurements of the user's anatomy; obtaining a first model of a contour and/or surface of the user's anatomy based on the extracted measurements; identifying, based on the contour and/or the surface of the user's anatomy, a first product frame among the plurality of product frames; determining adjustments to the first product frame based on the contour and/or the surface of the user's anatomy; and generating a second model rendering comprising the adjusted first product frame matched to the contour and/or the surface of the user's anatomy.
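A minimal sketch of the selection-and-adjustment idea, not the patent's actual method: pick the stock frame whose width best matches a measured face width, then derive a temple adjustment. All field names and measurements are illustrative assumptions.

```python
# Hypothetical sketch: choose the stock frame closest to the measured face
# width, then compute a signed temple-length adjustment. Field names such as
# "total_width_mm" are invented for illustration.

def select_frame(face_width_mm, frames):
    """Pick the frame whose overall width best matches the face width."""
    return min(frames, key=lambda f: abs(f["total_width_mm"] - face_width_mm))

def temple_adjustment(measured_depth_mm, frame):
    """Signed adjustment (mm) so the temple tips sit at the ears."""
    return measured_depth_mm - frame["temple_length_mm"]

frames = [
    {"id": "A", "total_width_mm": 132, "temple_length_mm": 140},
    {"id": "B", "total_width_mm": 138, "temple_length_mm": 145},
]
best = select_frame(136, frames)
adjust = temple_adjustment(142, best)  # negative -> shorten the temple bend
```

A real system would fit full 3D contours rather than scalar widths; the scalar comparison only illustrates the matching step.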
Cameras for emergency rescue
A system for managing treatment of a person in need of emergency assistance is provided. The system includes at least one camera configured to be mounted to a person in need of medical assistance. The system also includes an image processing device, which can be configured to receive images captured by the at least one camera and to process the images to generate a representation of a rescue scene surrounding the person. The system further includes an analysis device. The analysis device can be configured to determine a characteristic associated with a resuscitation activity based on analysis of the representation of the rescue scene generated by the image processing device. A computer-implemented method for managing treatment of a person in need of emergency assistance is also provided.
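One characteristic the analysis device might determine is a chest-compression rate. A hedged sketch, not the patent's method: count peaks in a 1-D vertical-motion signal extracted from the scene representation and convert to compressions per minute.

```python
import math

# Illustrative sketch: estimate a compression rate from a 1-D motion signal
# by counting local maxima over the capture window. The signal source and
# thresholds are assumptions, not the patent's disclosed pipeline.

def count_peaks(signal, min_height=0.5):
    peaks = 0
    for i in range(1, len(signal) - 1):
        if (signal[i] > min_height
                and signal[i] > signal[i - 1]
                and signal[i] >= signal[i + 1]):
            peaks += 1
    return peaks

def compressions_per_minute(signal, fps):
    seconds = len(signal) / fps
    return count_peaks(signal) * 60.0 / seconds

# Synthetic signal: 2 Hz oscillation sampled at 30 fps for 5 seconds
sig = [math.sin(2 * math.pi * 2 * t / 30) for t in range(150)]
rate = compressions_per_minute(sig, fps=30)  # ~120 compressions/min
```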
METHOD FOR MONITORING AN ORTHODONTIC TREATMENT
A method for monitoring the positioning of the teeth, including production of a three-dimensional digital initial reference model of the arches of the patient and, for each tooth, definition, from the initial reference model, of a three-dimensional digital reference tooth model; acquisition of at least one updated two-dimensional image of the arches under actual acquisition conditions; analysis of each updated image and production, for each updated image, of an updated map; optionally, determination, for each updated image, of rough virtual acquisition conditions approximating the actual acquisition conditions; searching, for each updated image, for a final reference model corresponding to the positioning of the teeth during the acquisition of the updated image; and, for each tooth model, comparison of the positionings of the tooth model in the initial reference model and in the reference model obtained at the end of the preceding steps, to determine the movement of the teeth.
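The final comparison step can be sketched in simplified form. Assuming each tooth model's pose is reduced to a 3-D centroid (the real method would also track rotation), per-tooth movement is the displacement between the initial and final reference models:

```python
# Hedged sketch: map each tooth id to the Euclidean displacement of its
# centroid between the initial and final reference models. Tooth ids use
# FDI-style notation purely for illustration.

def tooth_movements(initial, final):
    """Map tooth id -> displacement between the two models."""
    moves = {}
    for tooth_id, p0 in initial.items():
        p1 = final[tooth_id]
        moves[tooth_id] = sum((a - b) ** 2 for a, b in zip(p0, p1)) ** 0.5
    return moves

initial = {"11": (0.0, 0.0, 0.0), "21": (8.5, 0.0, 0.0)}
final   = {"11": (0.3, 0.4, 0.0), "21": (8.5, 0.0, 0.0)}
moves = tooth_movements(initial, final)  # tooth 11 moved ~0.5 mm
```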
Systems and methods for a 3D home model for visualizing proposed changes to home
The following relates generally to light detection and ranging (LIDAR) and artificial intelligence (AI). In some embodiments, a system: receives LIDAR data generated from a LIDAR camera; measures a plurality of dimensions of a room of the home based upon processor analysis of the LIDAR data; builds a 3D model of the room based upon the measured plurality of dimensions; receives an indication of a proposed change to the room; modifies the 3D model to include the proposed change to the room; and displays a representation of the modified 3D model.
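A minimal sketch of the measurement step, assuming an axis-aligned room: the room's width, depth, and height can be taken from the bounding box of the LIDAR point cloud, which then parameterizes a box-shaped 3D model.

```python
# Illustrative sketch: derive room dimensions from a LIDAR point cloud's
# axis-aligned bounding box. Real rooms need plane fitting and alignment;
# the bounding box is a simplifying assumption.

def room_dimensions(points):
    """Return (width, depth, height) of the cloud's bounding box."""
    xs, ys, zs = zip(*points)
    return (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))

# Synthetic point cloud of a 4 m x 3 m x 2.5 m room
cloud = [(0, 0, 0), (4, 0, 0), (4, 3, 0), (0, 3, 2.5), (2, 1.5, 1.2)]
dims = room_dimensions(cloud)
```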
Extended reality space generating apparatus and method
An extended reality space generating apparatus and method are provided. The extended reality space generating apparatus generates a plurality of plane plates, a plate coordinate and a normal vector corresponding to each of the plane plates based on a plurality of point clouds, wherein the point clouds correspond to a real space. The extended reality space generating apparatus compares the plate coordinates and the normal vectors of the plane plates in a visual window to generate an effective plane plate set. The extended reality space generating apparatus generates an extended reality space corresponding to the real space based on the effective plane plate set.
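The comparison step can be sketched as follows, under assumed thresholds: two plane plates are treated as one effective plate when their plate coordinates are close and their normal vectors are near-parallel; only one representative survives into the effective set.

```python
# Hedged sketch of plate deduplication. The distance and parallelism
# thresholds are illustrative assumptions, not values from the patent.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def effective_plate_set(plates, normal_tol=0.99, dist_tol=0.05):
    """plates: list of (plate_coordinate, unit_normal) tuples."""
    effective = []
    for coord, normal in plates:
        duplicate = False
        for e_coord, e_normal in effective:
            close = sum((a - b) ** 2 for a, b in zip(coord, e_coord)) ** 0.5 < dist_tol
            parallel = abs(dot(normal, e_normal)) > normal_tol
            if close and parallel:
                duplicate = True
                break
        if not duplicate:
            effective.append((coord, normal))
    return effective

plates = [
    ((0.0, 0.0, 0.0), (0.0, 0.0, 1.0)),   # floor estimate
    ((0.01, 0.0, 0.0), (0.0, 0.0, 1.0)),  # duplicate floor estimate
    ((0.0, 0.0, 2.5), (0.0, 0.0, -1.0)),  # ceiling
]
plates_out = effective_plate_set(plates)  # floor merged, ceiling kept
```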
METHOD FOR GENERATING A THREE DIMENSIONAL, 3D, MODEL
A method performed by a computer device to generate a three-dimensional (3D) model, the method including obtaining a plurality of two-dimensional (2D) images, each depicting a 3D object and a background of the 3D object from a different viewing direction, wherein the plurality of 2D images are obtained using a camera; generating a total set of key points for each of the plurality of 2D images; and discriminating each total set of key points into a first subset of key points depicting the 3D object and a second subset of key points depicting the background.
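The discrimination step can be sketched as a split of 2D key points into object and background subsets. The bounding-box test below is an assumption for illustration; the actual method may use a segmentation mask or learned classifier.

```python
# Illustrative sketch: partition key points by an object bounding box.
# bbox = (xmin, ymin, xmax, ymax); both subsets together form the total set.

def discriminate(key_points, bbox):
    """Return (object_key_points, background_key_points)."""
    xmin, ymin, xmax, ymax = bbox
    obj = [p for p in key_points if xmin <= p[0] <= xmax and ymin <= p[1] <= ymax]
    bg = [p for p in key_points if p not in obj]
    return obj, bg

kps = [(10, 10), (50, 60), (200, 5)]
obj, bg = discriminate(kps, bbox=(40, 40, 120, 120))
```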
METHOD TO FACILITATE MASS CONVERSION OF 2D DRAWINGS TO 3D MODELS
An internet or cloud-based system, method, or platform (“platform”) used to facilitate the conversion of electronic two-dimensional drawings to three-dimensional models. A group of people (“crowd”) that has been found qualified to make such conversions is selected for the conversion. The two-dimensional drawings are transmitted to the crowd for conversion to three-dimensional models. In some embodiments, multiple instances of the same two-dimensional drawings (or image data) are sent to multiple, independent crowd members so that multiple versions of the same three-dimensional model can be created. Once the models are complete and returned, they are compared to each other on multiple features or characteristics. If two or more three-dimensional models are found to match within the prescribed tolerances, they are determined to be an accurate representation of the product or device shown in the two-dimensional drawings. In some embodiments, the two-dimensional drawings can be divided into subparts and submitted to different crowd members for conversion.
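The verification step described above can be sketched as a pairwise feature comparison. The feature names and tolerances below are illustrative assumptions:

```python
# Hedged sketch: crowd-produced models are compared feature-by-feature;
# a model is accepted once it agrees with at least one other model within
# every prescribed tolerance.

def models_match(m1, m2, tolerances):
    return all(abs(m1[k] - m2[k]) <= tol for k, tol in tolerances.items())

def accept_model(models, tolerances):
    """Return the first model that matches at least one other model, else None."""
    for i, a in enumerate(models):
        for b in models[i + 1:]:
            if models_match(a, b, tolerances):
                return a
    return None

tolerances = {"volume_cm3": 1.0, "height_mm": 0.5}
models = [
    {"volume_cm3": 120.4, "height_mm": 80.1},
    {"volume_cm3": 135.0, "height_mm": 91.0},  # outlier conversion
    {"volume_cm3": 120.9, "height_mm": 80.3},
]
accepted = accept_model(models, tolerances)  # first and third agree
```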
Parameterized model of 2D articulated human shape
Disclosed are computer-readable devices, systems and methods for generating a model of a clothed body. The method includes generating a model of an unclothed human body, the model capturing a shape or a pose of the unclothed human body, determining two-dimensional contours associated with the model, and computing deformations by aligning a contour of a clothed human body with a contour of the unclothed human body. Based on the two-dimensional contours and the deformations, the method includes generating a first two-dimensional model of the unclothed human body, the first two-dimensional model factoring the deformations of the unclothed human body into one or more of a shape variation component, a viewpoint change, and a pose variation and learning an eigen-clothing model using principal component analysis applied to the deformations, wherein the eigen-clothing model classifies different types of clothing, to yield a second two-dimensional model of a clothed human body.
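The deformation computation can be sketched in simplified form. Assuming the clothed and unclothed contours are already in point-wise correspondence, the clothing deformation is the offset at each contour point; the eigen-clothing step would then run PCA over many such deformation vectors, which is omitted here.

```python
# Illustrative sketch: point-wise (dx, dy) deformation between an unclothed
# body contour and an aligned clothed contour. Contours here are tiny
# synthetic quadrilaterals, not real silhouettes.

def contour_deformation(unclothed, clothed):
    """Per-point offsets for correspondence-aligned 2D contours."""
    return [(cx - ux, cy - uy) for (ux, uy), (cx, cy) in zip(unclothed, clothed)]

body  = [(0.0, 0.0), (1.0, 0.0), (1.0, 2.0), (0.0, 2.0)]
dress = [(-0.1, 0.0), (1.1, 0.0), (1.2, 2.0), (-0.2, 2.0)]
deform = contour_deformation(body, dress)  # clothing pushes the contour outward
```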
Anonymization apparatus, surveillance device, method, computer program and storage medium
An anonymization apparatus 6 is proposed for generating anonymized images 9, wherein surveillance images 5 are provided through video surveillance of a surveillance region 3 by means of at least one camera 2. The apparatus comprises a recognition module 11, to which the surveillance images 5 are provided and which is configured to recognize persons 4 contained in the surveillance images 5, and a processing module 13 configured to process the surveillance images 5 into the anonymized images 9, wherein at least one person 4 or person segment included in the surveillance images 5 is anonymized in the anonymized images 9, the processing module 13 being configured to replace the recognized person 4 or person segment with an animated person model 14 for the purpose of anonymization.
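The replacement step can be sketched as a masked pixel substitution. A grid of lists stands in for an image, and the person mask and model render are assumed inputs from the recognition module and an animation stage:

```python
# Hedged sketch: pixels inside the recognized person mask are replaced by
# the corresponding pixels of an animated person-model render; all other
# surveillance-image pixels pass through unchanged.

def anonymize(image, mask, model_render):
    return [
        [model_render[y][x] if mask[y][x] else image[y][x]
         for x in range(len(image[0]))]
        for y in range(len(image))
    ]

image  = [["bg", "person"], ["person", "bg"]]
mask   = [[False, True], [True, False]]
render = [["m", "m"], ["m", "m"]]
anon = anonymize(image, mask, render)
```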
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM
A subject feature detection unit (53) (detection unit) of a mobile terminal (80) (information processing device) detects a line-of-sight direction (E) (feature) of a subject (92) displayed simultaneously with a 3D model (90M) in a captured image (I). Then, the display control unit (54) (control unit) changes a line-of-sight direction (F) (display mode) of the 3D model (90M) so that the 3D model (90M) faces the camera (84) in accordance with the line-of-sight direction (E) of the subject (92) detected by the subject feature detection unit (53).
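A minimal sketch of the facing behavior, under an assumed coordinate convention (y up, z toward the camera, yaw in degrees): the model's yaw is set so its forward axis points from the model position toward the camera position.

```python
import math

# Illustrative sketch: compute the yaw that turns a 3D model's forward axis
# toward the camera. The axis convention and degree units are assumptions.

def yaw_to_face_camera(model_pos, camera_pos):
    """Yaw (degrees) rotating the model's +z forward axis toward the camera."""
    dx = camera_pos[0] - model_pos[0]
    dz = camera_pos[2] - model_pos[2]
    return math.degrees(math.atan2(dx, dz))

yaw = yaw_to_face_camera((1.0, 0.0, 0.0), (1.0, 0.0, 2.0))  # camera dead ahead
```

In the abstract's terms, the detected subject gaze E would trigger this update so the model's line of sight F meets the camera.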