G06T2210/12

METHOD OF PRINTING A THREE-DIMENSIONAL OBJECT COMPRISING A PLURALITY OF DISCRETE ELEMENTS
20230052977 · 2023-02-16

A method of printing a 3D object comprising a plurality of discrete elements, the method comprising: receiving a 3D digital model of a shell group comprising one or more shells representing the plurality of discrete elements; defining, in the 3D digital model, a unifying shell to at least partly envelop one or more shells of the shell group to provide a unified digital model comprising the shell group and the unifying shell; assigning the unifying shell with at least one transparent building material that is transparent upon dispensing and solidifying thereof; assigning the one or more shells of the shell group with one or more building materials; and dispensing, in layers, the at least one transparent building material and the one or more building materials according to the unified digital model to form a 3D object comprising one or more discrete elements that are at least partly connected by a unifying element.

High-Precision Map Construction Method, Apparatus and Electronic Device
20230048643 · 2023-02-16

A high-precision map construction method, apparatus, and electronic device are provided. The method can include: displaying a first color image corresponding to a first track point; obtaining, from a first color sub-image and a depth image corresponding to the first track point, point cloud data corresponding to the first color sub-image, wherein the first color sub-image is a sub-image, within the first color image, corresponding to an element to be added, and the element to be added is an element to be added to a high-precision map for display; extracting a bounding box corresponding to the point cloud data; and generating a newly added three-dimensional element in the high-precision map according to the bounding box.
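The point-cloud step above amounts to back-projecting the depth pixels under the color sub-image into 3D. A minimal sketch, assuming a pinhole camera with intrinsics `fx`, `fy`, `cx`, `cy` (the abstract does not specify the camera model, so these names are illustrative assumptions):

```python
import numpy as np

def backproject_region(depth, mask, fx, fy, cx, cy):
    """Back-project masked depth pixels into 3D camera-frame points (pinhole model)."""
    v, u = np.nonzero(mask)          # pixel rows/cols covered by the color sub-image
    z = depth[v, u]
    valid = z > 0                    # skip pixels with no depth reading
    u, v, z = u[valid], v[valid], z[valid]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)   # (N, 3) point cloud
```

Each masked pixel (u, v) with valid depth z maps to ((u - cx) * z / fx, (v - cy) * z / fy, z); an axis-aligned bounding box can then be extracted from the minima and maxima of the resulting (N, 3) array.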

PART INSPECTION SYSTEM HAVING GENERATIVE TRAINING MODEL

A part inspection system includes a vision device configured to image a part being inspected and generate a digital image of the part. The system includes a part inspection module communicatively coupled to the vision device that receives the digital image of the part as an input image. The part inspection module includes a defect detection model. The defect detection model includes a template image and compares the input image to the template image to identify defects. The defect detection model generates an output image and is configured to overlay defect identifiers on the output image at the identified defect locations, if any.
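A template comparison of this kind can be sketched as a thresholded absolute difference; the `threshold` value and the white-pixel overlay are illustrative assumptions, since the abstract does not state how the defect identifiers are rendered:

```python
import numpy as np

def detect_defects(input_img, template_img, threshold=30):
    """Flag pixels deviating from the template by more than a threshold
    and overlay a simple identifier (white marks) on the output image."""
    diff = np.abs(input_img.astype(np.int16) - template_img.astype(np.int16))
    defect_mask = diff > threshold   # True where the part deviates from the template
    output = input_img.copy()
    output[defect_mask] = 255        # overlay defect identifiers at defect locations
    return output, defect_mask
```

In practice the input image would first be registered to the template so that a per-pixel comparison is meaningful.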

GENERATING SYNTHESIZED DIGITAL IMAGES UTILIZING CLASS-SPECIFIC MACHINE-LEARNING MODELS

This disclosure describes methods, non-transitory computer readable storage media, and systems that generate synthesized digital images using class-specific generators for objects of different classes. The disclosed system modifies a synthesized digital image by utilizing a plurality of class-specific generator neural networks to generate a plurality of synthesized objects according to object classes identified in the synthesized digital image. The disclosed system determines object classes in the synthesized digital image, for example via a semantic label map corresponding to the synthesized digital image. The disclosed system selects class-specific generator neural networks corresponding to the classes of objects in the synthesized digital image. The disclosed system also generates a plurality of synthesized objects utilizing the class-specific generator neural networks based on contextual data associated with the identified objects. The disclosed system generates a modified synthesized digital image by replacing the identified objects in the synthesized digital image with the synthesized objects.
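The per-class replacement loop can be sketched as a dispatch over a semantic label map; the generator call signature and the `context_fn` hook are hypothetical stand-ins for the class-specific neural networks and contextual data described above:

```python
import numpy as np

def modify_image(image, label_map, generators, context_fn):
    """Replace each labeled object region with the output of its
    class-specific generator (label 0 is treated as background)."""
    out = image.copy()
    for cls in sorted(set(label_map.flatten()) - {0}):
        mask = label_map == cls            # pixels belonging to this object class
        context = context_fn(image, mask)  # contextual data for the generator
        out[mask] = generators[cls](image, mask, context)
    return out
```

A real implementation would crop a region around the object, run the class-specific network on that crop, and composite the result back, rather than assigning per-pixel values directly.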

COMPUTER IMPLEMENTED METHODS FOR DENTAL DESIGN

Computer implemented method of generating a dental design, comprising: a) capturing a facial image comprising a head of a patient and a smile; b) displaying it as a first image; c) capturing a 3D intraoral scan; d) aligning the 3D scan to the head; e) determining bounding boxes in the 3D scan, each comprising a single tooth; f) showing a view of the 3D scan and the bounding boxes as a second image; g) showing the bounding boxes as overlay on the first image; i) allowing the bounding boxes to be resized/repositioned; ii) defining a limited set of parameters to characterize the tooth inside the bounding box, and searching a number of candidate matching teeth from a 3D digital library of teeth, and proposing a candidate matching tooth; iii) overlaying the first image with a digital representation of the proposed candidate matching tooth from the digital library.

Viewpoint dependent brick selection for fast volumetric reconstruction

A method of culling parts of a 3D reconstruction volume is provided. The method makes fresh, accurate, and comprehensive 3D reconstruction data available to a wide variety of mobile XR applications with low usage of computational resources and storage space. The method includes culling parts of the 3D reconstruction volume against a depth image. The depth image has a plurality of pixels, each of which represents a distance to a surface in a scene. In some embodiments, the method includes culling parts of the 3D reconstruction volume against a frustum. The frustum is derived from the field of view of an image sensor from which the image data used to create the 3D reconstruction is obtained.
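A toy version of the combined frustum and depth-image culling might test each brick's center against the image bounds and the observed surface depth. The spherical brick bound and the pinhole intrinsics (`fx`, `fy`, `cx`, `cy`) are simplifying assumptions not taken from the abstract:

```python
import numpy as np

def cull_bricks(brick_centers, brick_radius, depth_image, fx, fy, cx, cy):
    """Keep bricks that project inside the view frustum and are not
    entirely behind the surface observed in the depth image."""
    h, w = depth_image.shape
    keep = []
    for x, y, z in brick_centers:
        if z - brick_radius <= 0:                # entirely behind the camera: cull
            continue
        u = int(fx * x / z + cx)                 # project brick center to pixel coords
        v = int(fy * y / z + cy)
        if not (0 <= u < w and 0 <= v < h):      # outside the frustum: cull
            continue
        surface = depth_image[v, u]
        if surface > 0 and z - brick_radius > surface:   # fully occluded: cull
            continue
        keep.append((x, y, z))
    return keep
```

A production system would test the eight brick corners conservatively rather than a single center point, so that partially visible bricks are never discarded.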

System and method for future forecasting using action priors

A system and method for future forecasting using action priors include receiving image data associated with a surrounding environment of an ego vehicle and dynamic data associated with dynamic operation of the ego vehicle. The system and method also include analyzing the image data to detect actions associated with agents located within the surrounding environment of the ego vehicle, and analyzing the dynamic data to process an ego motion history of the ego vehicle. The system and method further include predicting future trajectories of the agents located within the surrounding environment of the ego vehicle and a future ego motion of the ego vehicle within the surrounding environment.

Transferring data from autonomous vehicles
11580687 · 2023-02-14

A system includes at least one imaging sensor and a processor. The processor is configured to acquire, using the imaging sensor, detected data describing an environment of an autonomous vehicle. The processor is further configured to derive reference data, which describe the environment, from a predefined map, to compute difference data representing a difference between the detected data and the reference data, and to transfer the difference data. Other embodiments are also described.
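One minimal reading of the difference computation is a tolerance-based set difference between detected landmarks and landmarks already present in the predefined map, so only the novel observations need to be transferred. The 2D point representation and the `tol` parameter are assumptions for illustration:

```python
def compute_difference(detected, reference, tol=0.5):
    """Return detected observations with no reference-map counterpart
    within tol (the 'difference data' to transfer)."""
    diff = []
    for d in detected:
        matched = any(abs(d[0] - r[0]) <= tol and abs(d[1] - r[1]) <= tol
                      for r in reference)
        if not matched:            # not in the predefined map: part of the difference
            diff.append(d)
    return diff
```

Transferring only this difference, rather than the full detected data, is what keeps the vehicle-to-server bandwidth low.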

Method, system and computer readable medium for automatic segmentation of a 3D medical image

A method, a system and a computer readable medium for automatic segmentation of a 3D medical image, the 3D medical image comprising an object to be segmented, the method characterized by comprising: carrying out, by using a machine learning model, in at least two of a first, a second and a third orthogonal orientation, 2D segmentations for the object in slices of the 3D medical image to derive 2D segmentation data; determining a location of a bounding box (10) within the 3D medical image based on the 2D segmentation data, the bounding box (10) having predetermined dimensions; and carrying out a 3D segmentation for the object in the part of the 3D medical image corresponding to the bounding box (10).
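The localization step might be sketched as follows, assuming the 2D segmentation results from two orientations are stacked into boolean volumes and the fixed-size box is centered on the voxels where they agree (this agreement rule is an illustrative assumption, not stated in the abstract):

```python
import numpy as np

def locate_box(seg_a, seg_b, box_dims):
    """Center a box of predetermined dimensions on the voxels where
    two orthogonal 2D segmentation stacks agree, clamped to the volume."""
    dims = np.asarray(box_dims)
    agreement = seg_a & seg_b                        # voxels confirmed by both orientations
    center = np.argwhere(agreement).mean(axis=0)     # centroid of agreeing voxels
    lo = np.round(center - dims / 2).astype(int)
    lo = np.clip(lo, 0, np.asarray(seg_a.shape) - dims)  # keep box inside the volume
    return tuple(lo), tuple(lo + dims)
```

The full 3D segmentation is then run only inside the returned box, which is what makes the two-stage approach cheaper than segmenting the entire volume.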