Patent classifications
G06T2207/10028
Static obstacle map based perception system
The offline map generation process may collect multiple point cloud data sets of the same area. A perception algorithm may operate on the point cloud data to detect static objects, which may be fixed road features that do not change among the point cloud data sets, allowing the perception algorithm to detect the static objects more accurately. During online operation of the ADV through the area, the ADV may trim regions of interest (ROI) of the area to exclude the predefined static objects. The perception algorithm may process the sensor data of the ROI in real time to detect objects in the ROI. The static objects may be added back to the output of the perception algorithm to complete the perception output.
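The trim-then-add-back flow above can be sketched in a few lines. This is an illustrative toy, not the patent's implementation: the function names, the 2D axis-aligned box format `(xmin, ymin, xmax, ymax)` for static objects, and the pluggable `detector` callable are all assumptions made for the example.

```python
import numpy as np

def trim_static_regions(points, static_boxes):
    """Drop points that fall inside any static-object box so the online
    detector only processes the remaining region of interest (ROI)."""
    keep = np.ones(len(points), dtype=bool)
    for xmin, ymin, xmax, ymax in static_boxes:
        inside = ((points[:, 0] >= xmin) & (points[:, 0] <= xmax) &
                  (points[:, 1] >= ymin) & (points[:, 1] <= ymax))
        keep &= ~inside
    return points[keep]

def perceive(points, static_boxes, detector):
    """Run the detector on the trimmed ROI, then add the map's predefined
    static objects back to complete the perception output."""
    roi_points = trim_static_regions(points, static_boxes)
    detections = detector(roi_points)  # dynamic objects found online
    return detections + [("static", box) for box in static_boxes]
```

The point of the pattern is that the expensive online detector never re-detects objects the offline map already knows about; it only sees the trimmed point cloud.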
Three-dimensional (3D) shape modeling based on two-dimensional (2D) warping
An electronic device and method for 3D modeling based on 2D warping is disclosed. The electronic device acquires a color image of a face of a user, depth information corresponding to the color image, and a point cloud of the face. A 3D mean-shape model of a reference 3D face is acquired and rigidly aligned with the point cloud. A 2D projection of the aligned 3D mean-shape model is generated. The 2D projection includes a set of landmark points associated with the aligned 3D mean-shape model. The 2D projection is warped such that the set of landmark points in the 2D projection is aligned with a corresponding set of feature points in the color image. A 3D correspondence between the aligned 3D mean-shape model and the point cloud is determined for a non-rigid alignment of the aligned 3D mean-shape model, based on the warped 2D projection and the depth information.
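The landmark-alignment step in the warp can be illustrated with a least-squares 2D affine fit. This is a minimal sketch under a simplifying assumption (a single global affine warp, whereas a real face warp would typically be piecewise or thin-plate-spline based); function names are invented for the example.

```python
import numpy as np

def estimate_affine_warp(src, dst):
    """Least-squares 2D affine warp mapping src landmark points (N, 2)
    onto dst feature points (N, 2), i.e. dst ≈ src @ A.T + t."""
    n = len(src)
    X = np.hstack([src, np.ones((n, 1))])             # homogeneous (N, 3)
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (3, 2) solution
    A, t = params[:2].T, params[2]
    return A, t

def warp_points(pts, A, t):
    """Apply the estimated affine warp to a set of 2D points."""
    return pts @ A.T + t
```

In the patent's terms, `src` would play the role of the mean-shape model's projected landmark points and `dst` the detected feature points in the color image.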
Arrangement for producing head related transfer function filters
When three-dimensional audio is produced by using headphones, particular HRTF filters are used to modify sound for the left and right channels of the headphone. As the morphology of every ear is different, it is beneficial to have HRTF filters particularly designed for the user of the headphones. Such filters may be produced by deriving ear geometry from a plurality of images taken with an ordinary camera, detecting the necessary features from the images, and fitting said features to a model that has been produced from accurately scanned ears and comprises representative values for different sizes and shapes. The captured images are sent to a server (52) that performs the necessary computations and submits the data further or produces the requested filter.
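The "fitting said features to a model" step resembles a linear statistical shape-model fit, which can be sketched as follows. This is a toy example, not the patent's method: the mean shape, basis, and least-squares solver are illustrative assumptions standing in for a model built from accurately scanned ears.

```python
import numpy as np

def fit_shape_model(features, mean_shape, basis):
    """Least-squares fit of shape-model coefficients so that
    mean_shape + basis @ coeffs approximates the detected ear features.

    features:   (D,) feature vector measured from the user's images
    mean_shape: (D,) average of the accurately scanned ears
    basis:      (D, K) modes of variation across ear sizes/shapes
    """
    coeffs, *_ = np.linalg.lstsq(basis, features - mean_shape, rcond=None)
    reconstruction = mean_shape + basis @ coeffs
    return reconstruction, coeffs
```

The fitted coefficients parameterize the user's ear within the scanned population, from which a personalized HRTF filter could then be selected or synthesized server-side.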
Systems and methods for scanning three-dimensional objects
According to at least one aspect, a system for scanning an object is provided. The system comprises at least one hardware processor; and at least one non-transitory computer-readable storage medium storing processor executable instructions that, when executed by the at least one hardware processor, cause the at least one hardware processor to perform: generating a first 3-dimensional (3D) model of the object; identifying a set of imaging positions from which to capture at least one image based on the first 3D model of the object; obtaining a set of images of the object captured at, or approximately at, the set of imaging positions; and generating a second 3D model of the object based on the set of images.
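The coarse-model-then-targeted-capture loop can be sketched as below. This is an illustrative simplification (not the patent's algorithm): it assumes a caller-supplied `coverage_fn` scoring how well the first 3D model covers each candidate viewpoint, and a `capture_fn` standing in for the camera; both names are invented.

```python
def plan_and_capture(coarse_model, candidates, coverage_fn, capture_fn, k=3):
    """Identify the k candidate imaging positions where the first (coarse)
    3D model is least covered, then capture images at (or near) them.

    Returns the chosen positions and the captured images, from which a
    second, refined 3D model would be built."""
    ranked = sorted(candidates, key=lambda pos: coverage_fn(coarse_model, pos))
    positions = ranked[:k]                    # worst-covered views first
    images = [capture_fn(pos) for pos in positions]
    return positions, images
```

Ranking viewpoints by coverage is one common way to decide "from which to capture at least one image based on the first 3D model"; the patent's actual criterion may differ.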
Mobile terminal and control method therefor
The present invention relates to a device and a control method therefor and, more specifically, the device comprises: a memory for storing at least one command; a depth camera for capturing at least one hand of a user; a display module; and a controller for controlling the memory, the depth camera, and the display module. The controller controls the depth camera so as to capture the at least one hand of the user and controls the display module so as to output a visual feedback that changes on the basis of the captured hand of the user.
Systems and methods for producing amodal cuboids
Systems and methods for operating an autonomous vehicle. The methods comprising: obtaining, by a computing device, loose-fit cuboids overlaid on 3D graphs so as to each encompass LiDAR data points associated with a given object; defining, by the computing device, an amodal cuboid based on the loose-fit cuboids; using, by the computing device, the amodal cuboid to train a machine learning algorithm to detect objects of a given class using sensor data generated by sensors of the autonomous vehicle or another vehicle; and causing, by the computing device, operations of the autonomous vehicle to be controlled using the machine learning algorithm.
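One plausible reading of "defining an amodal cuboid based on the loose-fit cuboids" is a per-axis maximum over the per-frame loose-fit extents, since each frame sees only the visible (modal) part of the object. The sketch below encodes that assumption; it is illustrative only, and the patent's actual aggregation rule may differ.

```python
def amodal_cuboid(loose_cuboids):
    """Each loose-fit cuboid is (length, width, height) fit around the
    LiDAR points visible in one frame. Taking the per-axis maximum across
    frames approximates the object's full (amodal) extent, including
    portions occluded in any single frame."""
    if not loose_cuboids:
        raise ValueError("need at least one loose-fit cuboid")
    return tuple(max(c[axis] for c in loose_cuboids) for axis in range(3))
```

The resulting amodal dimensions would then serve as training labels for a detector, per the abstract.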
Machine vision-based method and system to facilitate the unloading of a pile of cartons in a carton handling system
A machine vision-based method and system to facilitate the unloading of a pile of cartons within a work cell are provided. The method includes the step of providing at least one 3-D or depth sensor having a field of view at the work cell. Each sensor has a set of radiation sensing elements which detect reflected, projected radiation to obtain 3-D sensor data. The 3-D sensor data includes a plurality of pixels. For each possible pixel location and each possible carton orientation, the method includes generating a hypothesis that a carton with a known structure appears at that pixel location with that carton orientation to obtain a plurality of hypotheses. The method further includes ranking the plurality of hypotheses. The step of ranking includes calculating a surprisal for each of the hypotheses to obtain a plurality of surprisals. The step of ranking is based on the surprisals of the hypotheses.
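Surprisal (self-information) has a standard definition, −log p, which makes the ranking step concrete. The sketch below is a minimal illustration; how the patent actually assigns a probability to each (pixel location, carton orientation) hypothesis is not specified in the abstract, so the `probability` field here is an assumed input.

```python
import math

def surprisal(p):
    """Self-information of an event with probability p, in bits."""
    return -math.log2(p)

def rank_hypotheses(hypotheses):
    """hypotheses: list of (label, probability) pairs, one per candidate
    (pixel location, carton orientation). Lower surprisal means a less
    surprising, better-supported hypothesis, so rank ascending."""
    return sorted(hypotheses, key=lambda h: surprisal(h[1]))
```

A highly probable carton placement has low surprisal and rises to the top of the ranking.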
Training of joint depth prediction and completion
System, methods, and other embodiments described herein relate to training a depth model for joint depth completion and prediction. In one arrangement, a method includes generating depth features from sparse depth data according to a sparse auxiliary network (SAN) of a depth model. The method includes generating a first depth map from a monocular image and a second depth map from the monocular image and the depth features using the depth model. The method includes generating a depth loss from the second depth map and the sparse depth data and an image loss from the first depth map and the sparse depth data. The method includes updating the depth model including the SAN using the depth loss and the image loss.
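The two losses can be sketched with a masked L1 term evaluated only where sparse depth exists. This is a simplified stand-in, not the patent's training code: the L1 form, the `> 0` validity mask, and the weighting `w` are all illustrative assumptions.

```python
import numpy as np

def sparse_depth_loss(pred, sparse_depth):
    """L1 error between a predicted depth map and sparse depth data,
    evaluated only at pixels where a sparse measurement exists (> 0)."""
    mask = sparse_depth > 0
    return float(np.mean(np.abs(pred[mask] - sparse_depth[mask])))

def joint_loss(first_map, second_map, sparse_depth, w=0.5):
    """Combine the image loss (first map: prediction from the monocular
    image alone) and the depth loss (second map: prediction also using
    SAN depth features) into one training objective."""
    image_loss = sparse_depth_loss(first_map, sparse_depth)
    depth_loss = sparse_depth_loss(second_map, sparse_depth)
    return w * image_loss + (1 - w) * depth_loss
```

Both branches are supervised against the same sparse data, which is what couples depth prediction and completion during the update step.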
Determining Spatial Relationship Between Upper Teeth and Facial Skeleton
A computer-implemented method includes receiving a 3D model representative of upper teeth (U1) of a patient (P) and receiving a plurality of images of a face of the patient (P). The method also includes generating a facial model (200) of the patient based on the received plurality of images and determining, based on the generated facial model (200), the received 3D model of the upper teeth (U1), and the plurality of images, a spatial relationship between the upper teeth (U1) of the patient (P) and a facial skeleton of the patient (P).
METHOD FOR PRODUCING AND CLASSIFYING POLYCRYSTALLINE SILICON
A method for producing and classifying polycrystalline silicon. The method includes producing a polycrystalline silicon rod within a reaction space of a gas phase deposition reactor by introducing a reaction gas, which in addition to hydrogen contains silane and/or at least one halosilane. Once produced, the polycrystalline silicon rod is extracted from the reactor, and at least one two-dimensional and/or three-dimensional image is generated of at least one partial region of the polycrystalline silicon rod or of at least one silicon chunk created therefrom. At least one analysis region is selected per generated image, and at least two surface-structure indices are generated per analysis region, each using a different image processing method. The surface-structure indices are combined to form a morphology index.
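The abstract does not say how the surface-structure indices are combined; a weighted mean is one simple possibility, sketched below purely for illustration. The function name, the weighting scheme, and the requirement of at least two indices per analysis region (taken from the abstract) are the only structure assumed here.

```python
def morphology_index(indices, weights=None):
    """Combine two or more surface-structure indices, each produced by a
    different image processing method, into a single morphology index via
    a weighted mean (an assumed combination rule, not the patent's)."""
    if len(indices) < 2:
        raise ValueError("the method requires at least two indices")
    if weights is None:
        weights = [1.0] * len(indices)
    return sum(i * w for i, w in zip(indices, weights)) / sum(weights)
```

The resulting scalar could then be used to classify rods or chunks by surface morphology.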