SYSTEM AND METHOD FOR 3D SCANNING
A system for capturing a 3D image of a subject includes a detection device which is structured to capture images of the subject and surrounding environment, a projection device which is structured to provide a source of structured light, and a processing unit in communication with the detection device and the projection device. The processing unit is programmed to: analyze an image of the subject captured by the detection device; modify one or more of: the output of the projection device or the intensity of a source of environmental lighting illuminating the subject based on the analysis of the image; and capture a 3D image of the subject with the detection device and the projection device using the modified one or more of the output of the projection device or the intensity of the source of environmental lighting illuminating the subject.
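The analyze-then-modify loop described above can be sketched as a simple proportional brightness controller. This is a minimal illustration, not the patent's method: the target brightness, gain, and power range are all assumed values, and a real system would analyze the structured-light response rather than raw mean intensity.

```python
import numpy as np

def adjust_projector_output(image, target_mean=128.0, current_power=0.5,
                            gain=0.002, min_power=0.0, max_power=1.0):
    """Sketch of the analyze-then-modify step: nudge the projector's
    output power so the imaged subject approaches a target brightness.
    All tuning constants here are illustrative assumptions."""
    mean_brightness = float(np.mean(image))
    # Proportional correction: too dark -> raise power, too bright -> lower it.
    new_power = current_power + gain * (target_mean - mean_brightness)
    return float(np.clip(new_power, min_power, max_power))

# Example: an underexposed frame drives the projector power upward.
dark_frame = np.full((480, 640), 40.0)
print(adjust_projector_output(dark_frame, current_power=0.5))  # 0.676
```

The same correction could equally be applied to the environmental lighting intensity, as the abstract's "one or more of" phrasing suggests.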
Method and system for automatic quality inspection of materials and virtual material surfaces
The present document describes methods and systems for the automatic inspection of material quality. A set of lights with a geometric pattern is cast on the material to be analyzed. Depending on the material being inspected, it may act as a mirror, in which case the reflected image is captured by a capture device, or the light may pass through the material and the transmitted image is captured by a capture device. Defects in the material can be detected through the distortion they cause in the pattern of the reflected or transmitted image. Finally, software is used to identify and locate these distortions, and consequently the defects in the material. The classification of defects is carried out using artificial intelligence techniques.
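The distortion-localization step can be illustrated with a toy comparison against an undistorted reference pattern. This is a hedged sketch under simplifying assumptions (a known reference image and a fixed difference threshold); the abstract's AI-based classification stage is not shown.

```python
import numpy as np

def locate_pattern_distortions(reference, captured, threshold=30.0):
    """Flag pixels where the captured (reflected or transmitted) pattern
    deviates from the undistorted reference pattern. The threshold is an
    assumed, illustrative value."""
    diff = np.abs(captured.astype(float) - reference.astype(float))
    defect_mask = diff > threshold
    ys, xs = np.nonzero(defect_mask)
    return defect_mask, list(zip(ys.tolist(), xs.tolist()))

# Example: a stripe pattern with one locally distorted pixel.
ref = np.tile([0.0, 255.0], (8, 4))   # 8x8 vertical stripes
cap = ref.copy()
cap[3, 2] = 128.0                      # distortion caused by a defect
mask, coords = locate_pattern_distortions(ref, cap)
print(coords)  # [(3, 2)]
```

In practice the reference would come from a defect-free calibration target, and the flagged regions would be passed to a classifier rather than reported directly.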
System and method for identifying items
The method for item recognition can include: optionally calibrating a sampling system, determining visual data using the sampling system, determining a point cloud, determining region masks based on the point cloud, generating a surface reconstruction for each item, generating image segments for each item based on the surface reconstruction, and determining a class identifier for each item using the respective image segments.
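The listed steps can be sketched as a toy end-to-end pipeline. Every stand-in below is a loud simplification: the point cloud is a unit-focal-length back-projection, region masks come from a naive depth threshold, the surface-reconstruction step is skipped in favour of a plain crop, and the classifier is supplied by the caller. None of this reflects the patent's actual implementation.

```python
import numpy as np

def item_recognition_pipeline(rgb, depth, classify):
    """Toy sketch of the listed steps with naive stand-ins for each one."""
    # Determine a point cloud (unit focal length, principal point at origin).
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    cloud = np.stack([us * depth, vs * depth, depth], axis=-1)
    # Determine a region mask: foreground = anything closer than background.
    mask = depth < depth.max()
    # Generate an image segment for the masked item (bounding-box crop).
    ys, xs = np.nonzero(mask)
    segment = rgb[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    # Determine a class identifier from the segment.
    return cloud, mask, classify(segment)

# Example: a bright 2x2 item against a dark, farther background.
rgb = np.zeros((4, 4, 3)); rgb[1:3, 1:3] = 1.0
depth = np.full((4, 4), 3.0); depth[1:3, 1:3] = 1.0
cloud, mask, label = item_recognition_pipeline(
    rgb, depth, classify=lambda seg: "bright" if seg.mean() > 0.5 else "dark")
print(label)  # bright
```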
DUAL-PATTERN OPTICAL 3D DIMENSIONING
An optical dimensioning system includes one or more light emitting assemblies configured to project one or more predetermined patterns on an object; an imaging assembly configured to sense light scattered and/or reflected off the object, and to capture an image of the object while the patterns are projected; and a processing assembly configured to analyze the image of the object to determine one or more dimension parameters of the object. The light emitting assembly may include a single piece optical component configured for producing a first pattern and a second pattern. The patterns may be distinguishable based on directional filtering, feature detection, feature shift detection, or the like. A method for optical dimensioning includes illuminating an object with at least two detectable patterns; and calculating dimensions of the object by analyzing pattern separation of the elements comprising the projected patterns. One or more pattern generators may produce the patterns.
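Once pattern elements have been triangulated into 3D points, dimension parameters can be read off the point set. A minimal sketch, assuming the "dimension parameters" are the extents of an axis-aligned bounding box (the patent itself does not specify this reduction):

```python
import numpy as np

def dimensions_from_points(points):
    """Given 3D points triangulated from the projected patterns, return
    the axis-aligned length, width, and height of the object."""
    pts = np.asarray(points, dtype=float)
    extents = pts.max(axis=0) - pts.min(axis=0)
    return tuple(extents.tolist())

# Example: points sampled from a 2 x 1 x 0.5 box.
box = np.array([[0, 0, 0], [2, 0, 0], [2, 1, 0], [0, 1, 0.5], [2, 1, 0.5]])
print(dimensions_from_points(box))  # (2.0, 1.0, 0.5)
```

A real dimensioner would typically fit an oriented bounding box and correct for the support surface before reporting dimensions.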
STRUCTURED LIGHT PROJECTION SYSTEM INCLUDING NARROW BEAM DIVERGENCE SEMICONDUCTOR SOURCES
A structured light projection system includes narrow beam divergence semiconductor sources. The structured light projector system includes an array of narrow beam divergence semiconductor sources, and a projection lens operable to generate an image of the array of narrow beam divergence semiconductor sources. Each narrow beam divergence semiconductor source can include an extended length mirror that helps suppress one or more longitudinal and/or transverse modes such that the beam divergence and/or the spectral width of emission is substantially reduced.
OBJECT ANNOTATION METHOD AND APPARATUS, MOVEMENT CONTROL METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
Embodiments of this application disclose an object annotation method and apparatus, a movement control method and apparatus, a device, and a storage medium. The method includes: obtaining a reference image recorded by an image sensor from an environment space, the reference image comprising at least one reference object; obtaining target point cloud data obtained by a three-dimensional space sensor by scanning the environment space, the target point cloud data indicating a three-dimensional space region occupied by a target object in the environment space; determining a target reference object corresponding to the target object from the reference image; determining a projection size of the three-dimensional space region corresponding to the target point cloud data when the three-dimensional space region is projected onto the reference image; and performing three-dimensional annotation on the target reference object in the reference image according to the determined projection size.
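The projection-size step can be illustrated with a standard pinhole camera model: project the corners of the 3D region into the image and measure the extent of the resulting 2D box. The intrinsics below are assumed example values, and the patent does not specify this particular camera model.

```python
import numpy as np

def projected_size(corners_3d, fx, fy, cx, cy):
    """Project the corners of the 3D region occupied by the target object
    into the image with a pinhole model, and return the width and height
    in pixels of the resulting 2D annotation box."""
    pts = np.asarray(corners_3d, dtype=float)
    u = fx * pts[:, 0] / pts[:, 2] + cx
    v = fy * pts[:, 1] / pts[:, 2] + cy
    return float(u.max() - u.min()), float(v.max() - v.min())

# Example: a 1 m cube centred 5 m in front of the camera.
cube = np.array([[x, y, z] for x in (-0.5, 0.5)
                           for y in (-0.5, 0.5)
                           for z in (4.5, 5.5)])
print(projected_size(cube, fx=600, fy=600, cx=320, cy=240))
```

The nearest face of the cube dominates the projected extent, which is why all eight corners must be projected rather than just the centroid.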
Methods and apparatus for imaging and 3D shape reconstruction
An otoscope may project a temporal sequence of phase-shifted fringe patterns onto an eardrum, while a camera in the otoscope captures images. A computer may calculate a global component of these images. Based on this global component, the computer may output an image of the middle ear and eardrum. This image may show middle ear structures, such as the stapes and incus. Thus, the otoscope may see through the eardrum to visualize the middle ear. The otoscope may project another temporal sequence of phase-shifted fringe patterns onto the eardrum, while the camera captures additional images. The computer may subtract a fraction of the global component from each of these additional images. Based on the resulting direct-component images, the computer may calculate a 3D map of the eardrum.
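The global/direct decomposition under shifted high-frequency illumination can be sketched with the classic per-pixel max/min separation (in the style of Nayar et al.'s fast separation method, which this abstract's approach resembles but is not stated to be). The half-lit-scene assumption behind the `2 * min` estimate is an assumption of this sketch.

```python
import numpy as np

def separate_direct_global(images):
    """Per-pixel separation under shifted high-frequency illumination:
    direct ~ max - min, global ~ 2 * min, assuming each pattern lights
    roughly half the scene at any pixel's neighbourhood."""
    stack = np.asarray(images, dtype=float)
    i_max = stack.max(axis=0)
    i_min = stack.min(axis=0)
    return i_max - i_min, 2.0 * i_min

# Example: a pixel with direct = 100 and global = 40, observed under
# three pattern shifts (lit, unlit, lit).
obs = [np.array([[120.0]]), np.array([[20.0]]), np.array([[120.0]])]
direct, global_ = separate_direct_global(obs)
print(direct[0, 0], global_[0, 0])  # 100.0 40.0
```

In the abstract's terms, the global component images reveal structures behind the eardrum, while the direct-component images (after subtracting a fraction of the global component) feed the fringe-based 3D reconstruction.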
3D model reconstruction method, electronic device, and non-transitory computer readable storage medium thereof
A 3D (three dimensional) model reconstruction method that includes the steps outlined below. Depth data of a target object corresponding to a current time spot is received from a depth camera. Camera pose data of the depth camera corresponding to the current time spot is received. Posed 3D point clouds corresponding to the current time spot are generated according to the depth data and the camera pose data. Posed estimated point clouds corresponding to the current time spot are generated according to the camera pose data corresponding to the current time spot and a previous 3D model corresponding to a previous time spot. A current 3D model of the target object is generated from the posed 3D point clouds based on a difference between the posed 3D point clouds and the posed estimated point clouds.
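The posed-point-cloud step above amounts to back-projecting the depth map into camera space and applying the camera pose. A minimal sketch, assuming a pinhole model and a 4x4 camera-to-world pose matrix (conventions the abstract does not spell out):

```python
import numpy as np

def depth_to_posed_points(depth, fx, fy, cx, cy, pose):
    """Back-project a depth map to camera-space points, then transform
    them into world space with the camera pose (4x4 camera-to-world)."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    x = (us.ravel() - cx) * z / fx
    y = (vs.ravel() - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)  # homogeneous
    pts_world = (pose @ pts_cam.T).T[:, :3]
    return pts_world

# Example: a flat 2x2 depth map 2 m away, camera translated 1 m along x.
depth = np.full((2, 2), 2.0)
pose = np.eye(4); pose[0, 3] = 1.0
pts = depth_to_posed_points(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5, pose=pose)
```

The "posed estimated point clouds" would then be obtained by rendering the previous model from the same pose, and the per-point difference between the two clouds drives the model update.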
Apparatuses, systems and methods for generating a vehicle driver signature
Apparatuses, systems and methods are provided for generating a vehicle driver signature. More particularly, apparatuses, systems and methods are provided for generating a vehicle driver signature based on vehicle interior image data.