Patent classifications
G06T7/586
METHOD FOR CREATING A DIGITAL TWIN OF AN INFRASTRUCTURE COMPONENT
Systems and methods for creating a digital twin of an infrastructure component. The digital twin is a computerized, three-dimensional model of the component, typically a pipe, created after manufacture but before installation. The digital twin can be saved on a computer-readable storage medium for later retrieval, and can be loaded into three-dimensional modeling software for manipulation and viewing from various angles and perspectives. The twin is created from measurements taken by a plurality of imaging systems, each capturing different surfaces or different aspects of the component; these measurements are mapped to a uniform coordinate system to generate a three-dimensional model. Other data may also be added to or stored with the digital twin, such as manufacturing specifications, photographic data, and current or historical inspection data. The digital twin may be viewed on a mobile device programmed to receive the digital twin and allow the user to view and manipulate it.
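The mapping of each imaging system's measurements into a uniform coordinate system can be sketched as a per-sensor rigid-body transform. This is a minimal illustration, not the patented method: the rotations, translations, and scanner poses below are hypothetical calibration values.

```python
import numpy as np

def to_common_frame(points: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Map an (N, 3) array of sensor-frame points into the common
    coordinate system using the sensor's rigid-body extrinsics (R, t)."""
    return points @ R.T + t

# Two hypothetical scanners viewing opposite sides of a pipe, each with
# its own calibrated pose relative to the shared model frame.
scan_a = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0]])
scan_b = np.array([[0.0, 0.0, 1.0]])

R_a, t_a = np.eye(3), np.array([0.0, 0.0, 0.0])
# Scanner B faces the opposite direction: rotated 180 degrees about the
# y-axis and offset 2 m along z, so both scanners observe the same surface.
R_b = np.array([[-1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, -1.0]])
t_b = np.array([0.0, 0.0, 2.0])

# Merged point set in the uniform coordinate system of the digital twin.
model = np.vstack([to_common_frame(scan_a, R_a, t_a),
                   to_common_frame(scan_b, R_b, t_b)])
```

With these poses, scanner B's point lands at the same model-frame location (0, 0, 1) that scanner A observes, which is what allows surfaces captured by different systems to be fused into one model.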
Tangible object virtualization station
A tangible object virtualization station including a base capable of stably resting on a surface and a head component unit connected to the base. The head component unit extends upwardly from the base. At an end of the head component opposite the base, the head component comprises a camera situated to capture a downward view of the surface proximate the base, and a lighting array that directs light downward toward the surface proximate the base. The tangible object virtualization station further comprises a display interface included in the base. The display interface is configured to hold a display device in an upright position and connect the display device to the camera and the lighting array.
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
An image processing apparatus includes: a first obtainment unit configured to obtain a plurality of captured images of an object captured under a plurality of image capture conditions; a second obtainment unit configured to obtain information on a material of the object; and an estimation unit configured to estimate a distribution of normals of the object based on the plurality of captured images and the information on the material.
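Estimating a normal distribution from images captured under multiple lighting conditions is the classic photometric-stereo setting. The sketch below assumes Lambertian reflectance, with the material information reduced to a scalar albedo; the apparatus claimed above may use a richer material model, so treat this as an illustrative baseline only.

```python
import numpy as np

# Unit light directions for three image capture conditions (rows of L).
L = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
L = L / np.linalg.norm(L, axis=1, keepdims=True)

# Simulate the observed intensities at one pixel whose true normal faces
# the camera (+z) with albedo 0.8: I = albedo * max(L . n, 0) (Lambertian).
true_n = np.array([0.0, 0.0, 1.0])
albedo = 0.8
I = albedo * np.clip(L @ true_n, 0.0, None)

# Least-squares solve L g = I, where g = albedo * n; the normal is the
# direction of g and the albedo is its magnitude.
g, *_ = np.linalg.lstsq(L, I, rcond=None)
est_albedo = np.linalg.norm(g)
est_n = g / est_albedo
```

Run per pixel over all captured images, this recovers a dense normal map; three non-coplanar light directions are the minimum for the linear system to be solvable.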
VEHICLE IMAGING STATION
A vehicle imaging station for capturing images of scratches on a vehicle, the vehicle imaging station including a tunnel having an entrance and an exit with one or more walls defining an enclosure between the entrance and exit to define a tunnel volume containing a vehicle pathway having a central axis. The station further includes a relatively bright reflection surface; a relatively dark reflection surface; and a camera array including one or more cameras arranged with: a first field of view including a first portion of the tunnel volume in which a relatively bright image defined by the relatively bright reflection surface will be reflected by a vehicle moving along the vehicle pathway so as to be visible to the camera array; and a second field of view including a second portion of the tunnel volume in which a relatively dark image defined by the relatively dark reflection surface will be reflected by a vehicle moving along the vehicle pathway so as to be visible to the camera array.
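The point of pairing bright and dark reflection surfaces is that a scratch scatters light out of the specular path: it shows up as a dark anomaly against the bright reflection and a bright anomaly against the dark reflection. The abstract does not specify the detection algorithm, so the following is a hedged sketch of that two-view principle; the function name, threshold, and median-based baseline are illustrative assumptions.

```python
import numpy as np

def scratch_mask(bright_img: np.ndarray, dark_img: np.ndarray,
                 thresh: float = 0.3) -> np.ndarray:
    """Flag pixels that are anomalously dark against the bright reflection
    AND anomalously bright against the dark reflection -- the signature of
    a light-scattering defect such as a scratch."""
    dark_in_bright = (np.median(bright_img) - bright_img) > thresh
    bright_in_dark = (dark_img - np.median(dark_img)) > thresh
    return dark_in_bright & bright_in_dark

# Synthetic frames: an unblemished panel reflects the bright surface at
# ~0.9 and the dark surface at ~0.1; one scratched pixel scatters light.
bright = np.full((4, 4), 0.9)
dark = np.full((4, 4), 0.1)
bright[2, 2] = 0.3   # scratch removes light from the specular bright path
dark[2, 2] = 0.7     # ...and scatters it into the otherwise dark path
mask = scratch_mask(bright, dark)
```

Requiring agreement between both fields of view is what suppresses false positives from dirt or glare that appear in only one of the two reflections.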
Determining the relative position between a point cloud generating camera and another camera
A method for determining the relative position between a first camera and a second camera used in a medical application, wherein the first camera captures a 2D image of a phantom, the second camera emits light onto the phantom and analyzes the reflected light, thus generating a 3D point cloud representing points on the surface of the phantom, and the phantom has a planar surface forming a background on which a plurality of 2D markers are formed, wherein one of the background and the 2D markers is reflective, thus reflecting light emitted by the second camera back to the second camera, and the other one is non-reflective, thus not reflecting light emitted by the second camera back to the second camera. The method involves that:
a) the first camera captures a 2D image of the phantom;
b) the second camera generates a 3D point cloud representing the planar surface of the phantom;
c) the 2D markers are identified in the 2D image, thus obtaining 2D marker data representing the locations of the 2D markers in the 2D image;
d) the 2D markers are identified in the 3D point cloud using the property that points on a non-reflective part of the planar surface are identified as having a larger distance to the second camera than points on a reflective part of the planar surface, thus obtaining 3D marker data representing the locations of the 2D markers in a reference system of the second camera; and
e) the relative position between the first camera and the second camera is found by applying a Perspective-n-Points algorithm to the 2D marker data and the 3D marker data.
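Step d) above — separating marker points from background points by their apparent distance — can be sketched directly. The threshold values and the synthetic cloud below are illustrative assumptions, not values from the patent.

```python
import numpy as np

def split_markers(points: np.ndarray, plane_dist: float,
                  margin: float = 0.02):
    """Separate measured 3D points on the phantom's planar surface into
    reflective-surface points (distance ~ plane_dist from the camera) and
    non-reflective points, which the point-cloud-generating camera reports
    as lying farther away.  plane_dist/margin are calibration assumptions."""
    d = np.linalg.norm(points, axis=1)  # distance from the second camera
    reflective = points[d <= plane_dist + margin]
    non_reflective = points[d > plane_dist + margin]
    return reflective, non_reflective

# Synthetic cloud: reflective background measured at ~1.0 m, points on the
# non-reflective 2D markers erroneously reported at ~1.2 m.
cloud = np.array([[0.0, 0.0, 1.0],
                  [0.1, 0.0, 1.0],
                  [0.0, 0.1, 1.2],
                  [0.1, 0.1, 1.2]])
reflective, non_reflective = split_markers(cloud, plane_dist=1.0)
```

For step e), the centroids of the non-reflective clusters (the 3D marker data), paired with the corresponding 2D marker centroids from the first camera's image, would then be passed to a Perspective-n-Points solver, for example OpenCV's `cv2.solvePnP`.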
IMAGE PROCESSING DEVICE, METHOD OF GENERATING 3D MODEL, LEARNING METHOD, AND PROGRAM
An imaging unit (43) (first acquisition unit) of a video generation/display device (10a) (image processing device) acquires, at each time, an image of a subject (18) (object) captured in a situation in which the state of an illumination device (11) changes over time. An illumination control information input unit (41) (second acquisition unit) acquires the state of the illumination device (11) at each time at which the imaging unit (43) captures an image. A foreground clipping processing unit (44a) (clipping unit) then clips the subject (18) from the image captured by the imaging unit (43) based on the state of the illumination device (11) at each time, as acquired by the illumination control information input unit (41). A modeling processing unit (46) (model generation unit) generates a 3D model (18M) of the subject (18) clipped by the foreground clipping processing unit (44a).
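The abstract does not disclose the clipping algorithm itself, only that it uses the known illumination state at each capture time. One minimal way to exploit that knowledge is difference keying against a background model scaled by the known light level, so the changing background cancels out; the function, threshold, and linear-scaling assumption below are all illustrative.

```python
import numpy as np

def clip_foreground(frame: np.ndarray, background_ref: np.ndarray,
                    light_level: float, thresh: float = 0.1) -> np.ndarray:
    """Clip the subject from `frame`, captured while the illumination device
    was at `light_level` (0..1).  The empty-scene reference `background_ref`
    (captured at full illumination) is scaled by the known light level, so
    the background cancels regardless of the per-frame lighting state."""
    expected_bg = background_ref * light_level
    return np.abs(frame - expected_bg) > thresh  # True = foreground pixel

bg_ref = np.full((4, 4), 0.8)   # empty scene at full illumination
light = 0.5                     # illumination state at this capture time
frame = bg_ref * light          # background as seen under dimmed lighting
frame[1:3, 1:3] = 0.9           # subject occupies a 2x2 patch
mask = clip_foreground(frame, bg_ref, light)
```

The clipped silhouettes from each time step would then feed the modeling stage, e.g. a visual-hull or multi-view reconstruction producing the 3D model of the subject.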