G06T7/529

METHOD FOR DEPICTING AN OBJECT

The invention relates to technologies for visualizing a three-dimensional (3D) image. According to the claimed method, a 3D model is generated and visualized, and images of an object are produced. The 3D model, together with a reference pattern and the coordinates of texturing portions corresponding to the polygons of the 3D model, is stored in a depiction device. At least one frame of the image of the object is captured, the object in the frame is identified on the basis of the reference pattern, and a matrix for converting photo-image coordinates into dedicated coordinates is generated. Elements of the 3D model are then coloured in the colours of the corresponding elements of the image by generating a texture of the image-sensing area using the coordinate conversion matrix and data interpolation, after which the texture is assigned to the 3D model.
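
The colouring step described above amounts to texture mapping through a coordinate-conversion matrix with interpolated sampling. A minimal sketch, not the claimed method: a 3×3 homography stands in for the conversion matrix, and bilinear interpolation for the data interpolation; all names are illustrative.

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 points through a 3x3 coordinate-conversion matrix."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]  # divide out homogeneous w

def bilinear_sample(image, xy):
    """Sample (h, w, C) image colours at fractional (x, y) positions."""
    h, w = image.shape[:2]
    x = np.clip(xy[:, 0], 0, w - 1.001)
    y = np.clip(xy[:, 1], 0, h - 1.001)
    x0, y0 = x.astype(int), y.astype(int)
    fx, fy = (x - x0)[:, None], (y - y0)[:, None]
    c00 = image[y0, x0];     c10 = image[y0, x0 + 1]
    c01 = image[y0 + 1, x0]; c11 = image[y0 + 1, x0 + 1]
    # weighted blend of the four surrounding photo pixels
    return (c00 * (1 - fx) * (1 - fy) + c10 * fx * (1 - fy)
            + c01 * (1 - fx) * fy + c11 * fx * fy)
```

Texture coordinates of each polygon would be mapped into the photo frame with `apply_homography` and then coloured via `bilinear_sample`.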

Guide-assisted capture of material data

A material data collection system allows capture of material data. For example, the material data collection system may capture digital image data for materials. The material data collection system may ensure that captured digital image data is properly aligned, so that material data may be easily recalled for later use while maintaining the proper alignment of the captured digital image. The material data collection system may use a capture guide to provide cues on how to orient a mobile device used with the material data collection system.
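
The orientation cues mentioned above could, for instance, be driven by the device's IMU readings. A hypothetical sketch only — the thresholds and cue wording are illustrative, not from the patent:

```python
def orientation_cue(pitch_deg, roll_deg, tol_deg=3.0):
    """Return cues telling the user how to level the device.

    pitch_deg/roll_deg would come from the mobile device's IMU;
    tol_deg is an illustrative tolerance for treating the device
    as aligned with the capture guide.
    """
    cues = []
    if pitch_deg > tol_deg:
        cues.append("tilt top away")
    elif pitch_deg < -tol_deg:
        cues.append("tilt top toward you")
    if roll_deg > tol_deg:
        cues.append("rotate left")
    elif roll_deg < -tol_deg:
        cues.append("rotate right")
    return cues or ["aligned - capture"]
```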

Position and attitude estimation device, position and attitude estimation method, and storage medium

According to one embodiment, a position and attitude estimation device includes a processor. The processor is configured to acquire time-series images continuously captured by a capture device installed on a mobile object, estimate a first position and attitude of the mobile object based on the acquired time-series images, estimate a distance to a subject included in the acquired time-series images, and, based on the estimated distance, correct the estimated first position and attitude to a second position and attitude on an actual scale.
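
The scale correction described here reflects a general property of monocular visual odometry: translation is recovered only up to an unknown global scale, which an independent metric depth estimate can fix. A sketch of that principle, not the patented method; all names are illustrative:

```python
import numpy as np

def correct_scale(position, attitude, depth_up_to_scale, depth_metric):
    """Rescale an up-to-scale pose estimate to metric units.

    depth_up_to_scale: subject depth in the odometry's arbitrary units;
    depth_metric: the same depth estimated in metres. Their ratio fixes
    the global scale. Attitude (rotation) is scale-invariant and passes
    through unchanged.
    """
    scale = depth_metric / depth_up_to_scale
    return position * scale, attitude
```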

SINGLE-VIEW FEATURE-LESS DEPTH AND TEXTURE CALIBRATION

A method and apparatus for performing single-view depth and texture calibration are described. In one embodiment, the apparatus comprises a calibration unit operable to perform a single-view calibration process using a captured single view of a target, the target having a plurality of plane geometries with detectable features and being at a single orientation, and to generate calibration parameters to calibrate one or more of a projector and multiple cameras using the single view of the target.

Method and system for diffusing color error into additive manufactured objects

A method of processing data for additive manufacturing of a 3D object comprises: receiving graphic elements defining a surface of the object, and an input color texture to be visible over a surface of the object; transforming the elements to voxelized computer object data; constructing a 3D color map having a plurality of pixels, each being associated with a voxel and being categorized as either a topmost pixel or an internal pixel. Each topmost pixel is associated with a group of internal pixels forming a receptive field for the topmost pixel. A color-value is assigned to each topmost pixel and each internal pixel of a receptive field associated with the topmost pixel, based on the color texture and according to a subtractive color mixing. A material to be used during the additive manufacturing is designated based on the color-value.
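
The assignment of colour-values down a receptive field under subtractive mixing can be sketched roughly as follows. This is an illustrative greedy scheme, not the patented algorithm; it approximates the subtractive mix perceived through the surface by the average of the voxel column's layers, and all names are hypothetical:

```python
import numpy as np

def assign_receptive_field(target_cmy, palette_cmy, depth):
    """Choose a material index for a topmost pixel and each internal
    pixel of its receptive field (a column of `depth` voxels).

    Greedily picks, layer by layer, the palette material (in CMY) that
    keeps the running mix closest to the target colour.
    """
    chosen = []
    running = np.zeros(3)
    for i in range(depth):
        # error if each candidate material were added as the next layer
        errs = [np.linalg.norm((running + m) / (i + 1) - target_cmy)
                for m in palette_cmy]
        best = int(np.argmin(errs))
        chosen.append(best)
        running += palette_cmy[best]
    return chosen
```

Alternating clear and saturated layers in this way lets the averaged stack approximate intermediate colours that no single material provides.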

Multi-channel depth estimation using census transforms

A depth estimation system is described capable of determining depth information using two images from two cameras. A first camera captures a first image and a second camera captures a second image, both images including a plurality of light channels. A scan direction is selected from a plurality of scan directions. For the selected scan direction, along each of a plurality of scanlines, the system compares pixels from the first image to pixels from the second image. The comparison is based on calculating a census transform for each pixel in the first image and a census transform for each pixel in the second image. This comparison is used to determine a stereo correspondence between the pixels in the first image and the pixels in the second image. The system generates a depth map based on the stereo correspondence.
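
The census-transform comparison and the resulting stereo correspondence can be sketched as below, for a single horizontal scan direction and one light channel. Window size, cost handling, and function names are illustrative, not taken from the patented system:

```python
import numpy as np

def census_transform(img, window=3):
    """Census transform: per pixel, a bit-string of neighbour-vs-centre
    comparisons within the window (np.roll wraps at image borders)."""
    r = window // 2
    out = np.zeros(img.shape, dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            neighbour = np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
            out = (out << np.uint64(1)) | (neighbour < img).astype(np.uint64)
    return out

def hamming(a, b):
    """Matching cost: Hamming distance between census bit-strings."""
    return np.vectorize(lambda v: bin(int(v)).count("1"))(a ^ b)

def disparity_map(left, right, max_disp=8):
    """Winner-takes-all correspondence along horizontal scanlines."""
    cl, cr = census_transform(left), census_transform(right)
    h, w = left.shape
    costs = np.full((h, w, max_disp), 64, dtype=np.int64)  # sentinel cost
    for d in range(max_disp):
        # left pixel x is compared against right pixel x - d
        costs[:, d:, d] = hamming(cl[:, d:], cr[:, :w - d])
    return np.argmin(costs, axis=2)
```

A full system in the spirit of the abstract would aggregate such costs over several scan directions and light channels before selecting the disparity, then convert disparity to depth.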

SYSTEMS AND METHODS FOR VISION TEST AND USES THEREOF

Systems and methods for vision test and uses thereof are disclosed. A method may be implemented on a mobile device having at least a processor, a camera and a display screen. The method may include capturing at least one image of a user using the camera of the mobile device; interactively guiding the user to a predetermined distance from the display screen of the mobile device based on the at least one image; presenting material on the display screen upon a determination that the user is at the predetermined distance from the display screen; and receiving input from the user in response to the material presented on the display screen. The material presented on the display screen may be for assessing at least one characteristic of the user's vision. Mobile devices and non-transitory machine-readable mediums having machine-executable instructions embodied thereon for assessing a user's vision also are disclosed.
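
The step of interactively guiding the user to a predetermined distance could, for example, rely on a pinhole-camera estimate of distance from the apparent face size in the captured image. A hypothetical sketch — the face width, focal length, and target-distance defaults are illustrative, not from the disclosure:

```python
def distance_from_face_width(face_px, face_mm=150.0, focal_px=1000.0):
    """Pinhole-camera distance estimate: distance = f * W / w,
    where f is the focal length in pixels, W the real face width,
    and w the face width measured in the image. Defaults are
    illustrative; a real app would calibrate the focal length."""
    return focal_px * face_mm / face_px

def guidance(face_px, target_mm=400.0, tol_mm=25.0, **kw):
    """Cue the user toward the predetermined viewing distance."""
    d = distance_from_face_width(face_px, **kw)
    if d < target_mm - tol_mm:
        return "move back"
    if d > target_mm + tol_mm:
        return "move closer"
    return "hold position"
```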