Patent classifications
H04N2013/0074
METHOD AND DEVICE FOR RECOGNISING DISTANCE IN REAL TIME
The invention relates to a device for recognising distance in real time including first and second cameras. The device also includes a third camera arranged nearer the first camera than the second camera. The first, second and third cameras simultaneously acquire first, second and third images respectively. The device also includes an electronic circuit that estimates the distance of an object as a function of a stereoscopic correspondence established between first and second elements representative of the object and belonging to the first and second images respectively. The stereoscopic correspondence is established by taking into account a relationship between the first elements and corresponding third elements belonging to the third image.
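The abstract does not specify a matching algorithm, but the trinocular idea can be sketched as follows: a candidate disparity between the first and second cameras predicts where the point must also appear in the third camera, whose baseline to the first camera is a known fraction of the first-to-second baseline. The function names, the SAD cost, and the rectified-row setup are illustrative assumptions, not the patented method.

```python
def sad(a, b):
    """Sum of absolute differences between two equal-length patches."""
    return sum(abs(x - y) for x, y in zip(a, b))

def trinocular_disparity(row1, row2, row3, x, win, max_d, baseline_ratio):
    """Estimate the disparity at column x of row1.

    row1/row2/row3 are rectified intensity rows from the first, second
    and third cameras. baseline_ratio = (baseline 1-3) / (baseline 1-2),
    so the third camera sees the same point shifted by d * baseline_ratio.
    """
    patch1 = row1[x - win:x + win + 1]
    best_d, best_cost = None, float("inf")
    for d in range(max_d + 1):
        x2 = x - d                           # predicted column in camera 2
        x3 = x - round(d * baseline_ratio)   # predicted column in camera 3
        if x2 - win < 0 or x3 - win < 0:
            break
        cost12 = sad(patch1, row2[x2 - win:x2 + win + 1])
        cost13 = sad(patch1, row3[x3 - win:x3 + win + 1])
        # The third view penalises matches that are only accidentally
        # consistent between the first two views.
        cost = cost12 + cost13
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

Depth then follows from the usual stereo relation z = f * B / d.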
Methods and apparatus for generating a three-dimensional reconstruction of an object with reduced distortion
Methods, systems, and computer readable media for generating a three-dimensional reconstruction of an object with reduced distortion are described. In some aspects, a system includes at least two image sensors, at least two projectors, and a processor. Each image sensor is configured to capture one or more images of an object. Each projector is configured to illuminate the object with an associated optical pattern and from a different perspective. The processor is configured to perform the acts of receiving, from each image sensor, for each projector, images of the object illuminated with the associated optical pattern and generating, from the received images, a three-dimensional reconstruction of the object. The three-dimensional reconstruction has reduced distortion because the received images are captured while each projector illuminates the object with its associated optical pattern from a different perspective.
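One reason multiple projector perspectives reduce distortion is that a region shadowed or washed out under one projector is often well illuminated under another, so the per-projector reconstructions can be fused. A minimal sketch of such a per-pixel fusion step, assuming each projector yields its own depth map with `None` marking pixels that could not be reconstructed (the fusion rule is an illustrative assumption):

```python
def fuse_depths(depth_a, depth_b, invalid=None):
    """Per-pixel fusion of two depth measurements, each reconstructed
    under a different projector perspective: average where both are
    valid, fall back to whichever single view saw the point."""
    fused = []
    for a, b in zip(depth_a, depth_b):
        if a is not invalid and b is not invalid:
            fused.append((a + b) / 2)   # both projectors saw the pixel
        elif a is not invalid:
            fused.append(a)             # only projector A's pattern worked
        elif b is not invalid:
            fused.append(b)             # only projector B's pattern worked
        else:
            fused.append(invalid)       # neither view recovered a depth
    return fused
```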
Infant monitoring system
A monitoring system includes a sensor, a processing block and an alert generator. The sensor is used to generate images of an infant. The processing block processes one or more of the images and identifies a condition with respect to the infant. The alert generator causes generation of an alert signal if the identified condition warrants external notification. In an embodiment, the sensor is a 3D camera, and the 3D camera, the processing block and the alert generator are part of a unit placed in the vicinity of the infant. The monitoring system further includes microphones and motion sensors to enable detection of sounds and movement.
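The split between "identify a condition" and "decide whether it warrants notification" can be sketched as two separate functions. The condition names, thresholds, and inputs below are invented for illustration; the patent does not enumerate specific conditions.

```python
# Hypothetical set of conditions that warrant external notification.
NOTIFIABLE = {"face_covered", "prolonged_crying"}

def identify_condition(motion_level, sound_level, face_visible):
    """Processing block: map per-frame measurements to a condition.
    All rules here are illustrative stand-ins."""
    if not face_visible:
        return "face_covered"
    if sound_level > 0.8:
        return "prolonged_crying"
    if motion_level == 0.0 and sound_level == 0.0:
        return "sleeping"
    return "normal"

def alert_needed(condition):
    """Alert generator: signal only when the condition is notifiable."""
    return condition in NOTIFIABLE
```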
APPARATUS AND METHOD FOR MEASURING POSITION OF STEREO CAMERA
An apparatus and method for measuring the position of a stereo camera. The apparatus for measuring a position of the camera according to an embodiment includes a feature point extraction unit for extracting feature points from images captured by a first camera and a second camera and generating a first feature point list based on the feature points; a feature point recognition unit for extracting feature points from images captured by the cameras after the cameras have moved, generating a second feature point list based on the feature points, and recognizing actual feature points based on the first feature point list and the second feature point list; and a position variation measurement unit for measuring variation in positions of the cameras based on variation in relative positions of the actual feature points.
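A minimal sketch of the measurement step, under the simplifying assumption of pure camera translation and static scene points: the "actual" feature points are those recognized in both lists, and a static point appears displaced by the negative of the camera's motion. The dict-of-descriptors representation and the averaging are illustrative assumptions.

```python
def camera_translation(first_list, second_list):
    """Estimate camera translation from variation in the relative
    positions of feature points recognized in both lists.

    Each list maps a feature descriptor to its (x, y, z) position in
    the camera frame (e.g. from stereo triangulation). Pure translation
    is assumed for this sketch; a full method would also solve rotation.
    """
    shared = first_list.keys() & second_list.keys()  # "actual" feature points
    if not shared:
        raise ValueError("no feature points recognized in both lists")
    n = len(shared)
    # A static point appears displaced by -t when the camera moves by t,
    # so the camera motion is the mean of (before - after).
    return tuple(
        sum(first_list[k][i] - second_list[k][i] for k in shared) / n
        for i in range(3)
    )
```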
Method for identification of contamination upon a lens of a stereoscopic camera
A method for identifying contamination upon a lens of a stereoscopic camera is disclosed. The stereoscopic camera is arranged such that it has the same capturing area over time, and is provided with a first camera providing first images of said capturing area and a second camera providing second images of said capturing area. The first and second images are divided into at least one evaluation area correspondingly located in the respective images. A traffic surveillance system is also disclosed where contamination upon a lens of a stereoscopic camera is identified according to said method.
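Because both cameras observe the same capturing area, a contaminated lens reveals itself as a persistent asymmetry between correspondingly located evaluation areas. One way to sketch this, using a simple gradient-based sharpness proxy (the proxy, the ratio threshold, and the names are illustrative assumptions, not the claimed method):

```python
def sharpness(area):
    """Simple sharpness proxy: mean absolute horizontal gradient
    over a 2-D list of pixel intensities."""
    total, count = 0, 0
    for row in area:
        for a, b in zip(row, row[1:]):
            total += abs(a - b)
            count += 1
    return total / count if count else 0.0

def contaminated(first_areas, second_areas, ratio=0.5):
    """Flag the lens whose evaluation area is persistently less sharp.

    first_areas/second_areas: the same evaluation area sampled over time
    from the first and second camera. Returns "first", "second", or None.
    """
    s1 = sum(sharpness(a) for a in first_areas) / len(first_areas)
    s2 = sum(sharpness(a) for a in second_areas) / len(second_areas)
    if s1 < ratio * s2:
        return "first"    # first camera consistently blurrier
    if s2 < ratio * s1:
        return "second"   # second camera consistently blurrier
    return None
```

Averaging over many frames matters: a moving vehicle can blur one view momentarily, but only lens contamination stays fixed over time.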
SYSTEM AND METHOD FOR SIMULTANEOUS CONSIDERATION OF EDGES AND NORMALS IN IMAGE FEATURES BY A VISION SYSTEM
This invention applies dynamic weighting between a point-to-plane and point-to-edge metric on a per-edge basis in an acquired image using a vision system. This allows an applied ICP technique to be significantly more robust to a variety of object geometries and/or occlusions. A system and method herein provides an energy function that is minimized to generate candidate 3D poses for use in alignment of runtime 3D image data of an object with model 3D image data. Since normals are much more accurate than edges, the use of normals is desirable when possible. However, in some use cases, such as a plane, edges provide information in directions that the normals do not. Hence the system and method defines a normal information matrix, which represents the directions in which sufficient information is present. Performing (e.g.) a principal component analysis (PCA) on this matrix provides a basis for the available information.
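The normal information matrix idea can be sketched concretely: accumulate the outer products of the surface normals, then eigendecompose. Directions with large eigenvalues are well constrained by point-to-plane terms; directions with near-zero eigenvalues (e.g. the two in-plane directions of a flat object, whose normals are all parallel) must be constrained by point-to-edge terms instead. The function names and tolerance are assumptions for illustration.

```python
import numpy as np

def normal_information_matrix(normals):
    """Sum of outer products n_i n_i^T over unit surface normals."""
    n = np.asarray(normals, dtype=float)
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    return n.T @ n

def underconstrained_directions(normals, rel_tol=1e-3):
    """PCA on the information matrix: return the eigenvectors whose
    eigenvalues are negligible, i.e. directions the normals do not pin
    down and where edge correspondences must supply the constraint."""
    m = normal_information_matrix(normals)
    w, v = np.linalg.eigh(m)          # eigenvalues in ascending order
    weak = w < rel_tol * w[-1]
    return v[:, weak].T               # one row per weak direction
```

For a plane with all normals along z, this returns two directions spanning the xy-plane, exactly where the point-to-edge metric should be weighted up.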
GENERATING ADAPTIVE THREE-DIMENSIONAL MESHES OF TWO-DIMENSIONAL IMAGES
Methods, systems, and non-transitory computer readable storage media are disclosed for generating three-dimensional meshes representing two-dimensional images for editing the two-dimensional images. The disclosed system utilizes a first neural network to determine density values of pixels of a two-dimensional image based on estimated disparity. The disclosed system samples points in the two-dimensional image according to the density values and generates a tessellation based on the sampled points. The disclosed system utilizes a second neural network to estimate camera parameters and modify the three-dimensional mesh based on the estimated camera parameters of the pixels of the two-dimensional image. In one or more additional embodiments, the disclosed system generates a three-dimensional mesh to modify a two-dimensional image according to a displacement input. Specifically, the disclosed system maps the three-dimensional mesh to the two-dimensional image, modifies the three-dimensional mesh in response to a displacement input, and updates the two-dimensional image.
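The density-guided sampling step can be sketched independently of the neural networks: given a per-pixel density map (standing in here for the first network's output), pixels with higher density, typically where estimated disparity changes quickly, are sampled more often, so the subsequent tessellation concentrates triangles where the geometry needs them. The weighted-sampling approach below is an illustrative assumption.

```python
import random

def sample_points(density, n_samples, seed=0):
    """Draw n_samples (row, col) pixel positions with probability
    proportional to the per-pixel density values.

    density: 2-D list of non-negative weights (a stand-in for the
    disparity-based density values the patent's first network predicts).
    """
    coords, weights = [], []
    for r, row in enumerate(density):
        for c, w in enumerate(row):
            coords.append((r, c))
            weights.append(w)
    rng = random.Random(seed)   # fixed seed for reproducibility
    return rng.choices(coords, weights=weights, k=n_samples)
```

The sampled points would then feed a Delaunay-style tessellation to produce the adaptive mesh.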
APPARATUS, METHOD AND COMPUTER PROGRAM FOR PERFORMING OBJECT RECOGNITION
An apparatus for performing object recognition includes an image camera to capture a first resolution image and a depth map camera to capture a second resolution depth map. The first resolution is greater than the second resolution. The apparatus is configured to perform object recognition based on the image and the depth map.
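Combining a high-resolution image with a lower-resolution depth map typically requires bringing the depth map onto the image grid first. A minimal nearest-neighbour upsampling sketch follows; the patent does not prescribe a particular scheme, so this is an illustrative assumption.

```python
def upsample_depth(depth, target_h, target_w):
    """Nearest-neighbour upsampling of a low-resolution depth map to the
    higher-resolution image grid, so each image pixel gets a depth value
    before joint object recognition.

    depth: 2-D list (src_h x src_w); returns target_h x target_w list.
    """
    src_h, src_w = len(depth), len(depth[0])
    return [
        [depth[r * src_h // target_h][c * src_w // target_w]
         for c in range(target_w)]
        for r in range(target_h)
    ]
```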
IMAGE PROCESSING DEVICE, DISPLAY DEVICE, CONTROL METHOD FOR IMAGE PROCESSING DEVICE, AND CONTROL PROGRAM
An image that provides a three-dimensional sensation close to actual viewing in the natural world is generated with simple processing. A controller of an image processing device includes: an attention area specifying unit that specifies an attention area out of a plurality of areas formed by dividing an image to be displayed by a display in the vertical direction thereof, on the basis of a fixation point detected by a fixation point detecting sensor; and an image processor that generates a post-processing image by performing emphasis processing on at least a part of the attention area of the image.
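The scheme can be sketched as: split the image into horizontal bands (division along the vertical direction), select the band containing the detected fixation point, and emphasize it. The contrast-gain emphasis below is an illustrative stand-in for the patent's emphasis processing, and all names are assumptions.

```python
def emphasize_attention_area(image, fixation_row, n_areas, gain=1.5):
    """Emphasize the band of the image that contains the fixation point.

    image: 2-D list of grayscale pixel values.
    fixation_row: row of the detected fixation point.
    n_areas: number of bands the image is divided into vertically.
    """
    h = len(image)
    band = min(fixation_row * n_areas // h, n_areas - 1)
    lo, hi = band * h // n_areas, (band + 1) * h // n_areas
    out = [row[:] for row in image]
    for r in range(lo, hi):
        # Simple per-row contrast boost around the mean as the emphasis.
        mean = sum(out[r]) / len(out[r])
        out[r] = [mean + gain * (p - mean) for p in out[r]]
    return out
```

Boosting only the attended band mimics foveal emphasis cheaply, which matches the abstract's goal of a depth-like impression "with simple processing".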
Method and system for improving detail information in digital images
Various aspects of a method and a system for image processing are disclosed herein. The method includes processing an input image, which comprises structure information and detail information, using image processing (IP) blocks in an image processing pipeline. One or more of the IP blocks, such as the lossy IP blocks, process the input image with at least a partial loss of the detail information. By replacing the lossy IP blocks with redesigned IP modules, the image processing pipeline reduces or avoids such loss of the detail information. A more efficient implementation of the improved pipeline is realized by using a master IP module when the lossy IP blocks are reordered and grouped together in the image processing pipeline. The method is further extended to process 3-D images to reduce or avoid loss of detail information in a 3-D image processing pipeline.
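One way a redesigned IP module can avoid detail loss is to split the input into structure and detail, run the lossy operation only on the structure, and re-add the detail afterwards. The sketch below shows this on a 1-D signal with a box smoother standing in both for the structure extractor and for a lossy operation; all of these choices are illustrative assumptions, not the patented design.

```python
def box_smooth(signal, radius=1):
    """Crude structure extractor: moving average of a 1-D signal."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def detail_preserving(signal, lossy_op, radius=1):
    """Sketch of a redesigned IP module: split the input into structure
    (smoothed) and detail (residual), run the lossy operation on the
    structure only, then add the detail back so it survives the pipeline."""
    structure = box_smooth(signal, radius)
    detail = [s - st for s, st in zip(signal, structure)]
    processed = lossy_op(structure)
    return [p + d for p, d in zip(processed, detail)]
```

Running a sharp impulse through the lossy operation directly flattens it, while the detail-preserving wrapper carries the impulse through intact.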