Patent classifications
H04N13/00
Moving flying object for scanning an object, and system for analyzing damage to the object
An aircraft that includes a helicopter drone on which a 3D scanner is mounted via an actively rotatable joint is provided. The 3D scanner has at least one high-resolution camera for recording a multiplicity of overlapping images of the object from different recording positions and recording directions, so that comparison of the images allows a position and orientation of the 3D scanner relative to the object to be ascertained. In addition, the aircraft has a coordination device for coordinated control of the 3D scanner, the joint and the helicopter drone. The system for damage analysis has such an aircraft and an image processing module that generates a data representation of a surface profile of the object on the basis of the recorded images. In addition, the system includes a rating device for checking the surface profile and for outputting a damage statement on the basis of the check.
Securing a monitored zone comprising at least one machine
A safe optoelectronic sensor is provided for securing a monitored zone comprising at least one machine, wherein the sensor has at least one light receiver for generating a received signal from received light from the monitored zone and a control and evaluation unit that is configured to determine distances from objects in the monitored zone from the received signal, and to treat gaps, i.e., safety-relevant partial regions of the monitored zone in which no reliable distance can be determined, as an object at a predefined distance. The predefined distance corresponds to a height for securing against reach-over.
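The gap-handling rule in this abstract can be sketched briefly. The following is a minimal illustration, not the patented implementation: pixels of a depth map for which no reliable distance could be measured (marked NaN here) are replaced by a predefined distance. The function name and the reach-over height value are illustrative assumptions.

```python
import numpy as np

# Assumed securing height against reach-over (illustrative value, in meters).
REACH_OVER_HEIGHT_M = 2.4

def fill_gaps(depth_map: np.ndarray,
              predefined_distance: float = REACH_OVER_HEIGHT_M) -> np.ndarray:
    """Treat unreliable measurements (NaN) as an object at a predefined distance."""
    filled = depth_map.copy()
    filled[np.isnan(filled)] = predefined_distance
    return filled

# One reliable and one unreliable pixel: the gap is treated as an object.
depths = np.array([[1.0, np.nan],
                   [3.2, 0.8]])
print(fill_gaps(depths))
```

The safety logic downstream can then evaluate every pixel uniformly, since gaps now look like objects at a conservative height.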
3D sensor and method of monitoring a monitored zone
A 3D sensor for monitoring a monitored zone is provided, wherein the 3D sensor has at least one light receiver for generating a received signal from received light from the monitored zone and has a control and evaluation unit that is configured to detect objects in the monitored zone by evaluating the received signal, to determine the shortest distance of the detected objects from at least one reference volume, and to read at least one distance calculated in advance from the reference volume from a memory for the determination of the respective shortest distance of a detected object.
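The precomputation idea in this abstract can be sketched as follows. This is a hedged illustration under simplified assumptions (a 1-D grid and an interval-shaped reference volume, both invented here for brevity): distances from every grid cell to the reference volume are computed once in advance, and the runtime evaluation only performs table lookups.

```python
import numpy as np

# Assumed 1-D measurement grid and an interval-shaped "reference volume".
GRID = np.arange(0.0, 5.0, 0.5)
REF_MIN, REF_MAX = 1.0, 2.0

# Precomputed lookup table: distance of every grid cell to [REF_MIN, REF_MAX].
LUT = np.maximum.reduce([REF_MIN - GRID, GRID - REF_MAX, np.zeros_like(GRID)])

def shortest_distance(object_positions) -> float:
    """Look up precomputed distances for detected points; return the minimum."""
    idx = np.clip(np.round(np.asarray(object_positions) / 0.5).astype(int),
                  0, len(GRID) - 1)
    return float(LUT[idx].min())

print(shortest_distance([0.0, 3.0]))  # the nearer detected point decides
```

The same scheme extends to a 3-D voxel grid and arbitrary reference volumes; the expensive geometry is paid once offline, leaving only indexing at sensor frame rate.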
Image display device including moveable display element and image display method
An image display device includes a processor that sets a location of a virtual image plane on which a virtual image is formed according to depth information included in first image data and generates second image data obtained by correcting the first image data based on the set location of the virtual image plane; an image forming optical system including a display element configured to modulate light to form a display image according to the second image data and a light transfer unit that forms the virtual image on the virtual image plane, the virtual image corresponding to the display image formed by the display element, the light transfer unit comprising a focusing member; and a drive unit that drives the image forming optical system to adjust the location of the virtual image plane.
Detection and ranging based on a single monoscopic frame
One or more stereoscopic images are generated based on a single monoscopic image that may be obtained from a camera sensor. Each stereoscopic image includes a first digital image and a second digital image that, when viewed using any suitable stereoscopic viewing technique, allow a user or software program to perceive a three-dimensional effect with respect to the elements included in the stereoscopic images. The monoscopic image may depict a geographic setting of a particular geographic location, and the resulting stereoscopic image may provide a three-dimensional (3D) rendering of that setting. Use of the stereoscopic image helps a system obtain more accurate detection and ranging capabilities. The stereoscopic image may be any configuration of the first digital image (monoscopic) and the second digital image (monoscopic) that together may generate a 3D effect as perceived by a viewer or software program.
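One common way to obtain a second view from a single monoscopic image, sketched below as an assumption rather than the claimed method, is depth-image-based rendering: each pixel is shifted horizontally by a disparity derived from an estimated depth map, producing a synthetic second-eye image. All names and the disparity formula are illustrative.

```python
import numpy as np

def synthesize_right_view(left: np.ndarray, depth: np.ndarray,
                          max_disparity: int = 4) -> np.ndarray:
    """Shift pixels horizontally by inverse depth to synthesize a second view."""
    h, w = left.shape[:2]
    right = np.zeros_like(left)
    # Nearer pixels (smaller depth) get larger disparity.
    disparity = (max_disparity / np.maximum(depth, 1e-6)).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x - disparity[y, x]
            if 0 <= nx < w:
                right[y, nx] = left[y, x]
    return right
```

The original image and the synthesized view together form the stereoscopic pair; holes left by disoccluded pixels would be inpainted in a practical pipeline.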
Scope of coverage indication in immersive displays
An immersive display and a method of operating the immersive display to provide information relating to an object are provided. The method includes receiving information from an input device of the immersive display or coupled to the immersive display, detecting an object based on the information received from the input device, and displaying a representation of the object on images displayed on a display of the immersive display such that attributes of the representation distinguish the representation from the images displayed on the display, wherein the representation is displayed at a location on the display that corresponds with a location of the object.
Methods and apparatus for generating a three-dimensional reconstruction of an object with reduced distortion
Methods, systems, and computer readable media for generating a three-dimensional reconstruction of an object with reduced distortion are described. In some aspects, a system includes at least two image sensors, at least two projectors, and a processor. Each image sensor is configured to capture one or more images of an object. Each projector is configured to illuminate the object with an associated optical pattern and from a different perspective. The processor is configured to perform the acts of receiving, from each image sensor, for each projector, images of the object illuminated with the associated optical pattern and generating, from the received images, a three-dimensional reconstruction of the object. The three-dimensional reconstruction has reduced distortion due to the received images of the object being generated when each projector illuminates the object with an associated optical pattern from the different perspective.
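The fusion step implied by this abstract can be sketched in a hedged form: each projector perspective yields its own depth estimate, biased by occlusions and shadowing from that perspective, and a confidence-weighted combination reduces the distortion of the final reconstruction. The function, array names, and the weighting heuristic are illustrative assumptions, not the claimed method.

```python
import numpy as np

def fuse_reconstructions(depths, confidences) -> np.ndarray:
    """Confidence-weighted average of per-projector depth maps (H x W each)."""
    depths = np.stack(depths)        # (n_projectors, H, W)
    weights = np.stack(confidences)  # same shape, non-negative
    total = weights.sum(axis=0)
    return (depths * weights).sum(axis=0) / np.maximum(total, 1e-9)
```

Regions shadowed for one projector can carry zero confidence there and still be filled from the other perspective, which is the distortion-reduction effect the abstract describes.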
Super-resolution depth map generation for multi-camera or other environments
A method includes obtaining, using at least one processor, first and second input image frames, where the first and second input image frames are associated with first and second image planes, respectively. The method also includes obtaining, using the at least one processor, a depth map associated with the first input image frame. The method further includes producing another version of the depth map by performing one or more times: (a) projecting, using the at least one processor, the first input image frame to the second image plane in order to produce a projected image frame using (i) the depth map and (ii) information identifying a conversion from the first image plane to the second image plane and (b) adjusting, using the at least one processor, at least one of the depth map and the information identifying the conversion from the first image plane to the second image plane.
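The project-then-adjust loop of this abstract can be sketched in a toy 1-D setting. Everything below is an illustrative assumption (a 1-D pinhole disparity model, a global depth-scale search as the "adjusting" step); the real method operates on full image planes and a general plane-to-plane conversion.

```python
import numpy as np

def project(frame: np.ndarray, depth: np.ndarray,
            baseline: float = 1.0, focal: float = 8.0) -> np.ndarray:
    """Project a 1-D frame to the second image plane:
    shift each pixel by disparity = focal * baseline / depth."""
    out = np.zeros_like(frame)
    disp = np.round(focal * baseline / depth).astype(int)
    for x in range(frame.shape[0]):
        nx = x + disp[x]
        if 0 <= nx < frame.shape[0]:
            out[nx] = frame[x]
    return out

def refine_depth(frame1: np.ndarray, frame2: np.ndarray, depth: np.ndarray,
                 scales=(0.8, 0.9, 1.0, 1.1, 1.2)) -> np.ndarray:
    """Adjust the depth map: keep the scaled version whose projection
    best matches the second input frame (photometric error)."""
    best, best_err = depth, np.abs(project(frame1, depth) - frame2).sum()
    for s in scales:
        err = np.abs(project(frame1, depth * s) - frame2).sum()
        if err < best_err:
            best, best_err = depth * s, err
    return best
```

Repeating the projection/adjustment cycle, as the claim's "one or more times" suggests, progressively reduces the mismatch between the projected frame and the second input frame.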