Monitoring the scan volume of a 3D scanner

11022432 · 2021-06-01

Abstract

Disclosed is 3D scanning using a 3D scanner configured for detecting when the scanned object is at rest in the scan volume of the 3D scanner.

Claims

1. A 3D scanner for scanning objects placed by an operator in a scan volume of the 3D scanner, where the 3D scanner comprises: an optical scanning unit configured for recording 3D geometry data of an object placed in the scan volume, the optical scanning unit comprising: a light source arranged to project a beam of probe light into the scan volume, and an image acquisition unit comprising at least two 2D cameras arranged to record 2D images of light received from the object placed in the scan volume; the scan volume being defined by at least a part of an overlapping section of cones representing fields of view of the individual 2D cameras, and a control unit comprising a data processor and a non-transitory computer readable medium encoded with a computer program product comprising readable program code being executable by the processor to cause the processor to: monitor the scan volume while the operator moves the object into place in the scan volume, detect movement of the object in the scan volume in the step of monitoring the scan volume by analyzing monitoring data acquired for the object at different points in time, initiate 3D scanning when the object is determined to be at rest in the scan volume, and generate and record a digital 3D representation of the object from the recorded geometry data after the step of initiating 3D scanning for later use.

2. The 3D scanner according to claim 1, wherein the readable program code is executable by the processor and is configured to cause the processor to: acquire monitoring data at least for a foreign object, present in the scan volume, at different points in time; detect movement at least of the foreign object present in the scan volume by analyzing the monitoring data acquired at least for the foreign object at different points in time; and prevent the initiation of 3D scanning of the object when the foreign object is determined to be in motion in the scan volume.

3. The 3D scanner according to claim 1, wherein preventing the initiation of the 3D scanning of the object comprises preventing the initiation of 3D scanning of the object when the object is determined to be at rest in the scan volume and the foreign object is determined to be in motion in the scan volume.

4. The 3D scanner according to claim 2, wherein preventing the initiation of the 3D scanning of the object comprises preventing the initiation of 3D scanning of the object when the object is determined to be at rest in the scan volume and the foreign object is determined to be in motion in the scan volume.

5. The 3D scanner according to claim 1, wherein the monitoring data is recorded by capturing ambient light reflected by the object placed in the scan volume and/or the foreign object present in the scan volume.

6. The 3D scanner according to claim 1, wherein the monitoring data comprises monitoring 2D images of the object and foreign object recorded at different points in time when the foreign object is present in the scan volume.

7. The 3D scanner according to claim 1, wherein the monitoring data comprises monitoring 2D images of the object recorded at different points in time when the foreign object is absent from the scan volume.

8. The 3D scanner according to claim 1, wherein the monitoring data acquired for the object represents position and/or orientation of the object in the scan volume.

9. The 3D scanner according to claim 1, wherein analyzing the monitoring data comprises comparing the monitoring 2D images recorded at different points in time.

10. The 3D scanner according to claim 9, wherein the comparing the monitoring 2D images recorded at different points in time comprises comparing pixel values of pixels in the monitoring 2D images.

11. The 3D scanner according to claim 1, wherein the object is determined to be at rest in the scan volume when a difference between the compared pixel values is below a first predefined threshold value.

12. The 3D scanner according to claim 2, wherein the foreign object is determined to be in motion in the scan volume when a difference between the compared pixel values is above a second predefined threshold value.

13. The 3D scanner according to claim 1, wherein analyzing the monitoring data acquired for the object comprises deriving information relating to the position and/or orientation of the object in the scan volume from the acquired monitoring data and comparing the information derived for the monitoring data acquired at different points in time.

14. The 3D scanner according to claim 5, wherein the comparing comprises analyzing the monitoring 2D images using an image analysis algorithm.

15. The 3D scanner according to claim 14, wherein the image analysis algorithm comprises at least one of a feature recognition algorithm or an edge detection algorithm.

16. The 3D scanner according to claim 1, wherein the image acquisition unit is configured to record both the monitoring 2D images for the object and/or foreign object and scanning 2D images that are recorded during the 3D scanning for generation of the digital 3D representation of the object.

17. The 3D scanner according to claim 1, wherein analyzing the monitoring data comprises generating monitoring digital 3D representations of the object and/or foreign object from the monitoring data acquired for different points in time; and comparing relative arrangement of the generated monitoring digital 3D representations.

18. A computer program product embodied in a non-transitory computer readable medium, the computer program product comprising computer readable program code being executable by a hardware data processor for a 3D scanner having a light source arranged to project a beam of probe light into a scan volume and to cause the 3D scanner to perform a method comprising: detecting movement of an object placed in the scan volume of the 3D scanner by analyzing monitoring data acquired for the object at different points in time; determining whether the object is at rest in the scan volume; initiating 3D scanning when the object is determined to be at rest in the scan volume; performing the 3D scanning by using at least two 2D cameras arranged to record 2D images of light received from the object placed in the scan volume, the scan volume being defined by at least a part of an overlapping section of cones representing fields of view of the individual 2D cameras; generating and recording a digital 3D representation of the object from geometry data recorded during the 3D scanning for later use; and visualizing the generated digital 3D representation.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) The embodiments of the disclosure, together with their advantages, may be best understood from the following illustrative and non-limiting detailed description taken in conjunction with the accompanying figures in which:

(2) FIG. 1 illustrates a method for generating a 3D representation of the object according to an embodiment;

(3) FIG. 2 illustrates a method for generating a 3D representation of the object according to an embodiment;

(4) FIG. 3 illustrates a method for generating a 3D representation of the object according to an embodiment;

(5) FIG. 4 illustrates a 3D scanner according to an embodiment; and

(6) FIG. 5 illustrates a scan volume according to an embodiment.

DETAILED DESCRIPTION

(7) In the following description, reference is made to the accompanying figures, which show by way of illustration how the invention may be practiced.

(8) FIG. 1 illustrates a method (workflow) for generating a 3D representation of an object, such as a teeth model, according to an embodiment. The 3D scanner is configured to monitor the scan volume (FIG. 5, 508). The workflow 100 includes a monitoring part 102 and a 3D scanning part 110. The monitoring part 102 covers the workflow relating to initiation of the 3D scanning process, and the 3D scanning part 110 covers the workflow relating to generation of a 3D representation of the object.

(9) At 104, monitoring data comprising one or more monitoring 2D images of the object present in the scan volume is acquired at different points in time. At 106, the monitoring 2D images are analyzed for detection of movement of the object in the scan volume. At 108, a determination is made whether the object in the scan volume is at rest or in motion. The analysis and determination may be handled by a control unit of the 3D scanner which executes instructions for the analysis and comparison. If the object is determined to be in motion, further monitoring 2D images of the object are recorded iteratively until the object is determined to be at rest at 108. If the object is determined to be at rest, the workflow proceeds to the scanning part 110, i.e. 3D scanning of the object is initiated. At 112, a plurality of scanning 2D images of the object placed in the scan volume is recorded, and a digital 3D representation of the object is generated based on the plurality of scanning 2D images.
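The monitoring loop of steps 104-108 could be sketched as follows. This is a minimal illustration, not the disclosed implementation: the function names, the use of a mean absolute pixel difference, and the threshold value are all assumptions made for the example.

```python
import numpy as np

MOTION_THRESHOLD = 2.0  # assumed calibration value: mean gray-level difference

def is_at_rest(frame_a, frame_b, threshold=MOTION_THRESHOLD):
    # Mean absolute pixel difference between two monitoring 2D images;
    # below the threshold the object is treated as being at rest.
    diff = np.abs(frame_a.astype(np.float64) - frame_b.astype(np.float64))
    return bool(diff.mean() < threshold)

def monitor_then_scan(grab_frame, start_scan):
    # Iterate the monitoring stage (104-108): keep acquiring monitoring
    # frames until the object is at rest, then initiate scanning (110/112).
    prev = grab_frame()
    while True:
        cur = grab_frame()
        if is_at_rest(prev, cur):
            return start_scan()
        prev = cur
```

In a real scanner, `grab_frame` would trigger the image acquisition unit and `start_scan` would hand control to the 3D scanning part of the workflow.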

(10) The same image acquisition unit of the 3D scanner may be configured to record both the monitoring 2D images during the monitoring stage and the scanning 2D images during the scanning stage.

(11) FIG. 2 illustrates a method for generating a 3D representation of the object according to an embodiment. Typically, a foreign object such as an operator's hand is in the scan volume while the object is being positioned there. It is useful to avoid initiating 3D scanning while the foreign object is still in the scan volume; therefore, a workflow 202 may be used. At 204, monitoring data comprising one or more monitoring 2D images of at least the foreign object present in the scan volume is acquired at different points in time. At 206, the monitoring 2D images are analyzed to detect motion of the foreign object in the scan volume. At 208, a determination is made whether the foreign object is in motion. If so, then at 210, initiation of 3D scanning of the object is prevented, and further monitoring 2D images are recorded at 204 in an iterative manner until it is determined that the foreign object is no longer in motion. The analysis and determination may be handled by a control unit of the 3D scanner which executes instructions for the analysis and comparison.

(12) According to an embodiment, the method disclosed in the preceding paragraph may further include an additional workflow in which the negative determination is followed by determining, at 102, whether the object is at rest and, at 110, generating a 3D representation of the object. Workflow elements 102 and 110 include steps explained earlier in relation to FIG. 1. Combining these additional workflow elements allows preventing initiation of the 3D scanning of the object when the object is determined to be at rest in the scan volume but the foreign object is determined to be in motion in the scan volume.

(13) FIG. 3 illustrates a method for generating a 3D representation of the object according to an embodiment. The work flow 300 includes a monitoring stage 302 and a 3D scanning stage 312.

(14) When the object is in the scan volume of a 3D scanner, such as the 3D scanner illustrated in FIG. 4, the image acquisition unit of the scanner is capable of recording 2D images of the object. During the monitoring, one or more monitoring 2D images are recorded for different times, which may be defined by a sequence of time points ( . . . t.sub.i−1, t.sub.i, t.sub.i+1 . . . ). In the scanner of FIG. 4, the same image acquisition unit records both the monitoring 2D images and the scanning 2D images.

(15) The scan volume is monitored while the operator moves the object into place in the scan volume. At 304, two or more monitoring 2D images are recorded from the scan volume at different time points t.sub.i and t.sub.i+1. These monitoring 2D images are then analyzed at 306 to determine the position and orientation of the object. For example, the perimeter of the object may be detected in each of the monitoring 2D images.

(16) It may then be determined whether or not the object is at rest by comparing its position and/or orientation at 308. If there is no change, or only a change within a predefined threshold, it may be concluded that the object is at rest. The analysis and comparison are handled by a control unit of the 3D scanner which executes instructions for the analysis and comparison.
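The position/orientation comparison of paragraphs (15) and (16) could be sketched as follows, assuming a binary silhouette mask of the object has already been extracted from each monitoring 2D image (the mask extraction itself, e.g. by the perimeter detection mentioned above, is not shown). The function names and tolerance values are illustrative, not taken from the disclosure.

```python
import numpy as np

def pose_from_mask(mask):
    # Coarse position (centroid) and orientation (principal-axis angle)
    # of the object, derived from a binary silhouette mask.
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    # Principal axis from the covariance of the silhouette pixels.
    cov = np.cov(np.vstack([xs - cx, ys - cy]))
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]
    angle = np.arctan2(major[1], major[0])
    return (cx, cy), angle

def at_rest(mask_a, mask_b, pos_tol=1.0, ang_tol=0.05):
    # The object counts as at rest when position and orientation change
    # by less than the (assumed) tolerances between the two time points.
    (ax, ay), aa = pose_from_mask(mask_a)
    (bx, by), ba = pose_from_mask(mask_b)
    return abs(ax - bx) < pos_tol and abs(ay - by) < pos_tol and abs(aa - ba) < ang_tol
```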

(17) One way to compare two 2D images is to form their difference, either for a single color channel as in grayscale images, or for several as in color images, the latter case resulting in a set of difference images, one for each color channel.
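The per-channel differencing described above could be sketched as follows; the function name is illustrative.

```python
import numpy as np

def difference_images(img_a, img_b):
    # Form the difference of two 2D images. A grayscale pair (H, W) yields
    # a single difference image; a color pair (H, W, C) yields a set of
    # difference images, one per color channel.
    diff = np.abs(img_a.astype(np.float64) - img_b.astype(np.float64))
    if diff.ndim == 2:  # single channel, i.e. grayscale
        return [diff]
    return [diff[..., c] for c in range(diff.shape[-1])]
```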

(18) One way to assess sets of difference images for various color channels is to transform colors into another color space, for example hue-saturation-lightness, and then subtract only one or two of the color components in that color space. If only one such component is used, the procedure is equivalent to that for grayscale images.
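As one concrete instance of the color-space approach above, the lightness component of a hue-saturation-lightness decomposition can be computed per pixel as (max(R,G,B) + min(R,G,B)) / 2, and then only that component is subtracted, reducing the comparison to the grayscale case. The function names are illustrative.

```python
import numpy as np

def lightness(img_rgb):
    # Lightness component of a hue-saturation-lightness decomposition:
    # L = (max(R,G,B) + min(R,G,B)) / 2, computed per pixel.
    f = img_rgb.astype(np.float64)
    return (f.max(axis=-1) + f.min(axis=-1)) / 2.0

def lightness_difference(img_a, img_b):
    # Subtracting only one color component reduces the comparison of two
    # color images to the grayscale procedure.
    return np.abs(lightness(img_a) - lightness(img_b))
```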

(19) One way to compute a difference indicator from 2D images is to subtract intensity values of corresponding pixels and form a mean value for all pixels.

(20) One way to detect a change such as an object entering the scanner is to compare the difference indicator against a threshold value. The threshold value can be found from calibration and could be set by the user.

(21) One way to increase the robustness of change detection in 2D images is to consider m images, where m&gt;2, evaluate the m−1 difference images of consecutive images, form the difference indicator for each, and apply a smoothing filter such as a running-average filter. In this way, a single spurious image has a smaller likelihood of causing a false change detection.
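The smoothing of difference indicators described above could be sketched as follows, using the mean absolute difference as the indicator and a running-average filter; the window size and function name are illustrative assumptions.

```python
import numpy as np

def smoothed_indicators(frames, window=3):
    # Given m monitoring images, form the m-1 difference indicators of
    # consecutive pairs (mean absolute pixel difference), then smooth them
    # with a running-average filter so that a single spurious frame is
    # less likely to trigger a false change detection.
    frames = [f.astype(np.float64) for f in frames]
    indicators = [float(np.abs(b - a).mean()) for a, b in zip(frames, frames[1:])]
    kernel = np.ones(window) / window
    return np.convolve(indicators, kernel, mode="valid")
```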

(22) One way to increase detectability is to process several regions of the 2D images separately. For example, when a small object is placed inside the scanner, especially in a small area appearing near the boundary of the 2D images, the overall mean of all pixels of a pair of difference images may be rather small and remain under the threshold value. For the same example, when processing several regions in the same manner as the full images, but individually, at least one region will likely show a difference indicator larger than the threshold, and this can be taken as an indication of change.
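The region-wise processing described above could be sketched as follows; the grid size and function name are illustrative assumptions.

```python
import numpy as np

def regional_change(img_a, img_b, threshold, grid=(2, 2)):
    # Split the image pair into grid regions, evaluate the difference
    # indicator per region, and report change if any single region exceeds
    # the threshold, even when the whole-image mean stays below it.
    diff = np.abs(img_a.astype(np.float64) - img_b.astype(np.float64))
    h, w = diff.shape[:2]
    rows, cols = grid
    for r in range(rows):
        for c in range(cols):
            region = diff[r * h // rows:(r + 1) * h // rows,
                          c * w // cols:(c + 1) * w // cols]
            if region.mean() > threshold:
                return True
    return False
```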

(23) If it is found that the object has moved or rotated between t.sub.i and t.sub.i+1, the monitoring steps 304, 306 and 308 are repeated. This continues until it is concluded at 310 that the object is at rest, whereupon the 3D scanning part 312 of the workflow may be initiated.

(24) In the 3D scanning 312, a number of scanning 2D images are recorded at 314. When the scanner operates, e.g., by triangulation, a number of scanning 2D images are recorded in which the position of the probe light beam on the object surface varies from one image to another.

(25) In step 316, a digital 3D representation of the object is generated from the recorded series of scanning 2D images. This can be done using computer-implemented algorithms, e.g. for creating partial digital 3D representations of the object from the scanning 2D images and stitching together the partial digital 3D representations of the surface obtained from different views. The stitching can be performed using an Iterative Closest Point (ICP) algorithm employed to minimize the difference between two partial digital 3D representations.
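A simplified point-to-point ICP, of the kind mentioned above, could be sketched as follows. This is a textbook-style illustration, not the disclosed implementation: it uses brute-force nearest-neighbor matching and the SVD-based (Kabsch) rigid alignment, and it runs a fixed number of iterations where a real implementation would check convergence.

```python
import numpy as np

def icp_step(src, dst):
    # One ICP iteration: match each source point to its nearest destination
    # point, then solve the best-fit rigid transform via SVD (Kabsch).
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    mu_s, mu_m = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_s
    return src @ R.T + t

def icp(src, dst, iters=20):
    # Iterate until the partial representations line up (simplified:
    # fixed iteration count instead of a convergence test).
    for _ in range(iters):
        src = icp_step(src, dst)
    return src
```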

(26) FIG. 4 illustrates a 3D scanner system 400 according to an embodiment.

(27) The 3D scanner contains a 3D scanning unit 426 having an illumination unit 402 configured to provide a beam of probe light 406 which is projected onto the scanned object 404 arranged in the scan volume of the 3D scanner. The illumination unit has a light source, such as an LED or an array of LEDs, arranged to provide the probe light. The probe light may be spatially structured, such as having a checkerboard pattern or line pattern, and may be monochromatic or colored. In this example, the scanned object is a partial teeth model 404. The image acquisition unit, by way of a non-limiting example, includes two 2D cameras 410 arranged to receive light 408 reflected from the teeth model 404 such that 2D images of the reflected light are recorded.

(28) The 3D scanner may have an optical system configured for guiding the probe light from the illumination unit towards the teeth model arranged in the scan volume and for receiving light reflected from the scanned teeth model and guiding it towards the image acquisition unit.

(29) The control unit 422 includes a data processing unit 414 and a non-transitory computer readable medium 416 encoded with a computer program product with instructions for analyzing monitoring 2D images to determine when the teeth model is at rest in the scan volume and generating a digital 3D representation from the scanning 2D images recorded during the 3D scanning. The non-transitory computer readable medium 416 may also be encoded with a computer program product with instructions for analyzing monitoring 2D images to determine when the foreign object in the scan volume is in motion.

(30) During the monitoring of the scan volume, the control unit 422 is configured to instruct the 3D scanning unit 426 to record one or more monitoring 2D images of the object and/or foreign object using the cameras 410 of the image acquisition unit. The monitoring 2D images may be recorded using ambient light such that the illumination unit 402 is inactive while the monitoring 2D images are recorded. The housing of the illustrated scanning unit is open such that ambient light may illuminate the teeth model 404 when the monitoring 2D images are recorded. The recorded monitoring 2D images of the object are transferred to the control unit 422, where the data processing unit 414, e.g. a microprocessor, is configured to execute instructions for analyzing the monitoring 2D images to determine whether the teeth model is at rest, for example whether the teeth model is arranged at the same position and orientation for at least two points in time. When it is determined that the teeth model is at rest, the 3D scanning is initiated. Additionally, the recorded monitoring 2D images of the foreign object are transferred to the control unit 422, where the data processing unit 414 is configured to execute instructions for analyzing the monitoring 2D images to determine whether the foreign object is in motion.

(31) During the 3D scanning, the control unit 422 is configured to instruct the 3D scanning unit 426 to record a series of scanning 2D images of the teeth model. The recorded scanning 2D images are transferred to the control unit 422, where the digital 3D representation of the teeth model is generated.

(32) When a structured probe light beam is used, the light pattern is detected in the acquired scanning 2D images and well-established projection geometry such as triangulation or stereo is used to derive the 3D coordinates for the teeth model surface illuminated by the bright parts of the pattern. This is done for a sequence of different relative positions of the teeth model 404 and the 3D scanning unit 426.
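For the stereo case mentioned above, the depth of a surface point in a rectified two-camera setup follows the standard relation Z = f·B/d, where f is the focal length in pixels, B the camera baseline, and d the disparity between the two image coordinates of the same pattern feature. The following is a minimal sketch under those textbook assumptions; the function names and parameters are illustrative, not taken from the disclosure.

```python
def stereo_depth(focal_px, baseline, x_left, x_right):
    # Depth from a rectified stereo pair: Z = f * B / d, with disparity
    # d = x_left - x_right in pixels, focal length f in pixels, baseline B.
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    return focal_px * baseline / disparity

def backproject(focal_px, cx, cy, x, y, depth):
    # 3D camera-space coordinates of a pixel (x, y) at the given depth,
    # with (cx, cy) the principal point of the camera.
    return ((x - cx) * depth / focal_px,
            (y - cy) * depth / focal_px,
            depth)
```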

(33) The control unit may include any device or combination of devices that allows the data processing to be performed. The control unit may be a general purpose computer capable of running a wide variety of different software applications or a specialized device limited to particular functions. The control unit may include any type, number, form, or configuration of processors, system memory, computer-readable mediums, peripheral devices, and operating systems. In one embodiment, the computer includes a personal computer (PC), which may be in the form of a desktop, laptop, pocket PC, personal digital assistant (PDA), tablet PC, or other known forms of personal computers. At least one access device and/or interface allows the operator to utilize the functionality of the control unit. The access device and/or interface can include, but is not limited to, a keyboard 229, a mouse 230, a graphical user interface (GUI) displayed in a display screen 231, and other known input or output devices and interfaces.

(34) FIG. 5 illustrates a scan volume according to an embodiment. The 3D scanning unit 426 includes an image acquisition unit, which, by way of a non-limiting example, includes two 2D cameras 410, and an illumination unit 402. The scan volume may be defined by an overlapping section 508 of cones (502, 504) representing the fields of view of the individual 2D cameras 410. In some embodiments, the scan volume is only a part of the overlapping section 508 and not the entire overlapping section.

(35) Although some embodiments have been described and shown in detail, the disclosure is not restricted to them, but may also be embodied in other ways within the scope of the subject matter defined in the following claims. In particular, it is to be understood that other embodiments may be utilized and structural and functional modifications may be made without departing from the scope of the disclosure.

(36) A claim may refer to any of the preceding claims, and “any” is understood to mean “any one or more” of the preceding claims.

(37) It should be emphasized that the term “comprises/comprising” when used in this specification is taken to specify the presence of stated features, integers, steps or components but does not preclude the presence or addition of one or more other features, integers, steps, components or groups thereof.

(38) In 3D scanner claims enumerating several means, several of these means can be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims or described in different embodiments does not indicate that a combination of these measures cannot be used to advantage.

(39) It should be appreciated that reference throughout this specification to “one embodiment” or “an embodiment” or features included as “may” means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. Therefore, it is emphasized and should be appreciated that two or more references to “an embodiment” or “one embodiment” or “an alternative embodiment” or features included as “may” in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the disclosure.

(40) The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects.