Detection of a movable object when 3D scanning a rigid object

10064553 · 2018-09-04

Abstract

Detecting a movable object in a location includes providing a first 3D representation of at least part of a surface; providing a second 3D representation of at least part of the surface; determining for the first 3D representation a first excluded volume in space where no surface can be present; determining for the second 3D representation a second excluded volume in space where no surface can be present; if a portion of the surface in the first 3D representation is located in space in the second excluded volume, the portion of the surface in the first 3D representation is disregarded in the generation of the virtual 3D model, and/or if a portion of the surface in the second 3D representation is located in space in the first excluded volume, the portion of the surface in the second 3D representation is disregarded in the generation of the virtual 3D model.

Claims

1. A method for detecting a movable object in a location, when scanning a rigid object in the location by means of a 3D scanner for generating a virtual 3D model of the rigid object, wherein the method comprises: providing a first 3D representation of at least part of a surface by scanning at least part of the location; providing a second 3D representation of at least part of the surface by scanning at least part of the location; determining for the first 3D representation a first excluded volume in space where no surface can be present in both the first 3D representation and the second 3D representation; and if a portion of the surface in the second 3D representation is located in space in the first excluded volume, the portion of the surface in the second 3D representation is disregarded in the generation of the virtual 3D model.

2. The method according to claim 1, wherein the rigid object is a patient's set of teeth, and the location is the mouth of the patient.

3. The method according to claim 1, wherein the movable object is a soft tissue part of a patient's mouth.

4. The method according to claim 1, wherein the movable object is a dentist's instrument or remedy which is temporarily present in a patient's mouth.

5. The method according to claim 1, wherein the movable object is a finger.

6. The method according to claim 1, wherein at least part of the surface captured in the first representation and at least part of the surface captured in the second representation overlap the same surface part on the rigid object.

7. The method according to claim 1, wherein the first representation of at least part of the surface is defined as the first representation of at least a first part of the surface, and the second representation of at least part of the surface is defined as the second representation of at least a second part of the surface.

8. The method according to claim 1, wherein the method comprises determining a first scan volume in space related to the first representation of at least part of the surface, and determining a second scan volume in space related to the second representation of at least part of the surface.

9. The method according to claim 8, wherein the first scan volume and the second scan volume are defined by focusing optics in the 3D scanner and a distance to the surface which is captured.

10. The method according to claim 8, wherein the first scan volume related to the first representation of at least part of the surface is the volume in space between focusing optics of the 3D scanner and the surface captured in the first representation; and the second scan volume related to the second representation of at least part of the surface is the volume in space between focusing optics of the 3D scanner and the surface captured in the second representation.

11. The method according to claim 8, wherein the first excluded volume and second excluded volume in space where no surface can be present corresponds to the first scan volume and the second scan volume, respectively.

12. The method according to claim 1, wherein the volume of the 3D scanner itself is defined as an excluded volume.

13. The method according to claim 1, wherein a near threshold distance is defined, which determines a distance from a captured surface in the first representation and the second representation, where a surface portion in the second representation or the first representation, respectively, which is located within the near threshold distance from the captured surface and which is located in space in the first excluded volume or in the second excluded volume, respectively, is not disregarded in the generation of the virtual 3D model.

14. The method according to claim 1, wherein a far threshold distance is defined, which determines a distance from a captured surface, where the volume outside the far threshold distance is not included in the excluded volume of a representation.

15. The method according to claim 1, wherein the first representation of at least part of a surface is a first subscan of at least part of the location, and the second representation of at least part of the surface is a second subscan of at least part of the location.

16. The method according to claim 1, wherein the first representation of at least part of a surface is a provisional virtual 3D model comprising subscans of the location acquired already, and the second representation of at least part of the surface is a second subscan of at least part of the location.

17. The method according to claim 16, wherein acquired subscans of the location are adapted to be added to the provisional virtual 3D model concurrently with the acquisition of the subscans.

18. The method according to claim 1, wherein the movable object is an inside of a cheek, a tongue, lips, gums and/or loose gingiva.

19. The method according to claim 1, wherein the movable object is a dental suction device, cotton rolls, and/or cotton pads.

20. The method according to claim 1, wherein: the first excluded volume is determined to be a space where no surface of the rigid object is present.

21. A nontransitory computer readable medium encoded with a computer program product comprising program code for causing a data processing system to perform the method of claim 1, when said program code is executed on the data processing system.

22. A system for detecting a movable object in a location, when scanning a rigid object in the location by means of a 3D scanner for generating a virtual 3D model of the rigid object, wherein the system comprises a hardware processor configured to: provide a first 3D representation of at least part of a surface by scanning at least part of the location; provide a second 3D representation of at least part of the surface by scanning at least part of the location; determine for the first 3D representation a first excluded volume in space where no surface can be present in both the first 3D representation and the second 3D representation; and disregard the portion of the surface in the second 3D representation in the generation of the virtual 3D model, if a portion of the surface in the second 3D representation is located in space in the first excluded volume.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) The above and/or additional objects, features and advantages of the present invention will be further elucidated by the following illustrative and non-limiting detailed description of embodiments of the present invention, with reference to the appended drawings, wherein:

(2) FIG. 1 shows an example of a flowchart of the method for detecting a movable object in a location, when scanning a rigid object in the location by means of a 3D scanner for generating a virtual 3D model of the rigid object.

(3) FIG. 2 shows an example of a scan head of an intraoral 3D scanner scanning a set of teeth.

(4) FIG. 3 shows an example of a handheld 3D scanner.

(5) FIG. 4 shows an example of a section of teeth in the mouth which can be covered in a sub-scan.

(6) FIG. 5 shows an example of how the different sub-scans generating 3D surfaces are distributed across a set of teeth.

(7) FIGS. 6A-E show an example of registering/aligning representations of 3D surfaces and compensating for motion in a 3D surface.

(8) FIG. 7 shows an example of a 3D surface where overlapping sub-scans are indicated.

(9) FIG. 8 shows an example of excluded volume.

(10) FIGS. 9A-9B show an example of scanning a tooth and acquiring a first and a second representation of the surface of the tooth, where no movable object is present.

(11) FIGS. 10A-10B show an example of scanning a tooth and acquiring a first and a second representation of the surface of the tooth, where a movable object is captured in part of the first representation.

(12) FIGS. 11A-11B show an example of scanning a tooth and acquiring a first and a second representation of the surface of the tooth, where a movable object is captured in the second representation.

(13) FIGS. 12A-12C show an example of acquiring a first and a second representation of the surface of an object, e.g. a tooth, where a movable object is captured in the first representation.

(14) FIGS. 13A-13E show an example of acquiring a first and a second representation of a surface of an object, where no movable object is present.

(15) FIGS. 14A-14E show an example of acquiring a first and a second representation of a surface of an object, where a movable object of the second representation is present in the excluded volume of the first representation.

(16) FIGS. 15A-15E show an example of acquiring a first and a second representation of a surface of an object, where a possible movable object is present in the second representation, but not in the excluded volume of the first representation.

(17) FIG. 16 shows an example of a near threshold distance defining how far from the representation possible movable objects are disregarded in the generation of the virtual 3D model.

(18) FIG. 17 shows an example of how the excluded volume is determined.

(19) FIGS. 18A-18B show examples of how movable objects can look in subscans.

(20) FIG. 19 shows an example of a pinhole scanner.

(21) FIGS. 20A-20D show examples of the principle of a far threshold distance from the captured surface defining a volume which is not included in the excluded volume of a representation.

DETAILED DESCRIPTION

(22) In the following description, reference is made to the accompanying figures, which show by way of illustration how the invention may be practiced.

(23) FIG. 1 shows an example of a flowchart of the method for detecting a movable object in a location, when scanning a rigid object in the location by means of a 3D scanner for generating a virtual 3D model of the rigid object. In step 101 a first 3D representation of at least part of a surface is provided by scanning at least part of the location.

(24) In step 102 a second 3D representation of at least part of the surface is provided by scanning at least part of the location.

(25) In step 103 a first excluded volume in space where no surface can be present is determined for the first 3D representation.

(26) In step 104 a second excluded volume in space where no surface can be present is determined for the second 3D representation.

(27) In step 105 a portion of the surface in the first 3D representation is disregarded in the generation of the virtual 3D model, if the portion of the surface in the first 3D representation is located in space in the second excluded volume, and/or a portion of the surface in the second 3D representation is disregarded in the generation of the virtual 3D model, if the portion of the surface in the second 3D representation is located in space in the first excluded volume.
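(28) The disregarding rule of steps 101-105 can be sketched in a few lines of code. The sketch below is illustrative only and not part of the patent disclosure: it assumes each 3D representation has been reduced to a list of surface points, and that the excluded volume of the other representation is available as a point-in-volume predicate.

```python
def split_by_excluded_volume(points, in_excluded_volume):
    """Partition surface points into kept and disregarded lists.

    `in_excluded_volume` is a predicate that returns True when a point
    lies in the excluded volume of the *other* representation, i.e. in
    space the scanner has verified to be empty.
    """
    kept, disregarded = [], []
    for p in points:
        (disregarded if in_excluded_volume(p) else kept).append(p)
    return kept, disregarded


# Toy example: the first scan looked through empty space wherever z > 5,
# so any second-scan point with z > 5 lies in the first excluded volume
# and must belong to a movable object.
first_excluded = lambda p: p[2] > 5.0

second_scan = [(0.0, 0.0, 4.0),   # on the rigid surface -> kept
               (1.0, 0.0, 7.5)]   # in verified-empty space -> disregarded

kept, dropped = split_by_excluded_volume(second_scan, first_excluded)
```

In practice the same test is run symmetrically, so that surface portions of the first representation falling in the second excluded volume are disregarded as well.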

(28) FIG. 2 shows an example of a scan head of an intraoral 3D scanner scanning a set of teeth. An intraoral handheld 3D scanner (not shown) comprising a scan head 207 is scanning a tooth 208. The scanning is performed by transmitting light rays onto the tooth 208. The light rays form a scan volume 211, which is cone shaped in this example.

(29) The length 203 of the scan volume 211, i.e. the distance from the opening 202 of the scan head to the end of the scan volume may be for example about 5 mm, 10 mm, 15 mm, 16 mm, 17 mm, 18 mm, 19 mm, 20 mm, 25 mm, 30 mm.

(30) The scan volume may be about 20 mm × 20 mm.

(31) FIG. 3 shows an example of a handheld 3D scanner.

(32) The handheld scanner 301 comprises a light source 302 for emitting light, a beam splitter 304, movable focus optics 305, such as lenses, an image sensor 306, and a tip or probe 307 for scanning an object 308. In this example the object 308 is teeth in an intraoral cavity.

(33) The scanner comprises a scan head or tip or probe 307 which can be entered into a cavity for scanning an object 308. The light from the light source 302 travels back and forth through the optical system. During this passage the optical system images the object 308 being scanned onto the image sensor 306. The movable focus optics comprise a focusing element which can be adjusted to shift the focal imaging plane on the probed object 308. One way to embody the focusing element is to physically move a single lens element back and forth along the optical axis. The device may include polarization optics and/or folding optics which direct the light out of the device in a direction different from the optical axis of the lens system, e.g. in a direction perpendicular to the optical axis of the lens system. As a whole, the optical system provides imaging onto the object being probed and from the object being probed to the image sensor, e.g. a camera. One application of the device could be determining the 3D structure of teeth in the oral cavity. Another application could be determining the 3D shape of the ear canal and the external part of the ear.

(34) The optical axis in FIG. 3 is the axis defined by a straight line through the light source, optics and the lenses in the optical system. This also corresponds to the longitudinal axis of the scanner illustrated in FIG. 3. The optical path is the path of the light from the light source to the object and back to the camera. The optical path may change direction, e.g. by means of beam splitter and folding optic.

(35) The focus element is adjusted in such a way that the image on the scanned object is shifted along the optical axis, for example in equal steps from one end of the scanning region to the other. A pattern may be imaged on the object, and when the pattern is varied in time in a periodic fashion for a fixed focus position, the in-focus regions on the object will display a spatially varying pattern, while the out-of-focus regions will display smaller or no contrast in the light variation. The 3D surface structure of the probed object may be determined by finding the plane corresponding to an extremum in the correlation measure for each sensor, or each group of sensors, in the image sensor array when recording the correlation measure for a range of different focus positions. Preferably the focus position is moved in equal steps from one end of the scanning region to the other. The distance from one end of the scanning region to the other may be such as 5 mm, 10 mm, 15 mm, 16 mm, 20 mm, 25 mm, 30 mm etc.
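The depth-from-focus principle described in paragraph (35) can be sketched as follows. This is an illustrative sketch, not the patented implementation; the data layout (one list of per-pixel correlation measures per focus step) is an assumption made for the example.

```python
def depth_from_focus(stack, focus_positions):
    """Per-pixel depth estimate from a focus stack.

    `stack[i][px]` holds the correlation/contrast measure of pixel `px`
    when the focus element was at `focus_positions[i]`.  The depth of
    the surface at a pixel is taken as the focus position where that
    pixel's measure is at its extremum (here: maximum).
    """
    n_pixels = len(stack[0])
    depth = []
    for px in range(n_pixels):
        measures = [frame[px] for frame in stack]
        best = max(range(len(measures)), key=measures.__getitem__)
        depth.append(focus_positions[best])
    return depth


# Three focus steps, two pixels: pixel 0 is sharpest with the focus at
# 10 mm, pixel 1 with the focus at 15 mm.
stack = [[0.2, 0.1],   # focus at  5 mm
         [0.9, 0.3],   # focus at 10 mm
         [0.4, 0.8]]   # focus at 15 mm
depths = depth_from_focus(stack, [5.0, 10.0, 15.0])
```

A real scanner would interpolate between focus steps for sub-step accuracy; the argmax above is the coarsest version of the idea.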

(36) FIG. 4 shows an example of a section of teeth in the mouth which can be covered in a sub-scan. In FIG. 4a) the teeth 408 are seen in a top view, and in FIG. 4b) the teeth 408 are seen in a perspective view.

(37) An example of the scan volume 411 for one sequence of focus plane images is indicated by the transparent box. The scan volume may be such as 17×15×20 mm, where the 15 mm may be the height of the scan volume corresponding to the distance the focus optics can move.

(38) FIG. 5 shows an example of how the different sub-scans generating 3D surfaces are distributed across a set of teeth.

(39) Four sub-scans 512 are indicated on the figure. Each sub-scan provides a 3D surface of the scanned teeth. The 3D surfaces may be partly overlapping, whereby a motion of the scanner performed during the acquisition of the sub-scans can be determined by comparing the overlapping parts of two or more 3D surfaces.

(40) FIG. 6 shows an example of registering/aligning representations of 3D surfaces and compensating for motion in a 3D surface.

(41) FIG. 6a) shows a 3D surface 616, which for example may be generated from a number of focus plane images.

(42) FIG. 6b) shows another 3D surface 617, which may have been generated in a subsequent sequence of focus plane images.

(43) FIG. 6c) shows how the two 3D surfaces 616, 617 are aligned/registered. Since the two 3D surfaces 616, 617 have 3D points which correspond to the same area of a tooth, it is possible to perform the registration/alignment, e.g. by iterative closest point (ICP), by comparing the corresponding points in the two 3D surfaces.
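Once point correspondences between the two overlapping 3D surfaces are known, the alignment step reduces to finding the least-squares rigid transform between the two point sets. A minimal sketch of that core step (the Kabsch algorithm, here via NumPy's SVD) follows; it is an illustration of the standard technique, not code from the patent.

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rotation R and translation t with Q ~ R @ P + t.

    P and Q are corresponding points (N x 3) from two overlapping 3D
    surfaces; this is the inner step repeated by ICP after each round
    of correspondence search.
    """
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t


# Second surface is the first one shifted by (1, 0, 0): the recovered
# transform should be the identity rotation plus that translation.
P = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
Q = [(1, 0, 0), (2, 0, 0), (1, 1, 0), (1, 0, 1)]
R, t = rigid_align(P, Q)
```

Full ICP alternates this solve with a nearest-neighbour search for correspondences until the alignment converges.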

(44) FIG. 6d) shows the resulting 3D surface 618 when the two 3D surfaces 616, 617 have been merged together.

(45) FIG. 6e) shows that based on the resulting 3D surface 618 the relative motion performed by the scanner during the acquisition of the sub-scans or focus plane images generating 3D surface 616 and 617 can be determined, and based on this determined motion the resulting 3D surface 618 can be corrected to a final correct 3D surface 619.

(46) FIG. 7 shows an example of a 3D surface where overlapping sub-scans are indicated.

(47) A number of 3D representations or sub-scans are indicated by the numbers 1-11 and the subdivision markers 712 on a 3D surface 713. The subdivision markers 712 for sub-scans 1, 3, 5, 7, 9, and 11 are drawn with dotted lines, and the markers for sub-scans 2, 4, 6, 8, and 10 with full lines. In the figure all sub-scans overlap by the same distance, but in practice the overlap may differ for each pair of sub-scans. Typically a dentist holds the scanner and moves it across the teeth of the patient by hand, so the overlap depends on how fast the scanner is moved and on the time between acquisitions: if that time is constant and the scanner is not moved at an exactly constant speed, the overlap distance will not be the same for all sub-scans.

(48) FIG. 8 shows an example of excluded volume.

(49) The excluded volume 821 is the volume in space where no surface can be present. At least a part of the excluded volume 821 may correspond to the scan volume 811 of a 3D representation, since the space between the scan head 807, or the focusing optics of the 3D scanner, and the captured surface 816 must be empty, unless a transparent object, which is not detectable by the 3D scanner, was located in the scan volume. Furthermore, the volume of the scan head 807 and the 3D scanner 801 may be defined as an excluded volume 823, since the scanner and scan head occupy their own volume in space, whereby no surface can be present there. The tooth 808 which is being scanned also occupies a volume in space, but since the surface 816 of the tooth 808 is what the scanner captures, what lies behind the surface 816 is not considered.
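Under the simplifying assumptions that the scan volume is a cone with its apex at the scan head and that the captured surface can locally be approximated by a plane at a fixed depth along the optical axis, a point-in-excluded-volume test could look like the following sketch. All names and the plane approximation are assumptions for illustration; in the patent the excluded volume follows the actual captured surface.

```python
import math

def in_scan_volume(point, apex, axis, half_angle_deg, surface_depth):
    """True when `point` lies inside the cone between the scan head
    (the apex) and the captured surface, i.e. in space verified empty.

    `axis` must be a unit vector along the optical axis; the captured
    surface is approximated as a plane `surface_depth` along that axis.
    """
    v = [p - a for p, a in zip(point, apex)]
    depth = sum(vi * ai for vi, ai in zip(v, axis))  # distance along the axis
    if not (0.0 < depth < surface_depth):
        return False                                  # behind head or beyond surface
    radial = math.sqrt(max(sum(vi * vi for vi in v) - depth * depth, 0.0))
    return radial <= depth * math.tan(math.radians(half_angle_deg))


# Scan head at the origin looking along +z, 20 degree half-angle,
# surface captured 15 mm away.
inside = in_scan_volume((0.0, 1.0, 8.0), (0, 0, 0), (0, 0, 1), 20, 15.0)
beyond = in_scan_volume((0.0, 0.0, 18.0), (0, 0, 0), (0, 0, 1), 20, 15.0)
```

A point flagged by this test in one representation, while belonging to the surface of another, is exactly the situation in which the patent disregards that surface portion.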

(50) FIG. 9 shows an example of scanning a tooth and acquiring a first and a second representation of the surface of the tooth, where no movable object is present.

(51) FIG. 9a) shows an example of scanning the tooth 908 using a 3D scanner 901 for acquiring a first 3D representation 916 of the surface of the tooth 908. A first scan volume 911 in space is related to the first representation, and a first excluded volume 921 corresponds to the first scan volume 911.

(52) FIG. 9b) shows an example of scanning the tooth 908 using a 3D scanner 901 for acquiring a second 3D representation 917 of the surface of the tooth 908. A second scan volume 912 in space is related to the second representation, and a second excluded volume 922 corresponds to the second scan volume 912. The second representation is acquired with a different angle between scanner and tooth than the first representation.

(53) No surface portion of the first representation 916 lies in the second excluded volume 922, and no surface portion of the second representation 917 lies in the first excluded volume 921, so no surface portion(s) are disregarded in the generation of the virtual 3D model in this case.

(54) FIG. 10 shows an example of scanning a tooth and acquiring a first and a second representation of the surface of the tooth, where a movable object is captured in part of the first representation.

(55) FIG. 10a) shows an example of scanning the tooth 1008 using a 3D scanner 1001 for acquiring a first 3D representation 1016 of the surface of the tooth 1008. A movable object 1030 is present, and a part 1016b of the first representation 1016 comprises the surface of the movable object 1030. The part 1016a of the first representation 1016 comprises the surface of the tooth. A first scan volume 1011 in space is related to the first representation, and a first excluded volume 1021 corresponds to the first scan volume 1011.

(56) FIG. 10b) shows an example of scanning the tooth 1008 using a 3D scanner 1001 for acquiring a second 3D representation 1017 of the surface of the tooth 1008. A second scan volume 1012 in space is related to the second representation, and a second excluded volume 1022 corresponds to the second scan volume 1012. The second representation is acquired with a different angle between scanner and tooth than the first representation.

(57) Since the surface portion 1016b of the first representation 1016 lies in the second excluded volume 1022, this surface portion 1016b is disregarded in the generation of the virtual 3D model.

(58) FIG. 11 shows an example of scanning a tooth and acquiring a first and a second representation of the surface of the tooth, where a movable object is captured in the second representation.

(59) FIG. 11a) shows an example of scanning the tooth 1108 using a 3D scanner 1101 for acquiring a first 3D representation 1116 of the surface of the tooth 1108. A first scan volume 1111 in space is related to the first representation, and a first excluded volume 1121 corresponds to the first scan volume 1111.

(60) FIG. 11b) shows an example of scanning the tooth 1108 using a 3D scanner 1101 for acquiring a second 3D representation 1117 of the surface of the tooth 1108. A movable object 1130 is present, and the second representation 1117 comprises the surface of the movable object 1130. A second scan volume 1112 in space is related to the second representation, and a second excluded volume 1122 corresponds to the second scan volume 1112. The second representation is acquired with a different angle between scanner and tooth than the first representation.

(61) Since the surface of the second representation 1117 lies in the first excluded volume 1121, the surface of the second representation 1117 is disregarded in the generation of the virtual 3D model.

(62) The figures in FIG. 11 are shown in 2D, but it is understood that the figures represent 3D figures.

(63) FIG. 12 shows an example of acquiring a first and a second representation of the surface of an object, e.g. a tooth, where a movable object is captured in the first representation.

(64) FIG. 12a) shows a first 3D representation 1216 comprising two parts, part 1216a and part 1216b. The first scan volume 1211 is indicated by the vertical lines. The first excluded volume 1221 corresponds to the first scan volume.

(65) FIG. 12b) shows a second 3D representation 1217. The second scan volume 1212 is indicated by the vertical lines. The second excluded volume 1222 corresponds to the second scan volume. The part 1216a of the first representation 1216 corresponds to a part of the second representation 1217, whereas the part 1216b of the first representation 1216 has no corresponding part in the second representation 1217.

(66) The part 1216b of the first representation 1216 lies in the second excluded volume 1222, and the part 1216b is therefore disregarded in the generation of the virtual 3D model.

(67) FIG. 12c) shows the resulting 3D representation 1219, which corresponds to the second representation.

(68) The figures in FIG. 12 are shown in 2D, but it is understood that the figures represent 3D figures.

(69) FIG. 13 shows an example of acquiring a first and a second representation of a surface of an object, where no movable object is present.

(70) FIG. 13a) shows an example of acquiring a first 3D representation 1316 of a surface of an object (not shown). A first scan volume 1311 in space is related to the first representation. The first scan volume 1311 is indicated by dotted vertical lines. A first excluded volume 1321 corresponds to the first scan volume 1311.

(71) FIG. 13b) shows an example of acquiring a second 3D representation 1317 of a surface of an object (not shown). A second scan volume 1312 in space is related to the second representation. The second scan volume 1312 is indicated by dotted vertical lines. A second excluded volume 1322 corresponds to the second scan volume 1312.

(72) The second representation is acquired with a different angle between scanner and tooth than the first representation. Furthermore, the second representation is displaced in space relative to the first representation, so the first and second representations do not represent exactly the same surface part of the object, but parts of the representations overlap.

(73) FIG. 13c) shows an example where the first representation 1316 and the second representation 1317 are aligned/registered, such that the corresponding parts of the representations are arranged in the same location.

(74) FIG. 13d) shows an example where the overlapping common scan volume 1340 of the first representation 1316 and the second representation 1317 is indicated as a shaded area. If a surface portion of one of the representations is located in the overlapping common scan volume 1340, this means that the surface portion is located in the excluded volume of the other representation. However, in this case no surface portion of the first representation 1316 or of the second representation 1317 lies in the overlapping common scan volume 1340, so no surface portions are disregarded in the generation of the virtual 3D model.

(75) In the figure the surface of the first representation and the surface of the second representation are drawn slightly displaced so that they can be distinguished. In a real case the two surfaces may overlap exactly, so that the surface part from the first representation and the surface part from the second representation cannot be distinguished.

(76) FIG. 13e) shows an example of the resulting virtual 3D surface 1319.

(77) The figures in FIG. 13 are shown in 2D, but it is understood that the figures represent 3D figures.

(78) FIG. 14 shows an example of acquiring a first and a second representation of a surface of an object, where a movable object of the second representation is present in the excluded volume of the first representation.

(79) FIG. 14a) shows an example of acquiring a first 3D representation 1416 of a surface of an object (not shown). A first scan volume 1411 in space is related to the first representation. The first scan volume 1411 is indicated by dotted vertical lines. A first excluded volume 1421 corresponds to the first scan volume 1411.

(80) FIG. 14b) shows an example of acquiring a second 3D representation 1417 of a surface of an object (not shown). A second scan volume 1412 in space is related to the second representation. The second scan volume 1412 is indicated by dotted vertical lines. A second excluded volume 1422 corresponds to the second scan volume 1412. The second 3D representation 1417 comprises two parts 1417a and 1417b. The part 1417b is located between the part 1417a and the scanner (not shown), which is arranged somewhere at the end of the scan volume.

(81) The second representation is acquired with a different angle between scanner and tooth than the first representation. Furthermore, the second representation is displaced in space relative to the first representation, so the first and second representations do not represent exactly the same surface part of the object, but parts of the representations overlap.

(82) FIG. 14c) shows an example where the first representation 1416 and the second representation 1417 are aligned/registered, such that the corresponding parts of the representations are arranged in the same location. Some of the part 1417a of the second representation is aligned/registered with the first representation. The part 1417b cannot be aligned/registered with the first representation 1416, since there are no corresponding surface portions between the surface 1416 and the surface 1417b.

(83) FIG. 14d) shows an example where the overlapping common scan volume 1440 of the first representation 1416 and the second representation 1417 is indicated as a shaded area. The surface portion 1417b of the second representation is located in the overlapping common scan volume 1440, and the surface portion 1417b of the second representation 1417 is therefore located in the excluded volume 1421 of the first representation 1416, and part 1417b must therefore be a movable object, which is only present in the second representation.

(84) In the figure the surface of the first representation and the surface of the second representation are drawn slightly displaced so that they can be distinguished. In a real case the two surfaces may overlap exactly, so that the surface part from the first representation and the surface part from the second representation cannot be distinguished.

(85) FIG. 14e) shows an example of the resulting virtual 3D surface 1419, where the surface portion 1417b is disregarded in the generation of the virtual 3D model, so the virtual 3D model comprises the first representation 1416 and the part 1417a of the second representation 1417.

(86) The figures in FIG. 14 are shown in 2D, but it is understood that the figures represent 3D figures.

(87) FIG. 15 shows an example of acquiring a first and a second representation of a surface of an object, where a possible movable object is present in the second representation, but not in the excluded volume of the first representation.

(88) FIG. 15a) shows an example of acquiring a first 3D representation 1516 of a surface of an object (not shown). A first scan volume 1511 in space is related to the first representation. The first scan volume 1511 is indicated by dotted vertical lines. A first excluded volume 1521 corresponds to the first scan volume 1511.

(89) FIG. 15b) shows an example of acquiring a second 3D representation 1517 of a surface of an object (not shown). A second scan volume 1512 in space is related to the second representation. The second scan volume 1512 is indicated by dotted vertical lines. A second excluded volume 1522 corresponds to the second scan volume 1512. The second 3D representation 1517 comprises two parts 1517a and 1517b. The part 1517b is located between the part 1517a and the scanner (not shown), which is arranged somewhere at the end of the scan volume.

(90) The second representation 1517 is acquired with a different angle between scanner and tooth than the first representation 1516. Furthermore, the second representation is displaced in space relative to the first representation, so the first and second representations do not represent exactly the same surface part of the object, but parts of the representations overlap.

(91) FIG. 15c) shows an example where the first representation 1516 and the second representation 1517 are aligned/registered, such that the corresponding parts of the representations are arranged in the same location. Some of the part 1517a of the second representation is aligned/registered with the first representation 1516. The part 1517b cannot be aligned/registered with the first representation 1516, since there are no corresponding surface portions between the surface 1516 and the surface 1517b.

(92) FIG. 15d) shows an example where the overlapping common scan volume 1540 of the first representation 1516 and the second representation 1517 is indicated as a shaded area. The surface portion 1517b of the second representation is not located in the overlapping common scan volume 1540, and the surface portion 1517b of the second representation 1517 is therefore not located in the excluded volume 1521 of the first representation 1516.

(93) In order to distinguish between the surface of the first representation and the surface of the second, the two surfaces are drawn slightly displaced in the figure. In a real case the surface of the first and the surface of the second representation may overlap exactly, so that the surface part from the first representation and the surface part from the second representation cannot be distinguished.

(94) FIG. 15e) shows an example of the resulting virtual 3D surface 1519, where the surface portion 1517b is not disregarded in the generation of the virtual 3D model, so the virtual 3D model comprises the first representation 1516 and both parts, 1517a and 1517b, of the second representation 1517.

(95) Even though the surface portion 1517b is probably the representation of a movable object (at least this would be assumed if the object in this case is a tooth, since a tooth is unlikely to have a protrusion like the part 1517b), the surface portion 1517b cannot be disregarded yet, because it has not yet been found to be located in the excluded volume of any representation. However, when the scanning of the object's surface continues, a third representation will probably be acquired which has an overlapping common scan volume with the second representation, and if the surface portion 1517b is located in the excluded volume of the third representation, then the surface portion 1517b can be disregarded from the virtual 3D model.
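The merge-and-disregard rule described above can be sketched as follows. This is a minimal illustration, not the disclosed scanner implementation: the `Representation` class, the point-set surface model, and the per-representation `in_excluded_volume` predicate are assumed interfaces introduced here for clarity.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

Point = Tuple[float, float, float]

@dataclass
class Representation:
    # Aligned/registered surface points of one sub-scan.
    points: List[Point]
    # Predicate: True if a point lies in this sub-scan's excluded
    # volume, i.e. space this sub-scan observed to be empty.
    in_excluded_volume: Callable[[Point], bool]

def build_model(representations: List[Representation]) -> List[Point]:
    """Merge aligned representations into one point set, disregarding
    every surface point that falls inside the excluded volume of any
    OTHER representation: if another sub-scan saw empty space there,
    the point cannot belong to the rigid surface and is treated as a
    movable object."""
    model: List[Point] = []
    for rep in representations:
        others = [r for r in representations if r is not rep]
        for p in rep.points:
            if any(o.in_excluded_volume(p) for o in others):
                continue  # seen as empty space elsewhere: disregard
            model.append(p)
    return model
```

In the FIG. 15 example, portion 1517b survives this test as long as no acquired representation's excluded volume covers it; a later third representation covering that region would remove it.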

(96) The figures in FIG. 15 are shown in 2D, but it is understood that the figures represent 3D figures.

(97) FIG. 16 shows an example of a threshold distance defining how far from the representation or captured surface possible movable objects are disregarded in the generation of the virtual 3D model.

(98) A near threshold distance 1650 is defined, which determines a distance from the captured surface 1616 in a first representation. A surface portion in the second representation (not shown) which is located within the near threshold distance 1650 from the captured surface 1616, and which is located in space in the first excluded volume 1611, is not disregarded in the generation of the virtual 3D model.

(99) The near threshold distance is defined to avoid incorrectly disregarding too much of a representation of a surface, since there may be noise in the representation and since the registration/alignment between representations or sub-scans may not be completely accurate. Reference numeral 1607 is the scan head of the scanner 1601, and reference numeral 1608 is the volume of the tooth.
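The near-threshold test can be sketched as follows; the function name, the surface-as-point-cloud distance computation, and the threshold value are illustrative assumptions, not the scanner's actual implementation.

```python
import numpy as np

def filter_movable_points(candidate_points, captured_surface,
                          in_excluded_volume, near_threshold=0.1):
    """Return the candidate points of a second representation that
    survive the movable-object test against a first representation.

    A point is disregarded only if it lies in the first excluded
    volume AND is farther than the near threshold from the first
    captured surface; points within the threshold are kept, to
    tolerate noise and small registration/alignment errors.

    captured_surface -- (N, 3) array of points of the first surface
    in_excluded_volume -- predicate: point -> bool
    """
    kept = []
    for p in candidate_points:
        # Distance from p to the nearest captured surface point.
        dist = np.min(np.linalg.norm(captured_surface - p, axis=1))
        if in_excluded_volume(p) and dist > near_threshold:
            continue  # likely a movable object: disregard
        kept.append(p)
    return np.array(kept)
```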

(100) FIG. 16 is shown in 2D, but it is understood that the figure represents a 3D situation.

(101) FIG. 17 shows an example of how the excluded volume is determined.

(102) The space may be quantized in a 3D volume grid 1760. The distance 1762 between the corners 1761 in the 3D grid 1760 may be equidistant. Each cell 1763 in the grid comprises eight corners 1761, and when each of the eight corners 1761 has been covered by a representation, the cell 1763 is marked as seen. Thus, if all eight corners 1761 of a cell 1763 are in the scan volume of a representation, the cell 1763 may be marked as excluded volume. There may be, for example, tens, hundreds, thousands, or millions of cells in the space of a representation.
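The corner-marking scheme above can be sketched as follows; the grid parameters, the `in_scan_volume` predicate, and the function interface are illustrative assumptions rather than the disclosed implementation.

```python
import numpy as np

def mark_excluded_cells(grid_shape, spacing, origin, in_scan_volume):
    """Quantize space into a 3D grid and mark as excluded volume every
    cell whose eight corners all lie in the scan volume of a
    representation.

    grid_shape     -- number of cells along (x, y, z)
    spacing        -- equidistant spacing between grid corners
    origin         -- world coordinate of corner (0, 0, 0)
    in_scan_volume -- predicate: world point -> bool
    """
    nx, ny, nz = grid_shape
    # Corner coverage: True where the corner is covered by the scan.
    corners = np.zeros((nx + 1, ny + 1, nz + 1), dtype=bool)
    for i in range(nx + 1):
        for j in range(ny + 1):
            for k in range(nz + 1):
                point = origin + spacing * np.array([i, j, k], dtype=float)
                corners[i, j, k] = in_scan_volume(point)
    # A cell is excluded only if all eight of its corners are covered.
    excluded = (corners[:-1, :-1, :-1] & corners[1:, :-1, :-1]
                & corners[:-1, 1:, :-1] & corners[:-1, :-1, 1:]
                & corners[1:, 1:, :-1] & corners[1:, :-1, 1:]
                & corners[:-1, 1:, 1:] & corners[1:, 1:, 1:])
    return excluded
```

Requiring all eight corners keeps the test conservative: a cell only partially inside the scan volume is never marked as excluded.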

(103) FIG. 18 shows examples of how movable objects can look in subscans.

(104) FIG. 18a) shows a subscan where the tip of a finger 1870 has been captured in the subscan.

(105) FIG. 18b) shows an example where a dental instrument 1871 has been captured in the subscan.

(106) FIG. 19 shows an example of a pinhole scanner.

(107) The pinhole scanner 1980 comprises a camera 1982 and a light source 1981, e.g. comprising a pattern (not shown). The light source 1981 transmits light rays 1983 to the surface 1916 from a small aperture, i.e. all the light rays 1983 transmitted to the surface 1916 are transmitted from a single point. Light rays 1984 are reflected back from the surface 1916 and received by the camera 1982 through a small aperture.

(108) Due to the pinhole setup, the point of light transmitted to the surface from the light source is well defined and the point of received light from the surface is also well defined.

(109) Thus the excluded volume for a representation of the surface is defined by the volume in space that the light rays 1983 and 1984 span, and this volume is well defined due to the pinhole setup.
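A membership test for this ray-spanned excluded volume can be sketched as a point-to-segment check: a point is in the excluded volume if it lies (within a tolerance) on a ray segment between the aperture and a detected surface point. The function, tolerance, and point representation are illustrative assumptions.

```python
import numpy as np

def in_pinhole_excluded_volume(p, aperture, surface_points, tol=1e-3):
    """True if p lies (within tol) on a ray segment from the pinhole
    aperture to any detected surface point, i.e. in the free space the
    light must have traversed to reach that point."""
    p = np.asarray(p, dtype=float)
    a = np.asarray(aperture, dtype=float)
    for s in surface_points:
        s = np.asarray(s, dtype=float)
        seg = s - a
        # Parameter of the closest point on the infinite ray to p.
        t = np.dot(p - a, seg) / np.dot(seg, seg)
        if 0.0 <= t < 1.0:  # strictly before the surface hit
            closest = a + t * seg
            if np.linalg.norm(p - closest) <= tol:
                return True
    return False
```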

(110) FIG. 20 shows examples of the principle of a far threshold distance from the captured surface defining a volume which is not included in the excluded volume of a representation.

(111) The light rays 2052 (shown in dotted lines) from the scan head 2007 of the scanner 2001 may spread, scatter, or disperse in any direction, as seen in FIG. 20a), where a number of the light rays are illustrated. It is understood that only some of the light rays are shown here. The surface area on the tooth surface where the light rays impinge has reference numeral 2016.

(112) In FIG. 20b) it is shown that even if an object 2072, such as a movable object, is arranged between the scan head 2007 and the surface 2016 of a tooth, the scanner 2001 may still capture a surface point 2053 on the tooth surface 2016 which is present or hidden under the object 2072, because of the angled or inclined light rays 2052. A surface point 2053 need only be visible to a single light ray from the scanner in order for that surface point to be detected.

(113) FIG. 20c) shows an example of the far threshold distance 2051, which determines a distance from the captured surface 2016 in a representation: any acquired data, surfaces, or surface points located outside the far threshold distance 2051 are not included in the excluded volume for the representation. Thus any acquired data, surfaces, or surface points in the volume 2054 between the far threshold distance 2051 and the scan head 2007 are not used in defining the excluded volume of the representation.
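The far-threshold truncation can be sketched by restricting the ray-based excluded-volume test to the portion of each ray segment lying within the far threshold distance of the captured surface point; the part nearer the scan head (volume 2054) is never counted. Function name, tolerance, and threshold value are illustrative assumptions.

```python
import numpy as np

def in_excluded_volume_with_far_threshold(p, aperture, surface_point,
                                          far_threshold, tol=1e-3):
    """True if p lies on the ray from the aperture to the captured
    surface point AND within the far threshold distance of that
    surface point. Points on the ray but closer to the scan head than
    the threshold are NOT treated as excluded volume."""
    p = np.asarray(p, dtype=float)
    a = np.asarray(aperture, dtype=float)
    s = np.asarray(surface_point, dtype=float)
    seg = s - a
    t = np.dot(p - a, seg) / np.dot(seg, seg)
    if not (0.0 <= t < 1.0):
        return False
    closest = a + t * seg
    on_ray = np.linalg.norm(p - closest) <= tol
    near_surface = np.linalg.norm(p - s) <= far_threshold
    return on_ray and near_surface
```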

(114) FIG. 20d) shows an example where defining the far threshold distance is an advantage for avoiding that real tooth surface parts are erroneously disregarded.

(115) The scanner 2001 should in principle capture all surface parts, 2016 and 2017, present in the scan volume, but in some cases the scanner cannot capture all surface parts in the scan volume. This may happen, for example, because a surface part is outside the focus region of the scanner 2001 or of the scan head 2007, or because of poor lighting conditions for the surface part. In such cases the surface part 2017 may not be captured and registered, and an excluded volume would be determined in the region of space where the surface part 2017 of the tooth surface is actually present. By defining the far threshold distance 2051, less of the scan volume is excluded, and it can thereby be avoided that a real surface part 2017 is erroneously disregarded.

(116) The actual threshold distance may depend on, or be calculated based on, the optics of the scanner. The far threshold distance may be a fixed number, such as about 0.5 mm, 1 mm, 2 mm, 3 mm, 4 mm, 5 mm, 6 mm, 7 mm, 8 mm, 9 mm, 10 mm, 20 mm, 30 mm, 40 mm, 50 mm, 60 mm, 70 mm, 80 mm, 90 mm, or 100 mm. Alternatively, the far threshold distance may be a percentage or a fraction of the length of the scan volume, such as about 20%, 25%, 30%, 35%, 40%, 45%, or 50% of the length of the scan volume.

(117) The far threshold distance may be based on a determination of how far from a detected surface point it is possible to scan, i.e. how much of the surface around a detected point is visible to the scanner. If the visible distance in one direction from a surface point is short, the far threshold distance will be smaller than if the distance in all directions from a surface point is long.

(118) The figures in FIG. 20 are shown in 2D, but it is understood that the figures represent 3D figures.

(119) Although some embodiments have been described and shown in detail, the invention is not restricted to them, but may also be embodied in other ways within the scope of the subject matter defined in the following claims. In particular, it is to be understood that other embodiments may be utilised and structural and functional modifications may be made without departing from the scope of the present invention.

(120) In device claims enumerating several means, several of these means can be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims or described in different embodiments does not indicate that a combination of these measures cannot be used to advantage.

(121) A claim may refer to any of the preceding claims, and "any" is understood to mean any one or more of the preceding claims.

(122) It should be emphasized that the term "comprises/comprising", when used in this specification, is taken to specify the presence of stated features, integers, steps or components, but does not preclude the presence or addition of one or more other features, integers, steps, components or groups thereof.

(123) The features of the method described above and in the following may be implemented in software and carried out on a data processing system or other processing means caused by the execution of computer-executable instructions. The instructions may be program code means loaded in a memory, such as a RAM, from a storage medium or from another computer via a computer network. Alternatively, the described features may be implemented by hardwired circuitry instead of software or in combination with software.