METHOD AND TOOL FOR CALIBRATING A PASSIVE POSITIONING SYSTEM
20240401943 · 2024-12-05
Inventors
- Michel CARDOSO (GIF-SUR-YVETTE, FR)
- Florence GRASSIN (GIF-SUR-YVETTE, FR)
- Vincent SAINT MARTIN (GIF-SUR-YVETTE, FR)
CPC classification
International classification
G01B11/00
PHYSICS
G01N29/22
PHYSICS
Abstract
A method is provided for calibrating a device for non-destructive inspection of a mechanical part. The device includes an optical movement tracking system to which a reference reference frame is linked, a sensor-holder, a first rigid body, a non-destructive inspection sensor secured to the sensor-holder fixedly linked to the first rigid body, and a computer. The method includes: determination, from three points on a top flat surface of a calibration block, of the length and the width of the calibration block; determination of a first transformation matrix from the reference reference frame to a reference frame of the block linked to the calibration block; disposition of the sensor on the top flat surface of the calibration block; determination, from three points on the first rigid body, of the length and the width of the sensor; and determination of a second transformation matrix making it possible to switch from a reference frame linked to the sensor-holder to a reference frame linked to the sensor.
Claims
1. A method for calibrating a device for the non-destructive inspection of a mechanical part, the device comprising: an optical movement tracking system to which a reference reference frame (R.sub.0) is linked, a sensor-holder, a first rigid body, a non-destructive inspection sensor secured to the sensor-holder fixedly linked to the first rigid body, and a computer, the method comprising the following steps executed by the computer: determination, from the acquisition of the position of three points on a top flat surface of a calibration block by the optical movement tracking system, of the length and the width of the calibration block; determination of a first transformation matrix from the reference reference frame (R.sub.0) to a reference frame of the block (R.sub.B) linked to the calibration block from the determined length and width of the calibration block; determination, from the acquisition of the position of three points on the first rigid body by the optical movement tracking system, of the length and the width of the sensor when the sensor is disposed in three distinct positions (P.sub.1, P.sub.2, P.sub.3) on the top flat surface of the calibration block; determination of a second transformation matrix making it possible to switch from a reference frame (R.sub.H) linked to the sensor-holder to a reference frame (R.sub.S) linked to the sensor from the determined length and width of the sensor.
2. The calibration method as claimed in claim 1, further comprising a step of determination of a third transformation matrix making it possible to switch from the reference reference frame (R.sub.0) to the reference frame (R.sub.S) linked to the sensor.
3. The calibration method as claimed in claim 1, comprising, prior to the step of determination of the first transformation matrix of the reference reference frame (R.sub.0) to a reference frame of the block (R.sub.B) linked to the calibration block, a step of determination of the reference frame of the block (R.sub.B) linked to the calibration block in the reference reference frame (R.sub.0).
4. The calibration method as claimed in claim 1, comprising, prior to the step of determination of the second transformation matrix making it possible to switch from a reference frame (R.sub.H) linked to the sensor-holder to a reference frame (R.sub.S) linked to the sensor, a step of determination of the reference frame (R.sub.S) linked to the sensor in the reference frame (R.sub.H) linked to the sensor-holder.
5. The calibration method as claimed in claim 4, comprising, prior to the step of determination of the reference frame (R.sub.S) linked to the sensor in the reference frame (R.sub.H) linked to the sensor-holder, a step of determination of a fourth transformation matrix making it possible to switch from the reference frame (R.sub.H) linked to the sensor-holder to the reference reference frame (R.sub.0).
6. A device for the non-destructive inspection of a mechanical part, comprising: an optical movement tracking system to which a reference reference frame (R.sub.0) is linked, a sensor-holder, a first rigid body, a non-destructive inspection sensor secured to the sensor-holder fixedly linked to the first rigid body, and a computer, the non-destructive inspection device further comprising a calibration block; and wherein the computer is configured to: determine, from the acquisition of the position of three points on a top flat surface of the calibration block by the optical movement tracking system, the length and the width of the calibration block; determine a first transformation matrix from a reference reference frame (R.sub.0) to a reference frame of the block (R.sub.B) linked to the calibration block from the determined length and width of the calibration block; determine, from the acquisition of the position of three points on the first rigid body by the optical movement tracking system, the length and the width of the sensor when the sensor is disposed in three distinct positions (P.sub.1, P.sub.2, P.sub.3) on the top flat surface of the calibration block; determine a second transformation matrix making it possible to switch from a reference frame (R.sub.H) linked to the sensor-holder to a reference frame (R.sub.S) linked to the sensor from the determined length and width of the sensor.
7. The inspection device as claimed in claim 6, wherein the computer is further configured to determine a third transformation matrix making it possible to switch from the reference reference frame (R.sub.0) to the reference frame (R.sub.S) linked to the sensor.
8. The inspection device as claimed in claim 6, further comprising a pointing device comprising a tip and fixedly linked to a second rigid body, the pointing device being capable of determining the position of points on a surface.
9. A computer program comprising instructions which cause the device as recited in claim 1, which further comprises a calibration block, to execute the steps of the method as recited in claim 1.
10. A computer-readable storage medium on which is stored the computer program as claimed in claim 9.
11. A method for real-time visualization of a signal of non-destructive inspection of a mechanical part, the signal being emitted by a non-destructive inspection device comprising: an optical movement tracking system to which a reference reference frame (R.sub.0) is linked, a sensor-holder, a first rigid body, a non-destructive inspection sensor secured to the sensor-holder fixedly linked to the first rigid body, a computer, and an augmented reality visualization device facing the mechanical part, to which an augmented reality reference frame (R.sub.A) is linked, the visualization method comprising the steps of the calibration method as claimed in claim 1 and further comprising the following steps: displacement of the non-destructive inspection sensor over a zone of examination of the mechanical part; simultaneously with the step of displacement of the non-destructive inspection sensor, emission from a point of emission along an axis of emission and reception of the signal by the sensor; determination of an occlusion inside the mechanical part, the occlusion being centered around the point of emission; determination of a surface of intersection of a plane containing the axis of emission in the occlusion; and visualization, on the augmented reality visualization device, of a real view of the mechanical part, of the sensor-holder and of the non-destructive inspection sensor, of a holographic 3D representation of the mechanical part, of the sensor-holder and of the non-destructive inspection sensor, superimposed on the real view, and of a holographic 3D representation of the occlusion and of the surface of intersection, superimposed on the real view.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0060] The invention will be better understood and other advantages will become apparent on reading the detailed description of an embodiment given by way of example, the description being illustrated by the attached drawing in which:
DETAILED DESCRIPTION
[0075] In the interests of clarity, the same elements will bear the same references in the various figures. For better visibility and in the interests of improved understanding, the elements are not always represented to scale.
[0076] The object of the invention is to determine, simply and rapidly, the transformation matrix between the sensor-holder and the sensor, in order to calibrate the non-destructive inspection device (by establishing the mathematical link between the movement tracking device and the point of emission of the sensor).
[0078] The optical movement tracking system 1 is associated with an orthonormal reference frame R.sub.0=(O, x.sub.0, y.sub.0, z.sub.0) in which O is the origin of the reference frame and x.sub.0, y.sub.0, z.sub.0 are mutually orthogonal normed vectors.
[0079] The optical movement tracking system 1 determines the Cartesian coordinates and the orientation of a rigid body in the orthonormal reference frame of the optical movement tracking system 1.
[0080] The optical movement tracking system 1 comprises at least two cameras and one or more infrared emitters. Other types of optical system can be used in the context of the invention, for example a laser-based optical system and/or a system with non-volumetric markers of pad type. It should be noted that the system 1 can be a passive movement tracking system, whether optical or not.
[0081] The non-destructive inspection device 10 comprises a first rigid body 2 linked to a probe, or sensor, 3 and a sensor-holder 8. The sensor 3 is secured to the sensor-holder 8, which is fixedly linked to the first rigid body 2. The first rigid body 2, the sensor-holder 8 and the sensor 3 are fixedly linked and form an indivisible assembly during the non-destructive inspection. The first rigid body 2 comprises at least three infrared-reflecting spherical targets situated at distinct positions. The first rigid body 2 is associated with an orthonormal reference frame R.sub.C=(C, x.sub.C, y.sub.C, z.sub.C) in which C is the origin of the reference frame and x.sub.C, y.sub.C, z.sub.C are mutually orthogonal normed vectors.
[0082] In a preferred embodiment, the first rigid body 2 comprises six spherical targets. The sensor 3 is for example a single-element ultrasound probe. It comprises an emitting and receiving surface, called the active surface 31. The active surface 31 is a flat rectangle. In a variant, the sensor 3 is of another type, for example an eddy current probe. Generally, an active surface is any surface of a non-destructive inspection sensor that emits or receives physical signals. For example, in the case of a contact single-element ultrasound sensor, it corresponds to the surface of the piezoelectric element. In the case of a single-element ultrasound sensor with a Plexiglass shoe, it corresponds to the surface of the shoe through which the ultrasound signals are emitted.
[0083] The inspection device 10 comprises a calibration block and a computer 6. Furthermore, the computer is configured to perform the following steps, which will be explained hereinbelow: [0084] determining, from three points on a top flat surface of the calibration block, the length and the width of the calibration block; [0085] determining a first transformation matrix from the reference reference frame (R.sub.0) to a reference frame of the block (R.sub.B) linked to the calibration block; [0086] determining, from three points on the first rigid body (2), the length and the width of the sensor when it is disposed on the top flat surface of the calibration block; [0087] determining a second transformation matrix making it possible to switch from a reference frame (R.sub.H) linked to the sensor-holder to a reference frame (R.sub.S) linked to the sensor.
[0088] The computer is further configured to determine a third transformation matrix making it possible to switch from the reference reference frame (R.sub.0) to the reference frame (R.sub.S) linked to the sensor.
[0089] The invention is described for the tracking of a sensor, but it applies equally to the tracking of several sensors, simultaneously and independently.
[0090] The non-destructive inspection device comprises a computer 6 linked to the optical movement tracking system 1 and to a control module 5. The computer 6 is, for example, a personal computer or an electronic circuit board. It notably comprises a processor running a computer program implementing the method described hereinbelow and a memory to store the results thereof. It also comprises input and output interfaces and can be associated with a visualization screen.
[0091] The link between the computer 6 and the optical movement tracking system 1 can be wired or wireless. Similarly, the link between the computer 6 and the control module 5 can be wired or wireless.
[0093] Advantageously, the non-destructive inspection device comprises a pointing device 4.
[0095] In a preferred embodiment, the second rigid body 41 comprises seven spherical targets.
[0096] The non-destructive inspection device comprises a control module 5 provided with at least one actuation button 51. Preferably, the control module 5 is mounted on the pointing device 4 to facilitate the use thereof.
[0110] To express a vector in the reference frame R.sub.A, the notation .sup.Av is used. It should be noted that a vector serves equally to express a position and a displacement.
[0111] The transformation (composed of a translation and/or of a rotation) making it possible to switch from a reference frame R.sub.A to a reference frame R.sub.B is defined by the transformation matrix .sup.AT.sub.B.
[0112] This transformation matrix of 4×4 dimension is composed as follows:
.sup.AT.sub.B=[.sup.Ax.sub.B .sup.Ay.sub.B .sup.Az.sub.B .sup.AP.sub.B; 0 0 0 1]
[0113] .sup.Ax.sub.B, .sup.Ay.sub.B and .sup.Az.sub.B respectively designate the unitary vectors following the axes x.sub.B, y.sub.B and z.sub.B of the reference frame R.sub.B, expressed in the reference frame R.sub.A.
[0114] .sup.AP.sub.B is the vector expressing the origin of the reference frame R.sub.B in the reference frame R.sub.A.
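As a sketch of this notation, a 4×4 homogeneous matrix of this form can be assembled from the three unit axis vectors and the origin vector, and applied to a point. The function names (`make_transform`, `transform_point`) are illustrative, not from the patent, and NumPy is assumed:

```python
import numpy as np

def make_transform(x_b, y_b, z_b, p_b):
    """Assemble the 4x4 matrix A_T_B from the unit axis vectors of
    frame R_B expressed in R_A and the origin of R_B expressed in R_A."""
    T = np.eye(4)
    T[:3, 0] = x_b   # first column: x_B expressed in R_A
    T[:3, 1] = y_b   # second column: y_B expressed in R_A
    T[:3, 2] = z_b   # third column: z_B expressed in R_A
    T[:3, 3] = p_b   # fourth column: origin of R_B expressed in R_A
    return T

def transform_point(T, p):
    """Express a point given in frame R_B in frame R_A (homogeneous product)."""
    return (T @ np.append(p, 1.0))[:3]
```

With this column convention, multiplying .sup.AT.sub.B by a point expressed in R.sub.B yields the same point expressed in R.sub.A, and the bottom row [0 0 0 1] makes the matrix invertible as a rigid transform.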
[0115] The movement tracking system 1 is capable of locating the rigid bodies 2, 41 in space. A rigid body is a non-deformable collection of spherical markers that reflect infrared rays and can thus be located by the optical positioning system. After calibration, the movement tracking system can associate with a rigid body an origin reference frame which serves to qualify the position and the orientation of rigid bodies in the reference frame of the movement tracking system.
[0116] It should be noted that, in order to be correctly located, the rigid bodies must be situated within the solid angle seen by the movement tracking system.
[0117] Notably, the movement tracking system is supplied with a factory-calibrated tool that makes it possible to obtain the coordinates of a point of the space within the reference frame of said movement tracking system. Hereinbelow, this tool is called second rigid body 41.
[0118] Two different rigid bodies are used in the calibration method of the invention: the second rigid body 41 previously described and the first rigid body 2 used to locate the sensor-holder 8. It is this latter first rigid body 2 which forms the object of the calibration procedure described in the present invention. The aim of this calibration method is to be able to determine rapidly, simply and accurately the geometrical transformation (rotation and/or translation) that exists between the origin of the first rigid body 2 as located by the movement tracking system and the point of emission of the ultrasounds by the sensor 3, independently of the sensor-holder used. In other words, the calibration method of the invention makes it possible to determine the geometrical transformation, in transformation matrix form, between the origin of the first rigid body 2 and a point of the sensor regardless of the form of the sensor-holder necessarily disposed between these two elements.
[0119] The inspection device 10 according to the invention comprises: [0120] an optical movement tracking system 1 to which a reference reference frame (R.sub.0) is linked, [0121] a sensor-holder 8, [0122] a first rigid body 2, [0123] a non-destructive inspection sensor 3 secured to the sensor-holder 8 fixedly linked to the first rigid body 2.
[0124] The method for calibrating a device for non-destructive inspection of a mechanical part 7 comprises the following steps: [0125] determination (100), from three points on a top flat surface of a calibration block 14, of the length and the width of the calibration block; [0126] determination (110) of a first transformation matrix from the reference reference frame (R.sub.0) to a reference frame of the block (R.sub.B) linked to the calibration block; [0127] disposition (115) of the sensor 3 on the top flat surface of the calibration block 14; [0128] determination (120), from three points on the first rigid body (2), of the length and the width of the sensor 3; [0129] determination (130) of a second transformation matrix making it possible to switch from a reference frame (R.sub.H) linked to the sensor-holder to a reference frame (R.sub.S) linked to the sensor.
[0130] Advantageously, the calibration method according to the invention further comprises a step 140 of determination of a third transformation matrix making it possible to switch from the reference reference frame (R.sub.0) to the reference frame (R.sub.S) linked to the sensor. The third transformation matrix is obtained by multiplication of the second transformation matrix with the first transformation matrix.
[0131] The aim of the steps 100 and 110 is to calibrate the calibration block 14 itself, that is to say determine its dimensions (length and width) and associate with it a reference frame R.sub.B linked to the calibration block. The movement tracking system and the calibration block must be firmly fixed in order to avoid any modification of their relative positions and orientations. Then, an operator uses the movement tracking system and the second rigid body 41 in order to acquire three positions on the surface of the calibration block 14. These three positions Q.sub.1, Q.sub.2 and Q.sub.3 are previously marked, advantageously but not mandatorily by small holes created for this purpose on the flat top surface of the calibration block 14, as indicated in
[0133] The calibration method according to the invention can comprise, prior to the step 110 of determination of the first transformation matrix from the reference reference frame R.sub.0 to a reference frame of the block R.sub.B linked to the calibration block, a step 105 of determination of the reference frame of the block R.sub.B linked to the calibration block in the reference reference frame R.sub.0.
[0134] More specifically, the computer 6 receives as input three vectors: [0135] Vector defining the first position in the reference reference frame R.sub.0: [0136] Vector defining the second position in the reference reference frame R.sub.0:
[0137] Vector defining the third position in the reference reference frame R.sub.0:
[0138] The computer 6 calculates the following data: [0139] Length of the block: L.sub.B [0140] Width of the block: W.sub.B [0141] Transformation matrix making it possible to switch from the reference reference frame R.sub.0 to the block reference frame R.sub.B: .sup.OT.sub.B
[0142] The calculation steps are detailed hereinbelow.
[0143] The vector defining the center of the calibration block in the reference reference frame R.sub.0 is calculated:
[0144] The vector defining the length of the calibration block in the world reference frame R.sub.0 is calculated:
[0145] The vector defining the width of the calibration block in the world reference frame R.sub.0 is calculated:
[0146] The length of the calibration block can thus be calculated:
[0147] The width of the calibration block can also be calculated:
[0148] From these data, it is possible to calculate the three unitary vectors of the block reference frame R.sub.B in the reference reference frame R.sub.0:
[0149] Finally, the transformation matrix making it possible to switch from the reference reference frame R.sub.0 to the block reference frame R.sub.B is calculated:
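The block-calibration calculations above can be sketched as follows. The corner layout (Q.sub.1 to Q.sub.2 along the length, Q.sub.2 to Q.sub.3 along the width) and the function name `calibrate_block` are assumptions for illustration; the patent does not spell out which edges the three marked points bound:

```python
import numpy as np

def calibrate_block(q1, q2, q3):
    """Sketch of steps 100-110: derive the block dimensions and the
    matrix 0_T_B from three acquired positions on the top surface.
    Assumes q1->q2 runs along the length and q2->q3 along the width."""
    v_len = q2 - q1                      # vector along the block length
    v_wid = q3 - q2                      # vector along the block width
    L_B = np.linalg.norm(v_len)          # length of the block
    W_B = np.linalg.norm(v_wid)          # width of the block
    x_b = v_len / L_B                    # unit vectors of R_B in R_0
    y_b = v_wid / W_B
    z_b = np.cross(x_b, y_b)             # normal to the top surface
    center = (q1 + q3) / 2.0             # center of the top surface
    T = np.eye(4)                        # homogeneous matrix 0_T_B
    T[:3, :3] = np.column_stack((x_b, y_b, z_b))
    T[:3, 3] = center
    return L_B, W_B, T
```

The cross product enforces the coplanarity constraint mentioned in the description: the third axis of R.sub.B is taken normal to the measured top surface rather than measured separately.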
[0150] The aim of the steps 120 and 130 is to determine the dimensions of the sensor (length and width), and the transformation matrix between the first rigid body situated on the sensor-holder and the center of the sensor, as indicated in
[0152] For the steps 120 and 130 to be carried out, the sensor 3 is disposed (step 115) on the top flat surface of the calibration block 14. The operator acquires three positions of the first rigid body present on the sensor-holder, as visible in
[0153] Note that the sensor-holder is designed in such a way that it is the sensor itself and not the sensor-holder which comes into contact with the borders (blockers) of the calibration block when the sensor-holder is brought to the positions P.sub.1, P.sub.2 and P.sub.3.
[0154] In addition to the data previously available, the computer 6 receives as input three vectors: [0155] Vector defining the first position in the reference reference frame R.sub.0: [0156] Vector defining the second position in the reference reference frame R.sub.0:
[0157] Vector defining the third position in the reference reference frame R.sub.0:
[0158] The computer 6 calculates the following data: [0159] Length of the sensor: L.sub.S [0160] Width of the sensor: W.sub.S
[0161] Transformation matrix making it possible to switch from the sensor-holder reference frame R.sub.H to the sensor reference frame R.sub.S: .sup.HT.sub.S
[0162] The calculation steps are detailed hereinbelow.
[0163] The following vectors are calculated:
[0164] The length of the sensor is calculated:
[0165] The width of the sensor is calculated:
[0166] Finally, the three unitary vectors of the sensor reference frame R.sub.S in the reference reference frame R.sub.0 are calculated:
[0167] It is then possible to calculate the transformation matrix making it possible to switch from the sensor-holder reference frame R.sub.H to the reference reference frame R.sub.0, the sensor-holder being at P.sub.3:
[0168] In other words, the calibration method according to the invention comprises, prior to the step 125 of determination of the reference frame R.sub.S linked to the sensor in the reference frame R.sub.H linked to the sensor-holder, a step 123 of determination of a fourth transformation matrix making it possible to switch from the reference frame R.sub.H linked to the sensor-holder to the reference reference frame R.sub.0.
[0169] The next step consists in calculating the coordinates of the center of the sensor in the reference reference frame R.sub.0 (the calculation is made for the sensor-holder at P.sub.3):
[0170] The three unitary vectors of the sensor reference frame R.sub.S in the sensor-holder reference frame R.sub.H should also be calculated:
[0171] Finally, the center of the sensor reference frame R.sub.S in the sensor-holder reference frame R.sub.H is calculated:
[0172] In other words, the calibration method comprises, prior to the step 130 of determination of the second transformation matrix making it possible to switch from a reference frame R.sub.H linked to the sensor-holder to a reference frame R.sub.S linked to the sensor, a step 125 of determination of the reference frame R.sub.S linked to the sensor in the reference frame R.sub.H linked to the sensor-holder.
[0173] To finish, it is possible to calculate the transformation matrix making it possible to switch from the sensor-holder reference frame R.sub.H to the sensor reference frame R.sub.S:
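A minimal sketch of this final composition, assuming both frames have been located in R.sub.0 as 4×4 homogeneous matrices (function names are illustrative): .sup.HT.sub.S is obtained by combining the inverse of the sensor-holder pose with the sensor pose.

```python
import numpy as np

def invert_transform(T):
    """Closed-form inverse of a rigid transform [R p; 0 1]:
    the inverse is [R.T  -R.T @ p; 0 1]."""
    R, p = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ p
    return Ti

def holder_to_sensor(T_0H, T_0S):
    """H_T_S: combine the poses of the sensor-holder frame R_H and the
    sensor frame R_S, both expressed in R_0, into a single transform
    that is independent of where the assembly sits in the workspace."""
    return invert_transform(T_0H) @ T_0S
```

Because both poses are taken with the sensor-holder at the same position (P.sub.3), the dependence on R.sub.0 cancels and the result characterizes only the holder-to-sensor geometry, which is the object of the calibration.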
[0174] The invention defines a mechanical device composed of a calibration block (in plate form) intended to be calibrated by an optical (movement tracking) positioning system and which makes it possible, in conjunction with mathematical calculations, to calibrate a sensor-holder simply, rapidly and accurately. The calibration of the sensor-holder can be performed directly by using a tool that makes it possible to record the positions of different characteristic points. The use of the calibrated calibration block naturally makes it possible to incorporate a constraint of coplanarity which increases the accuracy of the calibration. It is the combination of the calibration of the calibration block and of the sensor which gives this method a particularly advantageous aspect. The result thereof is great simplicity and rapidity of implementation as well as a significant improvement in the accuracy of the calibration compared to the conventional method.
[0175] The invention relates also to a method for real-time visualization of a signal of non-destructive inspection of a mechanical part, which can optionally be coupled to the calibration method.
[0176] For the implementation of the visualization method, the inspection device 10 can comprise an augmented reality visualization device 16 facing the mechanical part 7, to which an augmented reality reference frame (R.sub.A) is linked. It is through this visualization device 16 that the operator sees the real mechanical part, and it is on this visualization device 16 that the holographic 3D representations (detailed later) are displayed superimposed on the view of the mechanical part on the visualization device 16.
[0177] The computer 6 can then be configured to [0178] a. determine a cut of the occlusion of the mechanical part 7, the cut being centered around the point of emission; [0179] b. determine a surface of projection (or visualization) of the signals constructed from the ultrasound paths of the ultrasound signals delivered by the sensor 3. This surface corresponds to the intersection of the plane containing the axis of emission in the occlusion.
[0180] The term occlusion means concealment of the virtual objects behind real things. The occlusion occurs when an object in a 3D space blocks the view of another object. In augmented reality, the objects generated by computer are placed in a real scene to provide additional information or modify the nature of the real objects. Thus, the virtual objects and the real scene must be perfectly aligned in order to maintain high levels of realism and allow the objects to behave as if they were in normal conditions.
[0181] The computer 6 notably makes it possible to assemble the signals received by the sensor to form an image of the interior of the part.
[0182] And the augmented reality visualization device 16 is configured to display: [0183] a. a real view of the mechanical part 7, of the sensor-holder 8 and of the non-destructive inspection sensor 3, [0184] b. a holographic representation in the form of occlusions of the mechanical part 7, of the sensor-holder 8 and of the non-destructive inspection sensor 3, superimposed on the real view, [0185] c. a holographic representation of the cut of the occlusion of the part and of the surface of visualization of the signals, superimposed on the real view.
[0196] Thus, the visualization method makes it possible to obtain a display in the augmented reality visualization device 16. The operator sees therein, through the visualization device 16, the mechanical part 7, the sensor-holder 8 and the sensor 3. In addition to these real objects, the display comprises a holographic 3D representation 7 of the mechanical part, a holographic 3D representation 8 of the sensor-holder, and a holographic 3D representation 3 of the sensor. These three holographic 3D representations are superimposed on the display of the real objects. In addition to the real objects and these holographic 3D representations superimposed on the real objects, the display comprises the holographic 3D representation of the occlusion 80 and the surface of intersection 81, superimposed on the holographic 3D representations superimposed on the real objects.
[0197] Hereinafter in the document, reference will be made to the following reference frames: [0198] Reference frame of the augmented reality visualization system (designated RA): R.sub.RA. [0199] Reference frame of the QR code target: R.sub.QR.
[0200] The visualization method according to the invention comprises, beforehand, a step 190 of calibration of the augmented reality visualization device 16 in the reference reference frame (R.sub.0).
[0202] In order to define the position of the known elements in the world reference frame in the reference frame of the augmented reality visualization device, the augmented reality visualization device must be calibrated. The augmented reality visualization device establishes an anchor, or a marker, at the location of the QR code. During the calibration, this marker is positioned in the world reference frame, defined by the optical positioning system. The calibration makes it possible to establish the relationship between the two worlds.
[0203] In the world of the augmented reality visualization device, the reference frame R.sub.QR associated with the QR Code 86 is defined as
[0204] The vector in the world reference frame is:
[0205] The vector in the world reference frame is:
[0206] The vector in the world reference frame is:
[0208] The calibration step 1000 defines and describes how to obtain by calibration the following transformation matrices: [0209] the transformation matrix making it possible to switch from the sensor reference frame R.sub.F to the world reference frame R.sub.0: .sup.FT.sub.0. By inversion of this matrix, .sup.OT.sub.F is also obtained. [0210] the transformation matrix making it possible to switch from the part reference frame R.sub.P to the world reference frame R.sub.0: .sup.PT.sub.0. By inversion of this matrix, .sup.OT.sub.P is also obtained. [0211] the transformation matrix making it possible to switch from the RA reference frame R.sub.RA to the world reference frame R.sub.0: .sup.RAT.sub.0. By inversion of this matrix, .sup.OT.sub.RA is also obtained.
[0213] The origin F of the sensor 3 can be positioned in the part reference frame R.sub.P:
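As a hedged sketch of this step (the patent's exact formula is not reproduced in the text, and the function names are illustrative): F can be located in R.sub.P by inverting .sup.FT.sub.0 to express F in the world frame R.sub.0, then mapping the result with .sup.PT.sub.0:

```python
import numpy as np

def invert_transform(T):
    """Inverse of a rigid transform [R p; 0 1]: [R.T  -R.T @ p; 0 1]."""
    R, p = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ p
    return Ti

def sensor_origin_in_part_frame(T_F0, T_P0):
    """Express the origin F of the sensor frame in the part frame R_P:
    invert F_T_0 to get F's position in R_0 (the translation column of
    0_T_F), then map that point into R_P with P_T_0."""
    f_in_0 = invert_transform(T_F0)[:3, 3]        # F expressed in R_0
    return (T_P0 @ np.append(f_in_0, 1.0))[:3]    # F expressed in R_P
```

The same chaining, with .sup.RAT.sub.0 in place of .sup.PT.sub.0, would locate F and the information I attached to it in the reference frame of the augmented reality visualization device.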
[0214] All the information I linked spatially to this point F such as the ultrasound paths and the sector scan reconstructed from the latter and from the physical signals can then be located in the reference frame of the augmented reality visualization device R.sub.RA:
[0215] In order to clarify the different elements of the 3D scene to be visualized, the 3D object containing the signals to be viewed by the operator can hereinafter be designated as main object and the 3D objects with respect to which the main object is positioned (notably the part and the sensor) can be designated as secondary objects. It should be noted that, with respect to augmented reality, each of the objects described previously is visualized in hologram form and is superimposed perfectly on a real object, thus creating an occlusion of this real object.
[0216] As already stated, the objective of the method of the invention is to allow the user to view the main object constructed from ultrasound data (sector scan for example) and to position it in 3D in augmented reality by giving the impression to the user of viewing the interior of the part.
[0217] For a human observer to correctly interpret the positioning of this main object in 3D visual space, displaying the hologram of this main object alone is not sufficient. The hologram would give the impression of floating, and its spatial relationship with the volume of the part would be lost. In the particular case described here, the volume concerned is the volume passed through by the ultrasound signals that were used to construct this main object.
[0218] The invention consists in displaying the hologram of this main object accompanied by a set of graphic occlusions corresponding to the holograms of the secondary objects, the secondary objects being in particular the part and the sensor (accompanied by its sensor-holder). So that these occlusions do not totally conceal the real objects, they advantageously have a medium level of transparency. The graphic occlusions of the part (possibly comprising a weld to be inspected) and of the sensor are therefore produced with textures and colors that have a percentage of transparency.
[0219] In order for the brain to interpret the main object as forming part of the internal integrity of the part (and possibly of the weld), it is essential for this main object to appear on the graphic occlusion of the part. To achieve this objective, a transparent opening is produced in the graphic occlusion of the part. This is the occlusion 80. This opening is centered on the sensor (more specifically on the point F of the sensor, the point with respect to which the main object to be displayed is located). This opening 80 is also a 3D object, and a part of the surface of its volume coincides with the surface of the main object.
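The opening 80 centered on the point F can be sketched as a carving operation on a voxelized transparency (alpha) grid of the part occlusion. This is a minimal illustration only; the grid resolution, the medium transparency value of 0.4, and the spherical shape of the opening are assumptions, not taken from the patent:

```python
import numpy as np

def carve_opening(alpha_grid, voxel_size, grid_origin, F, radius):
    """Set alpha to 0 for every voxel whose center lies within `radius` of F."""
    nx, ny, nz = alpha_grid.shape
    idx = np.indices((nx, ny, nz)).reshape(3, -1).T        # (N, 3) voxel indices
    centers = grid_origin + (idx + 0.5) * voxel_size       # voxel center positions
    dist = np.linalg.norm(centers - F, axis=1)
    out = alpha_grid.copy().reshape(-1)
    out[dist < radius] = 0.0            # fully transparent inside the opening
    return out.reshape(alpha_grid.shape)

# Medium-transparency occlusion of the part, with an opening carved around F
alpha = np.full((20, 20, 20), 0.4)
opened = carve_opening(alpha, 0.01, np.zeros(3),
                       F=np.array([0.1, 0.1, 0.1]), radius=0.03)
```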
[0220] So as not to hamper the operator with information situated outside of his or her field of view (for example the back of the part), the visible data, and therefore the direction of the opening, are selected as a function of the position and of the orientation of the eye of the operator. Thus, if the operator views the holograms from the front or the rear of the part, the holograms remain well oriented. That is made possible by virtue of the calibration of the visualization device 16 with respect to the movement tracking device 1.
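The view-dependent choice of the opening direction described in [0220] can be sketched as a simple sign test between the operator's eye position and the front normal of the part; the pose values and the function below are hypothetical:

```python
import numpy as np

def opening_side(eye_position, part_center, front_normal):
    """Return 'front' if the operator views the part from its front face, else 'back'."""
    to_eye = eye_position - part_center
    return "front" if np.dot(to_eye, front_normal) >= 0.0 else "back"

n = np.array([0.0, 0.0, 1.0])                       # assumed front-face normal
side1 = opening_side(np.array([0.0, 0.0, 2.0]), np.zeros(3), n)   # eye in front
side2 = opening_side(np.array([0.0, 0.0, -2.0]), np.zeros(3), n)  # eye behind
```

In a real system, the eye pose would come from the calibration of the visualization device 16 with respect to the movement tracking device 1, as the paragraph above notes.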
[0221] The step 240 of determination of the surface of intersection 81 comprises the following steps:
[0222] Calculation (step 241) of the paths taken by the signal emitted by the sensor 3;
[0223] Production (step 242) of a 3D meshing representative of the mechanical part 7, of the sensor-holder 8, of the non-destructive inspection sensor 3, and of the paths taken by the signal;
[0224] Laying (step 243) of the paths taken by the signal on the 3D meshing.
[0225] In other words, the step 240 allows a sector scan to be visualized in the augmented reality visualization device 16. This sector scan is the surface of intersection 81 between the plane containing the axis of emission and the occlusion 80.
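The path calculation of step 241 can be illustrated, in a simplified straight-ray form, as a fan of segments emanating from the point F in the plane containing the emission axis. The angular range, depth, and geometry below are assumptions; real paths depend on the probe settings and on refraction at the part surface:

```python
import numpy as np

def sector_scan_paths(F, depth, angles_deg):
    """Return one (start, end) straight segment per steering angle, in the scan plane."""
    paths = []
    for a in np.deg2rad(angles_deg):
        direction = np.array([np.sin(a), 0.0, -np.cos(a)])  # beam steered about F
        paths.append((F, F + depth * direction))
    return paths

# Hypothetical fan: 9 beams from -40 to +40 degrees, 50 mm deep
F = np.array([0.0, 0.0, 0.0])
paths = sector_scan_paths(F, depth=0.05, angles_deg=np.arange(-40, 41, 10))
```

Laying these segments on the 3D meshing of the part (steps 242 and 243) then yields the surface of intersection 81 to be displayed.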
[0226] By virtue of the invention, it is thus possible for the operator to view the secondary objects (that is to say real objects: mechanical part 7 to be inspected, the sensor-holder 8 and the sensor 3), on which the main object containing the signals to be visualized, that is to say the assembly formed by the occlusion 80 and the surface 81, is superimposed.
[0227] In a particular embodiment, the steps of the method according to the invention are implemented by computer program instructions. Consequently, the invention also relates to a computer program on an information medium, this program being capable of being implemented in a computer and comprising instructions suited to the implementation of the steps of a method as described above.
[0228] This program can use any programming language, and be in the form of source code, object code, or intermediate code between source code and object code, such as a partially compiled form, or in any other desirable form. The invention also relates to a computer-readable information medium comprising computer program instructions suited to the implementation of the steps of a method as described above.
[0229] The information medium can be any entity or device capable of storing the program. For example, the medium can comprise a storage means, such as a ROM, for example a CD ROM or a microelectronic circuit ROM, or even a magnetic storage means, for example a diskette or a hard disk.
[0230] Also, the information medium can be a transmissible medium such as an electrical or optical signal, which can be routed via an electrical or optical cable, wirelessly or by other means. The program according to the invention can in particular be downloaded over an Internet-type network.
[0231] Alternatively, the information medium can be an integrated circuit in which the program is incorporated, the circuit being adapted to execute or to be used in the execution of the method according to the invention.
[0232] It will become apparent more generally to the person skilled in the art that various modifications can be made to the embodiments described above, in light of the teaching which has just been disclosed. In the claims which follow, the terms used should not be interpreted as limiting the claims to the embodiments set out in the present description, but should be interpreted to include all the equivalents that the claims aim to cover through their formulation, and whose provision is within the reach of the person skilled in the art based on his or her general knowledge.