A METHOD FOR CONTROLLING A SURFACE
20200139552 · 2020-05-07
Inventors
- Samuel Louis Marcel Marie Maillard (Moissy-Cramayel, FR)
- Nicolas Sire (La Genetouze, FR)
- Benoît Bazin (Moissy-Cramayel, FR)
- Grégory Charrier (Le Poire sur Vie, FR)
- Nicolas Leconte (Moissy-Cramayel, FR)
CPC classification
Y02P90/02
GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
G05B2219/50064
PHYSICS
G06T17/10
PHYSICS
G05B2219/40617
PHYSICS
Abstract
The invention relates to a method for controlling a surface (1) of interest of a part (2) by means of a camera (3) intended to be mounted on a robot (4), the camera (3) comprising a sensor and optics associated with an optical centre C, an angular aperture alpha and a depth of field PC, and defining a sharpness volume (6). The method comprises the following operations: loading a three-dimensional virtual model of the surface (1); generating a three-dimensional virtual model of the sharpness volume (6); paving the model of the surface (1) by means of a plurality of unit models of said three-dimensional virtual model of the sharpness volume (6); and, for each position of said unit models (6), calculating the corresponding position, called the acquisition position, of the camera (3).
Claims
1.-8. (canceled)
9. A method for controlling a surface of interest of a part by means of a camera intended to be mounted on a carrying robot, the camera comprising a sensor and optics associated with an optical centre (C), with an angular aperture alpha and with a depth of field (PC) and defining a sharpness volume, the method comprising the following operations: a) loading, in a virtual design environment, a three-dimensional virtual model of the surface of interest, b) generating, in the virtual environment, a three-dimensional virtual model of the sharpness volume, c) paving, in the virtual environment, the model of the surface of interest by means of a plurality of unit models of said three-dimensional virtual model of the sharpness volume, d) for each position of said unit models, calculating the corresponding position, called the acquisition position, of the camera.
10. The method according to claim 9, wherein the generation of the three-dimensional virtual model of the sharpness volume comprises the operations of: loading, in the virtual environment, a three-dimensional model of the camera and its tooling; generating a truncated pyramid of which: the top is the optical centre (C), the angular aperture is that of the optics, noted alpha, and two opposing faces each define a first sharp plane (PPN) and a last sharp plane (DPN), the spacing of which corresponds to the depth of field (PC) of the optics.
11. A method according to claim 10, wherein the surface is located between the first sharp plane (PPN) and the last sharp plane (DPN) of each unit model of the sharpness volume model.
12. A method according to claim 10, wherein the generation of the three-dimensional virtual model of the sharpness volume comprises an operation of dividing the sharpness volume model into a working area strictly included therein and a peripheral overlapping area surrounding the working area, and wherein, in the paving operation, the unit models of the sharpness volume model are distributed so as to overlap two by two in said peripheral areas.
13. A method according to claim 11, wherein the generation of the three-dimensional virtual model of the sharpness volume comprises an operation of dividing the sharpness volume model into a working area strictly included therein and a peripheral overlapping area surrounding the working area, and wherein, in the paving operation, the unit models of the sharpness volume model are distributed so as to overlap two by two in said peripheral areas.
14. A method according to claim 9, wherein, in the paving operation, the position of each unit model of the three-dimensional virtual model of the sharpness volume is defined at least by the distance d between a singular point P of the three-dimensional model of the surface of interest and its orthogonal projection on one of the planes (PPN) or (DPN).
15. A method according to claim 10, wherein, in the paving operation, the position of each unit model of the three-dimensional virtual model of the sharpness volume is defined at least by the distance d between a singular point P of the three-dimensional model of the surface of interest and its orthogonal projection on one of the planes (PPN) or (DPN).
16. A method according to claim 11, wherein, in the paving operation, the position of each unit model of the three-dimensional virtual model of the sharpness volume is defined at least by the distance d between a singular point P of the three-dimensional model of the surface of interest and its orthogonal projection on one of the planes (PPN) or (DPN).
17. A method according to claim 12, wherein, in the paving operation, the position of each unit model of the three-dimensional virtual model of the sharpness volume is defined at least by the distance d between a singular point P of the three-dimensional model of the surface of interest and its orthogonal projection on one of the planes (PPN) or (DPN).
18. A method according to claim 14, wherein the singular point P is the barycenter of the three-dimensional virtual model of the sharpness volume.
19. A method according to claim 9, wherein, in the paving operation, the position of each unitary sharpness volume model is defined by the angle between an X-axis associated with the sharpness volume model and the normal N to the surface of interest at the point of intersection of the X-axis and the surface.
20. A method according to claim 10, wherein, in the paving operation, the position of each unitary sharpness volume model is defined by the angle between an X-axis associated with the sharpness volume model and the normal N to the surface of interest at the point of intersection of the X-axis and the surface.
21. A method according to claim 11, wherein, in the paving operation, the position of each unitary sharpness volume model is defined by the angle between an X-axis associated with the sharpness volume model and the normal N to the surface of interest at the point of intersection of the X-axis and the surface.
22. A method according to claim 12, wherein, in the paving operation, the position of each unitary sharpness volume model is defined by the angle between an X-axis associated with the sharpness volume model and the normal N to the surface of interest at the point of intersection of the X-axis and the surface.
23. A method according to claim 14, wherein, in the paving operation, the position of each unitary sharpness volume model is defined by the angle between an X-axis associated with the sharpness volume model and the normal N to the surface of interest at the point of intersection of the X-axis and the surface.
24. A method according to claim 19, wherein the X-axis is an axis of symmetry of the sharpness volume model.
Description
[0030] The invention will be better understood and other details, characteristics and advantages of the invention will become readily apparent upon reading the following description, given by way of a non-limiting example with reference to the appended drawings.
[0041] The present invention relates to a method for controlling a surface 1 of interest of a part 2 by means of a camera 3 mounted on a carrier robot 4. The mounting of the camera 3 on the carrier robot 4 can for example be carried out using tooling 5 as shown in
[0042] The part 2 can for example be a mechanical part.
[0043] The camera 3 comprises a sensor and optics associated with an optical centre C, an angular aperture and a depth of field PC and defining a sharpness volume 6, as shown in
[0044] The method includes the steps of: [0045] loading, in a virtual design environment (e.g. a virtual computer-aided drafting environment), a three-dimensional virtual model of the surface 1 of interest, as illustrated in [0046] generating, in the virtual environment, a three-dimensional virtual model of the sharpness volume 6, [0047] paving, in the virtual environment, the model of the surface 1 of interest by means of a plurality of unit models of said three-dimensional virtual model of the sharpness volume 6, [0048] for each position of said unit models, calculating the corresponding position, called the acquisition position, of the camera 3.
[0049] For each position of said unit models, it is then possible to automatically calculate passage points for the robot and, consequently, a predefined trajectory allowing it to move the camera successively to the acquisition positions.
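Operations a) to d) and the subsequent trajectory generation can be sketched as follows. This is an illustrative simplification only: the function name is invented, the surface is taken as a flat rectangle paved with rectangular working footprints at a fixed stand-off distance, whereas the patented method paves a full three-dimensional CAD model in a virtual design environment.

```python
import math

def plan_acquisitions(surface_width_mm, surface_height_mm,
                      work_w_mm, work_h_mm, standoff_mm):
    """Illustrative sketch of operations a)-d): pave a flat rectangular
    surface with the working footprint of the sharpness volume and return
    one camera acquisition position (x, y, z) per tile.

    Assumes a planar surface and rectangular working areas; the real
    method positions unit sharpness-volume models on a 3-D CAD model.
    """
    nx = math.ceil(surface_width_mm / work_w_mm)   # tiles across
    ny = math.ceil(surface_height_mm / work_h_mm)  # tiles down
    positions = []
    for j in range(ny):
        for i in range(nx):
            # centre of tile (i, j); the camera stands off along the normal
            x = (i + 0.5) * work_w_mm
            y = (j + 0.5) * work_h_mm
            positions.append((x, y, standoff_mm))
    return positions
```

The returned list of positions would then be converted into robot passage points, i.e. the predefined trajectory mentioned above.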
[0050] For each position of a unit model of the three-dimensional virtual model of the sharpness volume 6, the position of the optical axis of the corresponding camera 3 differs. Three optical axes, Y.sub.1, Y.sub.2 and Y.sub.3, are shown as examples in
[0051] According to a preferred embodiment, the generation of the three-dimensional virtual model of the sharpness volume 6 includes the operations of: [0052] loading a three-dimensional model of the camera 3, [0053] generating a truncated pyramid of which: [0054] the top is the optical centre C of the camera 3, [0055] the angular aperture is that of the optics, noted alpha, [0056] two opposite faces define a first sharp plane PPN and a last sharp plane DPN, respectively, whose spacing corresponds to the depth of field PC of the optics.
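The truncated pyramid of paragraphs [0053] to [0056] can be sketched as follows, under the simplifying assumption of a symmetric square aperture of full angle alpha about the optical axis (the actual horizontal and vertical apertures depend on the sensor format):

```python
import math

def sharpness_frustum(c_ppn_mm, c_dpn_mm, alpha_deg):
    """Return the 8 corner points of the truncated pyramid whose apex is
    the optical centre C (taken as the origin, optical axis along +z).

    c_ppn_mm: distance [C, PPN] from C to the first sharp plane.
    c_dpn_mm: distance [C, DPN] from C to the last sharp plane.
    alpha_deg: full angular aperture of the optics.
    The depth of field is PC = c_dpn_mm - c_ppn_mm.
    """
    half = math.tan(math.radians(alpha_deg) / 2.0)
    corners = []
    for z in (c_ppn_mm, c_dpn_mm):       # PPN face, then DPN face
        r = z * half                     # half-width of the face at depth z
        for sx in (-1, 1):
            for sy in (-1, 1):
                corners.append((sx * r, sy * r, z))
    return corners
```

The two z-levels of the returned corners are the first and last sharp planes, and their separation is the depth of field PC.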
[0058] According to a special feature, the surface 1 is located, during paving, between the first sharp plane PPN and the last sharp plane DPN of each unit model of the three-dimensional virtual model of the sharpness volume 6, as shown in
[0059] The geometric characteristics of the camera 3 are supplier data. These include: [0060] the dimensions in pixels of an image provided by the camera 3: the number n.sub.h of horizontal pixels, the number n.sub.v of vertical pixels, [0061] the distance p between the centers of two adjacent pixels on the sensor, [0062] the focusing distance I, [0063] the angular aperture of the optics.
[0064] The focusing distance I is user-defined. The geometry of the sharpness volume 6 can be adjusted by a calculation making it possible to manage overlapping areas 7.
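The supplier data of paragraphs [0059] to [0063], plus the user-defined focusing distance, can be gathered in a small record; the class name and the sensor-diagonal helper (the quantity D used later in the depth-of-field equations) are illustrative, not part of the patent:

```python
from dataclasses import dataclass
import math

@dataclass
class CameraData:
    """Supplier data of the camera (paragraphs [0059]-[0063])."""
    n_h: int          # number of horizontal pixels
    n_v: int          # number of vertical pixels
    p_mm: float       # pixel pitch: distance between adjacent pixel centres
    i_mm: float       # focusing distance I (user-defined)
    alpha_deg: float  # angular aperture of the optics

    def sensor_width_mm(self):
        return self.n_h * self.p_mm

    def sensor_height_mm(self):
        return self.n_v * self.p_mm

    def sensor_diagonal_mm(self):
        # D, the sensor diagonal used in the depth-of-field equations
        return math.hypot(self.sensor_width_mm(), self.sensor_height_mm())
```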
[0065] Each position of a unit model of the three-dimensional virtual model of sharpness volume 6 on the surface 1 corresponds to a shooting position.
[0066] Thus, in the course of this operation, the generation of the three-dimensional virtual model of the sharpness volume 6 may additionally include an operation of dividing the three-dimensional virtual model of the sharpness volume 6 into a working area 8 strictly included therein, and an overlapping peripheral area 7 surrounding the working area 8. An example of a sharpness volume 6 divided into a working area 8 and an overlapping area 7 is shown in
[0067] The geometry and dimensions of the working area 8 are governed by the geometry of the generated sharpness volume 6 and a parameter for the desired percentage of overlapping in each image. This parameter can be modulated by an operator. This dividing step makes it easy to manage the desired level of overlapping between two acquisitions.
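One plausible way to derive the working area 8 from the field of view and the operator-chosen overlap percentage is a uniform linear shrink, sketched below; this shrink rule is an illustrative assumption, as the excerpt does not reproduce the governing equations:

```python
def working_area(hfov_mm, vfov_mm, overlap_percent):
    """Shrink the full field of view into the working area 8, leaving a
    peripheral overlapping band 7 covering `overlap_percent` of each
    dimension, shared between neighbouring acquisitions (assumption)."""
    if not 0 <= overlap_percent < 100:
        raise ValueError("overlap_percent must be in [0, 100)")
    k = 1.0 - overlap_percent / 100.0
    return hfov_mm * k, vfov_mm * k
```

With a 10% overlap, a 30 mm x 20 mm field of view yields a 27 mm x 18 mm working area, the remainder forming the overlapping band.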
[0068] For each type of sensor, equations are used to calculate the dimensions of the working area 8.
[0069] As an example, the following equations are given for applications in the visible range, and in particular when using film (silver-halide) sensors.
[0070] The calculation of the working area at a focusing distance I is governed by the equations (1) and (2), which calculate the horizontal field of view (HFOV) and the vertical field of view (VFOV) in millimetres, respectively:
[0071] n.sub.h being the number of horizontal pixels, n.sub.v the number of vertical pixels and p the distance between the centers of two adjacent pixels on the sensor.
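The bodies of equations (1) and (2) are not reproduced in this excerpt. As a plausible reconstruction, a thin-lens magnification model gives the object-side field at focusing distance I as the sensor size divided by the magnification; the focal length f used below is an assumed parameter not named in the excerpt:

```python
def fields_of_view(n_h, n_v, p_mm, i_mm, f_mm):
    """Illustrative stand-in for equations (1) and (2): horizontal and
    vertical fields of view (mm) at focusing distance I.

    Assumes a thin lens of focal length f_mm focused at i_mm, so the
    magnification is m = f / (I - f) and the object-side field is the
    sensor extent (pixel count times pixel pitch) divided by m.
    """
    m = f_mm / (i_mm - f_mm)   # magnification at focusing distance I
    hfov = n_h * p_mm / m      # horizontal field of view, mm
    vfov = n_v * p_mm / m      # vertical field of view, mm
    return hfov, vfov
```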
[0072] The depth of field PC is the difference between the distance from C to the last sharp plane DPN, noted [C, DPN], and the distance from C to the first sharp plane PPN, noted [C,PPN], as shown in equation (3):
PC=[C,DPN]−[C,PPN]  (3)
[0073] The equations for determining the distances [C,DPN] and [C,PPN] vary depending on the sensor. For example, for a film (silver-halide) camera, these distances are calculated by the equations (4) and (5), where D is the diagonal of the sensor calculated by the equation (6), c is the diameter of the circle of confusion defined by the equation (7), and H is the hyperfocal distance:
[0074] The variables calculated by the equations (4) to (8) may vary depending on the type of sensor used. They are given here as an example.
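Since the bodies of equations (4) to (8) are not reproduced in this excerpt, the sketch below uses the classical hyperfocal formulation for a film camera as a stand-in; N (the f-number) is an assumed parameter, and the exact patented equations may differ:

```python
def depth_of_field(f_mm, n_aperture, c_mm, i_mm):
    """Classical stand-in for equations (3)-(8):
    hyperfocal distance H = f^2 / (N * c) + f, then
    [C,PPN] = H*I / (H + (I - f)),  [C,DPN] = H*I / (H - (I - f)),
    with PC = [C,DPN] - [C,PPN] as in equation (3).
    """
    h = f_mm * f_mm / (n_aperture * c_mm) + f_mm   # hyperfocal distance
    near = h * i_mm / (h + (i_mm - f_mm))          # [C, PPN]
    far = h * i_mm / (h - (i_mm - f_mm))           # [C, DPN]
    return near, far, far - near                   # PC = depth of field
```

As a sanity check, the focusing distance I always lies between the first and last sharp planes returned by this formulation.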
[0075] In the case where the operator has selected a non-zero overlap percentage, the positions of the sharpness volume 6 are set to overlap two by two in the overlap areas 7 during the paving operation of the surface 1. An example of overlapping between the sharpness volumes 6 is shown in
[0076] The use of a sharpness volume allows control of the viewing area and facilitates the integration of constraints such as the distance between the camera 3 and the surface 1, normality to the surface, centering on a particular point of the surface 1, and control of the working area 8 and the overlapping area 7.
[0077] According to a particular feature, the position of each unit model of the three-dimensional virtual model of the sharpness volume 6 is defined at least by a distance d which can be the distance d1 between a singular point P of the three-dimensional model of the surface 1 of interest and its orthogonal projection on the plane PPN, as shown in
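The distance d1 of paragraph [0077] is the distance from the singular point P to its orthogonal projection on the plane PPN, i.e. a point-to-plane distance. A generic sketch, with the plane described by one of its points and a normal vector (both illustrative inputs):

```python
import math

def point_to_plane_distance(p, plane_point, plane_normal):
    """Distance d between a point P and its orthogonal projection on a
    plane such as PPN or DPN, the plane being given by one of its points
    and a normal vector (not necessarily of unit length)."""
    n_len = math.sqrt(sum(c * c for c in plane_normal))
    dot = sum((pi - qi) * ni
              for pi, qi, ni in zip(p, plane_point, plane_normal))
    return abs(dot) / n_len
```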