Image restoration method
10861166 · 2020-12-08
Abstract
A method for restoring images in a sequence of images, including, when it is applied to a first image in the image sequence: estimating an item of information representing a global motion of a background of the first image with respect to a second image; compensating for said global motion of the background in the second image in order to obtain an adjusted version of the second image, referred to as the adjusted second image; obtaining a contour of an object of the first image by applying a segmentation method using the adjusted second image; using the contour of the object thus obtained in order to estimate an item of information representing a global motion of the object; and applying to the first image an image restoration method using the information representing the estimated global motion of the background and the estimated global motion of the object.
Claims
1. A method for restoring images in a sequence of images comprising a first image and a second image preceding said first image, said first and second images comprising an object in motion on a background, wherein the method comprises, when it is applied to a first image in the sequence of images: estimating an item of information representing a global motion of a background of the first image with respect to a second image; compensating for a global motion of the background in the second image using said item of information representing the global motion of the background of the first image in order to obtain an adjusted version of the second image, referred to as an adjusted second image; obtaining a contour of said object in the first image by applying a segmentation method, said segmentation method being iterative and comprising, during an iteration, a modification of the contour of the object in the first image obtained during a previous iteration of said segmentation method, referred to as a previous contour, so as to obtain the contour of the object in the first image, referred to as a current contour, such that a cost of the current contour is lower than a cost of the previous contour, a final contour of the object being obtained when a predefined condition for stoppage of said segmentation method is met, the cost of the contour of the object in the first image being a sum between a first value representing an energy internal to said contour of the object in the first image and a second value representing an energy external to said contour of the object in the first image, the energy external to said contour being equal to a weighted sum of an energy dependent on a global motion of the object between the first image and the adjusted second image and an energy, referred to as the contour energy, corresponding to a sum of values of gradient moduli calculated for pixels in a second set of pixels belonging to the current contour of the object; a value 
representing the energy dependent on a global motion of the object between the first image and the second image being calculated in the form of a sum of differences between values representing pixels in a first set of pixels of the first image belonging to the current contour and values representing pixels situated at the same spatial positions as the pixels in the first set of pixels in the second image; estimating an item of information representing a global motion of the object delimited by said contour obtained; and applying to the first image an image restoration method for replacing, for at least each pixel of the first image belonging to the object delimited by said contour obtained, each component of said pixel with a component equal to a weighted sum of said component of said pixel and of at least one component of a pixel of at least the second image matched with said pixel of the first image using the information representing the estimated global motion of the background and the estimated global motion of the object.
2. The method according to claim 1, wherein, in order to calculate the value representing the energy internal to the current contour, a first local derivative and a second local derivative of the contour are calculated for pixels in a third set of pixels of the current image belonging to the current contour of the object, said value representing the internal energy being a function of said calculated derivatives.
3. The method according to claim 2, wherein the first, second and third sets of pixels are identical, and each set comprises at least one subpart of the pixels of the current image belonging to the current contour of the object.
4. The method according to claim 1, wherein, during a first iteration of said method, an initial contour of the object in the current image is obtained from a final contour obtained during an application of the segmentation method to a reference image or from a contour specified by an operator in the reference image.
5. The method according to claim 1, wherein, during each estimation of an item of information representing a global motion, an item of information representing the form and the position of the object is obtained, said information representing the form and the position of the object being used for masking pixels that are not to be taken into account in said estimation.
6. The method according to claim 5, wherein, following the estimation of said item of information representing the global motion of the object, referred to as the first item of information, a filtering is applied to said first item of information in order to guarantee regular variations in the motion of the object between two successive images in the sequence of images, said filtering comprising the following steps: determining a first matrix for estimating a motion of the object in a reference frame centred on a barycentre of the object in the first image and a second matrix for estimating a motion of the object in a reference frame centred on a barycentre of the object in the adjusted second image; using the first and second matrices for calculating an item of information representing the motion of the object in said reference frame, referred to as the second item of information, from said first item of information; using the second item of information for obtaining a third matrix representing translation components of the motion of the object; using the second item of information and the third matrix for obtaining a fourth matrix representing components of the motion of the object other than the translation components; obtaining a filtered version of the third matrix, referred to as the filtered third matrix, by calculating a weighted sum between the third matrix and a previous filtered third matrix obtained when said method is implemented on the second image; obtaining a filtered version of the fourth matrix, referred to as the current filtered fourth matrix, by calculating a weighted sum between the fourth matrix and a previous filtered fourth matrix obtained when the method is implemented on the second image; and obtaining an item of information representing a filtered global motion of the object by using the first and second matrices, the current filtered third matrix and the current filtered fourth matrix.
7. The method according to claim 6, wherein the second item of information, denoted dH.sub.k.sup.Object,c, is calculated as follows: dH.sub.k.sup.Object,c=V.sub.k.sup.-1.Math.dH.sub.k.sup.Object.Math.V.sub.k-1 where dH.sub.k.sup.Object is the first item of information representing the global motion of the object and V.sub.k and V.sub.k-1 are the first and second matrices.
8. The method according to claim 7, wherein the third matrix is calculated as follows:
H.sub.k.sup.t=ApproxT(V.sub.k.sup.-1.Math.dH.sub.k.sup.Object.Math.V.sub.k-1) where H.sub.k.sup.t is the third matrix and ApproxT(X) is an approximation in translation of a homographic matrix X.
9. The method according to claim 8, wherein the fourth matrix is calculated as follows:
H.sub.k.sup.h=(H.sub.k.sup.t).sup.-1.Math.V.sub.k.sup.-1.Math.dH.sub.k.sup.Object.Math.V.sub.k-1 where H.sub.k.sup.h is the fourth matrix.
10. The method according to claim 9, wherein the current filtered third matrix is calculated as follows:
H.sub.k.sup.t,filt=α.Math.H.sub.k.sup.t+(1-α).Math.H.sub.k-1.sup.t,filt where H.sub.k.sup.t,filt is the current filtered third matrix, H.sub.k-1.sup.t,filt is the previous filtered third matrix and α is a predefined weighting coefficient between 0 and 1.
11. The method according to claim 10, wherein the current filtered fourth matrix is calculated as follows:
H.sub.k.sup.h,filt=β.Math.H.sub.k.sup.h+(1-β).Math.H.sub.k-1.sup.h,filt where H.sub.k.sup.h,filt is the current filtered fourth matrix, H.sub.k-1.sup.h,filt is the previous filtered fourth matrix and β is a predefined weighting coefficient between 0 and 1.
12. The method according to claim 11, wherein the item of information representing a filtered global motion of the object is calculated as follows:
dH.sub.k.sup.Object,filt=V.sub.k.Math.H.sub.k.sup.t,filt.Math.H.sub.k.sup.h,filt.Math.(V.sub.k-1).sup.-1 where dH.sub.k.sup.Object,filt is the item of information representing the filtered global motion of the object.
13. A device for restoring images in a sequence of images comprising a first image and a second image preceding said first image, said first and second images comprising an object in motion on a background, wherein the device comprises, when it is applied to a first image in the sequence of images electronic circuitry for: estimating an item of information representing a global motion of a background of the first image with respect to a second image; compensating for a global motion of the background in the second image using said item of information representing the global motion of the background of the first image in order to obtain an adjusted version of the second image, referred to as an adjusted second image; obtaining a contour of said object of the first image by applying a segmentation method, said segmentation method being iterative and comprising, during an iteration, modifying the contour of the object in the first image obtained during a previous iteration of said segmentation method, referred to as a previous contour, so as to obtain the contour of the object in the first image, referred to as a current contour, such that a cost of the current contour is lower than a cost of the previous contour, a final contour of the object being obtained when a predefined condition for stoppage of said segmentation method is met, the cost of the contour of the object in the first image being a sum between a first value representing an energy internal to said contour of the object in the first image and a second value representing an energy external to said contour of the object in the first image, the energy external to said contour being equal to a weighted sum of an energy dependent on a global motion of the object between the first image and the adjusted second image and an energy, referred to as the contour energy, corresponding to a sum of values of gradient moduli calculated for pixels in a second set of pixels belonging to the current contour of the object; 
a value representing the energy dependent on a global motion of the object between the first image and the second image being calculated in the form of a sum of differences between values representing pixels in a first set of pixels of the first image belonging to the current contour and values representing pixels situated at the same spatial positions as the pixels in the first set of pixels in the second image; estimating an item of information representing a global motion of the object delimited by said contour obtained; and applying to the first image an image restoration method for replacing, for at least each pixel of the first image belonging to the object delimited by said contour obtained, each component of said pixel with a component equal to a weighted sum of said component of said pixel and of at least one component of a pixel of at least the second image matched with said pixel of the first image using the information representing the estimated global motion of the background and the estimated global motion of the object.
14. A non-transitory storage medium storing a computer program comprising instructions for implementing, by a device, the method according to claim 1, when said program is executed by a processor of said device.
Description
(1) The features of the invention mentioned above, as well as others, will emerge more clearly from a reading of the following description of an example embodiment, said description being given in relation to the accompanying drawings, among which:
(10) The invention is described hereinafter in a context where the display system comprises an image acquisition device, a processing module and an image display device. The invention can however be implemented in a context where the image acquisition device, the processing module and the display device are separate and distant geographically. In this case, the image acquisition device, the processing module and the image display device comprise communication means for communicating with each other.
(11) Moreover, the method according to the invention relies on an edge-based active contour segmentation method. It is shown below that other types of active contour segmentation methods can be used, such as, for example, region-based active contour segmentation methods, implicit active contour segmentation methods based on level sets, etc.
(12) In addition, the images used in the context of the invention are essentially monochrome images where each pixel of an image has only one component. The invention can however be applied to multicomponent images wherein each pixel of an image has a plurality of components.
(15) In one embodiment, the images supplied by the image acquisition device 51 are monochrome images.
(16) In one embodiment, the images supplied by the image acquisition device 51 are multicomponent images.
(18) The example described in relation to
(20) According to the example of hardware architecture depicted in
(21) In an embodiment in which the image acquisition device 51, the processing module 52 and the display device 53 are separate and distant, the image acquisition device 51 and the display device 53 also comprise a communication interface able to communicate with the communication interface 525 by means of a network such as a wireless network.
(22) The processor 521 is capable of executing instructions loaded in the RAM 522 from the ROM 523, from an external memory (not shown), from a storage medium (such as an SD card), or from a communication network. When the processing module 52 is powered up, the processor 521 is capable of reading instructions from the RAM 522 and executing them. These instructions form a computer program causing the implementation, by the processor 521, of all or part of the method described below in relation to
(23) The method described below in relation to
(25) The method described in relation to
(26) In a step 41, the processing module 52 estimates an item of information representing a global motion of a background of the current image I.sub.k (or motion of the background) with respect to the previous image I.sub.k-1. The previous image I.sub.k-1 is then a reference image for the current image I.sub.k for estimating the item of information representing the motion of the background. This step is implemented by a global motion estimation method. A global motion estimation makes the assumption that a set of pixels in any one image moves in the same way. This motion may be simple, such as a translation or rotation motion, or complex, represented for example by an affine transformation or a homography. A homography is an eight-parameter projective transformation of coordinates. In one embodiment, the processing module considers that the motion of the background between two successive images in the sequence of images is represented by a homography. Let (x', y') be the coordinates of a pixel P.sub.k.sup.Background belonging to the background of the current image I.sub.k and (x, y) the coordinates of the same pixel P.sub.k-1.sup.Background belonging to the background of the previous image I.sub.k-1. The estimation of the global motion made during the step 41 consists of determining the eight parameters of a homography making it possible to transform the coordinates (x, y) of each pixel P.sub.k-1.sup.Background in the previous image I.sub.k-1 into the coordinates (x', y') of a pixel P.sub.k.sup.Background in the current image I.sub.k. By determining the eight parameters of the homography, an item of information is determined representing a global motion of the background between the previous image I.sub.k-1 and the current image I.sub.k. dH.sub.k.sup.Background denotes the homography representing the global motion of the background between the previous image I.sub.k-1 and the image I.sub.k.
(27) In a step 42, the processing module 52 compensates for the motion of the background in the previous image I.sub.k-1 in order to obtain an adjusted previous image I.sub.k-1.sup.adj. To do this, the processing module 52 applies the homography dH.sub.k.sup.Background found during the step 41 to all the pixels of the previous image I.sub.k-1.
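The compensation of the step 42 can be sketched as follows; this is a minimal illustration assuming inverse mapping with nearest-neighbour sampling (the function name and the sampling choice are ours, not the patent's):

```python
import numpy as np

def warp_with_homography(img, H):
    """Warp `img` with the 3x3 homography H (mapping previous-image
    coordinates to current-image coordinates) by inverse mapping with
    nearest-neighbour sampling; unmatched pixels are set to 0."""
    h, w = img.shape
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = Hinv @ coords                       # homogeneous source coordinates
    sx = np.rint(src[0] / src[2]).astype(int)
    sy = np.rint(src[1] / src[2]).astype(int)
    inside = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(img)
    out[ys.ravel()[inside], xs.ravel()[inside]] = img[sy[inside], sx[inside]]
    return out

# Step 42: the adjusted previous image would be obtained as
# I_prev_adj = warp_with_homography(I_prev, dH_background)
```

A production implementation would use bilinear interpolation and a library warp; the nearest-neighbour version above only illustrates the coordinate mapping.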
(28) In a step 43, the processing module 52 obtains a contour C of the object 4 by applying a segmentation method to the current image I.sub.k. We describe hereinafter, in relation to
(29) In a step 44, the processing module 52 estimates an item of information representing a global motion of the object 4, between the current image and a previous image. As during the step 41, the processing module 52 considers that the global motion of the object 4 is represented by a homography, denoted dH.sub.k.sup.Object. In one embodiment, the homography dH.sub.k.sup.Object representing the global motion of the object 4 is obtained using the current image I.sub.k and the previous image I.sub.k-1 and taking into account the motion of the background dH.sub.k.sup.Background measured during the step 41. Let (x', y') be the coordinates of a pixel P.sub.k.sup.Object belonging to the object 4 in the current image I.sub.k and (x, y) the coordinates of the same pixel P.sub.k-1.sup.Object belonging to the object 4 in the previous image I.sub.k-1. The estimation of global motion made during the step 44 comprises a determination of the eight parameters of a homography dH.sub.k.sup.measured making it possible to transform the coordinates (x', y') of each pixel P.sub.k.sup.Object in the current image I.sub.k into the coordinates (x, y) of the pixel P.sub.k-1.sup.Object in the previous image I.sub.k-1. The following can be written:
dH.sub.k.sup.measured=dH.sub.k.sup.Object.Math.dH.sub.k.sup.Background
(30) The representative homography dH.sub.k.sup.Object is then obtained as follows:
dH.sub.k.sup.Object=dH.sub.k.sup.measured.Math.(dH.sub.k.sup.Background).sup.-1
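Numerically, removing the background component from the measured motion is a single matrix product. A sketch (the normalisation by the bottom-right coefficient is our assumption, a homography being defined up to a scale factor):

```python
import numpy as np

def object_motion(dH_measured, dH_background):
    """dH_object = dH_measured . (dH_background)^-1, normalised so that
    the bottom-right coefficient equals 1."""
    H = dH_measured @ np.linalg.inv(dH_background)
    return H / H[2, 2]
```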
(31) In one embodiment, the homography dH.sub.k.sup.Object representing the global motion of the object 4 is measured between an adjusted image and a non-adjusted image, which makes it possible not to involve the homography dH.sub.k.sup.Background. For example, the homography dH.sub.k.sup.Object is measured between the adjusted previous image I.sub.k-1.sup.adj and the current image I.sub.k.
(32) In a step 45, the processing module 52 applies an image restoration method to the current image I.sub.k. In one embodiment, the image restoration method applied uses the information representing the global motion of the background and the information representing the global motion of the object 4 estimated in the steps 41 and 44 in order to match the pixels of the current image I.sub.k and the pixels of the previous image I.sub.k-1. Let P.sub.k be a pixel of the current image I.sub.k and P.sub.k-1 a pixel of the previous image I.sub.k-1 matched with the pixel P.sub.k using the homographies dH.sub.k.sup.Background and dH.sub.k.sup.Object. The pixel P.sub.k (and respectively the pixel P.sub.k-1) has a non-zero positive integer number N.sub.c of components C.sub.i.sup.Pk (and respectively C.sub.i.sup.Pk-1), the index i lying between 1 and N.sub.c; a monochrome image has N.sub.c=1 component per pixel.
(33) Each component C.sub.i.sup.Pk of the pixel P.sub.k is then replaced by a restored component C.sub.i.sup.Pk,rest calculated as a weighted sum:
C.sub.i.sup.Pk,rest=W.Math.C.sub.i.sup.Pk+(1-W).Math.C.sub.i.sup.Pk-1 where W is a weighting coefficient between 0 and 1.
(34) In one embodiment, the image restoration method applied uses an image window comprising a number N.sub.I of images preceding the current image I.sub.k. The pixels of each of the images are matched using the information representing the motion of the background and the motion of the object 4 obtained for each of the images in the image window.
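A sketch of this windowed restoration, assuming the previous images have already been motion-compensated so that a given pixel position matches across the whole stack (the equal-weight default is an illustrative choice):

```python
import numpy as np

def restore_with_window(stack, weights=None):
    """Temporal restoration over an image window. `stack` has shape
    (N_I + 1, H, W): the current image followed by the N_I previous
    images, already matched pixel-to-pixel via the background and
    object motions. Each pixel is replaced by a weighted sum of the
    matched pixels; by default all images are weighted equally."""
    stack = np.asarray(stack, dtype=float)
    n = stack.shape[0]
    if weights is None:
        weights = np.full(n, 1.0 / n)
    w = np.asarray(weights, dtype=float).reshape(n, 1, 1)
    return (w * stack).sum(axis=0)
```

In practice the weights would favour the current image so that restoration does not blur fast changes; the patent leaves the weighting unspecified.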
(35) In this embodiment, the value of each component C.sub.i.sup.Pk of a pixel P.sub.k of the current image I.sub.k is replaced by a weighted sum of the values of this component for the pixel P.sub.k and for the pixels matched with the pixel P.sub.k in each of the N.sub.I images of the image window.
(37) In one embodiment, only the pixels of the current image I.sub.k belonging to the object 4 are restored.
(38) In one embodiment, a plurality of restoration methods are used according to the object to which each pixel of the image I.sub.k belongs. A first restoration method is applied to the pixels belonging to the background of the current image I.sub.k and a second restoration method is applied to the pixels belonging to the object 4 in the current image I.sub.k. The first restoration method is for example the restoration method using an image window wherein the image window comprises two images. The second restoration method is for example the restoration method using an image window wherein the image window comprises five images.
(40) The method in
(41) In the step 41, only the motion of the pixels corresponding to the background is sought. The motion of the pixels corresponding to the object 4 must not be taken into account.
(42) In a step 410, the processing module obtains a position and a form of the object 4 in the image I.sub.k.
(43) In one embodiment, the form and the position of the object 4 in the image I.sub.k are given by an ordered list of pixels, referred to as control points, belonging to a contour C of the object 4. The ordered list of control points may comprise all the pixels belonging to the contour C of the object 4 or a subset of pixels of the contour C making it possible to obtain a good approximation of the contour C. Running through the ordered list of control points makes it possible to obtain the contour C.
(44) In one embodiment, in the step 410, the processing module 52 makes an assumption of small motion of the object 4 between two successive images in the sequence of images. As stated above, the method described in relation to
(45) The first image in the sequence I.sub.k=0 is a particular case since this image is not preceded by any other image. In one embodiment, the position and the form of the object 4 in the first image I.sub.0 in the sequence of images are given by an operator. To do this, the operator can outline the object 4 by means of a pointing device, such as a mouse, on the display device 53, which in this case is a touch screen. The background of the first image is considered to have a zero motion. Consequently the motion of the background has not been compensated for the first image in the sequence. The method described in relation to
(46) In a step 411, the position and the form of the object in the current image I.sub.k being known, the processing module masks each pixel of the current image I.sub.k belonging to the object 4, i.e. the processing module masks each pixel belonging to the contour or internal to the contour of the object 4. In one embodiment, the processing module 52 associates each pixel of the current image I.sub.k with a first mask value when said pixel is masked and with a second mask value when said pixel is not masked. The first mask value is for example the value 1 and the second mask value is for example the value 0.
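The masking of the step 411 can be sketched as follows, with the object given by its ordered list of control points; the even-odd (ray-casting) rasterisation is an illustrative choice, and the mask values 1 (masked) and 0 (not masked) follow the example above:

```python
import numpy as np

def object_mask(h, w, contour):
    """Rasterise a closed contour (ordered list of (x, y) control
    points) into an h x w mask: 1 inside the contour, 0 elsewhere.
    Even-odd ray casting; a simple stand-in for the patent's masking."""
    mask = np.zeros((h, w), dtype=np.uint8)
    pts = np.asarray(contour, dtype=float)
    n = len(pts)
    for y in range(h):
        for x in range(w):
            inside = False
            for i in range(n):
                x1, y1 = pts[i]
                x2, y2 = pts[(i + 1) % n]
                if (y1 > y) != (y2 > y):
                    # x-coordinate where the edge crosses the scanline y
                    xi = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                    if x < xi:
                        inside = not inside
            if inside:
                mask[y, x] = 1
    return mask
```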
(47) In a step 412, the processing module 52 estimates the global motion of the background between the current image I.sub.k and the previous image I.sub.k-1 (i.e. the processing module 52 estimates the homography dH.sub.k.sup.Background). During this estimation only the pixels of the image I.sub.k that are not masked (i.e. the pixels associated with the second mask value) are taken into account. In addition, only the pixels of the previous image I.sub.k-1 that are not masked are taken into account, using the mask obtained during the application of the method described in relation to
(48) In the step 44, only the motion of the pixels corresponding to the object 4 is sought. The motion of the pixels corresponding to the background must not be taken into account.
(49) In the step 44, the processing module performs the step 410 of obtaining the position and the form of the object 4. The position and the form of the object 4 in the current image I.sub.k are obtained by means of the result of the step 43
(50) In the step 411, the processing module 52 masks each pixel of the current image I.sub.k belonging to the background, i.e. not belonging to the object 4.
(51) In the step 412, the processing module 52 estimates the motion of the object between the current image I.sub.k and the previous image I.sub.k-1 (i.e. the processing module estimates the homography dH.sub.k.sup.measured and then deduces therefrom the homography dH.sub.k.sup.Object). During this estimation only the pixels of the current image I.sub.k and of the previous image that are not masked are taken into account. Once again, the determination of the eight parameters of the homography dH.sub.k.sup.measured uses the projective fit method or the projective flow method.
(52) In one embodiment, when the step 410 is performed during the step 41, the processing module 52 makes an assumption of continuous motion of the object 4 in the sequence of images. The assumption of continuous motion means that the motion of the object 4 between the current image I.sub.k and the previous image I.sub.k-1 is the same as the motion of the object 4 between the previous image I.sub.k-1 and an image I.sub.k-2 preceding the previous image I.sub.k-1. The method described in relation to
(53) In one embodiment, in order to take into account the fact that the small-motion and continuous-motion assumptions make it possible to obtain only an approximation of the form and the position of the object 4 in the current image I.sub.k, an expansion is applied to the contour of the object 4 in the current image I.sub.k. The expansion is obtained for example by using a mathematical morphology method.
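The expansion can be sketched with a plain binary dilation (3x3 square structuring element); a real implementation would more likely call a morphology library, so this pure-NumPy version is only illustrative:

```python
import numpy as np

def dilate(mask, iterations=1):
    """Expand a binary object mask by one pixel per iteration using a
    3x3 square structuring element (shift-and-OR over the 8 neighbours),
    a simple stand-in for a mathematical-morphology expansion."""
    out = mask.astype(bool)
    for _ in range(iterations):
        p = np.pad(out, 1)  # pads with False
        out = (p[:-2, :-2] | p[:-2, 1:-1] | p[:-2, 2:]
             | p[1:-1, :-2] | p[1:-1, 1:-1] | p[1:-1, 2:]
             | p[2:, :-2] | p[2:, 1:-1] | p[2:, 2:])
    return out.astype(mask.dtype)
```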
(54) In other embodiments, other known methods for estimating parameters of a homography can be used.
(55) In other embodiments, the processing module considers that the motion between two successive images in the sequence of images is represented by other motion models such as a translation, a rotation, an affine transformation or a bilinear transformation.
(56) In one embodiment, prior to each estimation of global motion (of the background or of the object 4), each image involved in the global motion estimation is interpolated to a half, a quarter or an eighth of a pixel. In this way, the precision of the global motion estimation is improved.
(57) The projective fit method (and respectively the projective flow method) consists of finding, among a set of parameters of a motion model (here the eight parameters of a homography), the parameters of the motion model minimising a metric representing an error between a real motion of an object in an image and a motion of the object represented by the motion model. In the projective fit method (and respectively the projective flow method), each possible combination of parameters of the motion model is tested. Such an exhaustive method for the search for parameters of the motion model may have a high computing cost. It is possible to reduce the computing cost of the projective fit method (and respectively of the projective flow method) by using, for example, a gradient descent algorithm rather than an exhaustive search. However, a known problem with gradient descent methods is that, when the metric to be minimised has a plurality of local minima, the gradient descent method may converge towards a local minimum that is not a global minimum, i.e. that is not the minimum value that the metric can take. One method for ensuring a rapid convergence towards the global minimum of the metric consists of initialising the gradient descent method with a value close to the global minimum sought. In one embodiment, the exhaustive search for parameters of the motion model of the projective fit method (and respectively of the projective flow method) is replaced by a gradient descent method during the implementation of the step 412 during the steps 41 and 44.
(58) When the step 41 is implemented on the image I.sub.k, the gradient descent method is initialised to a value dH.sub.k-1.sup.Background representing the motion of the background found for the image I.sub.k-1. More precisely, the eight parameters of the homography dH.sub.k-1.sup.Background representing the motion of the background between the previous image I.sub.k-1 and the previous image I.sub.k-2 are used for initialising the eight parameters of the homography dH.sub.k.sup.Background representing the motion of the background between the current image I.sub.k and the previous image I.sub.k-1 in the gradient descent method.
(59) Likewise, when the step 44 is implemented on the image I.sub.k, the gradient descent method is initialised to a value (dH.sub.k-1.sup.Object*dH.sub.k.sup.Background). More precisely, the eight parameters of the homography (dH.sub.k-1.sup.Object*dH.sub.k.sup.Background) are used for initialising the eight parameters of the homography dH.sub.k.sup.measured representing the motion of the object 4 measured between the current image I.sub.k and the previous image I.sub.k-1 in the gradient descent method. The homography dH.sub.k.sup.Object is next deduced from the homography dH.sub.k.sup.measured and the homography dH.sub.k.sup.Background.
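The benefit of initialising the search near the previous estimate can be illustrated on a toy translation-only model; the full method estimates the eight homography parameters, and the greedy integer descent below merely stands in for the gradient descent (all names are ours):

```python
import numpy as np

def ssd(ref, cur, dx, dy):
    """Sum of squared differences after circularly shifting `cur` by (dx, dy)."""
    shifted = np.roll(np.roll(cur, dy, axis=0), dx, axis=1)
    return float(((shifted - ref) ** 2).sum())

def estimate_translation(ref, cur, init=(0, 0), max_iter=100):
    """Greedy local descent over integer translations. Starting from
    `init` -- e.g. the motion found for the previous image -- keeps the
    search close to the global minimum of the matching error."""
    best = tuple(init)
    best_cost = ssd(ref, cur, *best)
    for _ in range(max_iter):
        improved = False
        for ddx, ddy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            cand = (best[0] + ddx, best[1] + ddy)
            cost = ssd(ref, cur, *cand)
            if cost < best_cost:
                best, best_cost, improved = cand, cost, True
        if not improved:
            break
    return best
```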
(60) In one embodiment, following the estimation of the information representing the motion of the object 4 between the current image I.sub.k and the adjusted previous image I.sub.k-1.sup.adj (i.e. following the estimation of the eight parameters of the homography dH.sub.k.sup.Object), the information representing the estimated motion is filtered in order to guarantee regular variations of the motion of the object between two successive images in the sequence of images. The method described in relation to
(62) The method described in relation to
(63) In a step 800, the processing module 52 determines a passage matrix V.sub.k (and respectively a passage matrix V.sub.k-1) for estimating a motion of the object in a reference frame centred on a barycentre of the object 4 in the current image I.sub.k (and respectively in the adjusted previous image I.sub.k-1.sup.adj).
(65) In a step 801, the processing module 52 calculates an item of information dH.sub.k.sup.Object,c representing the motion of the object 4 in the reference frames centred on the barycentres of the object, from the information dH.sub.k.sup.Object, as follows:
dH.sub.k.sup.Object,c=V.sub.k.sup.-1.Math.dH.sub.k.sup.Object.Math.V.sub.k-1
(68) In a step 802, the processing module 52 obtains a matrix H.sub.k.sup.t representing translation components of the motion of the object 4 between the current image I.sub.k and the adjusted previous image I.sub.k-1.sup.adj as follows:
H.sub.k.sup.t=ApproxT(V.sub.k.sup.-1.Math.dH.sub.k.sup.Object.Math.V.sub.k-1) where ApproxT(X) is an approximation in translation of a homographic matrix X.
(70) In a step 803, the processing module 52 obtains a matrix H.sub.k.sup.h representing components of the motion of the object 4 between the current image I.sub.k and the adjusted previous image I.sub.k-1.sup.adj other than the translation components as follows:
H.sub.k.sup.h=(H.sub.k.sup.t).sup.-1.Math.V.sub.k.sup.-1.Math.dH.sub.k.sup.Object.Math.V.sub.k-1
(71) The components of the motion of the object 4 other than the translation components may for example be rotation, zoom, etc. components.
(72) In a step 804, the processing module 52 filters the translation components of the motion of the object 4 between the current image I.sub.k and the adjusted previous image I.sub.k-1.sup.adj as follows:
H.sub.k.sup.t,filt=α.Math.H.sub.k.sup.t+(1-α).Math.H.sub.k-1.sup.t,filt where H.sub.k-1.sup.t,filt is the filtered matrix obtained when the method was applied to the previous image I.sub.k-1 and α is a predefined weighting coefficient between 0 and 1.
(73) In a step 805, the processing module 52 filters the components of the motion of the object 4 between the current image I.sub.k and the adjusted previous image I.sub.k-1.sup.adj other than the translation components as follows:
H.sub.k.sup.h,filt=β.Math.H.sub.k.sup.h+(1-β).Math.H.sub.k-1.sup.h,filt where H.sub.k-1.sup.h,filt is the filtered matrix obtained when the method was applied to the previous image I.sub.k-1 and β is a predefined weighting coefficient between 0 and 1.
(74) In a step 806, the processing module 52 determines an item of information dH.sub.k.sup.Object,filt representing a filtered global motion of the object 4 as follows:
dH.sub.k.sup.Object,filt=V.sub.k.Math.H.sub.k.sup.t,filt.Math.H.sub.k.sup.h,filt.Math.(V.sub.k-1).sup.-1
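The steps 800 to 806 can be sketched as one pass of the following function; the smoothing weight `alpha` and all names are assumptions (the patent describes weighted sums but leaves the weights unspecified):

```python
import numpy as np

def approx_t(H):
    """Translation-only approximation of a homography: keep the
    translation column, drop the rotation/zoom/projective terms."""
    T = np.eye(3)
    T[0, 2] = H[0, 2] / H[2, 2]
    T[1, 2] = H[1, 2] / H[2, 2]
    return T

def filter_object_motion(dH_obj, Vk, Vk1, prev_t, prev_h, alpha=0.8):
    """Steps 800-806: move the object motion into the barycentre-centred
    frame, split it into a translation part and a residual part, smooth
    both against the matrices kept from the previous image, and map the
    result back into the image frame."""
    dH_c = np.linalg.inv(Vk) @ dH_obj @ Vk1          # centred motion
    Ht = approx_t(dH_c)                              # translation part
    Hh = np.linalg.inv(Ht) @ dH_c                    # remaining part
    Ht_f = alpha * Ht + (1 - alpha) * prev_t         # smooth translation
    Hh_f = alpha * Hh + (1 - alpha) * prev_h         # smooth the rest
    dH_f = Vk @ Ht_f @ Hh_f @ np.linalg.inv(Vk1)     # back to image frame
    return dH_f, Ht_f, Hh_f
```

The filtered matrices `Ht_f` and `Hh_f` are kept and reused as `prev_t` and `prev_h` when the next image is processed.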
(75) In the embodiment wherein the estimated motion is filtered in order to guarantee regular variations in the motion of the object between two successive images in the sequence of images, the filtered global motion of the object 4 represented by the filtered homography dH.sub.k.sup.Object,filt is used in place of the global motion estimated in the step 44 when the restoration step 45 is applied to the current image I.sub.k.
(77) The method described in relation to
(78) In a step 431, the processing module 52 obtains an initial contour C of the object 4 in the current image I.sub.k.
(79) In one embodiment, during the step 431, the processing module makes the assumption of a small motion of the object 4 between the current image I.sub.k and the previous image I.sub.k-1. In this case, as when the step 411 is performed during the step 44, the processing module 52 re-uses the ordered list of control points determined when the method described in relation to
(80) In a step 432, the processing module 52 calculates a cost of the current contour C by applying a method that we describe hereinafter in relation to
(81) In a step 433, the processing module 52 checks whether a condition for the stoppage of the active contour based segmentation method is fulfilled. In one embodiment, said iterative method stops when a number of iterations of the active contour based segmentation method reaches a maximum number of iterations.
(82) When the stop condition is fulfilled, the active contour based segmentation method ends in the step 434 and the processing module 52 performs the step 44 already explained.
(83) When the stop condition is not fulfilled, the processing module 52 implements a step 435. During the step 435, the processing module 52 implements a procedure for refinement of the contour C of the object 4 obtained during the preceding iteration of the active contour based segmentation method. The processing module 52 modifies the contour C of the object 4 obtained during the preceding iteration, referred to as the previous contour, so as to obtain a contour C of the object, referred to as the current contour, such that a cost of the current contour is less than a cost of the previous contour. The modification of the contour C uses, for example, a method described in the article "Snakes: Active Contour Models", Michael Kass, Andrew Witkin and Demetri Terzopoulos, International Journal of Computer Vision, 1(4):321-331, 1988.
(84) The step 435 is followed by the step 432.
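The iteration of the steps 432 to 435 can be sketched as the following generic loop (illustrative only; the cost and refinement functions are placeholders standing in for the energy calculation of step 432 and the snake update of step 435):

```python
def segment_active_contour(initial_contour, cost, refine, max_iterations=50):
    """Iterate the refinement of steps 432-435: propose a refined contour and
    keep it only if its cost is lower; stop after a maximum number of
    iterations (the stop condition of step 433)."""
    contour = initial_contour
    current_cost = cost(contour)           # step 432
    for _ in range(max_iterations):        # step 433: stop condition
        candidate = refine(contour)        # step 435: contour refinement
        candidate_cost = cost(candidate)
        if candidate_cost < current_cost:  # accept only cost-decreasing moves
            contour, current_cost = candidate, candidate_cost
    return contour                         # step 434: final contour
```

With a toy one-dimensional "contour" and a cost measuring distance to a target position, the loop converges to the cost minimum and then rejects further moves.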
(85) In one embodiment, in the step 431, the processing module makes the assumption of continuous motion of the object 4 between the current image I_k and the previous image I_k-1. In this case, the processing module 52 moves the control points in the ordered list of control points in order to obtain the initial contour C of the object 4 in the current image I_k. These control points, determined when the method described in relation to
(86) In one embodiment, the control points in the ordered list of control points are moved according to the filtered motion of the object 4 represented by the homography dH_k-1^Background·dH_k-1^Object,filt.
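Moving the control points by a homography, as in the step 431, can be sketched as follows (a minimal sketch; the function name is illustrative, and points are assumed to be 2-D Cartesian coordinates mapped through a 3×3 homography in homogeneous coordinates):

```python
def move_control_points(h, points):
    """Move 2-D control points by the 3x3 homography h to obtain the
    initial contour in the current image (step 431)."""
    moved = []
    for x, y in points:
        # homogeneous transform of (x, y, 1)
        xh = h[0][0] * x + h[0][1] * y + h[0][2]
        yh = h[1][0] * x + h[1][1] * y + h[1][2]
        w = h[2][0] * x + h[2][1] * y + h[2][2]
        moved.append((xh / w, yh / w))  # back to Cartesian coordinates
    return moved
```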
(88) The method described in relation to
(90) In a step 4322, the processing module 52 calculates an external energy E_ext of the contour C as follows:
E_ext=-(W_cont·E_cont+W_mvt·E_mvt)
where W_cont and W_mvt are predefined constants, for example equal to 1. E_cont is an energy, referred to as the contour energy, calculated on a gradient modulus image I_k^grad obtained from the current image I_k as a sum of values of gradient moduli calculated for pixels belonging to the contour C.
(92) It should be noted that various methods for calculating a gradient modulus image are applicable here. In order to obtain the gradient modulus image I_k^grad, it is possible for example to apply to each pixel of the image I_k: a linear combination of the pixels adjacent to said pixel, each adjacent pixel being weighted by a weight, the sum of said weights being equal to zero, and then to calculate the amplitude (i.e. the modulus) of this linear combination; a Sobel filter; a Canny filter; etc.
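The first option above can be sketched with central differences, whose weights (+1/2, -1/2) sum to zero, followed by the modulus of the resulting two-component combination (an illustrative sketch; border pixels are left at zero for simplicity):

```python
def gradient_modulus(image):
    """Gradient modulus image: horizontal and vertical central differences
    (zero-sum weighted combinations of neighbours), then their modulus."""
    h, w = len(image), len(image[0])
    grad = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = 0.5 * (image[y][x + 1] - image[y][x - 1])  # weights sum to 0
            gy = 0.5 * (image[y + 1][x] - image[y - 1][x])
            grad[y][x] = (gx * gx + gy * gy) ** 0.5         # the modulus
    return grad
```

On a vertical step edge the modulus is high at the transition and zero in the flat regions, which is what makes it useful as a contour energy.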
(93) In one embodiment, the image I_k^grad is not calculated, and the gradient modulus values used in the calculation of the contour energy E_cont are calculated solely at the positions of the control points PC_i.
(94) E_mvt is an energy dependent on the motion of the object 4 between the current image I_k and the adjusted previous image I_k-1^adj.
(96) In a step 4323, the processing module 52 calculates the cost J of the current contour C as follows:
J=E_ext+E_int
(97) It is therefore found that the motion of the object 4 is taken into account in the segmentation method according to the invention, which makes it possible to obtain a better segmentation of the object. The minimisation of the cost J makes it possible to maximise E_mvt and E_cont on the control points of the contour, in order to favour the zones with high spatial and/or temporal gradients.
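The cost calculation of the steps 4322 and 4323 can be sketched as follows. The sketch assumes scalar energies already computed, and assumes that the weighted sum enters E_ext with a negative sign so that minimising J does maximise E_cont and E_mvt, as paragraph (97) indicates; the function names are illustrative:

```python
def external_energy(e_cont, e_mvt, w_cont=1.0, w_mvt=1.0):
    """Step 4322 (sketch): the negation rewards contours lying on strong
    spatial gradients (e_cont) and strong motion (e_mvt)."""
    return -(w_cont * e_cont + w_mvt * e_mvt)

def contour_cost(e_int, e_cont, e_mvt):
    """Step 4323 (sketch): J = E_ext + E_int; minimising J maximises
    the contour and motion energies."""
    return external_energy(e_cont, e_mvt) + e_int
```

A contour with higher gradient and motion energies receives a lower cost, so it is preferred by the minimisation.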
(98) The principle of the invention remains the same when a type of active contour based segmentation method other than the edge based active contour segmentation methods is used. Each active contour based segmentation method comprises an estimation of an external energy E_ext. However, since the active contour based segmentation methods are suited to still images, they do not take the motions in a sequence of images into account during segmentation. The invention makes it possible to take these motions into account by integrating an energy representing the motion into the estimation of the external energy E_ext. This principle applies to the external energies E_ext calculated in the context of the region based active contour segmentation methods and of the level set based active contour segmentation methods.
(99) Until now we have considered images comprising only one object. The invention applies when the images in the sequence of images comprise a plurality of objects. In the steps 41 to 42, each object is masked during the estimation and the compensation for the motion of the background in an image. The steps 43, 44 and 45 are implemented independently on each object.
(100) Moreover, until now, we have considered that the object 4 was rigid and that consequently the apparent form of the object was approximately constant. In a real case, depending on the motions of the object and/or of the camera, the object may be seen from different viewing angles, which may cause deformations in the apparent form of the object. In one embodiment, where a variation in the form of the object on a plurality of successive images in the sequence of images exceeds a predefined threshold, the processing module 52 considers that the object appearing in the images has changed. In this case, when the processing module 52 detects a change of object, it considers that a new sequence of images is starting and invites the operator to outline the object again. In another embodiment, the processing module 52 applies the segmentation method described in relation to
(101) In one embodiment, when the images supplied by the image acquisition device are multicomponent images, the processing module 52 applies the restoration method described in relation to
(102) In one embodiment, when the images supplied by the image acquisition device are multicomponent images, the processing module 52 applies the restoration method described in relation to