Method for measuring objects in digestive tract based on imaging system
11538151 · 2022-12-27
Assignee
Inventors
Cpc classification
G06T7/80
PHYSICS
International classification
G06T7/80
PHYSICS
Abstract
A method for measuring objects in a digestive tract based on an imaging system is provided. The imaging system captures a detection image in the measurement stage. The depth distance z.sub.i from a target point P′ to a board is calculated, and a correction factor is obtained by comparing the predicted brightness g.sup.−1(z.sub.i) of a reference point P in the detection image with its actual pixel brightness. The depth image z(x, y) from the actual position of each pixel to the board is calibrated by the correction factor. The scale r of each pixel is calculated according to the depth image z(x, y). The actual two-dimensional coordinates S.sub.i′ of each pixel are calculated by the scale r, and the actual three-dimensional coordinates (S.sub.i′, z(x, y)) of each pixel are obtained. The distance between any two pixels in the detection image, or the area within any range, can then be calculated.
Claims
1. A method for measuring objects in a digestive tract based on an imaging system, comprising: simulating an environment in the digestive tract and starting a calibration process of the imaging system; said calibration process, comprising setting a plurality of calibration points Q′ on a transparent enclosure of the imaging system; controlling a photographing unit of the imaging system to photograph and form a calibration image, and recording the calibration point Q′ imaged in the calibration image as an imaging point Q; calculating and determining a relationship between a relative angle θ of the calibration point Q′ relative to the optical axis of the photographing unit and a pixel distance Δq′ from the imaging point Q to the center of the calibration image, and recording it as:
θ=f(Δq′) (1); calculating and determining a relationship between a brightness φ of any pixel in the calibration image and a depth distance z from an actual position of the pixel in the simulated digestive tract to a board where the photographing unit is disposed, and recording it as:
z(x,y)=g(φ(x,y)) (2); calculating a relationship between a scale r of any pixel in the calibration image and the depth distance z from the actual position of the pixel in the simulated digestive tract to the board where the photographing unit is disposed, where scale is the actual length represented by unit pixel in the image, and recording it as:
r=dz (3); performing a measurement process after the calibration process is completed; said measurement process, comprising placing the imaging system in the digestive tract; capturing and obtaining a detection image; determining a region in the detection image where the transparent enclosure contacts with the digestive tract wall, and recording it as a contact region, and setting at least one reference point P in the contact region, and recording the actual position of the reference point in the digestive tract as a target point P′; calculating a pixel distance Δp from the reference point P to the center of the detection image separately, and putting it into equation 1 to obtain a relative angle θ of the target point P′ relative to the optical axis of the photographing unit; calculating an actual distance from the target point P′ to the board separately and recording it as depth distance z.sub.i; obtaining a predicted brightness g.sup.−1(z.sub.i) of the reference point P in the detection image according to equation 2 and the depth distance z.sub.i; comparing the predicted brightness g.sup.−1(z.sub.i) of the reference point P with an actual pixel brightness img(P.sub.i) of the reference point P to obtain a correction factor k.sub.i, and recording it as:
k.sub.i=g.sup.−1(z.sub.i)/img(P.sub.i) (4); obtaining a mean value k̄ of all correction factors k.sub.i; calibrating all pixels in the detection image with the mean value k̄ to obtain a depth image z(x, y) from the actual position of each pixel in the digestive tract to the board, and recording it as:
z(x,y)=g(k̄·img(x,y)) (5); calculating the scale r of each pixel in the detection image according to equation 3 and the depth image z(x, y); obtaining the pixel coordinates S.sub.i of each pixel in the detection image, and calculating the actual two-dimensional coordinates S.sub.i′ of each pixel by the scale r; integrating to obtain the actual three-dimensional coordinates (S.sub.i′, z(x, y)) of each pixel; and calculating or measuring the distance between any two pixels in the detection image or the area within any range.
2. The method of claim 1, wherein the step “determining the region in the detection image where the transparent enclosure contacts with the digestive tract wall” comprises: selecting an edge part of the detection image away from the center of the detection image; obtaining a brightness T of each pixel in the edge part; gathering pixels in a region in which the brightness T is greater than a threshold τ as the contact region.
3. The method of claim 2, wherein the step “selecting the edge part of the detection image away from the center of the detection image” comprises: marking an inner ring on the detection image that is centered on the center of the detection image, wherein the inner ring is close to the edge of the detection image and does not intersect with it; marking an outer ring on the detection image that is centered on the center of the detection image, wherein the outer ring intersects with the edge of the detection image; and recording the part enclosed by the inner ring, the outer ring, and the image edge as the edge part.
4. The method of claim 1, wherein the digestive tract comprises a plurality of regions and the imaging system comprises a plurality of exposure levels, and wherein, after the step “obtaining the mean value k̄ of all correction factors k.sub.i”, the mean value k̄ is stored and updated as the correction factor k corresponding to the current exposure level and digestive tract region.
5. The method of claim 4, wherein, after two or more mean values k̄ are obtained at the same exposure level and in the same digestive tract region, an average of the mean values k̄ is calculated before storing and updating.
6. The method of claim 1, wherein the step “calculating the actual distance from the target point P′ to the board separately and recording it as depth distance z.sub.i of the reference point P” comprises: obtaining the radius R of a front enclosure of the transparent enclosure; calculating a distance R cos θ from the target point P′ to the spherical center of the front enclosure separately; obtaining the axial length H of an annular enclosure of the transparent enclosure; calculating the depth distance z.sub.i=R cos θ+H from the target point P′ to the board separately.
7. The method of claim 1, wherein the step “obtaining the depth distance from the actual position of each pixel in the digestive tract to the board” or “integrating to obtain the actual three-dimensional coordinates (S.sub.i′,z(x,y)) of each pixel” further comprises: determining a value of the depth distance z of each pixel; when t.sub.1≤z≤t.sub.2, it is determined that the pixel is within an effective section of the detection image; when z<t.sub.1 or z>t.sub.2, it is determined that the pixel is within an ineffective section of the detection image.
8. The method of claim 7, wherein the step “calculating or measuring the distance between any two pixels in the detection image or the area within any range” is followed by: calculating a straight-line distance between any two pixels selected by a user in the effective section according to the three-dimensional coordinates of the two pixels; or, building a three-dimensional image of any area according to the three-dimensional coordinates of pixels in the area selected by a user in the effective section, and calculating a straight-line distance between any two pixels selected by the user from the three-dimensional image; or, calculating the area of any region selected by a user in the effective section according to the three-dimensional coordinates of the region; or, forming a scale in the effective section, and marking graduations on the scale as those of actual length; or, identifying the lesion region in the effective section automatically, and calculating the size or area of the region.
9. The method of claim 7, wherein t.sub.1=0, and t.sub.2=60 mm.
10. The method of claim 1, wherein the step “capturing and obtaining a detection image” comprises: controlling the imaging system to capture and obtain an image; correcting a radial distortion of the captured image and forming a detection image, and recording it as:
img_out(x,y)=img_in(x(1+l.sub.1R.sup.2+l.sub.2R.sup.4),y(1+l.sub.1R.sup.2+l.sub.2R.sup.4)) (6); where R=√{square root over (x.sup.2+y.sup.2)} represents the pixel distance from the pixel to the center of the detection image, l.sub.1 and l.sub.2 represent distortion parameters of the imaging system, x represents the x-coordinate of the pixel, y represents the y-coordinate of the pixel, img_in represents the input image, and img_out represents the corrected image.
11. A measuring system for objects in a digestive tract based on an imaging system, comprising: one or more computer processors configured: to identify a contact region and a reference point P; to calculate the relationship between a relative angle θ of a calibration point Q′ relative to an optical axis of a photographing unit of the imaging system and a pixel distance Δq′ from an imaging point Q to a center of a calibration image and record it as equation 1, and to calculate a relationship between a brightness φ of any pixel in the calibration image and the depth distance z from the actual position of the pixel in a simulated digestive tract to the imaging system, and record it as equation 2, and to calculate and determine the relationship between a scale r of any pixel in the calibration image and the depth distance z from the actual position of the pixel in the simulated digestive tract to the imaging system, and record it as equation 3;
θ=f(Δq′) (1);
z(x,y)=g(φ(x,y)) (2);
r=dz (3); to identify the brightness of any pixel or all pixels in the calibration image or a detection image; to obtain equation 1 and the pixel distance Δp from the reference point P to the center of the detection image to calculate the relative angle θ of a target point P′ relative to the optical axis of the photographing unit, and to calculate actual distance from the target point P′ to a board where the photographing unit is disposed and record it as depth distance z.sub.i, and to obtain the depth distance z.sub.i, equation 2 and the actual pixel brightness of the reference point P to calculate a correction factor k.sub.i, and to obtain equation 2 to calculate and obtain the depth distance z(x, y) from the actual position of each pixel in the digestive tract to the board, and to obtain equation 3 to calculate the actual two-dimensional coordinates S.sub.i′, and integrate to obtain the actual three-dimensional coordinates of (S.sub.i′, z(x, y)) of each pixel.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
(5) In order to enable those skilled in the art to better understand the technical solutions disclosed, the present invention will be described in detail below with reference to the accompanying drawings and preferred embodiments. However, the embodiments are not intended to limit the invention; obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those having ordinary skill in the art without creative work, based on the embodiments of the present invention, are included in the scope of the present invention.
(6) Referring to
(7) Moreover, since the imaging system, especially the capsule endoscope of the embodiment, is in the digestive tract, the transparent enclosure generally comes into contact with the inner wall of the digestive tract. In the esophagus and large intestine in particular, the lumen space is small because of insufficient water. During peristalsis of the colon and swallowing in the esophagus, the inner walls of the colon and esophagus can wrap and squeeze the capsule endoscope, so the inner wall usually comes into contact with the transparent enclosure. In the small intestine, due to its curved structure, smaller inner diameter, and more frequent contraction, the inner wall also contacts the transparent enclosure. Therefore, in the esophagus, small intestine, and large intestine, it can be assumed that the inner wall of the digestive tract is in contact with the imaging system, so the captured image has a part where the transparent enclosure contacts the digestive tract.
(8) Specifically, the measurement method comprises:
(9) simulating an environment of the digestive tract and entering a calibration stage of the imaging system;
(10) setting a plurality of calibration points Q′ on the transparent enclosure of the imaging system;
(11) controlling the photographing unit 102 of the imaging system to photograph and form a calibration image, and recording the calibration point Q′ imaged in the calibration image as an imaging point Q;
(12) calculating and determining the relationship between the relative angle θ of the calibration point Q′ relative to the optical axis of the photographing unit 102 and the pixel distance Δq′ from the imaging point Q to the center of the calibration image, and recording it as:
θ=f(Δq′) (1);
(13) calculating and determining the relationship between the brightness φ of any pixel in the calibration image and the depth distance z from the actual position of the pixel in the simulated digestive tract to the board 104 where the photographing unit 102 is disposed, and recording it as:
z(x,y)=g(φ(x,y)) (2);
(14) calculating the relationship between the scale r of any pixel in the calibration image and the depth distance z from the actual position of the pixel in the simulated digestive tract to the board 104 where the photographing unit 102 is disposed, where scale is the actual length represented by unit pixel in the calibration image, and recording it as:
r=dz (3);
(15) entering a measurement stage after calibration is completed;
(16) placing the imaging system in the digestive tract;
(17) capturing and obtaining a detection image;
(18) determining the region in the detection image where the transparent enclosure contacts with the digestive tract wall, and recording it as a contact region, and setting at least one reference point P in the contact region, and recording the actual position of the reference point in the digestive tract as a target point P′;
(19) calculating the pixel distance Δp from the reference point P to the center of the detection image separately, and putting it into equation 1 to obtain the relative angle θ of the target point P′ relative to the optical axis of the photographing unit 102;
(20) calculating the actual distance from the target point P′ to the board 104 separately and recording it as a depth distance z.sub.i;
(21) obtaining the predicted brightness g.sup.−1(z.sub.i) of the reference point P in the detection image according to equation 2 and the depth distance z.sub.i;
(22) comparing the predicted brightness g.sup.−1(z.sub.i) of the reference point P with the actual pixel brightness img(P.sub.i) of the reference point P to obtain a correction factor k.sub.i, and recording it as:
(23) k.sub.i=g.sup.−1(z.sub.i)/img(P.sub.i) (4);
(24) obtaining a mean value k̄ of all correction factors k.sub.i, and recording it as: k̄=(k.sub.1+k.sub.2+ . . . +k.sub.n)/n (5);
(25) calibrating all pixels in the detection image with the mean value k̄ to obtain a depth image z(x, y) from the actual position of each pixel in the digestive tract to the board 104, and recording it as:
z(x,y)=g(k̄·img(x,y)) (6);
(26) calculating the scale r of each pixel in the detection image according to equation 3 and the depth image z(x, y);
(27) obtaining the pixel coordinates S.sub.i of each pixel point in the detection image, and calculating the actual two-dimensional coordinates S.sub.i′ of each pixel in the detection image by the scale r;
(28) integrating to obtain the actual three-dimensional coordinates (S.sub.i′, z(x, y)) of each pixel;
(29) calculating or measuring the distance between any two pixels in the detection image or the area within any range.
(30) In the above method, calibration is first performed in the calibration stage. The first step is to obtain the relationship between the relative angle θ of the calibration point Q′ relative to the optical axis of the photographing unit 102 and the pixel distance Δq′ from the imaging point Q to the center of the calibration image. The second step is to obtain the relationship between the brightness φ of any pixel in the calibration image and the depth distance z from the actual position of the pixel in the simulated digestive tract to the board 104 where the photographing unit 102 is disposed. The third step is to obtain the relationship between the scale r of any pixel in the calibration image and the depth distance z from the actual position of the pixel in the simulated digestive tract to the board. Then, in the actual measurement stage, determining a region in the detection image where the digestive tract is in contact with the transparent enclosure. The target point P′ in the contact region is both a point on the transparent enclosure and a point within the digestive tract. Therefore, the relative angle θ of the target point P′ relative to the optical axis of the photographing unit 102 can be obtained through the reference point P of the target point P′ in the detection image. Then the actual distance from the target point P′ to the board 104 can be calculated through the structure of the imaging system, where the actual distance is the depth distance z.sub.i. Then the predicted brightness g.sup.−1(z.sub.i) of the reference point P can be calculated. Then, the predicted brightness g.sup.−1(z.sub.i) of the reference point P is compared with the actual pixel brightness img(P.sub.i) of the reference point P to obtain the correction factor k.sub.i. 
After obtaining the correction factor k.sub.i, all pixels in the detection image are corrected to obtain the predicted brightness of each pixel, so as to obtain the depth distance z(x, y) from each pixel to the imaging system. Finally, the actual two-dimensional coordinates S.sub.i′ on the xoy plane of each pixel are obtained through the scale. The information described above is integrated to obtain the actual three-dimensional coordinates (S.sub.i′, z(x, y)) of each pixel. Therefore, after knowing the actual three-dimensional coordinates of each pixel, the distance between any two pixels in the detection image or the area within any range can be calculated.
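The brightness-to-depth correction at the heart of this procedure can be sketched in Python. The closed-form model g used here (an inverse-square brightness falloff with assumed constants A and B) is purely illustrative; in the patent, g is determined empirically during calibration as equation 2, and the ratio convention for k.sub.i is likewise an assumption of this sketch:

```python
import numpy as np

A, B = 3600.0, 2.0   # assumed calibration constants of the illustrative model

def g_inv(z):
    """Predicted brightness at depth z (inverse of the depth model g)."""
    return A / (z + B) ** 2

def g(phi):
    """Depth recovered from (corrected) pixel brightness, z = g(phi)."""
    return np.sqrt(A / phi) - B

def depth_image(img, ref_brightness, ref_depths):
    """Correct all pixel brightnesses with the mean factor and map to depth.

    ref_brightness: actual brightness img(P_i) at each reference point P
    ref_depths:     geometric depth z_i of the matching target point P'
    """
    # each k_i compares predicted with actual brightness at a reference point
    k = np.mean([g_inv(z) / b for b, z in zip(ref_brightness, ref_depths)])
    return g(k * np.asarray(img, dtype=float))

# One reference point whose geometry gives z_i = 10 mm but whose pixel reads
# half the predicted brightness (e.g. darker mucosa): the factor becomes 2.
img = np.array([[g_inv(10.0) / 2, g_inv(30.0) / 2]])
z = depth_image(img, ref_brightness=[g_inv(10.0) / 2], ref_depths=[10.0])
```

Because every pixel shares the same scene-dependent attenuation, correcting with the single mean factor recovers the calibrated brightness-depth relationship for the whole detection image.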
(31) Moreover, assuming that the transparent enclosure is in contact with the digestive tract, the target point P′ in the digestive tract is the target point P′ on the transparent enclosure, so that the actual coordinates of pixels in the captured image can be directly obtained according to the internal structure of the imaging system, with no need for the structure of other components, and the overall structure of the imaging system is relatively simple.
(32) The actual pixel brightness img(P.sub.i) of the reference point P is the brightness of the pixel at point P in the detection image. The form of the function g is related to the reflection coefficient of the object surface, the exposure parameters, the media environment, the number and distribution of LEDs, the camera lens performance of the photographing unit 102, and the response of the image sensor of the photographing unit 102. Therefore, although the relationship between the brightness φ of any pixel in the calibration image and the depth distance z from the actual position of the pixel in the simulated digestive tract to the board where the photographing unit is disposed is obtained during calibration, once the actual distance z.sub.i and the predicted brightness g.sup.−1(z.sub.i) are obtained in the subsequent process, they still need to be compared with the actual pixel brightness of the reference point P to obtain a correction factor k.sub.i, which is used to correct the actual brightness of the other pixels and thereby obtain their depth distance z.
(33) In addition, in the final calculation process, two pixels or any area can be selected manually or by a system, and then measured by the system; or, the system provides a scale, and the values are directly read or measured manually.
(34) Therefore, each imaging system needs to first enter the calibration stage to measure and determine different parameters of the imaging system. Accordingly, even though there is difference between imaging systems, different parameters of the imaging system can be obtained in the process of calibration, and the parameters are needed for measurement and calculation in the subsequent process, so as to avoid errors due to difference in equipment.
(35) It should be noted that, in the embodiment, when the photographing unit 102 takes an image, the optical axis of the photographing unit 102 passes through the center of the image, and the line connecting the optical axis and the center of the image is recorded as a reference line, that is, the direction of the z-axis. Therefore, the depth distance from the reference point P in the image to the imaging system does not refer to the linear distance between the two, but to the distance between the two along the direction of the reference line. In addition, the image taken by the imaging system is preset with a two-axis coordinate system recorded as the xoy plane coordinate system. The pixel coordinates S.sub.i and the actual two-dimensional coordinates S.sub.i′ of each pixel are based on the xoy plane. Then the depth image z(x, y) is obtained, and the two are combined into the three-dimensional coordinates.
(36) First of all, during calibration, the imaging system needs to be placed in a calibration box. The calibration box is a dark chamber that is opaque to light. The calibration box comprises a fixing frame for fixing the imaging system and a target board for the imaging system to photograph, and is filled with a simulation medium that can be simulated digestive fluid or air. The imaging system can move on the fixing frame. The imaging system further comprises a plurality of LEDs, where the LEDs and the photographing unit 102 are both arranged on the inner board of the capsule endoscope; the number of the LEDs is 2 to 5, and they are distributed around the photographing unit 102. Therefore, the imaging system can be set to take images at different positions, under different lighting conditions, in different simulation media, and on different target boards to obtain the parameter information. The target board can also be replaced, for example with a hard board simulating the mucosal surface or imitating the color of mucosa. When the calibration box is used for other calibrations, only the target board needs to be replaced with a whiteboard, a chessboard, or a line-pairs card, so that white balance correction, camera parameter calibration, resolution measurement, and other calibrations can be performed. The light field distribution of the LEDs affects the distribution of the brightness φ of any pixel in the calibration image, so each imaging system must be calibrated separately.
(37) Moreover, during calibration, after each image is obtained, a radial distortion correction of the image is required, because the captured image is affected by the distortion parameters of different cameras. Distortion correction can therefore improve the accuracy of size calculation of objects in the image, especially the measurement of objects at the edge of the image. The radially corrected image can then be calibrated to obtain the parameter information. The details of radial distortion correction are described later.
(38) In the measurement stage, once the correction factor is obtained, all pixels in the image can be calibrated, and the actual depth distance z(x, y) from the actual position of each pixel in the digestive tract to the board 104 can be obtained. Due to the different photographing environments of the imaging system and its different positions in the digestive tract, the correction factor is affected accordingly. Specifically, the digestive tract comprises a plurality of regions, and the imaging system comprises a plurality of exposure levels according to different photographing environments. So, after the step “obtaining the mean value k̄ of all correction factors k.sub.i”, the mean value k̄ is stored and updated as the correction factor k corresponding to the current exposure level and digestive tract region.
(39) If two or more correction factors k are obtained at the same exposure level and in the same digestive tract region, the average of those correction factors k should be calculated before storing and updating. Specifically, as shown in Table 1, the digestive tract regions include the esophagus, small intestine, large intestine, etc., the exposure levels include 1, 2, . . . , N, and each combination of exposure level and digestive tract region stores a different correction factor k. Therefore, if no reference point P is available to calculate the correction factor k, the corresponding correction factor can be selected from the table according to the exposure level and digestive tract region.
(40) TABLE 1

  Exposure          Digestive tract regions
  levels     Esophagus    Small intestine    Large intestine
  1          k11          k12                k13
  2          k21          k22                k23
  . . .      . . .        . . .              . . .
  N          kN1          kN2                kN3
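The storage scheme of Table 1 can be sketched in Python as a small lookup structure; the class and method names below are illustrative, not part of the patent. Repeated entries for the same (exposure level, region) cell are averaged, as the description requires:

```python
from collections import defaultdict

class CorrectionTable:
    """Correction factors k keyed by (exposure level, digestive tract region)."""

    def __init__(self):
        # each cell keeps a running [sum, count] so repeated updates average out
        self._cells = defaultdict(lambda: [0.0, 0])

    def update(self, level, region, k):
        """Store a new factor; factors landing in the same cell are averaged."""
        cell = self._cells[(level, region)]
        cell[0] += k
        cell[1] += 1

    def lookup(self, level, region):
        """Return the stored (averaged) factor, or None if never calibrated."""
        cell = self._cells.get((level, region))
        return cell[0] / cell[1] if cell else None

table = CorrectionTable()
table.update(2, "small intestine", 1.8)
table.update(2, "small intestine", 2.2)  # same cell: averaged to 2.0
k = table.lookup(2, "small intestine")
```

When no reference point P is available in a given image, `lookup` plays the role of the fallback selection from Table 1.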
(41) As described above, the premise of the present invention is to assume that the inner wall of the digestive tract is in contact with the transparent enclosure and analyze the target point P′ in the contact region. Therefore, how to determine the contact region in the detection image is a difficult point. The step “determining the region in the detection image where the transparent enclosure contacts with the digestive tract wall” comprises:
(42) selecting the edge part of the detection image away from the center of the detection image;
(43) obtaining the brightness T of each pixel in the edge part;
(44) gathering the pixels in a region in which the brightness T is greater than a threshold τ as the contact region.
(45) Obviously, due to the concentrated propagation of light beams emitted from the LEDs of the imaging system and the light reflection on the inner wall of the digestive tract, there is a clear brightness step difference in the images taken by the imaging system. When the target is closer to the transparent enclosure, the brightness of the target in the corresponding image is higher, and when the target is farther from the transparent enclosure, the brightness of the target in the corresponding image is lower.
(46) Therefore, the photographing environments in the digestive tract can be simulated in the early simulation experiment stage to calculate the brightness distribution of the contact region, so as to derive the threshold τ. The region composed of all pixels greater than the threshold τ can be regarded as the contact region.
(47) In addition, determination of the contact region by the threshold τ may have errors. For example, omitting some contact points results in a determination with omission, while treating some points that are not in contact but are close as contact points results in a misjudgment. A determination with omission basically does not cause a calculation error. A misjudgment, in which points that are not actually in contact are determined as contact points, may cause an error. However, this error cannot be very large and can be ignored, mainly for the following three reasons. First, the threshold τ can be a relatively large value, under which the determination is more likely to be one with omission rather than a misjudgment. Second, even if misjudged points do not contact the transparent enclosure, they are usually very close to it, so the error can be ignored. Third, in the above steps, at least one reference point P is selected in the contact region, and in actual operation a plurality of reference points P are usually selected in order to make the mean value k̄ of the correction factors more accurate, so occasional misjudged points have little influence on the result.
(48) Further, in the above steps, the edge part of the detection image away from the center of the detection image should be selected. Due to digestive tract peristalsis, the transparent enclosure of the imaging system usually contacts the inner wall of the digestive tract at its edge part. Therefore, the contact region is usually formed at the edge part of the detection image. Specifically, the step “selecting the edge part of the detection image away from the center of the detection image” comprises:
(49) marking an inner ring on the detection image that is centered on the center of the detection image, where the inner ring is close to the edge of the detection image and does not intersect with it;
(50) marking an outer ring on the detection image that is centered on the center of the detection image, and the outer ring intersects with the edge of the detection image;
(51) recording the part enclosed by the inner ring, the outer ring, and the image edge as the edge part.
(52) Therefore, the range of the edge part can be determined. The range should be neither too large, which would complicate determination of the threshold τ, nor too small, which would complicate selection of the reference point P. The radii of the inner ring and the outer ring are determined by the size of the detection image.
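The contact-region steps above (annular edge part plus brightness threshold) can be sketched as follows. The ring radii, expressed here as fractions of the image half-width, and the threshold value are illustrative assumptions, not values fixed by the patent:

```python
import numpy as np

def contact_region(img, r_inner_frac=0.7, r_outer_frac=1.0, tau=200):
    """Mask of pixels in the edge annulus whose brightness exceeds tau.

    The annulus between the inner and outer rings (both centered on the
    image center) is the "edge part"; pixels brighter than the calibrated
    threshold tau within it form the contact region.
    """
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    y, x = np.mgrid[0:h, 0:w]
    r = np.hypot(x - cx, y - cy)           # pixel distance to image center
    half = min(h, w) / 2.0
    annulus = (r >= r_inner_frac * half) & (r <= r_outer_frac * half)
    return annulus & (img > tau)

img = np.full((9, 9), 50)
img[0, 4] = 255   # bright pixel at the image edge -> in the contact region
img[4, 4] = 255   # bright but central pixel -> outside the edge part
mask = contact_region(img)
```

A reference point P would then be chosen among the `True` pixels of `mask`.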
(53) As described above, during calibration, to ensure image accuracy, after each image is obtained, a radial distortion correction for the image is required. Therefore, in the specific implementation process of the present invention, a radial distortion correction is also required for the captured images. Specifically, the step “capturing and obtaining a detection image” comprises:
(54) controlling the imaging system to capture and obtain an image;
(55) correcting the radial distortion of the captured image and forming a detection image, and recording it as:
img_out(x,y)=img_in(x(1+l.sub.1R.sup.2+l.sub.2R.sup.4),y(1+l.sub.1R.sup.2+l.sub.2R.sup.4)) (7);
(56) where R=√{square root over (x.sup.2+y.sup.2)} represents the pixel distance from the pixel to the center of the detection image, l.sub.1 and l.sub.2 represent distortion parameters of the imaging system, x represents the x-coordinate of the pixel, y represents the y-coordinate of the pixel, img_in represents the input image, and img_out represents the corrected image.
(57) Therefore, after correcting the radial distortion of the captured image, the detection image is obtained, and then an edge part is selected on the detection image to obtain the contact region and the reference point P.
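The radial correction of equation (7) can be sketched as an inverse-mapping resample; nearest-neighbor sampling and the centering convention are implementation choices of this sketch, not mandated by the patent:

```python
import numpy as np

def undistort(img_in, l1, l2):
    """Radial distortion correction: each output pixel (x, y), taken relative
    to the image center, samples img_in at (x*s, y*s) with
    s = 1 + l1*R^2 + l2*R^4, R^2 = x^2 + y^2, per equation (7).
    l1, l2 are the per-camera distortion parameters from calibration."""
    h, w = img_in.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    out = np.zeros_like(img_in)
    for yi in range(h):
        for xi in range(w):
            x, y = xi - cx, yi - cy
            r2 = x * x + y * y
            s = 1.0 + l1 * r2 + l2 * r2 * r2
            sx, sy = int(round(cx + x * s)), int(round(cy + y * s))
            if 0 <= sx < w and 0 <= sy < h:   # skip samples outside the input
                out[yi, xi] = img_in[sy, sx]
    return out

img = np.arange(25).reshape(5, 5)
same = undistort(img, 0.0, 0.0)   # zero distortion parameters: identity
```

With nonzero l1, l2 the sampling position moves outward with R, which is why edge measurements benefit most from the correction.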
(58) Further, as shown in the drawings, the step “calculating the actual distance from the target point P′ to the board 104 separately and recording it as a depth distance z.sub.i” comprises:
(59) obtaining the radius R of the front enclosure 103 of the transparent enclosure; calculating the depth distance R cos θ from the target point P′ to the spherical center of the front enclosure 103 separately;
(60) obtaining the axial length H of the annular enclosure of the transparent enclosure; calculating the depth distance z.sub.i=R cos θ+H from the target point P′ to the board 104 separately.
(61) The coordinate of the target point P′ in the z-axis direction is R cos θ+H.
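The geometric steps above amount to one line of trigonometry once θ is known from equation 1. In the sketch below the linear angle map `f` is purely illustrative (the real f is measured during calibration), while R and H are the enclosure radius and annular length from the description:

```python
import math

def target_depth(delta_p, f, R, H):
    """Depth z_i of a target point P' on the spherical front enclosure.

    delta_p: pixel distance from the reference point P to the image center
    f:       calibrated map from pixel distance to relative angle (equation 1)
    R:       radius of the front enclosure; H: axial length of the annular enclosure
    """
    theta = f(delta_p)               # θ = f(Δp)
    return R * math.cos(theta) + H   # z_i = R·cosθ + H

# Illustrative linear angle map: 200 px from the center corresponds to 90°.
f = lambda dq: dq * (math.pi / 2) / 200.0
z_on_axis = target_depth(0.0, f, R=5.0, H=10.0)      # θ = 0: z = R + H
z_at_edge = target_depth(200.0, f, R=5.0, H=10.0)    # θ = 90°: z = H
```

The same θ also gives the in-plane offset R sin θ of P′ used for the xoy coordinates.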
(62) Further, to facilitate subsequent calculation in detail, the actual coordinates of the target point P′ in the xoy plane can also be calculated. Specifically, the distance between the target point P′ and the point A in the xoy plane is R sin θ. Referring to
(63) However, even if the depth distance from the actual position of each pixel in the digestive tract to the board 104 is obtained, if the object is too far from the capsule endoscope, the captured image will be too dark, and a large error is easily introduced into the depth distance z calculated from the image brightness via equation 2; moreover, the image may be blurred, with reduced resolution and larger noise, so the calculation error of the depth distance z becomes greater. Therefore, an effective section of the detection image must be defined, and only the detection image within the effective section can be used for measurement and calculation.
(64) Therefore, after obtaining the depth distance or the depth image, it is necessary to compare the depth distance z. Specifically, the step “obtaining the depth distance from the actual position of each pixel in digestive tract to the board 104” or “integrating to obtain the actual three-dimensional coordinates (S.sub.i′, z(x,y)) of each pixel” further comprises:
(65) determining the value of the depth distance z of each pixel;
(66) when t.sub.1≤z≤t.sub.2, it is determined that the pixel is within the effective section of the detection image;
(67) when z<t.sub.1 or z>t.sub.2, it is determined that the pixel is within the ineffective section of the detection image.
(68) where, t.sub.1=0, and t.sub.2=60 mm.
(69) Therefore, in the final step “calculating or measuring the distance between any two pixels in the detection image or the area within any range”, any two pixels or any range selected must also be within the effective section of the detection image.
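The effective-section test above is a simple range check per pixel; the sketch below uses the thresholds t1 = 0 and t2 = 60 mm given in the description:

```python
import numpy as np

def effective_mask(z, t1=0.0, t2=60.0):
    """True where t1 <= z <= t2 (mm): pixels usable for measurement.

    Pixels beyond t2 are too dark/noisy for the brightness-depth model,
    so they fall in the ineffective section and are excluded.
    """
    z = np.asarray(z, dtype=float)
    return (z >= t1) & (z <= t2)

mask = effective_mask([[10.0, 59.9],
                       [60.1, -1.0]])   # last two are outside [0, 60]
```

Any user selection for measurement would first be intersected with this mask.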
(70) Specifically, in a first interaction mode, a straight-line distance between any two pixels selected by a user in the effective section can be calculated according to the three-dimensional coordinates of the two pixels.
(71) Or, in a second interaction mode, a three-dimensional image of any region can be built according to the three-dimensional coordinates of pixels in the region selected by a user in the effective section, and a straight-line distance between any two pixels selected by the user from the three-dimensional image can be calculated.
(72) Or, in a third interaction mode, the area of any region selected by a user in the effective section can be calculated according to the three-dimensional coordinates of the region.
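For the third interaction mode, one plausible way to compute the area of a selected region from per-pixel three-dimensional coordinates is to split each grid cell of the region into two triangles and sum the triangle areas. The patent does not fix a particular algorithm, so the following is only an illustrative sketch:

```python
import numpy as np

def region_area(points3d: np.ndarray) -> float:
    """Approximate surface area of an H x W grid of 3-D points (mm) by
    splitting each grid cell into two triangles and summing their areas
    (half the norm of the cross product of two triangle edges)."""
    p = points3d                      # shape (H, W, 3)
    a, b = p[:-1, :-1], p[:-1, 1:]    # top-left, top-right corners of each cell
    c, d = p[1:, :-1], p[1:, 1:]      # bottom-left, bottom-right corners
    t1 = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=-1)
    t2 = 0.5 * np.linalg.norm(np.cross(b - d, c - d), axis=-1)
    return float((t1 + t2).sum())

# Sanity check: a flat 10 mm x 10 mm patch at constant depth has area 100 mm^2.
xs, ys = np.meshgrid(np.linspace(0, 10, 11), np.linspace(0, 10, 11))
pts = np.dstack([xs, ys, np.full_like(xs, 30.0)])
```

Unlike a flat pixel count, this triangulated sum also accounts for the tilt of the tissue surface relative to the camera.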
(73) Or, in a fourth interaction mode, a scale is formed in the effective section, with its graduations marked in actual length. Users can place the scale at different positions; because the graduations of the scale differ at different positions, users can then read and measure by themselves.
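The position-dependent graduations of the fourth interaction mode follow directly from the per-pixel scale r (millimeters per pixel, obtained from equation 3 and the depth image). A hedged sketch, with example values of r chosen purely for illustration:

```python
def graduation_spacing_px(r_mm_per_px: float, step_mm: float = 1.0) -> float:
    """Number of image pixels between successive graduations representing
    `step_mm` of actual length, at a position where one pixel covers
    r_mm_per_px millimeters."""
    return step_mm / r_mm_per_px

# Near the camera the depth z is small, so r is small and a 1 mm graduation
# spans many pixels; far away the same graduation spans few pixels, which is
# why the drawn scale changes with the position where the user places it.
near = graduation_spacing_px(0.05)  # hypothetical r near the camera
far = graduation_spacing_px(0.20)   # hypothetical r far from the camera
```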
(74) Or, in a fifth interaction mode, the lesion region in the effective section can be automatically identified, with the size or area of the region calculated.
(75) The above step “calculating the distance between any two pixels in the detection image or the area within any range” is not limited to the five interaction modes above; since the calculation only requires that the actual three-dimensional coordinates of each pixel have been obtained, other interaction modes are also within the protection scope of the present invention.
(76) Accordingly, the present invention further provides a measuring system for objects in the digestive tract based on an imaging system, comprising: an identification module, configured to identify the contact region and the reference point P;
(77) a calibration calculation module, configured to calculate the relationship between the relative angle θ of the calibration point Q′ relative to the optical axis of the photographing unit and the pixel distance Δq′ from the imaging point Q to the center of the calibration image, and record it as equation 1; to calculate the relationship between the brightness φ of any pixel in the calibration image and the depth distance z from the actual position of the pixel in the simulated digestive tract to the imaging system, and record it as equation 2; and to calculate and determine the relationship between the scale r of any pixel in the calibration image and the depth distance z from the actual position of the pixel in the simulated digestive tract to the imaging system, and record it as equation 3; a brightness detection module, configured to identify the brightness of any pixel or all pixels in the calibration image or the detection image;
(78) a measurement calculation module, configured to obtain equation 1 from the calibration calculation module and the pixel distance Δp from the reference point P to the center of the detection image to calculate the relative angle θ of the target point P′ relative to the optical axis of the photographing unit 102; to calculate the actual distance from the target point P′ to the board and record it as the depth distance z.sub.i; to obtain the depth distance z.sub.i, equation 2 from the calibration calculation module and the actual pixel brightness of the reference point P to calculate the correction factor k.sub.i; to apply equation 2 to calculate the depth distance z(x, y) from the actual position of each pixel in the digestive tract to the board; and to apply equation 3 to calculate the actual two-dimensional coordinates S.sub.i′ and integrate to obtain the actual three-dimensional coordinates (S.sub.i′, z(x, y)) of each pixel.
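The calculation order of the measurement calculation module can be sketched in miniature as follows. This sketch is heavily hypothetical: the patent defines equations 1–3 only abstractly, so the functions f, g, g_inv and r_of_z below are placeholder curves standing in for the calibrated relationships, the enclosure radius R_MM is an assumed value, and the multiplicative form of the brightness correction by k.sub.i is an assumption rather than the patent's stated formula.

```python
import math

# Placeholder stand-ins for the calibrated relationships (equations 1-3);
# in the real system these curves come from the calibration stage.
def f(delta_q: float) -> float:      # eq. 1: pixel distance -> relative angle theta (rad)
    return 0.01 * delta_q            # placeholder linear fit

def g_inv(z: float) -> float:        # eq. 2 inverted: depth (mm) -> predicted brightness
    return 255.0 / (1.0 + z)         # placeholder brightness falloff model

def g(phi: float) -> float:          # eq. 2: brightness -> depth (mm)
    return 255.0 / phi - 1.0         # exact inverse of the placeholder above

R_MM = 5.0                           # assumed radius of the transparent enclosure

# Step 1: depth distance z_i of the target point P' from eq. 1 and geometry
# (assumed geometry: P' lies on the enclosure at relative angle theta).
delta_p = 120.0                      # pixel distance of reference point P from image center
theta = f(delta_p)
z_i = R_MM * math.cos(theta)

# Step 2: correction factor k_i from the actual vs. predicted brightness of P
# (assumed multiplicative model).
phi_actual = 200.0                   # measured brightness of the reference point P
k_i = phi_actual / g_inv(z_i)

# Step 3: corrected depth for any pixel from its brightness, via eq. 2.
def depth_of_pixel(phi: float) -> float:
    return g(phi / k_i)              # brightness corrected by k_i before applying eq. 2
```

By construction, feeding the reference point's own brightness back through the corrected model recovers z.sub.i, which is the self-consistency the correction factor is meant to enforce.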
(79) In summary, the method for measuring objects in the digestive tract based on the imaging system in the present invention can obtain certain parameter information in advance through the calibration stage of the imaging system, thereby facilitating the calculation in the measurement stage and avoiding calculation errors caused by equipment differences between imaging systems. Secondly, by storing the correction factors k.sub.i, once the number of stored values grows and the value of k.sub.i becomes stable, the correction factor k.sub.i need not be recalculated in the subsequent photographing process, so a distance measuring unit can be omitted. Moreover, by determining the contact region, the reference point in the captured image directly corresponds to the target point on the transparent enclosure of the imaging system, so that no additional hardware is needed to measure the depth distance z.sub.i of the reference point, making the components simpler and the calculation steps more concise.
(80) Finally, by measuring separately in different digestive tract environments during the calibration stage, a suitable processing method can be selected for each environment to improve accuracy.
(81) It should be understood that, although the specification is described in terms of embodiments, not every embodiment merely comprises an independent technical solution. Those skilled in the art should regard the specification as a whole, and the technical solutions in each embodiment may also be combined as appropriate to form other embodiments that can be understood by those skilled in the art.
(82) The present invention by no means is limited to the preferred embodiments described above. On the contrary, many modifications and variations are possible within the scope of the appended claims.