Detection system
10755417 · 2020-08-25
Assignee
Inventors
CPC classification
H04N5/2226 (ELECTRICITY)
G02B7/36 (PHYSICS)
H04N23/69 (ELECTRICITY)
G06V40/28 (PHYSICS)
International classification
G02B7/36 (PHYSICS)
Abstract
The present disclosure provides a detection system, which includes an image sensor, a lens device, and a processor. The image sensor is configured to take a first picture of a foreground object and a background object. The lens device is attached to the image sensor and configured to allow the foreground object to form a clear image on the first picture and the background object to form a blurred image on the first picture. The processor is configured to determine the image of the foreground object by analyzing the sharpness of the images in the first picture.
Claims
1. A detection system, comprising: a light source configured to illuminate an object; an image sensor configured to receive light reflected from the object, generate a first picture based on a first exposure time with the light source turned on, and generate a second picture based on a second exposure time with the light source turned on, wherein the first exposure time is different from the second exposure time; and a processor configured to determine an image of the object according to the following expression:
Object Image=[(Image1*N)-Image2]/(N-1) where Object Image represents the image of the object; Image1 represents the first picture; Image2 represents the second picture; and N represents the ratio of the second exposure time to the first exposure time, wherein N is not equal to one.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The invention will be described according to the appended drawings.
DETAILED DESCRIPTION OF DISCLOSED EMBODIMENTS
(14) The following description is presented to enable any person skilled in the art to make and use the disclosed embodiments, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the disclosed embodiments. Thus, the disclosed embodiments are not limited to the embodiments shown, but are to be accorded the widest scope consistent with the principles and features disclosed herein.
(17) The object 16 can be any physical object, which is not limited to a hand illustrated in the present embodiment.
(18) The image sensor 12 may be a CMOS image sensor, CCD image sensor, or the like. The image sensor 12 can capture images at a high frame rate, such as 960 fps.
(19) Referring to
(20) The frequency of the light source 14 can be matched to the frame rate of the image sensor 12. As such, the object images intermittently appear in successively generated pictures.
(22) In Step S42, the image sensor 12 generates at least one first picture (P1) when the light source 14 is turned on. The at least one first picture (P1) may comprise the image formed by the light of the light source 14 reflected from the object 16, the ambient light noise caused by the environmental light, and the image formed by the background object 18 illuminated by the environmental light. In Step S44, the image sensor 12 generates at least one second picture (P2) when the light source 14 is turned off. Since the light source 14 is turned off, the at least one second picture (P2) does not include the image formed by the light of the light source 14 reflected from the object 16, while still including the ambient light noise and the background image formed by the environmental light. In Step S46, the processor 22 subtracts the at least one second picture from the at least one first picture (P1-P2) to obtain a subtraction picture. The subtraction removes the ambient light noise and the background image formed due to the environmental light. As a result, the processor 22 can easily determine the object image created by the object 16 from the subtraction picture.
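Steps S42 to S46 can be sketched in a few lines of NumPy. This is a minimal illustration with made-up pixel values, not the patent's implementation; the function name and the clipping to the uint8 range are my assumptions.

```python
import numpy as np

def subtraction_picture(p1, p2):
    """Step S46 sketch: subtract the light-off picture (P2) from the
    light-on picture (P1), pixel by pixel, clipping negatives to zero so
    that ambient light and the background image cancel out.
    (Hypothetical helper; clipping behavior is an assumption.)"""
    diff = p1.astype(np.int16) - p2.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)

# Illustrative 4x4 frames: ambient + background appear in both pictures;
# the object lit by the light source appears only in the light-on one.
background = np.full((4, 4), 30, dtype=np.uint8)
p2 = background.copy()                 # P2: light source off
p1 = background.copy()
p1[1:3, 1:3] += 100                    # object region, present only in P1
result = subtraction_picture(p1, p2)
# Only the object region remains nonzero in the subtraction picture.
```
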
(23) In some embodiments, the first picture comprises a plurality of pixels, and the second picture comprises a plurality of pixels corresponding to the pixels of the first picture, wherein the subtraction of the at least one second picture from the at least one first picture is performed by subtracting pixel data of each pixel of the second picture from pixel data of the corresponding pixel of the first picture.
(24) In some embodiments, the pixel data may be of grey scale intensity. In some embodiments, the pixel data may be of one RGB component or a combination of at least two RGB components. In some embodiments, the pixel data may be of one HSV component or a combination of at least two HSV components. In some embodiments, the first and second pictures can be continuously generated.
(25) In some embodiments, the processor 22 is configured to determine the position of the object image in the subtraction picture. In some embodiments, the processor 22 is configured to generate coordinate data according to the position of the object image.
(26) In some embodiments, the image sensor 12 generates a plurality of first pictures when the light source 14 is turned on. The processor 22 calculates a plurality of subtraction pictures by subtracting the second picture from each first picture.
(27) In some embodiments, the processor 22 can determine a distance between the object 16 and the image sensor 12 from a dimension of the object image measured in the subtraction picture. In some embodiments, the processor 22 can determine the change of distance between the object 16 and the image sensor 12 from the change of that dimension across subtraction pictures. In some embodiments, the processor 22 can determine the gesture performed by the object 16 from the change of positions of the object image across the plurality of subtraction pictures. In some embodiments, the processor 22 can determine a distance, or the change of distance, between the object 16 and the image sensor 12 from the change of intensity of the object images in the plurality of subtraction pictures.
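One way to read "distance from a dimension of the object image" is a pinhole-style relation in which apparent width scales inversely with distance. The patent does not give a formula, so the following is purely an illustrative sketch under that assumption, with hypothetical names and a hypothetical calibration measurement.

```python
def distance_from_dimension(measured_width, reference_width, reference_distance):
    """Illustrative pinhole-model estimate (all names hypothetical):
    apparent width scales inversely with distance, so
    distance ~= reference_distance * reference_width / measured_width."""
    return reference_distance * reference_width / measured_width

# If an object calibrated at 100 px wide from 0.5 m now appears 50 px
# wide, it is about 1.0 m away under this model.
```
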
(28) In some situations, noise cannot be completely removed after two pictures are subtracted. In such cases, the detection system 1 may use more pictures to remove the interference in the determination of the object image. In some embodiments, the image sensor 12 generates two first pictures when the light source 14 is turned on and one second picture when the light source 14 is turned off. The processor 22 averages the two first pictures to obtain an average picture, and then subtracts the second picture from the average picture. In some embodiments, the image sensor 12 generates one first picture when the light source 14 is turned on and two second pictures when the light source 14 is turned off. The processor 22 averages the two second pictures to obtain an average picture, and then subtracts the average picture from the first picture. In some embodiments, these methods of using two first pictures and one second picture, or one first picture and two second pictures, can be applied with an image sensor 12 having a high frame rate of at least 960 fps, such that an improved removal effect can be achieved.
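The averaging-before-subtraction idea above can be sketched as follows. The helper name and the toy noise model (symmetric noise on the two light-on frames) are my assumptions for illustration only.

```python
import numpy as np

def averaged_subtraction(on_frames, off_frames):
    """Average the light-on frames and the light-off frames separately,
    then subtract, so uncorrelated frame-to-frame noise is attenuated
    before the subtraction (hypothetical helper)."""
    avg_on = np.mean(np.stack(on_frames).astype(float), axis=0)
    avg_off = np.mean(np.stack(off_frames).astype(float), axis=0)
    return avg_on - avg_off

# Two light-on frames carrying opposite-signed noise, one light-off frame.
obj = np.array([[80.0, 0.0], [0.0, 80.0]])
ambient = np.full((2, 2), 20.0)
on1 = obj + ambient + 2.0              # +2 noise on the first frame
on2 = obj + ambient - 2.0              # -2 noise on the second frame
off = ambient
result = averaged_subtraction([on1, on2], [off])
# Averaging cancels the symmetric noise; the object signal is recovered.
```
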
(29) In addition to the above, other methods of removing the interference caused by environmental light are provided below.
Object Image=(Image1*N-Image2)/(N-1)  (3)
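Equation (3) can be checked numerically. A minimal NumPy sketch follows, assuming (consistent with claim 1) that N is the ratio of the second exposure time to the first, that the pulsed light source contributes the same object signal to both frames, and that ambient light accumulates in proportion to exposure time; all pixel values are illustrative.

```python
import numpy as np

# Hypothetical pixel values: the pulsed light source contributes the same
# object signal to both frames, while ambient light accumulates in
# proportion to the exposure time.
obj = np.array([[60.0, 0.0], [0.0, 60.0]])      # object signal
ambient_rate = np.full((2, 2), 25.0)            # ambient light per unit time

t1 = 1.0                                        # first exposure time
N = 2.0                                         # second/first ratio, N != 1
image1 = obj + ambient_rate * t1
image2 = obj + ambient_rate * (N * t1)

# Equation (3): Object Image = (Image1 * N - Image2) / (N - 1)
object_image = (image1 * N - image2) / (N - 1)
# The ambient term cancels exactly, leaving only the object signal.
```
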
(36) In some situations, the detection system 1 can utilize more pictures to remove the interferences affecting the determination of an object image. In some embodiments, the image sensor 12 generates a plurality of first pictures (I.sub.1, I.sub.3, . . . ) when the light source 14 is turned on, and generates a plurality of second pictures (I.sub.2, I.sub.4, . . . ) when the light source 14 is turned off, wherein the image sensor 12 alternately generates the first and second pictures (I.sub.1, I.sub.2, I.sub.3, . . . I.sub.N+3). The processor 22 uses the following equations (4) to (6) to calculate a computed picture (I.sub.computed)
(37)
(38) where N is a positive integer, and the absolute values of the coefficients a.sub.i (|a.sub.1|, . . . , |a.sub.N+3|) are binomial coefficients.
(39) For example, in some embodiments, when N is equal to one, the image sensor 12 alternately generates two first pictures (I.sub.1 and I.sub.3) and two second pictures (I.sub.2 and I.sub.4). In this instance, a.sub.i can be either (1, -3, 3, -1) or (-1, 3, -3, 1), and the computed picture (I.sub.computed) can be:
I.sub.computed=I.sub.1-3*I.sub.2+3*I.sub.3-I.sub.4  (7)
I.sub.computed=-I.sub.1+3*I.sub.2-3*I.sub.3+I.sub.4  (8)
(41) In some embodiments, when N is two, a.sub.i can be either (1, -4, 6, -4, 1) or (-1, 4, -6, 4, -1).
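The signed-binomial combination can be sketched as follows. The function name is hypothetical, the exact equations (4) to (6) did not survive extraction, and the coefficient rule (signed binomial coefficients with alternating sign) is my reading of the surrounding text; the example shows why such a combination cancels ambient light that drifts linearly between frames.

```python
from math import comb

import numpy as np

def computed_picture(frames):
    """Combine N+3 alternately generated frames with signed binomial
    coefficients, e.g. (1, -3, 3, -1) for four frames (an assumed
    reconstruction of the patent's equations (4)-(6))."""
    n = len(frames)
    coeffs = [((-1) ** i) * comb(n - 1, i) for i in range(n)]
    return sum(c * f.astype(float) for c, f in zip(coeffs, frames))

def ambient(k):
    # Illustrative ambient light drifting linearly with the frame index.
    return 20.0 + 5.0 * k

# N = 1 case: four frames; the object appears only in the light-on
# frames I1 and I3.
obj = np.array([[50.0, 0.0], [0.0, 50.0]])
frames = [
    obj + ambient(0),                  # I1, light on
    np.full((2, 2), ambient(1)),       # I2, light off
    obj + ambient(2),                  # I3, light on
    np.full((2, 2), ambient(3)),       # I4, light off
]
result = computed_picture(frames)
# The drifting ambient term cancels; the object survives scaled by 1+3=4.
```
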
(42) Referring back to
(44) In some embodiments, the background model can be updated, and the following equation (9) can be used for updating.
B.sub.i,j.sup.new=α.sub.i,j*B.sub.i,j.sup.old+(1-α.sub.i,j)*P.sub.i,j  (9)
(45) where B.sub.i,j.sup.old is pixel data of a pixel (i, j) of the original background model, α.sub.i,j is a weight number, P.sub.i,j is pixel data of a pixel (i, j) of a subtraction picture (I.sub.obj), and B.sub.i,j.sup.new is pixel data of a pixel (i, j) of an updated background model.
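Equation (9) is a per-pixel exponential update. A minimal sketch follows; `alpha` stands in for the weight number, whose original symbol did not survive extraction, and the function name is hypothetical.

```python
import numpy as np

def update_background(b_old, p, alpha):
    """Equation (9) sketch: B_new = alpha * B_old + (1 - alpha) * P,
    applied element-wise. `alpha` is the weight number (its symbol is
    lost in the source text); it may be a scalar or a per-pixel array."""
    return alpha * b_old + (1.0 - alpha) * p

b_old = np.array([[100.0, 100.0], [100.0, 100.0]])
p = np.array([[100.0, 100.0], [100.0, 0.0]])     # one pixel changed
b_new = update_background(b_old, p, alpha=0.9)
# With alpha = 0.9, the changed pixel moves only 10% toward the new value,
# so the background model is not changed significantly.
```
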
(46) In some embodiments, the processor 22 can use the object images of pictures to update the background model.
(47) In some embodiments, each pixel of the background model corresponds to the same weight number. In some embodiments, each pixel of the background model corresponds to a different weight number. In some embodiments, a portion of pixels of the background model correspond to the same weight number.
(48) The weight number related to at least one pixel of the background model is adjustable. In some embodiments, when updating the background model, the processor 22 may compare B.sub.i,j.sup.old with P.sub.i,j. When the difference between B.sub.i,j.sup.old and P.sub.i,j is greater than a predetermined value, α.sub.i,j can be increased such that the updated background model does not change significantly. In one embodiment, when the difference between the pixel data of a pixel of an object image of a picture and the pixel data of the corresponding pixel of the background model is greater than a predetermined value, the processor 22 may adjust the weight number corresponding to the pixel of the object image.
(49) In some embodiments, the image sensor 12 of the detection system 1 generates a plurality of pictures when the light source 14 is turned on and off. The processor 22 calculates a plurality of subtraction pictures using the pictures. The processor 22 determines the object image of each subtraction picture by a background model. If the processor 22 determines that the object images of the subtraction pictures are at different positions (i.e. the object is moving when the pictures are generated), the processor 22 will not update the background model with the subtraction pictures. If the processor 22 determines that the positions of the object images of the pictures are almost unchanged or the object images do not move, the processor 22 will use at least one subtraction picture to update the background model.
(50) If the positions of two object images are regarded as unchanged, it can mean either that the two object images are located at the same position, or that the difference between the representative points (for example, the centers of gravity) of the two object images along a direction is not greater than a percentage (for example, 20%) of the width of the object image along that direction.
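This stationarity test reduces to a one-line comparison. A sketch with hypothetical names follows, using the 20% figure from the text as the default.

```python
def position_unchanged(point1, point2, object_width, fraction=0.2):
    """Treat two object images as being at the same position if their
    representative points (e.g. centers of gravity) along a direction
    differ by no more than `fraction` (20% per the text) of the object
    image's width along that direction. (Hypothetical helper.)"""
    return abs(point1 - point2) <= fraction * object_width

# A 5 px shift of a 50 px wide object (10%) counts as unchanged,
# while a 20 px shift (40%) does not.
```
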
(53) Amplitude, variance, or other methods can be applied to evaluate the sharpness of the images in pictures. For details, refer to the paper by Chern, N. K. et al., "Practical Issues in Pixel-Based Autofocusing for Machine Vision," Proceedings of the 2001 IEEE International Conference on Robotics & Automation, Seoul, Korea, May 21-26, 2001.
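Both families of focus measures mentioned above are simple to sketch. The two functions below are generic illustrations of a variance-based and an amplitude (gradient)-based measure, not the specific formulas used by the patent or by Chern et al.; names and test patches are mine.

```python
import numpy as np

def sharpness_variance(img):
    """Grey-level variance focus measure: an in-focus image has stronger
    contrast, hence a larger intensity variance, than a defocused one."""
    img = np.asarray(img, dtype=float)
    return float(((img - img.mean()) ** 2).mean())

def sharpness_amplitude(img):
    """Amplitude (gradient) focus measure: sum of absolute differences
    between horizontally adjacent pixels; blur suppresses these edges."""
    img = np.asarray(img, dtype=float)
    return float(np.abs(np.diff(img, axis=1)).sum())

sharp = np.array([[0, 255], [255, 0]])        # high-contrast (in focus)
blurred = np.array([[128, 127], [127, 128]])  # low-contrast (defocused)
# Both measures rank the sharp patch above the blurred one.
```
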
(54) The embodied detection system can use different methods to remove the interference caused by the background so that the determination of object images can be more accurate.
(55) The methods and processes described in the detailed description section can be embodied as code and/or data, which can be stored in a non-transitory computer-readable storage medium as described above. When a computer system reads and executes the code and/or data stored on the non-transitory computer-readable storage medium, the computer system performs the methods and processes embodied as data structures and code stored within the non-transitory computer-readable storage medium. Furthermore, the methods and processes described above can be included in hardware modules. For example, the hardware modules can include, but are not limited to, application-specific integrated circuit (ASIC) chips, field-programmable gate arrays (FPGAs), and other programmable-logic devices now known or later developed. When the hardware modules are activated, the hardware modules perform the methods and processes included within the hardware modules.
(56) It will be apparent to those skilled in the art that various modifications can be made to the disclosed embodiments. It is intended that the specification and examples be considered as exemplary only, with the true scope of the disclosure being indicated by the following claims and their equivalents.