Surgical camera system with high dynamic range
11595589 · 2023-02-28
CPC classification
H04N23/555
H04N23/741
Abstract
An endoscopic camera device having an optical assembly; a first image sensor in optical communication with the optical assembly, the first image sensor receiving a first exposure and transmitting a first low dynamic range image; a second image sensor in optical communication with the optical assembly, the second image sensor receiving a second exposure and transmitting a second low dynamic range image, the second exposure being higher than the first exposure; and a processor for receiving the first low dynamic range image and the second low dynamic range image; wherein the processor is configured to combine the first low dynamic range image and the second low dynamic range image into a high dynamic range image using a luminosity value derived as a preselected percentage of a cumulative luminosity distribution of at least one of the first low dynamic range image and the second low dynamic range image.
Claims
1. An endoscopic camera device comprising: an optical assembly; a first image sensor in optical communication with the optical assembly, the first image sensor receiving a first exposure and transmitting a first low dynamic range image; a second image sensor in optical communication with the optical assembly, the second image sensor receiving a second exposure and transmitting a second low dynamic range image, the second exposure being higher than the first exposure; and a processor coupled to the first image sensor and the second image sensor for receiving the first low dynamic range image and the second low dynamic range image; wherein the processor is configured to combine the first low dynamic range image and the second low dynamic range image into a high dynamic range image using a luminosity value derived as a preselected percentage of a cumulative luminosity distribution of at least one of the first low dynamic range image and the second low dynamic range image; and wherein the processor is further configured to generate a luminosity histogram and to analyze the luminosity histogram to determine a cumulative distribution of pixel values.
2. The endoscopic camera device of claim 1 wherein the processor is further configured to generate the histogram from a highest luminosity value of each pixel of the first low dynamic range image.
3. The endoscopic camera device of claim 1 wherein the processor is further configured to generate the histogram from the green value of each pixel of the first low dynamic range image.
4. The endoscopic camera device of claim 1 wherein the processor selects a luminosity value representing a preselected percentage of the cumulative luminosity distribution of between about 50 percent and about 90 percent.
5. The endoscopic camera device of claim 1 wherein the processor selects a luminosity value representing a preselected percentage of the cumulative luminosity distribution of between about 60 percent and about 80 percent.
6. The endoscopic camera device of claim 1 wherein the processor selects a luminosity value representing a preselected percentage of the cumulative luminosity distribution of about 70 percent.
7. The endoscopic camera device of claim 1 wherein the first image sensor receives from about 10 percent to about 20 percent of the visible light and the second image sensor receives from about 80 percent to about 90 percent of the visible light.
8. The endoscopic camera device of claim 1 wherein the processor selects a luminosity value from the second low dynamic range image.
9. A method of generating a high dynamic range image comprising the steps of: receiving a first low dynamic range image having a first exposure; receiving a second low dynamic range image having a second exposure, the second exposure being higher than the first exposure; generating a luminosity histogram from at least one of the first low dynamic range image and the second low dynamic range image; analyzing the luminosity histogram to determine a cumulative distribution of pixel values; determining a pixel value corresponding to a predetermined percentage of the cumulative distribution of pixel values; using the determined pixel value to calculate a blending parameter; and blending the first low dynamic range image and second low dynamic range image into a high dynamic range image using the blending parameter.
10. The method of claim 9 wherein the predetermined percentage of the cumulative distribution of pixel values is between about 50 percent and about 90 percent.
11. The method of claim 9 wherein the predetermined percentage of the cumulative distribution of pixel values is between about 60 percent and about 80 percent.
12. The method of claim 9 wherein the predetermined percentage of the cumulative distribution of pixel values is about 70 percent.
13. The method of claim 9 wherein the first low dynamic range image is used to generate the luminosity histogram.
14. The method of claim 9 wherein the second low dynamic range image is used to generate the luminosity histogram.
15. The method of claim 9 wherein the luminosity histogram is generated from a highest luminosity value of each pixel.
16. The method of claim 9 wherein the luminosity histogram is generated from the green value of each pixel.
17. An endoscopic camera device comprising: an optical assembly; a first image sensor in optical communication with the optical assembly, the first image sensor receiving a first exposure and transmitting a first low dynamic range image; a second image sensor in optical communication with the optical assembly, the second image sensor receiving a second exposure and transmitting a second low dynamic range image, the second exposure being higher than the first exposure; and a processor coupled to the first image sensor and the second image sensor for receiving the first low dynamic range image and the second low dynamic range image; wherein the processor is configured to: generate a luminosity histogram from a highest luminosity value of each pixel of the first low dynamic range image; analyze the luminosity histogram to determine a cumulative luminosity distribution of pixel values; determine a luminosity value corresponding to a predetermined percentage of the cumulative luminosity distribution of pixel values; and combine the first low dynamic range image and the second low dynamic range image into a high dynamic range image using the determined luminosity value.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The features, aspects and advantages of the present invention will become better understood with regard to the following description, appended claims and accompanying figures.
DETAILED DESCRIPTION
(7) In the following description of the preferred implementations, reference is made to the accompanying drawings which show by way of illustration specific implementations in which the invention may be practiced. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. It is to be understood that other implementations may be utilized and structural and functional changes may be made without departing from the scope of this disclosure.
(8) The present disclosure is directed to a system and method for producing a high dynamic range image using a weighted sum of two low dynamic range images. The first low dynamic range image has a lower exposure than the second low dynamic range image. The summing weights are monotonic functions of one of the low dynamic range images, and the monotonic functions are characterized by a single parameter. The single parameter is chosen automatically by pre-selecting a fixed percentage of the cumulative luminosity distribution of one of the low dynamic range images.
(9) With reference to the figures, the endoscopic camera system includes a camera 12 having a shaft 14.
(10) An image sensor module 28 may be positioned inside the shaft 14 and proximal to a distal tip 30 of the shaft 14. Additionally, the camera 12 may be coupled to a light source 36. The light source 36 may be inside the camera 12.
(11) The light source 36 includes a lamp. The lamp may be, for example, a semiconductor light source such as a laser or an LED that illuminates the field of view. The light source 36 is configured to appropriately illuminate the field of view of the video camera. Further, both the generated light and the camera sensitivity may extend beyond the visible spectrum. The illumination may be intended to excite fluorescence directly in a target, or in a fluorescent substance such as indocyanine green, that is then sensed by the camera. For example, the light source 36 might produce illumination in the near infrared (NIR) range while the camera senses the fluorescence at a longer IR wavelength. The illumination and camera sensitivity could extend from UV to NIR continuously or be composed of separate narrow bands.
(12) Referring to the figures, the camera 12 is coupled to a camera controller 20, which may include a processor 38, and the camera controller 20 may in turn be coupled to a display 24 and/or a storage device 26.
(13) An image processor 42 controls and processes the output from the image sensor module 28. Although other controllers and processors may be used to control and process the output from the image sensor module 28, use of one or more FPGAs for processing video images allows the system to achieve the precise timing needed to generate a standard video output signal. User interface logic and any external network connectivity may be handled by software running on the processor 38.
(14) In an implementation, analog RGB data is transmitted from the image sensor module 28 to the camera controller 20. The analog RGB data passes through an analog-to-digital converter 44 to the image processor 42, where the video is processed. The processed video is then passed to a video output that may include a formatter FPGA 46, where the video is formatted into various display formats. The formatter FPGA 46 may also overlay information, such as patient and/or doctor information, onto the video. The formatted video may be converted to an analog signal for display. The formatted video is sent to the display 24 and/or the storage device 26. Alternatively, an analog-to-digital converter may be located in the camera 12 and digital RGB data transmitted from the camera 12 to the camera controller 20. Additionally, the image sensors may include analog-to-digital converters.
(15) The camera controller 20 issues commands to the camera 12 to adjust its operating characteristics, and the camera 12 may send confirmation to the camera controller 20 that it received the commands. The image processor 42 and/or the processor 38 may communicate with a shutter driver, located either in the camera controller 20 or the camera 12, to control an exposure period of the image sensor module 28. Additionally, the image processor 42 and/or the processor 38 communicates with the light source 36 to control the drive current to the lamp of the light source 36.
(16) A schematic diagram of the image sensor module 28 according to an implementation is shown in the figures. The image sensor module 28 may include an optical element 50 that directs light to a first image sensor 52 and a second image sensor 54.
(17) In an implementation, the image sensors receive differential amounts of light. The optical element 50 may direct light so that the first image sensor 52 receives a lower exposure, and is therefore a low exposure sensor that generates a low exposure image, and the second image sensor 54 receives a higher exposure, and is therefore a high exposure sensor that generates a high exposure image. In an implementation, the optical element directs between about 10% and about 40% of the light to the first image sensor 52 and between about 60% and about 90% of the light to the second image sensor 54. In an implementation, the optical element directs between about 10% and about 20% of the light to the first image sensor 52 and between about 80% and about 90% of the light to the second image sensor 54. In an implementation, the optical element directs about 10% of the light to the first image sensor 52 and about 90% of the light to the second image sensor 54.
(18) Alternatively, the first image sensor 52 may receive a higher exposure and the second image sensor 54 may receive a lower exposure. Each of the first image sensor 52 and the second image sensor 54 generates a relatively low dynamic range image. The images from both sensors are combined to create a single image with a high dynamic range.
(19) Key issues are how to select which pixels of each low dynamic range image are used to create a combined high dynamic range image and how those pixels are blended together. Rather than manually selecting blending parameters or automatically blending the images based on an arbitrary blending parameter, blending parameters are chosen based on scene content by pre-selecting a fixed percentage of the cumulative luminosity distribution of at least one of the low dynamic range images.
(20) In a histogram generation step, a luminosity histogram 58 is generated from a guiding function derived from at least one of the low dynamic range images.
(21) Each pixel of a color image may comprise three values. In an implementation, the three values represent red, green, and blue. In an implementation, the guiding function is generated by finding the largest of the three values of each pixel regardless of whether the largest value is red, green or blue. The luminosity histogram is then generated from the guiding function. The possible luminosity values are grouped into bins. In an implementation, the luminosity values are grouped into 256 different bins. The horizontal axis of the histogram 58 represents bins of different luminosity values and the vertical axis represents the total number of pixels within each particular luminosity value bin.
(22) Alternatively, the histogram 58 may be generated by considering only the green pixel values of the image. Alternatively, the histogram 58 may be generated using a weighted average of red, green, and blue colors at each pixel. In an implementation, the luminosity value for each pixel is calculated using the formula 0.3 (Red)+0.59 (Green)+0.11 (Blue).
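For illustration, the guiding function and histogram of the preceding two paragraphs might be computed as in the following minimal NumPy sketch; the function names, the 8-bit input range, and the use of NumPy are assumptions made for illustration, not details taken from the patent:

```python
import numpy as np

def guiding_function(image, mode="max"):
    """Per-pixel luminosity of an H x W x 3 RGB image (8-bit values assumed).

    mode="max"      -> the largest of the R, G, B values at each pixel
    mode="green"    -> the green channel only
    mode="weighted" -> 0.3*Red + 0.59*Green + 0.11*Blue
    """
    img = image.astype(np.float64)
    if mode == "max":
        return img.max(axis=2)
    if mode == "green":
        return img[:, :, 1]
    if mode == "weighted":
        return img @ np.array([0.3, 0.59, 0.11])
    raise ValueError(f"unknown mode: {mode}")

def luminosity_histogram(lum, bins=256):
    """Group the luminosity values into bins (256 bins, per paragraph (21))."""
    hist, _ = np.histogram(lum, bins=bins, range=(0, 255))
    return hist
```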
(23) In a distribution analysis step 106, the histogram is analyzed to determine a cumulative distribution of pixel luminosity values, as shown in graph 60. In an analysis step 108, the cumulative distribution of pixel luminosity values is analyzed to determine the pixel value (Px) corresponding to a preselected percentage of the cumulative distribution. The percentage is preselected so that the corresponding pixel value may be found automatically. The corresponding pixel value will change depending on the overall scene luminosity distribution, which allows the system and method to adapt automatically to different scenes.
(24) The percentage is preselected to prevent dynamic range gaps between the different images to be combined, and also so that the most useful pixels, those that should influence the final high dynamic range image, are not marginalized. In an implementation, the predetermined percentage of pixels used from a low exposure image is between about 50 and about 90 percent, more preferably between about 60 and about 80 percent, and most preferably about 70 percent.
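Continuing the sketch, the distribution analysis of steps 106 and 108 reduces to a normalized cumulative sum and a search for the bin where it first crosses the preselected percentage; the default of 70 percent follows the preferred value above, and the helper name is hypothetical:

```python
import numpy as np

def pixel_value_at_percentile(hist, percent=70.0):
    """Return the luminosity value Px at which the cumulative
    distribution of pixel values first reaches the given percentage."""
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                               # normalize to [0, 1]
    return int(np.searchsorted(cdf, percent / 100.0))
```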
(25) As seen in the figures, each pixel of the high dynamic range image is generated as a weighted sum of the corresponding pixels of the two low dynamic range images.
(26) The two low dynamic range images are combined using the formula:
$I_{HDR}(x,y;c) = W_{hi}(x,y;c)\,I_{hi}(x,y;c) + W_{lo}(x,y;c)\,I_{lo}(x,y;c) = e^{k f_{lo}(x,y;c)}\,I_{hi}(x,y;c) + \bigl(1 - e^{k f_{lo}(x,y;c)}\bigr)\,I_{lo}(x,y;c)$
(27) $I_{HDR}$ is the high dynamic range image;
(28) $W_{hi}$ is the weighting function for the high exposure image;
(29) $I_{hi}$ is the value of a given pixel in the high exposure image;
(30) $W_{lo}$ is the weighting function for the low exposure image;
(31) $I_{lo}$ is the value of a given pixel in the low exposure image;
(32) $x$ and $y$ are the pixel coordinates;
(33) $c \in \{R, G, B\}$;
(34) The negative blending parameter $k$ controls the position of the transition point between the two regions where each image dominates.
(35) $f_{lo}$ is a guiding function that depends on $I_{lo}$.
(36) Blending is done pixel by pixel for each color separately to generate a single blended image with a high dynamic range. Other implementations may use color spaces other than RGB in which to perform the blending, for example, YCbCr.
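As a sketch of this per-pixel, per-channel blend, the following assumes registered floating-point images on a common scale, a per-pixel guiding function broadcast across the three color channels, and a negative blending parameter k computed as described below:

```python
import numpy as np

def blend_hdr(i_lo, i_hi, k, f_lo):
    """Blend two low dynamic range images into one high dynamic range image.

    i_lo, i_hi : H x W x 3 float arrays (low and high exposure images)
    k          : negative blending parameter
    f_lo       : H x W guiding function derived from the low exposure image
    """
    w_hi = np.exp(k * f_lo)[:, :, np.newaxis]    # weight for the high exposure image
    return w_hi * i_hi + (1.0 - w_hi) * i_lo     # W_lo = 1 - W_hi
```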
(37) In an additional implementation, the high exposure image may be used to calculate the blending parameter using the following equation:
$I_{HDR}(x,y;c) = W_{hi}(x,y;c)\,I_{hi}(x,y;c) + W_{lo}(x,y;c)\,I_{lo}(x,y;c) = e^{k f_{hi}(x,y;c)}\,I_{hi}(x,y;c) + \bigl(1 - e^{k f_{hi}(x,y;c)}\bigr)\,I_{lo}(x,y;c)$
(38) The variables are the same, but the guiding function $f_{hi}$ depends on $I_{hi}$ instead of $I_{lo}$.
(39) In a calculation step 110, the corresponding pixel value $P_x$ is used to calculate the blending parameter $k$. In an implementation, the blending parameter is calculated using the formula $k = \ln(0.50)/P_x$.
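This choice makes the high exposure weight equal exactly one half at the transition point, since $e^{k P_x} = e^{\ln(0.50)} = 0.50$. A one-line sketch, with a hypothetical end-to-end usage comment tying the earlier sketches together:

```python
import math

def blending_parameter(px):
    """k = ln(0.50) / Px; k is negative because ln(0.50) < 0 and Px > 0."""
    return math.log(0.5) / px

# Hypothetical pipeline using the sketches above:
# lum  = guiding_function(low_exposure_img)        # paragraph (21)
# hist = luminosity_histogram(lum)                 # paragraph (21)
# px   = pixel_value_at_percentile(hist, 70.0)     # paragraphs (23)-(24)
# k    = blending_parameter(px)                    # paragraph (39)
# hdr  = blend_hdr(low_exposure_img, high_exposure_img, k, lum)
```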
(40) In a preferred implementation, the weighting function for the high exposure image, $W_{hi}$, is $e^{k f_{lo}}$, although other monotonically decreasing functions could be selected. So, in the example shown in the figures, the high exposure image dominates where the guiding function is below the transition point and the low exposure image dominates where the guiding function is above it.
(41) In an implementation, a low-pass filter is applied to the guiding function before using it, in order to avoid the guiding function reacting to small details. The low-pass filter can be expressed as $f'_{hi}(x,y;c) = \mathrm{gaussian}(f_{hi}(x,y;c))$, although other low-pass operations may be used. In an implementation, the guiding function incorporates an edge function that uses the blurred and unblurred guiding functions to attenuate the effect over hard edges. The edge function can be expressed as $W_{hi}(x,y;c) = \bigl(e^{k f_{hi}(x,y;c)} + e^{k f'_{hi}(x,y;c)}\bigr)/2$, although other approaches, such as multi-level pyramid or wavelet decompositions, may be used.
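A sketch of this edge-aware smoothing, with SciPy's gaussian_filter standing in for the unspecified low-pass operation; the filter width sigma is an assumed parameter:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_aware_weight(f_hi, k, sigma=5.0):
    """Average the weights from the unblurred and low-pass filtered
    guiding functions to attenuate the effect over hard edges."""
    f_blur = gaussian_filter(f_hi, sigma=sigma)  # f'_hi = gaussian(f_hi)
    return (np.exp(k * f_hi) + np.exp(k * f_blur)) / 2.0
```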
(42) The system and method disclosed herein are advantageous because they are computationally simple compared to other methods and therefore may be carried out faster and with less processing power. Additionally, in systems where a blending parameter is pre-selected and fixed (without an automatic method to select the single parameter such as disclosed herein), then depending on the scene content and the exposure differences between the two low dynamic range images, the produced high dynamic range image may not be visually pleasing.
(43) There is disclosed in the above description and the drawings, a system and method for making high dynamic range images that fully and effectively overcomes the disadvantages associated with the prior art. However, it will be apparent that variations and modifications of the disclosed implementations may be made without departing from the principles of the invention. The presentation of the implementations herein is offered by way of example only and not limitation, with a true scope and spirit of the invention being indicated by the following claims.
(44) Any element in a claim that does not explicitly state “means” for performing a specified function or “step” for performing a specified function, should not be interpreted as a “means” or “step” clause as specified in 35 U.S.C. § 112.