Methods and systems for coded rolling shutter
09736425 · 2017-08-15
Assignee
- Sony Corporation (Tokyo, JP)
- The Trustees Of Columbia University In The City Of New York (New York, NY)
Inventors
- Jinwei Gu (Rochester, NY, US)
- Yasunobu Hitomi (Kanagawa, JP)
- Tomoo Mitsunaga (Kanagawa, JP)
- Shree K. Nayar (New York, NY)
CPC classification
- H04N25/533 (Electricity)
- H04N7/002 (Electricity)
- H04N25/75 (Electricity)
Abstract
Methods and systems for coded rolling shutter are provided. In accordance with some embodiments, methods and systems are provided that control the readout timing and exposure length for each row of a pixel array in an image sensor, thereby flexibly sampling the three-dimensional space-time value of a scene and capturing sub-images that effectively encode motion and dynamic range information within a single captured image.
Claims
1. A method for reading an image of a scene detected in an image sensor having an array containing a plurality of pixels, the method comprising: obtaining first image data from a first subset of rows of the array to extract a first sub-image of the scene; obtaining second image data from a second subset of rows of the array to extract a second sub-image of the scene, wherein the first subset of rows is different from the second subset of rows; determining flow information based on a motion of brightness patterns between the first sub-image and the second sub-image or determining the flow information by comparing a position of particular points in the first sub-image and the second sub-image; estimating a point spread function for the image based, at least in part, on the determined flow information; and generating a video based, at least in part, on the first sub-image, the second sub-image, and the estimated point spread function.
2. The method of claim 1, wherein a plurality of sub-images that includes the first sub-image and the second sub-image extracted from the image are uniformly distributed between the plurality of rows of the array.
3. The method of claim 1, wherein determining flow information further comprises estimating the optical flow between the first sub-image and the second sub-image.
4. The method of claim 1, further comprising determining an intermediate image that is interpolated between the first sub-image and the second sub-image.
5. The method of claim 4, wherein the first sub-image, the intermediate image, and the second sub-image are combined to create the video.
6. The method of claim 1, further comprising: receiving a coded pattern for reading the plurality of rows of the array, wherein the coded pattern is an interlaced pattern in which readout time for each row is uniformly distributed into the plurality of sub-images.
7. The method of claim 1, further comprising: receiving a coded pattern for reading the plurality of rows of the array, wherein the coded pattern is a staggered pattern in which an order of reading out a subset of the plurality of rows is reversed for the subset.
8. A method for reading an image of a scene detected in an image sensor comprising an array having a plurality of rows of pixels, the method comprising: obtaining a first image of the scene at a first exposure; determining an adjusted exposure time for each of the plurality of rows of the array based at least in part on scene radiance in the first image of the scene, wherein the adjusted exposure time includes a first exposure time for a first row of the plurality of rows and a second exposure time for a second row of the plurality of rows and wherein the first exposure time is adjusted to be different from the second exposure time; obtaining a second image of the scene using the first exposure time for the first row of the plurality of rows of the array and the second exposure time for the second row of the plurality of rows of the array; and generating a high dynamic range image based on image data from at least the first row and the second row of the plurality of rows.
9. The method of claim 8, further comprising normalizing image data of the second image of the scene with respect to the adjusted exposure time applied to each of the plurality of rows of the array.
10. The method of claim 8, further comprising determining readout times of the plurality of rows of the array.
11. The method of claim 10, further comprising extracting a plurality of sub-images from the image using the determined readout times and the adjusted exposure times, wherein the plurality of sub-images are uniformly distributed between the plurality of rows in the array.
12. The method of claim 11, further comprising: estimating optical flow between a first sub-image and a second sub-image of the plurality of sub-images, wherein estimating the optical flow comprises estimating the optical flow based on a motion of brightness patterns between the first sub-image and the second sub-image or estimating the optical flow by comparing a position of particular points in the first sub-image and the second sub-image; determining motion information based at least in part on the estimated optical flow; and applying the determined motion information to enhance the high dynamic range image that is generated by combining the plurality of sub-images extracted from the image.
13. A system for reading an image of a scene, the system comprising: an image sensor comprising an array having a plurality of rows; and at least one processor programmed to: obtain first image data from a first subset of rows of the array to extract a first sub-image of the scene; obtain second image data from a second subset of rows of the array to extract a second sub-image of the scene, wherein the first subset of rows is different from the second subset of rows; determine flow information based on a motion of brightness patterns between the first sub-image and the second sub-image or determining the flow information by comparing a position of particular points in the first sub-image and the second sub-image; estimate a point spread function for the image based, at least in part, on the determined flow information; and generate a video based, at least in part, on the first sub-image, the second sub-image, and the estimated point spread function.
14. The system of claim 13, wherein a plurality of sub-images that includes the first sub-image and the second sub-image extracted from the image are uniformly distributed between the plurality of rows of the array.
15. The system of claim 13, wherein the processor is further configured to estimate the optical flow between the first sub-image and the second sub-image.
16. The system of claim 13, wherein the processor is further configured to determine an intermediate image that is interpolated between the first sub-image and the second sub-image.
17. The system of claim 16, wherein the first sub-image, the intermediate image, and the second sub-image are combined to create the video.
18. The system of claim 14, wherein the processor is further configured to: receive a coded pattern for reading the plurality of rows of the array, wherein the coded pattern is an interlaced pattern in which readout time for each row is uniformly distributed into the plurality of sub-images.
19. The system of claim 14, wherein the processor is further configured to: receive a coded pattern for reading the plurality of rows of the array, wherein the coded pattern is a staggered pattern in which an order of reading out a subset of the plurality of rows is reversed for the subset.
20. A system for reading an image of a scene, the system comprising: an image sensor comprising an array having a plurality of rows; and at least one processor programmed to: obtain a first image of the scene at a first exposure; determine an adjusted exposure time for each of the plurality of rows of the array based at least in part on scene radiance in the first image of the scene, wherein the adjusted exposure time includes a first exposure time for a first row of the plurality of rows and a second exposure time for a second row of the plurality of rows and wherein the first exposure time is adjusted to be different from the second exposure time; obtain a second image of the scene using the first exposure time for the first row of the plurality of rows of the array and the second exposure time for the second row of the plurality of rows of the array; and generate a high dynamic range image based on image data from at least the first row and the second row of the plurality of rows.
21. The system of claim 20, wherein the at least one processor is further programmed to normalize image data of the second image of the scene with respect to the adjusted exposure time applied to each of the plurality of rows of the array.
22. The system of claim 20, wherein the at least one processor is further programmed to determine readout times of the plurality of rows of the array.
23. The system of claim 22, wherein the at least one processor is further programmed to extract a plurality of sub-images from the image using the determined readout times and the adjusted exposure times, wherein the plurality of sub-images are uniformly distributed between the plurality of rows in the array.
24. The system of claim 23, wherein the at least one processor is further programmed to: estimate optical flow between a first sub-image and a second sub-image of the plurality of sub-images, wherein estimating the optical flow comprises estimating the optical flow based on a motion of brightness patterns between the first sub-image and the second sub-image or estimating the optical flow by comparing a position of particular points in the first sub-image and the second sub-image; determine motion information based at least in part on the estimated optical flow; and apply the determined motion information to enhance the high dynamic range image that is generated by combining the plurality of sub-images extracted from the image.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
(26) In accordance with various embodiments, mechanisms for coded rolling shutter are provided. In some embodiments, mechanisms are provided that control the readout timing and exposure length for each row of a pixel array in an image sensor, thereby flexibly sampling the three-dimensional space-time value of a scene and capturing sub-images that effectively encode motion and dynamic range information within a single captured image. Instead of sending out the row-reset and row-select signals sequentially, signals can be sent using a coded pattern.
(27) In some embodiments, the readout timing can be controlled by providing an interlaced readout pattern, such as the interlaced readout pattern shown in
(28) In some embodiments, additionally or alternatively to controlling the readout timing, mechanisms are provided for controlling the exposure length. As described herein, an optimal exposure for each row of a pixel array can be determined. An illustrative coded exposure pattern, where an optimal exposure has been determined for each row of a pixel array, is shown in
(29) Upon exposing each of the plurality of rows in an image sensor using a coded pattern (e.g., an interlaced readout pattern, a staggered readout pattern, a coded exposure pattern, and/or any suitable combination thereof), a single image is captured that encodes motion and/or dynamic range. From this image, multiple sub-images can be read out or extracted from different subsets of the rows of the pixel array. The multiple sub-images and/or other information encoded in the single captured image can be used to determine optical flow, generate a skew-free image, generate a slow motion video, generate a high dynamic range (HDR) image, etc.
(30) These coded rolling shutter mechanisms can be implemented in an image sensor (such as the image sensor shown in
(31) It should be noted that these mechanisms can be used in a variety of applications, such as skew compensation, recovering slow motion in images of moving objects, high-speed photography, and high dynamic range (HDR) imaging. For example, these mechanisms can be used to improve sampling over the time dimension for high-speed photography. In another example, the coded rolling shutter mechanisms can be used to estimate optical flow, which can be useful for recovering slow motion in an image of a moving object, generating a skew-free image, or removing motion blur in an image due to camera shake. In yet another example, these mechanisms can be used to control readout timing and exposure length to capture high dynamic range images from a single captured image. In a further example, these mechanisms can be used to control readout timing and exposure length to recover a skew-free video from a single captured image.
(32) Referring back to
(33) In accordance with some embodiments of the disclosed subject matter, the reset timing, t.sub.s(y), and the readout timing, t.sub.r(y), can be controlled by the address generator. Accordingly, the address generator can also control the exposure time, Δt.sub.e. For example,
(34) It should be noted that controlling the readout timing and exposure length for the rows of an image sensor provides greater flexibility for sampling the three-dimensional space-time volume of a scene into a single two-dimensional image. To illustrate this, let E(x,y,t) denote the radiance of a scene point (x,y) at time t, and let S(x,y,t) denote the shutter function of a camera. The captured image, I(x,y), can then be represented as:
I(x,y)=∫.sub.−∞.sup.∞E(x,y,t).Math.S(x,y,t)dt
(35) The coded rolling shutter mechanisms described herein extend the shutter function, S(x,y,t), to two dimensions as both readout timing, t.sub.r(y), and the exposure time, Δt.sub.e, can be row-specific. That is, the shutter function for coded rolling shutter is a function of time and image row index. It should be noted that this is unlike conventional rolling shutter and other shutter mechanisms that are merely one-dimensional functions of time.
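The image-formation model above can be illustrated with a small discrete sketch. The function and array shapes below are illustrative assumptions, not part of the patent; the integral is approximated by a sum over discretized time slots.

```python
import numpy as np

def capture(E, S, dt=1.0):
    """Discrete form of I(x,y) = integral of E(x,y,t) * S(x,y,t) dt:
    multiply the radiance volume E (H, W, T) by the shutter function
    S (H, W, T) and sum over the time axis."""
    return (E * S).sum(axis=2) * dt

# A 4x4 scene sampled at 8 time steps with constant radiance 2.0,
# captured through a shutter that is open for the first 3 time steps.
E = np.full((4, 4, 8), 2.0)
S = np.zeros((4, 4, 8))
S[:, :, :3] = 1.0
I = capture(E, S)  # every pixel integrates 2.0 over 3 open slots
```

A row-wise coded shutter simply makes `S` differ from row to row, which is the extension to two dimensions described in the following paragraph.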
(36) As mentioned previously, because there is one row of readout circuits, the readout timings for different rows cannot overlap. This imposes a constraint on readout timing, t.sub.r(y). More particularly, for an image sensor with M rows, the total readout time for one frame is MΔt.sub.r. Each readout timing pattern corresponds to a one-to-one assignment of the M readout timing slots to the M rows. The assignment for conventional rolling shutter is shown in
(37) In accordance with some embodiments, these coded rolling shutter mechanisms can be used to better sample the time dimension by controlling (e.g., shuffling) the readout timings, t.sub.r(y), among rows. For example, in some embodiments, an interlaced readout pattern, such as the one shown in
(38)
(39) It should be noted that, although
(40) For the interlaced readout pattern, the readout timing, t.sub.r(y) for the y-th row can be represented as follows:
(41)
where the image sensor has a total of M rows and where └.Math.┘ is the floor function.
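The readout equation itself is not reproduced above, but one plausible assignment consistent with the description (readout slots uniformly distributed into K sub-images, each sub-image occupying a contiguous block of M/K slots) can be sketched as follows. The exact slot assignment is an assumption for illustration.

```python
def interlaced_readout_times(M, K, dt_r=1.0):
    """One plausible interlaced readout assignment: rows y with
    y % K == k form sub-image k, and sub-image k occupies the k-th
    block of M/K consecutive readout slots.  Assumes M divisible by K.
    Returns t_r(y) for every row y."""
    assert M % K == 0
    return [((y % K) * (M // K) + y // K) * dt_r for y in range(M)]

times = interlaced_readout_times(M=8, K=2)
# Row 0 (sub-image 0) reads in slot 0; row 1 (sub-image 1) in slot 4.
```

Because the result is a permutation of the M slots, the non-overlap constraint on t.sub.r(y) is respected.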
(42) In some embodiments, the interlaced readout pattern can be used for skew compensation and high speed photography. More particularly, an address generator, such as address generator 110 of
(43) Regarding skew in the sub-images, it should be noted that the time lag between the top row and the bottom row of each sub-image is MΔt.sub.r/K, where M is the total number of rows in the image sensor, Δt.sub.r is the readout time, and K is the number of sub-images. As described above, for conventional rolling shutter, the time lag between the top row and the bottom row for one frame is MΔt.sub.r. Thus, the skew in these sub-images is 1/K of the skew in conventional rolling shutter, a substantial reduction.
(44) In addition, it should be noted that the time lag between two consecutive sub-images is also reduced to MΔt.sub.r/K. Thus, the effective frame rate across the K sub-images is increased by a factor of K relative to the frame rate of an image captured using conventional rolling shutter.
(45) Illustrative examples of the sub-images obtained using the interlaced readout pattern of
(46) It should also be noted that cubic interpolation or any other suitable approach can be used to resize sub-images 820 and 830 vertically to full resolution. For example, for a captured input image with a resolution of 640×480 (M=480 and K=2), each sub-image read out from the captured input image has a resolution of 640×240. Note that, using the interlaced readout pattern, full resolution in the horizontal direction is preserved. Each sub-image can be resized to 640×480 or full resolution by interpolation. The interpolated images are represented by solid lines 910 and 920 (
(47) In some embodiments, the extracted sub-images can be used to estimate optical flow. As described above, cubic interpolation or any other suitable interpolation approach can be used to resize the two sub-images I.sub.1 and I.sub.2 vertically to full resolution (shown as solid lines 910 and 920 in
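The sub-image extraction and vertical resizing described above can be sketched as follows. Linear interpolation stands in for the cubic interpolation mentioned in the text, a simplification to keep the sketch dependency-free; the shapes and values are illustrative.

```python
import numpy as np

def extract_subimages(img, K):
    """Split a coded capture into K sub-images by taking every K-th row."""
    return [img[k::K, :] for k in range(K)]

def resize_rows(sub, M):
    """Interpolate a sub-image back up to M rows, column by column.
    (The text uses cubic interpolation; linear is used here for brevity.)"""
    rows_in = np.linspace(0.0, 1.0, sub.shape[0])
    rows_out = np.linspace(0.0, 1.0, M)
    return np.stack([np.interp(rows_out, rows_in, sub[:, x])
                     for x in range(sub.shape[1])], axis=1)

# 480-row capture with K=2: each sub-image has 240 rows, full
# horizontal resolution, and is resized back to 480 rows.
img = np.arange(480 * 4, dtype=float).reshape(480, 4)
subs = extract_subimages(img, K=2)
full = resize_rows(subs[0], M=480)
```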
(48) Using the determined optical flow, intermediate images within a shaded area 930 and between the interpolated sub-images (I.sub.1 and I.sub.2) can be determined using bidirectional interpolation. For example, in some embodiments, intermediate images can be determined using the following equation:
I.sub.w(p)=(1−w)I.sub.1(p−wu.sub.w(p))+wI.sub.2(p+(1−w)u.sub.w(p)),
where 0≦w≦1, p=(x,y) represents one pixel, and u.sub.w(p) is the forward-warped optical flow computed as u.sub.w (p+wu.sub.0(p))=u.sub.0(p). For example, as shown in
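The bidirectional interpolation equation above can be sketched directly. Nearest-neighbour sampling and wrap-around indexing are simplifying assumptions made to keep the sketch short; a practical implementation would use sub-pixel sampling.

```python
import numpy as np

def blend_frame(I1, I2, flow, w):
    """Evaluate I_w(p) = (1-w)*I1(p - w*u_w(p)) + w*I2(p + (1-w)*u_w(p))
    for every pixel p, where flow holds the per-pixel vector (dy, dx)."""
    H, W = I1.shape
    out = np.empty_like(I1)
    for y in range(H):
        for x in range(W):
            dy, dx = flow[y, x]
            y1 = int(round(y - w * dy)) % H        # sample point in I1
            x1 = int(round(x - w * dx)) % W
            y2 = int(round(y + (1 - w) * dy)) % H  # sample point in I2
            x2 = int(round(x + (1 - w) * dx)) % W
            out[y, x] = (1 - w) * I1[y1, x1] + w * I2[y2, x2]
    return out

# With zero flow, the midpoint frame is the plain average of I1 and I2.
I1 = np.zeros((4, 4))
I2 = np.ones((4, 4))
mid = blend_frame(I1, I2, np.zeros((4, 4, 2)), w=0.5)
```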
(49) In some embodiments, a skew-free image can be interpolated from the obtained sub-images. As shown in
(50) To illustrate the skew compensation feature of the disclosed subject matter,
(51) In addition to compensating for skew, in some embodiments, the sub-images read out from the single captured image using the interlaced readout pattern of
(52) It should be noted that motion blur due to camera shake is a common problem in photography. Merely pressing the shutter release button can itself shake the camera and, unfortunately, blur the resulting image. The compact form and small lenses of many digital cameras only serve to exacerbate the camera shake problem.
(53) In some embodiments, the sub-images and the estimated optical flow from the sub-images can be used to remove motion blur from a single image caused by camera shake or any other suitable motion (sometimes referred to herein as “motion de-blurring”). As described above, cubic interpolation or any other suitable interpolation approach can be used to resize the two sub-images I.sub.1 and I.sub.2 vertically to full resolution (see, e.g.,
(54) Alternatively, in accordance with some embodiments, the readout timing of the image sensor can be controlled by implementing a staggered readout pattern, such as the staggered readout pattern shown in
(55) In a more particular example,
(56) For the staggered readout pattern, the readout timing, t.sub.r(y), for the y-th row can be represented as follows:
(57)
(58) It should be noted that the time lag within each sub-image for staggered readout is (M−K+1)Δt.sub.r.
(59) It should also be noted that the time lag between two consecutive sub-images is Δt.sub.r, which corresponds to a substantially higher frame rate than the frame rate achieved using conventional rolling shutter. The readout time, Δt.sub.r, is generally between about 15 microseconds (15×10.sup.−6 seconds) and about 40 microseconds (40×10.sup.−6 seconds). Accordingly, an image sensor that uses staggered readout can be used for ultra-high speed photography of time-critical events, such as a speeding bullet, a bursting balloon, a foot touching the ground, etc.
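The staggered readout equation is not reproduced above, but one plausible assignment consistent with the description (and with claim 7, which reverses the readout order within each subset of rows) can be sketched as follows. The exact slot assignment is an assumption for illustration.

```python
def staggered_readout_times(M, K, dt_r=1.0):
    """One plausible staggered readout assignment: within each block of
    K consecutive rows, the readout order is reversed, so row y reads
    in slot (y // K) * K + (K - 1 - y % K).  Sub-image k collects rows
    with y % K == k, and consecutive sub-images start only dt_r apart."""
    return [((y // K) * K + (K - 1 - y % K)) * dt_r for y in range(M)]

times = staggered_readout_times(M=8, K=2)
# Sub-image 1 (rows 1, 3, 5, 7) starts in slot 0; sub-image 0 in slot 1.
```

Again the result is a permutation of the M slots, so the single-readout-row constraint is respected while the inter-sub-image lag shrinks to Δt.sub.r.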
(60)
(61) Generally speaking, high dynamic range (HDR) imaging requires either multiple images of a particular scene that are taken with different exposures or specially-designed image sensors and/or hardware. Capturing multiple images of a particular scene with different exposures requires a static scene and a stable camera to avoid ghosting and/or motion blur. Specially-designed image sensors, on the other hand, are expensive. Accordingly, these constraints generally make high dynamic range imaging inconvenient or impractical, especially for handheld consumer cameras.
(62) In accordance with some embodiments, high dynamic range images can be obtained from a single captured image by controlling the exposure length, Δt.sub.e(y), for each row of the pixel array. Moreover, as described herein, by controlling readout timing, Δt.sub.r(y), and the exposure length, Δt.sub.e(y), for each row of the pixel array, high dynamic range images, where motion blur is substantially removed, can be obtained from a single captured image.
(63)
(64) In some embodiments, an optimal exposure for each row of the pixel array can be determined (sometimes referred to herein as “adaptive row-wise auto-exposure”) using a process 1500 as illustrated in
(65) In response to capturing the temporary image, an optimal exposure can be determined for each row of the pixel array at 1520. Generally speaking, an optimal exposure for a given row can be determined that minimizes the number of saturated and under-exposed pixels within the row while maintaining a substantial number of well-exposed pixels. As shown in
(66) It should be noted that scene radiance can be measured everywhere except in the saturated regions, where no information is recorded. It should also be noted that a small value for the scale factor, s, corresponds to a conservative auto-exposure algorithm.
(67) Accordingly, the optimal exposure, Δt.sub.e(y), for the y-th row can be found by maximizing the following equation:
(68)
where μ(i) can be defined as:
μ(i)=μ.sub.s(i)+λ.sub.dμ.sub.d(i)+λ.sub.gμ.sub.g(i),
which includes weights λ.sub.d and λ.sub.g and lower and upper bounds of exposure adjustment Δt.sub.l and Δt.sub.u.
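The row-wise exposure search described above can be sketched as a small grid search. The score below (well-exposed count minus a weighted penalty for saturated and under-exposed pixels) and its thresholds are illustrative assumptions; the patent's μ.sub.s, μ.sub.d, and μ.sub.g terms and λ weights are not given numerically.

```python
import numpy as np

def row_exposures(metering, candidates, lo=0.05, hi=0.95):
    """For each row of a normalized metering image, pick the candidate
    exposure scale maximizing a score that rewards well-exposed pixels
    and penalizes saturated and under-exposed ones (weights assumed)."""
    exposures = []
    for row in metering:
        best, best_score = candidates[0], -np.inf
        for s in candidates:
            v = np.clip(row * s, 0.0, 1.0)
            well = np.sum((v > lo) & (v < hi))
            bad = np.sum(v >= hi) + np.sum(v <= lo)
            score = well - 0.1 * bad
            if score > best_score:
                best, best_score = s, score
        exposures.append(best)
    return exposures

# A dark row gets a longer exposure; an already well-exposed row keeps 1x.
metering = np.array([[0.02, 0.03, 0.04],
                     [0.40, 0.50, 0.60]])
exp = row_exposures(metering, candidates=[1.0, 4.0, 16.0])
```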
(69) Referring back to
(70) In some embodiments, the second image (I.sub.c) can be normalized to generate the final output image (I.sub.r) at 1540. For example, in some embodiments, the second image can be normalized by dividing the second image by the row-wise exposure, Δt.sub.e(y). Accordingly:
(71)
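The normalization step at 1540 divides each row of the coded capture by that row's exposure so all rows share a common radiometric scale. A minimal sketch (shapes and values illustrative):

```python
import numpy as np

def normalize(I_c, dt_e):
    """Divide every row y of the coded capture I_c by its row-wise
    exposure dt_e[y], yielding the normalized output I_r."""
    return I_c / np.asarray(dt_e)[:, None]

I_c = np.array([[2.0, 4.0],
                [3.0, 6.0]])
I_r = normalize(I_c, dt_e=[2.0, 3.0])  # both rows map to [1.0, 2.0]
```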
(72) Illustrative examples of the images and exposures obtained using process 1500 of
(73) It should be noted that the adaptive row-wise auto-exposure mechanism described above requires little to no image processing. However, in some embodiments, additional post-processing, such as de-noising, can be performed. For example, noise amplification along the vertical direction, which can be derived from the exposure patterns, can be considered. In another example, for scenes where the dynamic range is predominantly spanned in the horizontal direction (e.g., a dark room that is being viewed from the outside), the adaptive row-wise auto-exposure mechanism can revert the imaging device to use a conventional auto-exposure feature.
(74) In some embodiments, high dynamic range images can be obtained from a single captured image using the above-mentioned adaptive row-wise auto-exposure approach with the previously described coded readout pattern. Using adaptive row-wise auto-exposure to determine optimal exposure for each row in the pixel array along with a coded readout pattern, multiple exposures can be coded into a single captured image and planar camera motion can be estimated to remove blur due to camera shake.
(75)
(76) These sub-images (I.sub.1, I.sub.2, and I.sub.3) can be used to compose a high dynamic range image. For example, for static scenes/cameras, an output high dynamic range image can be produced by combining the sub-images of multiple exposures.
(77) In addition, in some embodiments, these sub-images (I.sub.1, I.sub.2, and I.sub.3) obtained from using coded pattern 1900 can be used to compose a high dynamic range image and remove motion blur due to camera shake as shown in process flow 1950. For example, motion blur due to camera shake is a common problem in photography, and the compact form and small lenses of handheld digital cameras only serve to exacerbate it. For handheld digital cameras, motion blur in images caused by camera shake is inevitable, especially for long exposure times. Accordingly, in some embodiments, where camera shake is an issue, optical flow can be determined between the sub-images to account for the camera shake.
(78) It should be noted that, as the sub-images are obtained using a staggered readout, the time lag between the sub-images is small. Therefore, the camera shake velocity can generally be the same in the sub-images. It should be also noted that, within one frame time, the amount of motion caused by camera shake is small and can be approximated as a planar motion.
(79) In some embodiments, the sub-images, which are sampled at different timings, and the estimated optical flow from the sub-images can be used to remove motion blur from a single image caused by camera shake or any other suitable motion as shown in flow 1950. A motion vector, {right arrow over (u)}=[u.sub.x, u.sub.y], can be estimated from sub-images I.sub.1 and I.sub.2 by the estimated optical flow:
{right arrow over (u)}=average(computeFlow(I.sub.1,I.sub.2−I.sub.1))
The motion vector can be used to determine blur kernels. More particularly, by de-blurring two composed images, I.sub.1⊕I.sub.2 and I.sub.1⊕I.sub.2⊕I.sub.3, ringing can be effectively suppressed, where the operator ⊕ denotes that the images are first center-aligned with the motion vector, {right arrow over (u)}, and then added together. The two de-blurred images can be represented as:
I.sub.b1=deblur(I.sub.1⊕I.sub.2,{right arrow over (u)},Δt.sub.e1,Δt.sub.e2)
I.sub.b2=deblur(I.sub.1⊕I.sub.2⊕I.sub.3,{right arrow over (u)},Δt.sub.e1,Δt.sub.e2,Δt.sub.e3)
Accordingly, the output de-blurred high dynamic range (HDR) image can be calculated by the following:
(80)
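The center-alignment operator ⊕ used above can be sketched as follows. Integer shifts via `np.roll` stand in for sub-pixel alignment, and the deconvolution step itself is omitted; both are simplifying assumptions for illustration.

```python
import numpy as np

def center_align_add(images, u):
    """Center-align a stack of sub-images along the motion vector
    u = (uy, ux) and sum them: an illustrative stand-in for the
    patent's ⊕ operator.  Shifts are spread symmetrically about the
    middle of the stack so the result is centered."""
    n = len(images)
    out = np.zeros_like(images[0], dtype=float)
    for i, img in enumerate(images):
        shift = (n - 1) / 2.0 - i
        dy = int(round(shift * u[0]))
        dx = int(round(shift * u[1]))
        out += np.roll(img, (dy, dx), axis=(0, 1))
    return out

# A diagonal pattern drifting one pixel per sub-image realigns exactly.
I1 = np.roll(np.eye(6), -1, axis=1)  # one pixel left of center
I2 = np.eye(6)
I3 = np.roll(np.eye(6), 1, axis=1)   # one pixel right of center
stacked = center_align_add([I1, I2, I3], u=(0.0, 1.0))
```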
(81) It should be noted that the optimal exposure ratios Δt.sub.e3:Δt.sub.e2:Δt.sub.e1 (e.g., 8Δt.sub.e1:2 Δt.sub.e1:Δt.sub.e1) can be determined based at least in part on the desired extended dynamic range and the noise amplification due to motion de-blurring. For example, a larger Δt.sub.e3:Δt.sub.e1 exposure ratio provides a larger extended dynamic range, but can also amplify noise during motion de-blurring.
(82) Illustrative examples of the coded input image, sub-images, and output high dynamic range image obtained using the staggered readout and multiple exposure coding 1900 of
(83) Although
(84) It should be noted that, as described above in connection with interlaced readout patterns, the sub-images obtained using coded pattern 2100 have substantially less skew than images obtained using conventional rolling shutter. Accordingly, coded pattern 2100 of
(85) In accordance with some embodiments, mechanisms are provided for controlling exposure length and readout times that can recover a skew-free video from a single captured image. Generally speaking, by modeling the scene brightness for one pixel (x,y) over time t as a one-dimensional signal, the corresponding pixel intensity in the captured image is a linear projection of this one-dimensional signal with the exposure pattern. Accordingly, with randomly coded exposure patterns, space-time volume (e.g., a skew-free video) can be reconstructed from a single captured image by exploiting the sparsity in signal gradients.
(86) In a more particular example, a skew-free video can be recovered from a single captured image using compressive sensing techniques. Compressive sensing techniques provide an approach for reconstructing sparse signals from far fewer samples than required by other techniques, such as the Shannon sampling theorem. As described above, the captured image I (x,y) can be described as a line-integral measurement of the space-time volume E (x,y,t). Accordingly, by controlling the shutter function, S (x,y,t), from a single image, several measurements can be acquired within neighboring rows to recover the space-time volume E (x,y,t).
(87) Consider the time-varying appearance of a given pixel (x,y) within one time frame E (x,y,t), where 0≦t≦MΔt.sub.r. This can be discretized into a P-element vector, where
(88)
As shown in the coded pattern 2200 of
(89) If b denotes the intensities of the K neighboring pixels in the input image I, b can be represented as:
(90)
Accordingly, Ax=b, where A is a K×P matrix representing the coding patterns.
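The linear measurement model Ax=b can be sketched directly: each of the K neighboring pixels integrates the P-sample time signal x through its own row's shutter, so row k of A is that shutter's 0/1 pattern scaled by the time step. The particular shutter patterns below are illustrative assumptions.

```python
import numpy as np

def coding_matrix(shutters, dt=1.0):
    """Build the K x P measurement matrix A from K binary shutter
    patterns of length P (one per neighboring row)."""
    return np.asarray(shutters, dtype=float) * dt

shutters = [[1, 1, 0, 0],   # this row is exposed during slots 0-1
            [0, 1, 1, 0],   # this row during slots 1-2
            [0, 0, 1, 1]]   # this row during slots 2-3
A = coding_matrix(shutters)

x = np.array([1.0, 2.0, 3.0, 4.0])  # time-varying appearance of a pixel
b = A @ x                           # the K coded intensities observed
```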
(91) The process for recovering a skew-free video from a single captured image begins by obtaining an initial estimate E.sub.0 using, for example, block matching and linear interpolation. In a more particular example, the input image can be normalized by dividing by the exposure such that: I.sub.n(x,y)=I(x,y)/Δt.sub.e(y). Each pixel (x,y) in the normalized image (I.sub.n) corresponds to a set of sampled voxels in the initial estimate, E.sub.0(x,y,t), where t.sub.s(y)≦t≦t.sub.r(y). These sampled voxels can be used to fill in portions of the initial estimate, E.sub.0.
(92)
(93) In some embodiments, a particular voxel can be interpolated multiple times. If a particular voxel is interpolated multiple times, the value of that voxel can be set to the result computed from the matched pair with the minimum matching error. This fills in a substantial portion of the initial estimate, E.sub.0. The remaining voxels can then be initialized to the values in the normalized image (I.sub.n(x,y)) at the corresponding rows.
(94) The initial estimate, E.sub.0, can then be used to reconstruct the time-varying appearance x for each pixel by exploiting the sparsity in the gradient of the pixel's radiance over time:
min|x′|+λ|x−x.sub.0|
where |x′| is the L1 norm of the gradient of x over time, λ is a weight parameter, and x.sub.0 is the corresponding signal in E.sub.0. An optimization using the initial estimate can be run multiple times (e.g., twice). The output of the first iteration can be used to adaptively adjust the values of K for different rows. It should be noted that for rows with large variance in the recovered appearance over time, the K value can be lowered, and vice versa. The adjustment is performed based on a precomputed or predetermined mapping between K values, variances of appearance over time, and reconstruction errors. Multiple iterations can continue to be performed and, in some embodiments, a median filter can be applied prior to generating the final output.
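The reconstruction objective can be written out as code. The function below merely evaluates the objective; an actual solver (e.g., proximal-gradient total-variation minimization subject to Ax=b) is beyond this sketch, and the example signals are illustrative.

```python
import numpy as np

def objective(x, x0, lam):
    """Evaluate |x'| + lam * |x - x0|: the L1 norm of the temporal
    gradient of x plus a weighted L1 data term against the initial
    estimate x0."""
    grad = np.diff(x)  # discrete gradient of x over time
    return np.abs(grad).sum() + lam * np.abs(x - x0).sum()

x0 = np.array([0.0, 1.0, 1.0, 2.0])    # initial estimate for one pixel
flat = np.array([1.0, 1.0, 1.0, 1.0])  # a candidate with zero gradient
# A flat candidate trades data fidelity for gradient sparsity; the
# solver would search for the x minimizing this total cost.
```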
(95)
(96) In some embodiments, if a frame buffer is available on the CMOS image sensor, intermittent exposures can be implemented for each pixel, where each pixel can receive multiple row-select and row-reset signals during one frame as shown in
(97) In some embodiments, hardware used in connection with the coded mechanisms can include an image capture device. The image capture device can be any suitable device for capturing images and/or video, such as a portable camera, a video camera or recorder, a computer camera, a scanner, a mobile telephone, a personal data assistant, a closed-circuit television camera, a security camera, an Internet Protocol camera, etc.
(98) The image capture device can include an image sensor, such as the image sensor shown in
(99) In some embodiments, the hardware can also include an image processor. The image processor can be any suitable device that can process images and image-related data as described herein. For example, the image processor can be a general purpose device such as a computer or a special purpose device, such as a client, a server, an image capture device (such as a camera, video recorder, scanner, mobile telephone, personal data assistant, etc.), etc. It should be noted that any of these general or special purpose devices can include any suitable components such as a processor (which can be a microprocessor, digital signal processor, a controller, etc.), memory, communication interfaces, display controllers, input devices, etc.
(100) In some embodiments, the hardware can also include image storage. The image storage can be any suitable device for storing images such as memory (e.g., non-volatile memory), an interface to an external device (such as a thumb drive, a memory stick, a network server, or other storage or target device), a disk drive, a network drive, a database, a server, etc.
(101) In some embodiments, any suitable computer readable media can be used for storing instructions for performing the processes described herein. For example, in some embodiments, computer readable media can be transitory or non-transitory. For example, non-transitory computer readable media can include media such as magnetic media (such as hard disks, floppy disks, etc.), optical media (such as compact discs, digital video discs, Blu-ray discs, etc.), semiconductor media (such as flash memory, electrically programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), etc.), any suitable media that is not fleeting or devoid of any semblance of permanence during transmission, and/or any suitable tangible media. As another example, transitory computer readable media can include signals on networks, in wires, conductors, optical fibers, circuits, any suitable media that is fleeting and devoid of any semblance of permanence during transmission, and/or any suitable intangible media.
(102) Accordingly, methods and systems for coded readout of an image are provided.
(103) Although the invention has been described and illustrated in the foregoing illustrative embodiments, it is understood that the present disclosure has been made only by way of example, and that numerous changes in the details of implementation of the invention can be made without departing from the spirit and scope of the invention, which is only limited by the claims which follow. Features of the disclosed embodiments can be combined and rearranged in various ways.