MOTION COMPENSATION FOR IMAGE SENSOR WITH A BLOCK BASED ANALOG-TO-DIGITAL CONVERTER

20170195574 · 2017-07-06

    Abstract

    An electronic device including a motion sensor and motion correction circuitry. The motion sensor detects motion of the electronic device and outputs motion information based on the detected motion. The motion correction circuitry corrects raw domain image data, received from an image sensor that has a block-based analog-to-digital converter architecture, based on the motion information. The corrected image data is output in a same raw domain image data format as the uncorrected raw domain image data.

    Claims

    1. An electronic device comprising: a motion sensor configured to detect motion of the electronic device, and output motion information based on the motion that is detected; and motion correction circuitry configured to receive raw domain image data from an image sensor that has a block-based analog-to-digital converter architecture and is separate from the motion sensor, and correct the raw domain image data based on the motion information.

    2. The electronic device of claim 1, wherein the motion correction circuitry is configured to correct pixel values of the raw domain image data by: determining uncorrected locations, in a two-dimensional space, of the pixel values based on a data format of the raw domain image data, the uncorrected locations forming a regular grid, determining a corrected location, in the two-dimensional space, for each of the pixel values based on the pixel values' respective uncorrected locations and the motion information, generating a corrected pixel value for each node of the regular grid based on the pixel values and their respective corrected locations, and outputting corrected pixel values as a motion-corrected frame in the data format of the raw domain image data.

    3. The electronic device of claim 2, wherein the motion correction circuitry is configured to generate a given one of the corrected pixel values by performing an interpolation utilizing at least one of the pixel values and a corresponding corrected location.

    4. The electronic device of claim 2, wherein the motion correction circuitry is configured to generate a given one of the corrected pixel values by performing bi-linear interpolation utilizing four of the pixel values and their respective corrected locations.

    5. An electronic device comprising: a motion sensor configured to detect motion of the electronic device and output motion information based on the detected motion; and motion correction circuitry configured to correct raw domain image data, received from an image sensor that has a block-based analog-to-digital converter architecture, based on the motion information, wherein the motion correction circuitry is configured to correct pixel values of the raw domain image data by: determining uncorrected locations, in a two-dimensional space, of the pixel values based on a data format of the raw domain image data, the uncorrected locations forming a regular grid, determining a corrected location, in the two-dimensional space, for each of the pixel values based on the pixel values' respective uncorrected locations and the motion information, generating a corrected pixel value for each node of the regular grid based on the pixel values and their respective corrected locations, and outputting corrected pixel values as a motion-corrected frame in the data format of the raw domain image data, wherein the motion correction circuitry is configured to generate a given one of the corrected pixel values by performing bi-linear interpolation utilizing four of the pixel values and their respective corrected locations, and wherein each of the pixel values in the raw domain image data corresponds to one of a plurality of colors according to a color pattern associated with the data format of the raw domain image data, each of the corrected pixel values in the motion-corrected frame corresponds to one of the plurality of colors according to the color pattern, and the four of the pixel values that are utilized by the motion correction circuitry in the bi-linear interpolation for the given one of the corrected pixel values correspond to a same one of the plurality of colors that the given one of the corrected pixel values corresponds to.

    6. An electronic device comprising: a motion sensor configured to detect motion of the electronic device and output motion information based on the detected motion; and motion correction circuitry configured to correct raw domain image data, received from an image sensor that has a block-based analog-to-digital converter architecture, based on the motion information, wherein the motion correction circuitry is configured to correct pixel values of the raw domain image data by: determining uncorrected locations, in a two-dimensional space, of the pixel values based on a data format of the raw domain image data, the uncorrected locations forming a regular grid, determining a corrected location, in the two-dimensional space, for each of the pixel values based on the pixel values' respective uncorrected locations and the motion information, generating a corrected pixel value for each node of the regular grid based on the pixel values and their respective corrected locations, and outputting corrected pixel values as a motion-corrected frame in the data format of the raw domain image data, and wherein the block-based analog-to-digital converter architecture includes a plurality of blocks that each include N light sensing elements that share an analog-to-digital converter, N>1, the N light sensing elements in each block being exposed sequentially in N exposure phases per frame period, each of the pixel values corresponds to one of the exposure phases, the motion sensor is configured to output the motion information as motion vectors that each correspond to one of the exposure phases, and the motion correction circuitry is configured to determine the corrected location of each of the pixel values based on the one of the motion vectors that corresponds to the one of the exposure phases to which the respective one of the pixel values corresponds.

    7. The electronic device of claim 6, wherein N=4, each of the pixel values in the raw domain image data corresponds to one of four colors according to a color pattern associated with the data format of the raw domain image data, each of the corrected pixel values in the motion-corrected frame corresponds to one of the four colors according to the color pattern, and each of the phases corresponds to one of the four colors.

    8. The electronic device of claim 7, wherein the motion correction circuitry is configured to generate a given one of the corrected pixel values by performing bi-linear interpolation utilizing four of the pixel values and their respective corrected locations, the four of the pixel values corresponding to a same one of the four colors that the given one of the corrected pixel values corresponds to.

    9. The electronic device of claim 6, wherein each of the pixel values in the raw domain image data corresponds to one of a plurality of colors according to a color pattern associated with the data format of the raw domain image data, each of the corrected pixel values in the motion-corrected frame corresponds to one of the plurality of colors according to the color pattern, each of the phases corresponds to one of the plurality of colors, and the motion correction circuitry is configured to generate each of the corrected pixel values by performing interpolations, each of the interpolations utilizing only ones of the pixel values that correspond to a same one of the plurality of colors that the one of the corrected pixel values that is being generated corresponds to.

    10. An image sensing device, comprising: an image sensor that has a block-based analog-to-digital converter architecture and is configured to output raw domain image data; a motion sensor that is separate from the image sensor and is configured to detect motion of the image sensing device, and output motion information based on the motion that is detected; and motion correction circuitry configured to receive the raw domain image data from the image sensor, and correct the raw domain image data based on the motion information.

    11. The image sensing device of claim 10, wherein the motion correction circuitry is configured to correct pixel values of the raw domain image data by: determining uncorrected locations, in a two-dimensional space, of the pixel values based on a data format of the raw domain image data, the uncorrected locations forming a regular grid, determining a corrected location, in the two-dimensional space, for each of the pixel values based on the pixel values' respective uncorrected locations and the motion information, generating a corrected pixel value for each node of the regular grid based on the pixel values and their respective corrected locations, and outputting corrected pixel values as a motion-corrected frame in the data format of the raw domain image data.

    12. The image sensing device of claim 11, wherein the motion correction circuitry is configured to generate a given one of the corrected pixel values by performing an interpolation utilizing at least one of the pixel values and its corrected location.

    13. The image sensing device of claim 11, wherein the motion correction circuitry is configured to generate a given one of the corrected pixel values by performing bi-linear interpolation utilizing four of the pixel values and their respective corrected locations.

    14. An image sensing device, comprising: an image sensor that has a block-based analog-to-digital converter architecture; a motion sensor configured to detect motion of the image sensing device and output motion information based on the detected motion; and motion correction circuitry configured to correct raw domain image data, received from the image sensor, based on the motion information, wherein the motion correction circuitry is configured to correct pixel values of the raw domain image data by: determining uncorrected locations, in a two-dimensional space, of the pixel values based on a data format of the raw domain image data, the uncorrected locations forming a regular grid, determining a corrected location, in the two-dimensional space, for each of the pixel values based on the pixel values' respective uncorrected locations and the motion information, generating a corrected pixel value for each node of the regular grid based on the pixel values and their respective corrected locations, and outputting corrected pixel values as a motion-corrected frame in the data format of the raw domain image data, wherein the motion correction circuitry is configured to generate a given one of the corrected pixel values by performing bi-linear interpolation utilizing four of the pixel values and their respective corrected locations, and wherein each of the pixel values in the raw domain image data corresponds to one of a plurality of colors according to a color pattern associated with the data format of the raw domain image data, each of the corrected pixel values in the motion-corrected frame corresponds to one of the plurality of colors according to the color pattern, and the four of the pixel values that are utilized by the motion correction circuitry in the bi-linear interpolation for the given one of the corrected pixel values correspond to a same one of the plurality of colors that the given one of the corrected pixel values corresponds to.

    15. An image sensing device, comprising: an image sensor that has a block-based analog-to-digital converter architecture; a motion sensor configured to detect motion of the image sensing device and output motion information based on the detected motion; and motion correction circuitry configured to correct raw domain image data, received from the image sensor, based on the motion information, wherein the motion correction circuitry is configured to correct pixel values of the raw domain image data by: determining uncorrected locations, in a two-dimensional space, of the pixel values based on a data format of the raw domain image data, the uncorrected locations forming a regular grid, determining a corrected location, in the two-dimensional space, for each of the pixel values based on the pixel values' respective uncorrected locations and the motion information, generating a corrected pixel value for each node of the regular grid based on the pixel values and their respective corrected locations, and outputting corrected pixel values as a motion-corrected frame in the data format of the raw domain image data, and wherein the block-based analog-to-digital converter architecture includes a plurality of blocks that each include N light sensing elements that share an analog-to-digital converter, N>1, the N light sensing elements in each block being exposed sequentially in N exposure phases per frame period, each of the pixel values corresponds to one of the exposure phases, the motion sensor is configured to output the motion information as motion vectors that each correspond to one of the exposure phases, and the motion correction circuitry is configured to determine the corrected location of each of the pixel values based on the one of the motion vectors that corresponds to the one of the exposure phases to which the respective one of the pixel values corresponds.

    16. The image sensing device of claim 15, wherein N=4, each of the pixel values in the raw domain image data corresponds to one of four colors according to a color pattern associated with the data format of the raw domain image data, each of the corrected pixel values in the motion-corrected frame corresponds to one of the four colors according to the color pattern, and each of the phases corresponds to one of the four colors.

    17. The image sensing device of claim 16, wherein the motion correction circuitry is configured to generate a given one of the corrected pixel values by performing bi-linear interpolation utilizing four of the pixel values and their respective corrected locations, the four of the pixel values corresponding to a same one of the four colors that the given one of the corrected pixel values corresponds to.

    18. The image sensing device of claim 15, wherein each of the pixel values in the raw domain image data corresponds to one of a plurality of colors according to a color pattern associated with the data format of the raw domain image data, each of the corrected pixel values in the motion-corrected frame corresponds to one of the plurality of colors according to the color pattern, each of the phases corresponds to one of the plurality of colors, and the motion correction circuitry is configured to generate each of the corrected pixel values by performing interpolations, each of the interpolations utilizing only ones of the pixel values that correspond to a same one of the plurality of colors that the one of the corrected pixel values that is being generated corresponds to.

    19. The image sensing device of claim 10, wherein the image sensor is part of a same circuit package as at least one of the motion sensor or the motion correction circuitry.

    20. The image sensing device of claim 10, wherein the image sensor is part of a different circuit package than at least one of the motion sensor or the motion correction circuitry.

    21. An electronic apparatus comprising: the image sensing device of claim 10, and an image signal processor configured to execute predetermined signal processing on motion-corrected raw domain image data output by the motion correction circuitry.

    22. The electronic apparatus of claim 21, wherein the image signal processor is part of a same circuit package as at least one of the image sensor, the motion sensor, or the motion correction circuitry.

    23. The image sensing device of claim 10, further comprising: an image signal processor, wherein the image signal processor is part of a different circuit package than at least one of the image sensor, the motion sensor, or the motion correction circuitry.

    24. A method of controlling a motion correction device, the method comprising: detecting, with a motion sensor, motion of the motion correction device; generating motion information based on the motion that is detected by the motion sensor; receiving raw domain image data from an image sensor that has a block-based analog-to-digital converter architecture and is separate from the motion sensor; and correcting the raw domain image data based on the motion information.

    25. The method of claim 24, further comprising: causing the motion correction device to correct pixel values of the raw domain image data by: determining uncorrected locations, in a two-dimensional space, of the pixel values based on a data format of the raw domain image data, the uncorrected locations forming a regular grid, determining a corrected location, in the two-dimensional space, for each of the pixel values based on the pixel values' respective uncorrected locations and the motion information, and generating a corrected pixel value for each node of the regular grid based on the pixel values and their respective corrected locations.

    26. The method of claim 25, further comprising: outputting the corrected pixel values as a motion-corrected frame in the data format of the raw domain image data.

    27. A non-transitory computer readable medium storing program code executable by a processor of a motion correction device to cause the processor to perform a set of operations comprising: detecting, with a motion sensor, motion of the motion correction device; generating motion information based on the motion that is detected; receiving raw domain image data from an image sensor that has a block-based analog-to-digital converter architecture and is separate from the motion sensor; and correcting the raw domain image data based on the motion information.

    28. The non-transitory computer readable medium of claim 27, wherein the set of operations further includes causing the motion correction device to correct pixel values of the raw domain image data by: determining uncorrected locations, in a two-dimensional space, of the pixel values based on a data format of the raw domain image data, the uncorrected locations forming a regular grid, determining a corrected location, in the two-dimensional space, for each of the pixel values based on the pixel values' respective uncorrected locations and the motion information, and generating a corrected pixel value for each node of the regular grid based on the pixel values and their respective corrected locations.

    29. The non-transitory computer readable medium of claim 28, wherein the set of operations further includes outputting the corrected pixel values as a motion-corrected frame in the data format of the raw domain image data.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0024] These and other more detailed and specific features of the present invention are more fully disclosed in the following specification, reference being had to the accompanying drawings, in which:

    [0025] FIG. 1 is a conceptual diagram illustrating an imaging system 10.

    [0026] FIG. 2 is a conceptual diagram illustrating a block based ADC architecture.

    [0027] FIG. 3 is a timing diagram illustrating a rolling shutter operation of the block based ADC architecture of FIG. 2.

    [0028] FIG. 4 is a conceptual diagram illustrating a PGS architecture.

    [0029] FIG. 5 is a timing diagram illustrating a rolling shutter operation of the PGS architecture of FIG. 4.

    [0030] FIG. 6 is a conceptual diagram illustrating a block based ADC architecture with more pixels per block than exposure phases.

    [0031] FIGS. 7A-C are conceptual diagrams illustrating imaging systems 10A-C.

    [0032] FIG. 8 is a conceptual diagram illustrating a Bayer color filter pattern.

    [0033] FIG. 9 is a conceptual diagram illustrating motion of the image sensor 100A-C.

    [0034] FIG. 10 is a conceptual diagram illustrating locations of objects in a scene and locations in the image of the pixel values corresponding to the objects.

    [0035] FIG. 11 is a process flow chart illustrating an exemplary motion correction process.

    [0036] FIG. 12 is a conceptual diagram illustrating uncorrected locations of pixel values in a two-dimensional space and corrected locations of the pixel values.

    [0037] FIG. 13 is a conceptual diagram illustrating corrected locations of G1 pixel values and grid nodes corresponding to the G1 phase.

    [0038] FIG. 14 is a conceptual diagram illustrating bi-linear interpolation.

    [0039] FIGS. 15A-B illustrate exemplary images taken without motion correction processing and with motion correction processing.

    [0040] FIG. 16 is a conceptual diagram illustrating a block based ADC architecture with sixteen pixels per block and a Bayer color filter pattern.

    [0041] FIG. 17 is a conceptual diagram illustrating motion of the image sensor 100A-C.

    [0042] FIG. 18 is a conceptual diagram illustrating locations of objects in a scene and locations in the image of the pixel values corresponding to the objects.

    [0043] FIG. 19 is a conceptual diagram illustrating uncorrected locations of pixel values in a two-dimensional space and corrected locations of the pixel values.

    [0044] FIG. 20 is a conceptual diagram illustrating corrected locations of R pixel values and grid nodes corresponding to the R phase.

    DETAILED DESCRIPTION OF THE INVENTION

    [0045] In the following description, for purposes of explanation, numerous details are set forth, such as flowcharts and system configurations, in order to provide an understanding of one or more embodiments of the present invention. However, it will be apparent to one skilled in the art that these specific details are not required in order to practice the present invention.

    [Configuration of Imaging System]

    [0046] FIGS. 7A-7C illustrate exemplary imaging systems 10A-10C. Each of the imaging systems 10A-10C includes an optical device 101, an image sensor 100A-100C, and an image signal processor 110. The optical device 101 includes one or more optical elements that guide incident light onto the image sensor 100A-C. For example, the optical device 101 may comprise an objective lens that focuses the light so as to form an image on the image sensor 100A-C, and may additionally or alternatively include zoom lenses, micro lens arrays, and other optical elements as well known in the art. Each of the image sensors 100A-100C includes an array 102 of light sensing elements (pixels) 201, block-ADC circuitry 103, readout circuitry 104, and sequencer/timing control circuitry 108. The light sensing elements 201 of the array 102 and the block-ADC circuitry 103 can be fabricated on different layers and connected via through-silicon vias (TSVs). The readout circuitry 104 interfaces with the output of the block-ADC circuitry 103 and delivers the pixel data to the output of the sensor 100A-100C. The sequencer/timing control circuitry 108 controls the ordering and the timing for reading the pixels in the array 102.

    [0047] In contrast to the imaging system 10 of FIG. 1, the exemplary imaging systems 10A-10C of FIGS. 7A-7C additionally include components for raw domain image data motion correction. Specifically, each of the imaging systems 10A-10C includes at least a motion sensor 701 and motion correction circuitry 702 for performing raw domain image data motion correction. The motion sensor 701 and the motion correction circuitry 702 may be provided in various modular configurations relative to one another and to the other components of the imaging system 10A-10C. In particular, an imaging system may be formed by combining separately prepared modules/circuit packages/devices (which may or may not have been manufactured by different parties); the motion sensor 701 and the motion correction circuitry 702 may be provided in any of a number of such modular configurations. FIGS. 7A-7C illustrate three examples of such modular configurations, but it will be understood that other modular configurations are also possible.

    [0048] In a first exemplary configuration, the motion sensor 701 and motion correction circuitry 702 are provided in a module/circuit-package/device (e.g., motion correction module 700) that is distinct from a module/circuit-package/device of the image sensor 100A. In this configuration, the motion correction module 700 may be manufactured separately from the image sensor 100A, and, for example, the motion correction module 700 and the image sensor 100A may be subsequently combined (along with additional components) as part of assembling an imaging system. For example, FIG. 7A illustrates one exemplary imaging system 10A utilizing the first exemplary configuration in which the motion correction module 700, the image sensor 100A, the optical device 101, and the image signal processor 110 are combined to form the imaging system 10A.

    [0049] In a second exemplary configuration, the motion sensor 701 and motion correction circuitry 702 are provided as part of the same module, circuit-package, or device as the other components of the image sensor 100B, which may be referred to herein as a motion compensated image sensor 100B. This motion compensated image sensor 100B produces output in raw image data format with the image content being free from the artifacts that arise from motion of the image sensor 100B. The motion-compensated raw image data output from the image sensor 100B may be stored as the raw image data for the imaged scene, and/or may be subjected to additional image signal processing, for example by an image signal processor 110. For example, FIG. 7B illustrates one exemplary imaging system 10B utilizing the second exemplary configuration, in which the image sensor 100B, the optical device 101, and the image signal processor 110 are combined to form the imaging system 10B.

    [0050] In a third exemplary configuration, the motion sensor 701, the motion correction circuitry 702, and the image signal processor 110 are provided as part of the same module, circuit-package, or device as the other components of the image sensor 100C. This motion compensated image sensor 100C outputs fully rendered images, such as full RGB data where every pixel includes R, G, and B components. For example, FIG. 7C illustrates one exemplary imaging system 10C utilizing the third exemplary configuration, in which the image sensor 100C and the optical device 101 are combined to form the imaging system 10C.

    [0051] The three exemplary configurations illustrated in FIGS. 7A-7C are provided merely as examples, and it will be understood that other configurations are within the scope of this disclosure. For example, the motion sensor 701 and the motion correction circuitry 702 do not have to be part of the same module, circuit-package, or device as one another or as any other components of the imaging system. Furthermore, the components of the image sensors 100A-100C and the components of the imaging systems 10A-10C discussed above are provided merely as examples, and additional or fewer components may be provided.

    [0052] Furthermore, the imaging systems 10A-10C may be included in a variety of electronic devices. For example, a digital camera may be equipped with the imaging system 10A-10C. As another example, a smart phone may be equipped with the imaging system 10A-10C. As another example, a personal computer may be equipped with the imaging system 10A-10C.

    [0053] The light sensing elements (pixels) 201 can be either monochrome or colored. For example, in an image sensor 100A-100C utilizing colored light sensing elements 201, each light sensing element 201 is covered with a color filter, the filters being arranged in a color mosaic (color pattern). Various embodiments disclosed herein use the Bayer configuration of color filters, which is illustrated in FIG. 8 (although only 16 pixels are illustrated in FIG. 8, it will be understood that the color pattern shown is repeated across the pixel array 102). Although the Bayer configuration is used herein as the primary example of a color configuration, other color filter configurations, which are well known in the art and thus will not be described herein in greater detail, can also be used. Color sensors can be, for example, RGB sensors, CMY sensors, CMYG sensors, RGBW sensors, etc.
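    The repeating mosaic of FIG. 8 can be expressed in a few lines of code. This is a minimal sketch assuming the common RGGB variant with red at the origin; as noted above, other Bayer variants and other filter patterns are equally applicable and merely shift or change the repeating tile.

```python
def bayer_color(row, col):
    """Color of the filter covering pixel (row, col) in an RGGB Bayer
    mosaic: the 2x2 tile [[R, G], [G, B]] repeats across the array.
    (RGGB with red at the origin is an illustrative assumption; other
    Bayer variants simply shift the tile.)"""
    return [["R", "G"], ["G", "B"]][row % 2][col % 2]

# In any aligned window, green occupies half the sites, red and blue a quarter each.
sample = [bayer_color(r, c) for r in range(4) for c in range(4)]
```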

    [Motion Compensation]

    [0054] In an image sensor 100A-C having a block-based architecture in which a PGS exposure/readout method is used, there are P*Q exposure phases (recall that each block has P*Q pixels 201). The P*Q pixels 201 within each block are exposed sequentially in the P*Q exposure phases, and pixels 201 that are in corresponding positions in the blocks are exposed at the same time. Thus, all of the pixels 201 in the array that are in a 0th position within their respective blocks are exposed in the 0th phase, all of the pixels 201 in the array that are in a 1st position within their respective blocks are exposed in the 1st phase, and so on up to the (P*Q-1)th phase in which all of the pixels 201 in the array that are in a (P*Q-1)th position within their respective blocks are exposed.
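    The mapping from a pixel's position within its block to its exposure phase can be sketched as follows. Row-major numbering of the P*Q block positions is an assumption made here for illustration; the description above does not fix a particular ordering, and any fixed ordering behaves the same way.

```python
def phase_index(row, col, P, Q):
    """Exposure phase of the pixel at (row, col) of the full array, for
    P x Q blocks whose positions are numbered in row-major order (an
    illustrative assumption).  Pixels in corresponding positions of
    different blocks map to the same phase."""
    return (row % P) * Q + (col % Q)
```

    For example, with 2x2 blocks, the pixel at (0, 0) and the pixel at (2, 4) occupy the 0th position of their respective blocks and therefore share the 0th phase, while the last position in each block maps to the (P*Q-1)th phase.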

    [0055] The motion sensor 701 detects motion of the imaging system 10A-C that occurs during the P*Q exposure phases, and outputs motion information describing this motion. The motion sensor 701 can be composed of any sensor (or combination of sensors) capable of detecting the motion of the imaging system 10A-C, examples of which include: gyroscopes, accelerometers, microelectromechanical systems (MEMS) sensors, piezoelectric sensors, etc.

    [0056] The motion information output by the motion sensor 701 may include (or may be used to generate) a phase-specific motion vector for each of the P*Q phases, which describes motion of the image sensor 100A-C between a timing at which imaging of a given image frame starts and a timing that is associated with the phase to which the motion vector corresponds. Herein, the following notation will be used: s_i designates the phase-specific motion vector that corresponds to the i-th phase, where i is an index identifying the phase, and t_i designates a timing that is associated with the i-th phase. Thus, using the aforementioned notation, the vector s_i may describe motion (i.e., the change in position) of the image sensor 100A-C that occurs between the start timing t_0 and the timing t_i. The vector s_i may be determined and output by the motion sensor 701, or may be determined by the motion correction circuitry 702 based on the motion information output by the motion sensor 701.

    [0057] In certain embodiments, the motion information output by the motion sensor 701 may include a continuous stream of motion vectors that indicate the motion of the system 10A-C at sampled time intervals, and each of the phase-specific motion vectors s_i may be equal to the vector sum of the motion vectors that were output by the motion sensor 701 between t_0 and the timing t_i (the motion correction circuitry 702, for example, may perform the aggregation). In some of these embodiments, the sampling time interval of the motion sensor 701 may be set so as to correspond to the interval between consecutive timings t_i. In other embodiments, the sampling time intervals of the motion sensor 701 may be of any arbitrary duration that is less than a duration of the exposure/readout phase.
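    The aggregation just described amounts to a cumulative vector sum. A minimal sketch, assuming one two-dimensional motion sample per interval between consecutive phase timings and hypothetical sample values in pixels:

```python
import numpy as np

# Hypothetical per-interval 2-D motion samples (in pixels), one sample for
# each interval between consecutive phase timings t_i and t_(i+1).
samples = np.array([[1.0, 0.0],
                    [0.5, 0.2],
                    [0.0, -0.3]])

# s_i is the vector sum of all samples output between t_0 and t_i;
# s_0 is the zero vector, since no motion has accumulated at the start.
s = np.vstack([np.zeros((1, 2)), np.cumsum(samples, axis=0)])
```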

    [0058] In other embodiments, the motion information output by the motion sensor 701 is not a stream of motion vectors, but rather may be, for example, data that is interpreted by the motion correction circuitry 702 to obtain the motion vectors s_i. For example, the motion information output by the motion sensor 701 may include orientation information from one or more gyroscopes, acceleration information from one or more accelerometers, measurements from magnetometers, etc.

    [0059] Of course, the motion of the image sensor 100A-C occurs in real space, and thus is three-dimensional, whereas the captured image is a two-dimensional representation of the three-dimensional imaged scene. Thus, the motion information detected by the motion sensor 701 may need to be converted into a suitable two-dimensional format to be usable by the correction process; for example, the motion vectors s.sub.i may be the projection of the three-dimensional motion vectors onto the image plane of the image sensor 100A-C. This conversion from three-dimensional motion information to two-dimensional motion information may be done by the motion sensor 701, or by the correction circuitry 702.

    [0060] Moreover, the motion of the image sensor 100A-C may include rotational motion in addition to translational motion, and this motion may be corrected for as well. For example, a given rotational motion of the imaging device may be approximated by a translational motion vector that would produce approximately the same effect on the output image as the given rotational motion, and this translational motion vector may be used to correct the image data. For example, it may be determined (experimentally or mathematically) that, for a given focal point, a rotation of the camera in a given direction by x degrees changes the locations of objects in the resulting image by approximately the same amount as a translational movement of the camera of y mm without rotation, and thus a translational motion vector s.sub.i of magnitude y mm may be used to approximate the rotation of x degrees. The conversion of rotational motion information into a translational motion vector that approximates the rotational motion may be performed by the motion sensor 701, or by the correction circuitry 702. Of course, when rotational and translational motion both occur during a given phase, the motion can be combined into an overall motion vector that represents both the rotational and the translational motion. In particular, if the rotation of the image sensor 100A-C up to the i.sup.th phase is approximated by s.sub.i.sup.rotation and if the two-dimensional translational motion of the image sensor 100A-C up to the i.sup.th phase is given by s.sub.i.sup.translation, then the motion vector s.sub.i that is used in the correction processing for pixel values of the i.sup.th phase would equal the vector sum s.sub.i.sup.rotation+s.sub.i.sup.translation. Accordingly, hereinafter the motion vectors s.sub.i will be assumed to be in two-dimensional format and to represent the two-dimensional approximation of all of the various types of motion of the image sensor 100A-C combined.
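
The vector sum s.sub.i.sup.rotation+s.sub.i.sup.translation described in paragraph [0060] can be illustrated with a small sketch. The use of a pinhole-model relation (image-plane shift of approximately f.Math.tan(theta) for focal length f) to obtain the translational equivalent of a rotation is an assumption made here for illustration; the document itself only states that such an equivalence may be determined experimentally or mathematically.

```python
import math

# Illustrative sketch only: approximate a small camera rotation by an
# equivalent image-plane translation (shift ~ f * tan(theta), a pinhole-
# model assumption), then combine it with the measured translation to
# form the overall motion vector s_i.

def combined_motion_vector(theta_x_deg, theta_y_deg, trans_xy, focal_mm):
    """theta_*: rotation angles in degrees; trans_xy: (x, y) translational
    motion in mm on the image plane; focal_mm: focal length in mm.
    Returns s_i = s_i_rotation + s_i_translation."""
    rot_x = focal_mm * math.tan(math.radians(theta_x_deg))
    rot_y = focal_mm * math.tan(math.radians(theta_y_deg))
    return (rot_x + trans_xy[0], rot_y + trans_xy[1])
```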

    [0061] The timings t.sub.i associated with each phase may be defined in various ways. For example, the timing t.sub.i may correspond to the start of the i.sup.th exposure phase. As another example, the timing t.sub.i may be halfway through the i.sup.th exposure phase. As another example, the timing t.sub.i may correspond to the end of the i.sup.th exposure phase.

    [0062] The motion correction circuitry 702 receives the stream of motion information from the motion sensor 701 and the raw pixel data from the readout circuitry 104, and applies motion correction to the raw pixel data based on the motion information. The motion correction (discussed further below) produces output data in the same data format as the raw data. Thus, for example, if the data input to the motion correction circuitry 702 is monochrome raw format, the data output from the motion correction circuitry 702 is monochrome raw format; if the data input to the motion correction circuitry 702 is color Bayer raw format, the data output from the motion correction circuitry 702 is color Bayer raw format; and so on. The motion correction circuitry 702 may be constituted by custom-designed logic circuits such as an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), other programmable logic circuits, or hardwired discrete logic circuits. Alternatively, it may be implemented as embedded software running on a digital signal processor (DSP) in the image sensor, or on a general-purpose graphics processing unit (GPU) or general-purpose central processing unit (CPU). Furthermore, the motion correction circuitry can also be implemented using hardware or software modules external to the image sensor, in which case such external hardware and software modules would be linked to the image sensor via communication buses or standard interfaces such as parallel interfaces or high-speed serial interfaces.

    [0063] FIGS. 9-14 illustrate an exemplary motion correction process executed by the motion correction circuitry 702. In the exemplary process illustrated in FIGS. 9-14, it is assumed that the exemplary image sensor 100A-C has four pixels per block (P=Q=2) and a Bayer color configuration, such as is illustrated in FIG. 4, and that a PGS exposure/readout method is used, such as is illustrated in FIG. 5. Based on these assumptions, there are four different phases in which pixels 201 of the exemplary image sensor 100A-C are exposed to light, and each of these phases corresponds to all of the pixels of a given primary color of the Bayer configuration (R, G1, G2 or B). Thus, if the pixels in each block are read out in the order of R, G1, G2, and B, then the 0.sup.th phase corresponds to all R pixel values, the 1.sup.st phase corresponds to all G1 pixel values, the 2.sup.nd phase corresponds to all G2 pixel values, and the 3.sup.rd phase corresponds to all B pixel values.
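
For the P=Q=2 Bayer example above, every pixel's phase follows directly from its position inside its 2x2 block. The sketch below assumes the R, G1, G2, B readout order stated in paragraph [0063] and a conventional Bayer layout (R at even-row/even-column); the function and dictionary names are illustrative.

```python
# Map a pixel's array position to its exposure/readout phase for the
# P = Q = 2 Bayer example (readout order R, G1, G2, B as assumed above).

PHASE_NAMES = {0: "R", 1: "G1", 2: "G2", 3: "B"}

def phase_of_pixel(row, col):
    """Phase index (0..3) of the pixel at (row, col), assuming R at
    even-row/even-col, G1 at even-row/odd-col, G2 at odd-row/even-col,
    and B at odd-row/odd-col."""
    return (row % 2) * 2 + (col % 2)
```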

    [0064] If there is no motion of the image sensor 100A-C between the exposure phases, then the pixels 201 would capture the objects in the scene according to the physical location of each pixel 201 in the image sensing array 102, and the image signal processor 110 would process the image into rendered images free from motion artifacts. However, because there almost always is motion of the image sensor 100A-C, the correct location of an object in the scene may not be captured in the image. For example, FIG. 9 illustrates exemplary motion of the image sensor 100A-C in four phases corresponding to time periods t0-t3, with the arrows representing the motion of the image sensor 100A-C during each phase. FIG. 10 includes a diagram that illustrates object locations in a scene on the left side of the diagram and pixel locations in the captured image on the right side of the diagram. As shown in FIG. 10, the captured objects are all located in the captured image at the pixel locations of the pixels that captured the respective objects (see the right side of the diagram), and these locations are different from the actual locations of the objects in the scene (see the left side of the diagram) due to the movement of the image sensor 100A-C that is shown in FIG. 9. For example, as shown in FIG. 10, when the G1 pixels are exposed in the first phase, the image sensor 100A-C has moved according to the vector s.sub.1, and therefore the G1 pixels during the first phase are actually imaging objects that are located in the scene at positions that are shifted based on the vector s.sub.1 from the locations in the scene that would correspond to the G1 pixel locations in the image if the motion had not occurred. 
In particular, the G1 pixel located at pixel position 1001 in the image would image an object in the scene located at position 1001 if there were no motion; however, because of the motion, the G1 pixel located at 1001 actually images an object that is located in the scene at position 1002, not the object at the position 1001. Similar effects occur with respect to the G2 pixels (locations shifted based on the vector s.sub.2) and the B pixels (locations shifted based on the vector s.sub.3), as illustrated in FIG. 10. Thus, the locations of the objects captured by the G1, G2, and B pixels in the image do not accurately reflect the true locations of these objects in the scene. Note that in this example the locations of R pixels in the image do correctly reflect the locations of the objects in the scene, because the 0.sup.th phase (R phase) is the initial imaging phase and the location of the image sensor 100A-C during this phase defines the scene being imaged and serves as a baseline from which further motion is measured. Thus, no vector s.sub.0 needs to be generated, or if the vector s.sub.0 is generated then it may be set to zero or ignored.

    [0065] In order to correct for the motion induced errors noted above, the motion correction circuitry 702 moves each pixel value of the originally captured image, based on the motion information, to a corrected location that corresponds to the actual location in the scene of the object that was imaged by the pixel. Then, the pixel values at their corrected locations are interpolated to obtain pixel values in a corrected image frame.

    [0066] For example, FIG. 11 illustrates an exemplary process for motion correction. In step 1110, the motion of the image sensor 100A-C is detected by the motion sensor 701, and the phase specific motion vectors s.sub.i are obtained. In the example noted above in which there are four phases, the motion vectors s.sub.1 through s.sub.3 may be obtained.

    [0067] In block 1120, a grid of pixel locations is defined in a hypothetical two-dimensional space, and the pixel values of an image frame are arrayed in nodes of the grid based on the relative locations of the pixels within the array 102 that output the pixel values. In other words, a nominal location for each pixel value is determined in the two-dimensional space, each nominal location corresponding to a grid node. Each of the nodes of the grid will correspond to an imaging phase based on the architecture of the image sensor 100A-C. For example, if the image sensor 100A-C uses a Bayer color pattern as shown in FIG. 8 and a PGS architecture with a 4 pixel group as shown in FIGS. 5-6, then the four phases will correspond to the colors R, G1, G2, and B, and each node in the grid will correspond to one of the colors in the same pattern as that shown in FIG. 8.

    [0068] In block 1125, the index i is initialized, and then in block 1130, the locations of the pixel values corresponding to the i.sup.th phase are shifted to corrected locations based on the motion vector s.sub.i. This is then repeated in a loop for each phase, with i being incremented each cycle (block 1145) until all desired phases have been considered (block 1140), at which point the process proceeds to block 1160. For example, when a Bayer pattern with four phases is used, the locations of the G1 pixels are all shifted to corrected locations based on the motion vector s.sub.1, the locations of the G2 pixels are all shifted to corrected locations based on the motion vector s.sub.2, and the locations of the B pixels are all shifted to corrected locations based on the motion vector s.sub.3; note that the R pixels are either not considered in step 1130 (i is initialized to 1, and the 0.sup.th phase is ignored), or if they are considered in step 1130, then the vector s.sub.0 may be set to zero, and hence their locations are not shifted. A visual representation of this process is found in FIG. 12. The left side of the diagram in FIG. 12 illustrates locations of the pixel values in the captured image. The right side of the diagram in FIG. 12 illustrates the locations of the pixel values in the hypothetical two-dimensional space after being shifted according to the motion vectors. As can be seen in FIG. 12, the pixel values in the captured image are arrayed in a regular grid (left side of FIG. 12), whereas the corrected locations of the pixel values in the two-dimensional space after being shifted (right side of FIG. 12) are irregularly spaced. However, as can be seen in FIG. 13, when only one color plane is considered at a time (e.g., the G1 color plane in FIG. 13), that color plane's shifted pixel values do form a regular array, which is shifted relative to the array defined by the G1 grid nodes.
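
The shifting loop of blocks 1125 through 1145 can be sketched as follows. This is an illustrative sketch only: the function name and data structures are assumptions, and the sign convention for applying the motion vectors (noted in a comment) depends on how the vectors s.sub.i are defined, which is not fixed by the text.

```python
# Sketch of blocks 1125-1145: each pixel value starts at its nominal
# grid node and its location is shifted by the motion vector of its
# phase. Phases absent from the dict (e.g. the 0th phase) get a zero
# vector, so those pixel values stay at their nominal locations.

def shift_pixel_locations(height, width, motion_vectors, phase_of_pixel):
    """motion_vectors: dict {phase: (sx, sy)} in grid units.
    phase_of_pixel(row, col): phase index of the pixel at (row, col).
    Returns {(row, col): corrected (x, y) location}."""
    corrected = {}
    for row in range(height):
        for col in range(width):
            sx, sy = motion_vectors.get(phase_of_pixel(row, col), (0.0, 0.0))
            # Corrected location = nominal grid location + phase shift.
            # Whether the shift is +s_i or -s_i depends on the chosen
            # sign convention for the motion vectors.
            corrected[(row, col)] = (col + sx, row + sy)
    return corrected
```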

    [0069] Of course, it will be understood that the locations of the pixels being discussed above are locations in a hypothetical two-dimensional space, which corresponds to an image frame. In particular, the location of each pixel value output by the image sensor 100A-C in the grid is determined based on the physical location in the array 102 of the pixel that generated the pixel value. The location of the pixel value in the image frame may be determined, for example, based on the order in which the pixel values are output from the image sensor 100A-C. The absolute distance between each grid node in the hypothetical two-dimensional space may be arbitrarily set. Based on how the distance between grid nodes is defined, a relationship between the distance that the image sensor 100A-C moves during imaging (s.sub.i) and a distance that the pixel values should be shifted during correction may be defined. For example, if the image sensor 100A-C moves in real space by the vector s.sub.1 during the first phase, then the first phase pixel values may be shifted in the hypothetical two-dimensional space by the vector α.Math.s.sub.1, where α is a factor relating distances in real space to distances in the hypothetical two-dimensional space. α may be arbitrarily set to any value that optimizes the motion correction processing. For example, α may be determined experimentally for a given image sensor 100A-C by using different values and selecting the value that returns the best-looking image. In addition to this, or in the alternative, known physical characteristics of the image sensor 100A-C and the lens, as well as known relationships between these characteristics and resulting images, may be used to determine α mathematically.

    [0070] After shifting the locations of the pixel values to the corrected locations based on the motion information (blocks 1125 through 1145), the process proceeds to blocks 1150 through 1175 to generate corrected pixel values for each node of the grid by interpolating the pixel values as shifted.

    [0071] In particular, in block 1150, the index i is reinitialized, and then in block 1160 a corrected pixel value is generated for each node of the grid that corresponds to the i.sup.th phase by interpolation from at least one of the pixel values in its shifted location. This process is then repeated for each phase (block 1175) until all the phases have been considered (1170), whereupon the process ends. In the exemplary process of block 1160, the i.sup.th phase corrected pixel values are generated using only i.sup.th phase pixel values, which is particularly well suited to color image sensors 100A-C in which each phase corresponds to a different color of pixel. For example, as shown in FIG. 13, a corrected pixel value may be generated for each of the nodes of the grid that corresponds to the G1 phase (these nodes are labeled G1 in FIG. 13) by interpolating from the G1 phase pixel values in their shifted locations (these shifted locations are labeled G1 in FIG. 13). Similarly, a corrected pixel value is generated for each of the nodes of the grid corresponding to the G2 phase based on the G2 phase pixel values in their shifted locations, and a corrected pixel value is generated for each of the nodes of the grid corresponding to the B phase based on the B phase pixel values in their shifted locations. However, it will be understood that a corrected pixel value for a node corresponding to a given phase may also be generated by interpolating from pixel values that correspond to a different phase, which may be especially well suited to the case of a monochrome image sensor 100A-C.

    [0072] The motion corrected pixel values for each node of the grid can be determined by any interpolation strategy. For example, the corrected pixel value may simply be set to equal the nearest neighboring shifted pixel value; this may be the nearest shifted pixel value corresponding to the same phase as the corrected pixel value, or it may be the nearest shifted pixel value of any phase, depending on the desired effect. As another example, the corrected pixel value may be set to equal a weighted average of the several nearest neighboring shifted pixel values; again, these may be the nearest neighboring shifted pixel values that correspond to the same phase as the corrected pixel value, or these may be the nearest neighboring shifted pixel values of any phase, depending on the desired effect. As another example, a bi-linear interpolation method may be used. For example, in FIG. 13 the G1 corrected pixel value for the grid node 1202 may be obtained using bi-linear interpolation from the G1 pixel values located at the shifted positions 1212, 1214, 1216, and 1218, which are the four nearest G1 pixel values in the neighborhood of the grid node 1202. The bi-linear interpolation is performed using the formula:


    G1=αβG1.sub.a+(1-α)βG1.sub.b+α(1-β)G1.sub.c+(1-α)(1-β)G1.sub.d  (eq. 1)

    where G1 is the corrected pixel value being generated, G1.sub.a is the pixel value of the upper left nearest neighbor, G1.sub.b is the pixel value of the upper right nearest neighbor, G1.sub.c is the pixel value of the lower left nearest neighbor, G1.sub.d is the pixel value of the lower right nearest neighbor, α=q/(p+q), β=s/(r+s), and r, s, p, and q are the distances shown in FIG. 14. Corrected pixel values for grid nodes corresponding to G2 or B can also be obtained using the same formula, but using neighboring G2 or B pixel values, respectively, instead of neighboring G1 pixel values, and using values α and β that correspond to the respective shifted positions of G2 and B.
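
The bi-linear interpolation step can be sketched as follows, with horizontal weight alpha = q/(p+q) and vertical weight beta = s/(r+s) computed from the distances in FIG. 14. Note that the pairing of weights with the four neighbors shown here is one standard bilinear convention (weights sum to one); the exact pairing in the document depends on how p, q, r, and s are drawn in FIG. 14, which is not reproduced here.

```python
# Sketch of bi-linear interpolation of a corrected G1 value from the
# four nearest shifted G1 neighbors. Weight pairing is one standard
# convention (an assumption; it depends on the geometry in FIG. 14).

def bilinear(g1_a, g1_b, g1_c, g1_d, p, q, r, s):
    """g1_a..g1_d: upper-left, upper-right, lower-left, lower-right
    neighboring pixel values; p, q and r, s: horizontal and vertical
    distances between the grid node and the shifted neighbors."""
    alpha = q / (p + q)
    beta = s / (r + s)
    return (alpha * beta * g1_a
            + (1 - alpha) * beta * g1_b
            + alpha * (1 - beta) * g1_c
            + (1 - alpha) * (1 - beta) * g1_d)
```

When the node is equidistant from all four neighbors, the result reduces to the plain average of the four values, as expected for weights that sum to one.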

    [0073] Thus, by the process described above, a corrected pixel value is generated for each node of the grid, based on the pixel values whose locations have been shifted to corrected locations based on the motion vectors. These corrected pixel values may then be output as a motion corrected image frame in the same raw data format in which the pixel values were originally output by the image sensor 100A-C. FIGS. 15(a) and 15(b) show examples of an image taken by an image sensor 100A-C with a block ADC architecture in which P=Q=2. In FIG. 15(a), the motion correction processing was not performed, and color fringe artifacts can be observed in the image. FIG. 15(b) shows the same image with motion correction performed in the raw domain by the process described above, and significant improvement in the image quality is clearly achieved.

    [0074] Although the examples discussed above focus on the case in which there are four phases, this is not the only possible configuration. For example, FIG. 16 illustrates a PGS architecture in which P=Q=4, and thus there are 16 phases. Motion vectors s.sub.i for these 16 phases are illustrated in FIG. 17. In such an example, the pixels of a given color are not all sampled during the same phase; for example, some of the R pixels are sampled in the 0.sup.th phase, some are sampled in the 2.sup.nd phase, some are sampled in the 8.sup.th phase, and some are sampled in the 10.sup.th phase. Because the motion vectors for these different phases are not necessarily the same, the pixel values of a given color plane are not in a regular grid after being shifted, in contrast to the case of a P=Q=2 image sensor 100A-C. For example, in FIGS. 18 and 19 it can be seen that the location of the second R pixel value is shifted relative to the position of the first R pixel value due to differences between the motion vector s.sub.0 (which is zero) and the motion vector s.sub.2 (which is non-zero). In addition, in a monochrome image sensor 100A-C there is only a single color plane, and in this case the shifted pixel values are also in an irregular array. In such a case, the bi-linear interpolation method described above cannot be applied to determine a corrected pixel value for a grid node corresponding to a given color plane, because the pixel values of that color plane after shifting are no longer in a regular grid.

    [0075] However, other methods of interpolation can be used besides the bi-linear interpolation method in the case in which the shifted pixel values do not form a regular grid. In particular, known methods for re-sampling an array of data from an irregularly sampled grid to a regularly sampled grid may be used to generate the corrected pixel values; in the case of a color image sensor 100A-C, this may be done on a color-plane by color-plane basis, while in a monochrome image sensor 100A-C this may be done considering all of the pixel values together. For example, FIG. 19 shows the shifted locations of the R pixel values based on the motion vectors s.sub.0, s.sub.2, s.sub.8, and s.sub.10 from FIG. 17. The locations 1902, 1904, 1906, 1908 correspond to pixels exposed at phase t0, the locations 1912, 1914, 1916, 1918 correspond to pixels exposed at phase t2, the locations 1922, 1924 correspond to pixels exposed at phase t10, and the locations 1932, 1934 correspond to pixels exposed at phase t8. The corrected pixel values at the grid nodes corresponding to the R color plane, such as the nodes 1952, 1954, 1956, 1958, . . . etc., can be obtained by interpolation using the R pixel values at the shifted locations such as 1902, 1904, 1912, 1914, 1906, 1922, . . . etc. Interpolation methods such as nearest neighbor interpolation, sinc interpolation, distance weighted interpolation, frequency domain interpolation, etc., can be used.
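
One of the methods named above, distance weighted interpolation, can be sketched for the irregular-grid case as follows. This is an illustrative sketch (inverse-distance weighting over the k nearest shifted samples); the function name, the choice of k, and the epsilon guard are assumptions.

```python
# Sketch of distance-weighted interpolation for irregularly located
# shifted samples: the corrected value at a grid node is a weighted
# average of the k nearest samples, with weights inversely proportional
# to distance from the node.

def distance_weighted(node_xy, samples, k=4, eps=1e-9):
    """samples: list of ((x, y), value) at irregular shifted locations.
    Returns the interpolated value at grid node node_xy."""
    nx, ny = node_xy
    by_dist = sorted(samples,
                     key=lambda s: (s[0][0] - nx) ** 2 + (s[0][1] - ny) ** 2)
    num = den = 0.0
    for (x, y), value in by_dist[:k]:
        d = ((x - nx) ** 2 + (y - ny) ** 2) ** 0.5
        w = 1.0 / (d + eps)  # eps avoids division by zero at exact hits
        num += w * value
        den += w
    return num / den
```

Unlike the bi-linear formula, this scheme places no regularity requirement on the sample locations, so it applies equally to the P=Q=4 color case and to monochrome sensors.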

    [0076] The motion-corrected pixel data output from the motion correction circuitry 702 may be fed into the image signal processor 110, where additional image processing procedures such as demosaicing, noise removal, lens artifact removal, sharpening, color correction, etc., are performed. Because the motion-corrected pixel data is in the same raw data format as the input pixel data, the subsequent stage image processing can be performed in the usual manner without any adjustment being required.

    [0077] Although the present invention has been described in considerable detail with reference to certain embodiments thereof, the invention may be variously embodied without departing from the spirit or scope of the invention. Therefore, the following claims should not be limited to the description of the embodiments contained herein in any way.