Imaging apparatus, imaging method and program, and reproduction apparatus
10003745 · 2018-06-19
CPC classification: H04N25/533 (Electricity), H04N25/77 (Electricity), G03B7/00 (Physics), H04N23/6842 (Electricity)
Abstract
The present disclosure relates to an imaging apparatus, an imaging method and a program, and a reproduction apparatus according to which an image sensor is divided into a plurality of areas and exposure control may be performed for each area according to the amount of camera shake. An imaging apparatus which is an aspect of the present disclosure includes an image sensor unit for generating pixel data of a pixel constituting a frame by photoelectric conversion, a calculation unit for calculating an amount of camera shake in each of areas obtained by dividing the frame, and a control unit for controlling exposure time of the image sensor for each of the areas according to the amount of camera shake calculated for each of the areas. The present disclosure is applicable to an electronic device such as a camera including an area ADC type image sensor, for example.
Claims
1. An imaging apparatus comprising: an image sensor configured to generate pixel data of a pixel that constitutes a frame by photoelectric conversion; circuitry configured to: calculate an amount of camera shake in each of a plurality of areas obtained by division of the frame; and control exposure time of the image sensor for each of the plurality of areas according to the amount of camera shake calculated for each of the plurality of areas.
2. The imaging apparatus according to claim 1, wherein the circuitry is further configured to control a pixel summing method of the image sensor for each of the plurality of areas according to the amount of camera shake calculated for each of the plurality of areas.
3. The imaging apparatus according to claim 2, wherein the circuitry is further configured to execute exposure and pixel readout of the image sensor at least one time during one frame period for each of the plurality of areas according to the amount of camera shake calculated for each of the plurality of areas.
4. The imaging apparatus according to claim 2, wherein the circuitry is further configured to: detect at least one of angular velocity or acceleration; and calculate the amount of camera shake in each of the plurality of areas based on the at least one of the angular velocity or the acceleration.
5. The imaging apparatus according to claim 2, wherein the circuitry is further configured to calculate the amount of camera shake in each of the plurality of areas based on a variance of pixel data of a plurality of pixels sampled for each of the plurality of areas.
6. The imaging apparatus according to claim 2, wherein the image sensor includes, for each of a plurality of regions where a plurality of pixels are arranged, one analog digital converter (ADC) that is shared by the plurality of pixels, and wherein each of the plurality of areas corresponds to one of the plurality of regions where the plurality of pixels that share the one ADC is arranged.
7. The imaging apparatus according to claim 2, wherein the circuitry is further configured to packetize, for each of the plurality of areas, the pixel data that is read out from the image sensor to generate image data.
8. The imaging apparatus according to claim 7, wherein the circuitry is further configured to store, in a same packet, the pixel data that is read out from a same area, and describe an area number indicating a position of an area of the plurality of areas in the frame in the same packet.
9. The imaging apparatus according to claim 8, wherein the circuitry is further configured to describe, in the same packet, at least one of a frame number indicating a place in a chronological order of the frame to which the area belongs, a subframe number indicating a time of execution of exposure and readout for the area in one frame period, and a summation pattern indicating the pixel summing method for the area.
10. The imaging apparatus according to claim 7, wherein the circuitry is further configured to resynthesize the frame based on the image data in which the pixel data is packetized on a per-area basis.
11. An imaging method, comprising: in an imaging apparatus including an image sensor configured to generate pixel data of a pixel constituting a frame by photoelectric conversion: calculating an amount of camera shake in each of a plurality of areas obtained by dividing the frame; and controlling exposure time of the image sensor for each of the plurality of areas according to the amount of camera shake calculated for each of the plurality of areas.
12. A non-transitory computer-readable medium having stored thereon, computer-executable instructions, which when executed by a processor of an imaging apparatus, cause the processor to execute operations, the operations comprising: in the imaging apparatus including an image sensor configured to generate pixel data of a pixel constituting a frame by photoelectric conversion: calculating an amount of camera shake in each of a plurality of areas obtained by dividing the frame; and controlling exposure time of the image sensor for each of the plurality of areas according to the amount of camera shake calculated for each of the plurality of areas.
13. A reproduction apparatus, comprising: first circuitry configured to: acquire image data in units of a packet, wherein the image data is output from an imaging apparatus, the imaging apparatus including an image sensor that generates pixel data of a pixel of a frame by photoelectric conversion, and second circuitry that calculates an amount of camera shake in each of a plurality of areas obtained by division of the frame, controls exposure time of the image sensor for each of the plurality of areas according to the amount of camera shake calculated for each of the plurality of areas, and packetizes, for each of the plurality of areas, the pixel data that is read out from the image sensor to generate the image data; analyze the packet of the acquired image data, and restore the frame based on each area of the plurality of areas corresponding to the packet; and reproduce the image data.
14. The reproduction apparatus according to claim 13, wherein the packet of the image data stores the pixel data that is read out from a same area of the plurality of areas, and wherein an area number indicating a position of an area in the frame is described in the packet.
15. The reproduction apparatus according to claim 14, wherein at least one of a frame number indicating a place in a chronological order of the frame to which the area belongs, a subframe number indicating a time of execution of exposure and readout for the area in one frame period, and a summation pattern indicating a pixel summing method for the area is further described in the packet of the image data.
Description
BRIEF DESCRIPTION OF DRAWINGS
MODE FOR CARRYING OUT THE INVENTION
(11) Hereinafter, a preferred embodiment for carrying out the present disclosure (hereinafter referred to as an embodiment) will be described in detail with reference to the drawings.
(12) <Example Configuration of Imaging Apparatus as Present Embodiment>
(14) The imaging apparatus 10 includes an imaging unit 11, an angular velocity sensor unit 12, a motion information acquisition unit 13, a motion amount calculation unit 14, an exposure/readout control unit 15, a data output unit 16, and a reproduction unit 17.
(15) The imaging unit 11 performs capturing at a predetermined frame rate according to control information from the exposure/readout control unit 15, and supplies pixel data that is obtained as a result to the data output unit 16. Additionally, at the time of capturing, the exposure time and the pixel summing method may be changed for each of a plurality of areas obtained by dividing a frame.
(16) Furthermore, the imaging unit 11 supplies the generated pixel data in response to a request from the motion information acquisition unit 13. However, it is not necessary to supply pixel data of the entire frame to the motion information acquisition unit 13; it is sufficient if only N pixels in each of a plurality of areas obtained by dividing the frame are exposed several times during one frame period (for example, 1/30 s when the frame rate is 30 fps), and the pixel data is read out and supplied to the motion information acquisition unit 13 at each exposure.
(17) The angular velocity sensor unit 12 detects the motion of the imaging apparatus 10 during capturing by detecting, at predetermined sampling periods, angular velocity around each of three axes (x-axis, y-axis, z-axis) that pass through center coordinates of the frame, that is, the center of an image sensor 32, and that are orthogonal to one another. Additionally, an accelerometer for detecting acceleration in directions of three axes may be mounted in addition to the angular velocity sensor unit 12 or instead of the angular velocity sensor unit 12.
(18) The motion information acquisition unit 13 performs at least one of a process for acquiring pixel data from the imaging unit 11 and a process for acquiring angular velocity detected by the angular velocity sensor unit 12, and supplies at least one of the acquired pixel data or angular velocity to the motion amount calculation unit 14 as motion information.
(19) The motion amount calculation unit 14 calculates the amount of camera shake in each area on the frame on the basis of the motion information from the motion information acquisition unit 13, and notifies the exposure/readout control unit 15 of the amount. Additionally, details of calculation of the amount of camera shake based on the motion information will be given later.
(20) The exposure/readout control unit 15 determines the exposure time and the pixel summing method for each area on the basis of the amount of camera shake in each area of the frame supplied by the motion amount calculation unit 14, and notifies the imaging unit 11 of these as control information. Here, as the pixel summing method, summing of a plurality of pixels in the horizontal direction, summing of a plurality of pixels in the vertical direction, or the like may be selected. Additionally, control may be performed such that pixels are simply thinned out and read out, instead of pixel summing.
(21) Moreover, the exposure/readout control unit 15 also notifies the data output unit 16 of the exposure time and the pixel summing method for each area which have been determined. Furthermore, the exposure/readout control unit 15 notifies the motion amount calculation unit 14 of the elapsed time of exposure based on the determined exposure time for each area.
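As an illustration of this control, the mapping from a per-area amount of camera shake to an exposure time and a summing factor might look as follows. The thresholds and summing factors are hypothetical values chosen for the example, since the embodiment does not fix concrete numbers:

```python
def decide_area_control(shake_amount, base_exposure_s):
    """Return (exposure_time_s, summing_factor) for one area.

    Larger shake -> shorter exposure and more aggressive pixel summing,
    trading resolution for sensitivity, as described in the text.
    """
    if shake_amount < 1.0:       # negligible shake: full exposure, no summing
        return base_exposure_s, 1
    elif shake_amount < 3.0:     # moderate shake: halve exposure, sum 2 pixels
        return base_exposure_s / 2, 2
    else:                        # strong shake: quarter exposure, sum 4 pixels
        return base_exposure_s / 4, 4
```

A real controller would also respect the "simple thinning instead of summing" option mentioned above; this sketch only covers the summing path.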
(22) The data output unit 16 generates image data by packetizing the pixel data input from the imaging unit 11 for each area of the frame. Details of the data structure (hereinafter referred to as a pixel data format) of a packet storing the pixel data will be given later.
(23) The data output unit 16 outputs the generated image data to the reproduction unit 17 on a per-packet basis. Additionally, the order of output of packets of the image data from the data output unit 16 to the reproduction unit 17 is arbitrary, and does not necessarily have to follow a predetermined order (for example, in the order of raster scanning in units of an area).
(24) The reproduction unit 17 is configured from a data acquisition unit 18, an image re-synthesis unit 19, and an image display unit 20.
(25) The data acquisition unit 18 acquires, and temporarily holds, a packet of image data that is output from the data output unit 16, analyzes the area number, the frame number and the like stored in the packet, and notifies the image re-synthesis unit 19 of the analysis result. Furthermore, the data acquisition unit 18 supplies the packet of image data that is held to the image re-synthesis unit 19 in response to a request from the image re-synthesis unit 19.
(26) The image re-synthesis unit 19 re-synthesizes pixel data of the area stored in the packet supplied by the data acquisition unit 18 into image data on a per-frame basis, and outputs the data to the image display unit 20. The image display unit 20 displays the re-synthesized image data as an image.
(27) Additionally, the reproduction unit 17 may also exist as a reproduction apparatus separate from the imaging apparatus 10.
(28) <Detailed Example Configuration of Imaging Unit 11>
(29) Next, a detailed example configuration of the imaging unit 11 will be described with reference to the drawings.
(32) However, the present disclosure is also applicable in a case where the area 44 and a region on the image sensor 32 where one ADC 42 is shared do not always coincide with each other.
(33) <Calculation of Amount of Camera Shake Based on Motion Information>
(34) Next, calculation of the amount of camera shake based on the motion information by the motion amount calculation unit 14 will be described.
(35) First, calculation of the amount of camera shake based on the angular velocity will be described.
(36) Additionally, in
(37) The motion amount calculation unit 14 calculates rotation angles (θ.sub.x, θ.sub.y, θ.sub.z) by integrating the angular velocities around the three axes detected by the angular velocity sensor unit 12, and computes, according to the following formulae (1), an amount of camera shake (d.sub.x, d.sub.y) at the center coordinates (x, y) of each area at the timing of acquisition of the angular velocities by using the calculated rotation angles (θ.sub.x, θ.sub.y, θ.sub.z), the focal length M and the length L.
(38)
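Since the concrete form of formulae (1) is not reproduced here, the following sketch substitutes a generic small-angle projection model: the detected angular velocities are integrated into rotation angles, and a per-area image shift is derived from those angles and a focal length expressed in pixels. The model and all values are illustrative assumptions, not the patent's exact formula:

```python
def integrate_angles(omega_samples, dt):
    """Integrate gyro samples (wx, wy, wz) in rad/s over step dt into angles."""
    tx = ty = tz = 0.0
    for wx, wy, wz in omega_samples:
        tx += wx * dt
        ty += wy * dt
        tz += wz * dt
    return tx, ty, tz


def area_shake(theta, center_xy, focal_px):
    """Approximate image shift (dx, dy) at an area center (x, y).

    Small-angle model: yaw (theta_y) shifts the image horizontally, pitch
    (theta_x) shifts it vertically, and roll (theta_z) rotates it about
    the frame center, so the roll contribution grows with (x, y).
    """
    tx, ty, tz = theta
    x, y = center_xy
    dx = focal_px * ty - y * tz
    dy = focal_px * tx + x * tz
    return dx, dy
```

Areas far from the center thus pick up extra shake from roll, which is one reason a per-area shake amount is worth computing at all.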
(39) Next, calculation of the amount of camera shake based on variances of N pieces of pixel data acquired from each area will be described.
(40) Normally, the variance of pixel data in each area is determined by the noise of the image sensor 32 and the pattern indicating the feature of a subject; however, when camera shake occurs, the pattern of the subject is blurred away, and thus the variance tends to be smaller than when there is no camera shake. Accordingly, in the present embodiment, the variance of pixel data in each area is calculated, and the presence/absence of camera shake and the amount of camera shake are determined based on the result of comparison between the variance and a predetermined threshold. Additionally, the threshold may be determined on the basis of a noise model for the image sensor 32 and the ISO sensitivity settings.
(41) Specifically, first, N pixels are sampled from each area, exposure is performed several times during one frame period T, and a plurality of pieces of pixel data for the same coordinates are acquired. Here, the pixel data of the i-th pixel among the N pixels at a timing t.sub.j is expressed by p.sub.i(t.sub.j). Next, the pixel data p.sub.i(t.sub.j) is accumulated according to the following formula (2) from timings t=t.sub.1 to t=t.sub.j, and an accumulated pixel value p.sub.i,j is calculated.
(42) p.sub.i,j=Σ.sub.k=1.sup.j p.sub.i(t.sub.k)  (2)
(43) Next, an average value p.sub.j of the accumulated pixel value p.sub.i,j of the N pixels in the area is calculated according to the following formula (3).
(44) p.sub.j=(1/N)Σ.sub.i=1.sup.N p.sub.i,j  (3)
(45) Next, according to the following formula (4), a variance v.sub.j of pixel data at the timing t=t.sub.j of the N pixels in the area is calculated.
(46) v.sub.j=(1/N)Σ.sub.i=1.sup.N (p.sub.i,j−p.sub.j).sup.2  (4)
(47) Lastly, the calculated variance v.sub.j and the predetermined threshold are compared, and if the variance v.sub.j is smaller than the predetermined threshold, occurrence of camera shake is determined, and the variance v.sub.j is further compared with a plurality of lower thresholds to determine the amount of camera shake.
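Formulas (2) to (4) and the threshold comparison can be sketched as follows; the threshold value passed in stands in for one derived from the sensor noise model and ISO sensitivity settings:

```python
def shake_from_variance(samples, threshold):
    """Judge camera shake from the variance of accumulated pixel values.

    samples[i][j] holds pixel data p_i(t_j) of the i-th sampled pixel at the
    j-th exposure. Returns True when shake is judged to have occurred, i.e.
    the variance at the last timing falls below the threshold.
    """
    n = len(samples)
    # formula (2): accumulated pixel value p_{i,j} up to the last timing
    acc = [sum(pixel_series) for pixel_series in samples]
    # formula (3): average of the accumulated values over the N pixels
    mean = sum(acc) / n
    # formula (4): variance of the accumulated values at the last timing
    var = sum((a - mean) ** 2 for a in acc) / n
    return var < threshold
```

Grading the amount of shake with the further, lower thresholds mentioned above would simply repeat the final comparison against each of them.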
(48) Additionally, in the case where both the process for acquiring pixel data from the imaging unit 11 and the process for acquiring the angular velocity detected by the angular velocity sensor unit 12 are performed by the motion information acquisition unit 13, the motion amount calculation unit 14 may calculate the amount of camera shake by preferentially using one of the two. Alternatively, the amount of camera shake may be calculated from each, each calculation result may be multiplied by a predetermined weight coefficient, and the weighted results may be added together.
(49) <Control of Exposure Time and Pixel Summing Method Based on Amount of Camera Shake in Each Area>
(50) Next, control of the exposure time and the pixel summing method based on the amount of camera shake in each area by the exposure/readout control unit 15 will be described.
(53) <Pixel Data Format>
(54) Next, the data structure (pixel data format) of a packet of image data that is output from the data output unit 16 will be described with reference to the drawings.
(56) Additionally, the shape of an area does not necessarily have to be fixed, and may be dynamically changed.
(58) The packet 50 includes an area number region 51, a frame number region 52, a subframe number region 53, and a summation pattern region 54. Bit width is set for each of the area number region 51 to the summation pattern region 54, and thus, the data length from the area number region 51 to the summation pattern region 54 is a fixed length.
(59) The packet 50 includes a plurality of pixel data regions 55-1 to 55-n. Additionally, the number of pixels to be read out from an area may be changed depending on the pixel summing method (the number of summed pixels) that is set, and thus, the data length of the pixel data regions 55-1 to 55-n is a variable length.
(60) The area number region 51 stores the area number indicating the position of an area in the frame. The frame number region 52 stores the frame number indicating the place in the chronological order of a frame including the area corresponding to the packet 50. The subframe number region 53 stores the subframe number indicating which of the several exposures performed for the area in one frame period the pixel data corresponds to. The summation pattern region 54 stores information indicating the pattern of the pixel summing method that is set for the area corresponding to the packet 50. Additionally, the pattern of the pixel summing method also includes information for a case where the area shape was dynamically changed.
(61) The pixel data region 55 stores pixel data of a pixel that is read out from the area corresponding to the packet 50. For example, in the case where two pixels are read out from the corresponding area, pixel data regions 55-1 and 55-2 are provided, and pixel data after pixel summing is stored in each pixel data region. Also, for example, if no pixel is read out from the corresponding area, the pixel data region 55 is not provided. In this case, information that no pixel is read out from the corresponding area is stored in the summation pattern region 54 as the information indicating the pattern of the pixel summing method.
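The packet layout above can be sketched in code. The concrete field widths are not specified in the text, so the 16-bit header fields and 16-bit pixel values below are illustrative assumptions:

```python
import struct

# Fixed-length header: area number, frame number, subframe number,
# summation pattern (each assumed 16-bit here); the body is a
# variable-length run of 16-bit pixel values.
HEADER = struct.Struct(">HHHH")


def pack_area(area, frame, subframe, pattern, pixels):
    """Serialize one area's readout into a packet."""
    body = struct.pack(">%dH" % len(pixels), *pixels)
    return HEADER.pack(area, frame, subframe, pattern) + body


def unpack_area(packet):
    """Parse a packet back into header fields and pixel values."""
    area, frame, subframe, pattern = HEADER.unpack_from(packet, 0)
    n = (len(packet) - HEADER.size) // 2  # 2 bytes per pixel value
    pixels = list(struct.unpack_from(">%dH" % n, packet, HEADER.size))
    return area, frame, subframe, pattern, pixels
```

Because each packet carries its own area number, a receiver can place its pixels into the frame regardless of arrival order; a packet with an empty body covers the case where no pixel is read out from the area.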
(62) The reproduction unit 17, which receives the packet 50, may specify the coordinates of the pixel in the pixel data stored in the pixel data region 55 on the basis of the area number, the frame number, the subframe number, and the pattern of the pixel summing method. Accordingly, even if the output order of the packets 50 is not in the order of raster scanning or the chronological order and is random, frames may be accurately restored at the reproduction unit 17.
(63) As shown in
(64) Additionally, in
(65) <Exposure Process on Per-Area Basis in One Frame Period>
(66) Next,
(67) The exposure process is performed for each of a plurality of areas forming a frame, in each one frame period.
(68) In step S1, the imaging unit 11 starts exposure on the current area according to control information from the exposure/readout control unit 15. Additionally, the exposure time of the exposure started here is uniformly set for all the areas on the basis of the brightness or the like of the entire frame.
(69) In step S2, the motion amount calculation unit 14 calculates the amount of camera shake at predetermined sampling periods for the current area on the basis of motion information from the motion information acquisition unit 13, and notifies the exposure/readout control unit 15 of the amounts. In step S3, the exposure/readout control unit 15 determines whether the total of the amounts of camera shake from the start of exposure is at or above a predetermined threshold. If it is determined here that the total of the amounts of camera shake from the start of exposure is at or above the predetermined threshold, the exposure is instantly stopped, and the process proceeds to step S4.
(70) In step S4, the imaging unit 11 ends exposure on the current area. In step S5, the imaging unit 11 reads out pixel data from the current area by the pixel summing method corresponding to the control information from the exposure/readout control unit 15, and outputs the data to the data output unit 16. Additionally, in the case where the pixel summing method for the area is not yet set, all the pixels in the area are read out.
(71) In step S6, the exposure/readout control unit 15 determines whether or not one frame period has passed from the start of the exposure process. In the case where one frame period has not passed, exposure is to be performed several times for the same area in one frame period, and the process returns to step S1 to be repeated. On the other hand, in the case where one frame period has passed, the exposure process is ended.
(72) Additionally, in the case where the total of the amounts of camera shake from the start of exposure is determined in step S3 to be smaller than the predetermined threshold, the process proceeds to step S7. In step S7, the exposure/readout control unit 15 determines whether or not the exposure time that is set has been reached from the start of exposure in step S1. If it is determined here that the exposure time is reached, the exposure is ended, and the process proceeds to step S4. On the other hand, if it is determined that the exposure time is not yet reached, the process proceeds to step S8.
(73) In step S8, the exposure/readout control unit 15 determines whether or not one frame period has passed from the start of the exposure process. In the case where one frame period has not passed, the process is returned to step S2 to be repeated while the exposure is being continued. On the other hand, in the case where one frame period has passed, the process proceeds to step S9.
(74) In step S9, the imaging unit 11 ends exposure on the current area according to control information from the exposure/readout control unit 15. In step S10, the exposure/readout control unit 15 determines the pixel summing method for the current area according to the total amount of camera shake in one frame period, and notifies the imaging unit 11 of the method as control information.
(75) In step S11, the imaging unit 11 reads out pixel data of the current area according to the control information (pixel summing method) from the exposure/readout control unit 15, and outputs the data to the data output unit 16. The exposure process is then ended.
(76) As described above, according to the exposure process that is performed on a per-area basis in one frame period, the exposure time may be secured for a current area up to the limit at which the amount of camera shake from the start of exposure reaches the predetermined threshold, and the SNR may be improved.
(77) Furthermore, exposure is immediately stopped if the total of the amounts of camera shake from the start of exposure is great (at or above the predetermined threshold), and the influence of camera shake on the area may be suppressed. Moreover, the pixel summing method of a current area is determined according to the total amount of camera shake in one frame period, and thus, if the camera shake is small, the SNR may be improved and the power may be saved while maintaining the resolution. On the other hand, if the camera shake is great, the resolution is reduced and the sensitivity is increased by increasing the number of pixels to be summed up. In the case where the sensitivity is increased, the exposure time may be reduced, and the influence of camera shake may be suppressed.
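The control flow of steps S1 to S11 can be sketched as a small simulation. Time is counted in abstract units, and the sampling period, thresholds and shake values in the test are illustrative:

```python
def expose_area(shake_trace, dt, set_exposure, frame_period, threshold):
    """Simulate the per-area exposure loop of the embodiment.

    Returns the realized exposure durations within one frame period.
    Exposure stops when the accumulated shake reaches the threshold (S3/S4),
    when the set exposure time is reached (S7), or when the frame period
    expires (S6/S8).
    """
    exposures, elapsed = [], 0
    while elapsed < frame_period:
        shake_total, t = 0, 0
        for shake in shake_trace:               # S2: sample shake while exposing
            if elapsed + t >= frame_period:     # S8: frame period expired
                break
            shake_total += shake
            t += dt
            if shake_total >= threshold:        # S3: too much shake, stop now
                break
            if t >= set_exposure:               # S7: set exposure time reached
                break
        if t == 0:
            break
        exposures.append(t)                     # S5/S11: read out pixel data
        elapsed += t
    return exposures
```

With small shake the area is exposed for the full set time each round; with large shake each exposure is cut short, yielding more, shorter exposures within the same frame period.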
(78) <Re-Synthesis of Frame by Reproduction Unit 17>
(79) Next, re-synthesis of a frame at the reproduction unit 17 receiving packets of image data will be described.
(80) As described above, in the present embodiment, the number of pixels that are read out is small in a blurry area where the amount of camera shake is great, and thus, even if the pixels in such an area are restored by simple linear interpolation, the frame as a whole is not greatly deteriorated.
(81) Accordingly, the reproduction unit 17 restores pixels in an area by a linear interpolation formula as indicated by the following formula (5).
(82) p(x)=[Σ.sub.u∈S w(d(x,u))·p(u)]/[Σ.sub.u∈S w(d(x,u))], where both sums are restricted to u with c(u)=c(x)  (5)
(83) Additionally, in formula (5), x is pixel coordinates to be restored. Pixel coordinates where there is an output are expressed by u. A color filter identifier for x or u is expressed by c. The distance between coordinates x and u is expressed by d(x, u). A weighting function that is determined according to the distance is expressed by w(d). A collection of pixel coordinates where there are outputs from the area is expressed by S.
(84) By using formula (5), pixels in an area may be linearly interpolated by using a weighting function that is determined by the distance between a pixel to be restored and a pixel which has been actually output.
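A minimal sketch of formula (5), assuming a Gaussian weighting function (the text only requires some weighting function w(d) determined by distance, so the choice of weight and of σ is an assumption):

```python
import math


def restore_pixel(x, outputs, color_of, c, sigma=1.0):
    """Restore the pixel at coordinates x by formula (5).

    outputs: dict mapping coordinates u -> pixel value p(u) actually output
    from the area (the set S). color_of maps coordinates to a color filter
    identifier; only pixels whose identifier equals c contribute.
    Weight: w(d) = exp(-d^2 / (2 * sigma^2)).
    """
    num = den = 0.0
    for u, p in outputs.items():
        if color_of(u) != c:
            continue
        w = math.exp(-math.dist(x, u) ** 2 / (2.0 * sigma * sigma))
        num += w * p
        den += w
    return num / den if den else 0.0
```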
(85) Additionally, besides formula (5), sparse coding using a dictionary or sparse coding without a dictionary may be used on the basis of sparsity of the image in order to restore pixels in an area.
(86) Next, an example of sparse coding using a dictionary will be given by the following formula (6).
(87) α̂=argmin.sub.α ∥s−Φ∘(Dα)∥.sub.2.sup.2+λ∥α∥.sub.ℓ, p=Dα̂  (6)
(88) Additionally, in formula (6), s is a vector of every pixel data output from the area. Learning data (a dictionary) obtained in advance is expressed by D. A mask vector indicating the positions of the pixels output from the area is expressed by Φ, and ∘ denotes element-wise multiplication. A vector of every restored pixel data in the area is expressed by p. The regularization weight is expressed by λ, and the norm used in the sparse coding is expressed by ℓ, where ℓ=0 or ℓ=1 is often used.
(89) An example of sparse coding not using a dictionary will be given by the following formula (7).
(90) p̂=argmin.sub.p ∥s−Ap∥.sub.2.sup.2+λ∥p∥.sub.ℓ  (7)
(91) In formula (7), s is a vector of every pixel data output from the area. A mask matrix indicating the positions of the pixels output from the area is expressed by A. A vector of every restored pixel data in the area is expressed by p. The regularization weight is expressed by λ, and the norm used in the sparse coding is expressed by ℓ, where ℓ=0 or ℓ=1 is often used.
(92) The methods for re-synthesizing a frame from packets of image data are not limited to those based on formulae (5) to (7) described above, and any formula or method may be used.
(93) Additionally, in the case where a packet corresponding to an area constituting a frame is not supplied to the reproduction unit 17 before the reproduction time of the frame, pixel data is generated for all the pixels in the area by a predetermined error process. As the predetermined error process, a method of using the pixel data of the area at the same position in the previous frame, or a method of painting all the pixels of the area with a single pixel value taken from an adjacent area in the same frame may be adopted, for example.
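The two error-handling strategies can be sketched as follows, modeling each area's contents as a flat list of pixel values; the data layout, function name, and neighbor selection are illustrative:

```python
def fill_missing_area(area_id, current, previous):
    """Generate pixel data for an area whose packet never arrived.

    current / previous: dict mapping area_id -> list of pixel values for the
    current and previous frames. Prefer the same area of the previous frame;
    otherwise paint the area with one pixel value of an adjacent area.
    """
    if previous and previous.get(area_id):
        return list(previous[area_id])          # same area, previous frame
    for neighbor in (area_id - 1, area_id + 1):
        pixels = current.get(neighbor)
        if pixels:
            return [pixels[0]] * len(pixels)    # flat fill from a neighbor
    return []                                    # nothing available to fill from
```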
(94) Now, the series of processes described above may be performed by hardware, or by software. In the case of performing the series of processes by software, programs constituting the software are installed in a computer. The computer here may be a computer in which dedicated hardware is built, or a general-purpose personal computer that is capable of executing various functions by installing various programs, for example.
(96) In the computer, a central processing unit (CPU) 101, a read only memory (ROM) 102, and a random access memory (RAM) 103 are interconnected by a bus 104.
(97) An input/output interface 105 is further connected to the bus 104. An input unit 106, an output unit 107, a storage unit 108, a communication unit 109, and a drive 110 are connected to the input/output interface 105.
(98) The input unit 106 is configured from a keyboard, a mouse, a microphone, or the like. The output unit 107 is configured from a display, a speaker, or the like. The storage unit 108 is configured from a hard disk, a non-volatile memory, or the like. The communication unit 109 is configured by a network interface, or the like. The drive 110 drives a removable medium 111 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like.
(99) A computer 100 configured in the above manner performs the series of processes described above by the CPU 101 loading a program stored in the storage unit 108 into the RAM 103 through the input/output interface 105 and the bus 104, and executing the program, for example.
(100) Additionally, the program to be executed by the computer 100 may be a program by which processes are performed in the chronological order described in the present specification, or a program by which processes are performed in parallel or at necessary timings, such as when invoked, for example.
(101) The embodiment of the present disclosure is not limited to the embodiment described above, and various changes may be made within the scope of the present disclosure.
(102) Additionally, the present disclosure may also adopt the following configurations.
(103) (1)
(104) An imaging apparatus including:
(105) an image sensor unit for generating pixel data of a pixel constituting a frame by photoelectric conversion;
(106) a calculation unit for calculating an amount of camera shake in each of areas obtained by dividing the frame; and
(107) a control unit for controlling exposure time of the image sensor for each of the areas according to the amount of camera shake calculated for each of the areas.
(108) (2)
(109) The imaging apparatus according to (1), wherein the control unit further controls a pixel summing method of the image sensor unit for each of the areas according to the amount of camera shake calculated for each of the areas.
(110) (3)
(111) The imaging apparatus according to (1) or (2), wherein the control unit causes exposure and pixel readout of the image sensor unit to be performed at least one time during one frame period for each of the areas according to the amount of camera shake calculated for each of the areas.
(112) (4)
(113) The imaging apparatus according to any of (1) to (3), further including a motion sensor unit for detecting at least one of angular velocity and acceleration,
(114) wherein the calculation unit calculates the amount of camera shake in each of the areas on the basis of at least one of the angular velocity and the acceleration detected.
(115) (5)
(116) The imaging apparatus according to any of (1) to (4), wherein the calculation unit calculates the amount of camera shake in each of the areas on the basis of a variance of pixel data of a plurality of pixels sampled for each of the areas.
(117) (6)
(118) The imaging apparatus according to any of (1) to (5),
(119) wherein the image sensor unit includes, for each of a plurality of regions where a plurality of pixels are arranged, one ADC that is shared by the plurality of pixels, and
(120) the area corresponds to the region where a plurality of pixels sharing one ADC are arranged.
(121) (7)
(122) The imaging apparatus according to any of (1) to (6), further including a data output unit for generating image data by packetizing, for each area, pixel data that is read out from the image sensor unit.
(123) (8)
(124) The imaging apparatus according to (7), wherein the data output unit stores, in a same packet, the pixel data that is read out from a same area, and describes an area number indicating a position of the area in a frame in the same packet.
(125) (9)
(126) The imaging apparatus according to (8), wherein the data output unit further describes, in the same packet, at least one of a frame number indicating a place in a chronological order of a frame to which the area belongs, a subframe number indicating a time of performance among several times, in a case where exposure and readout have been performed several times for the area in one frame period, and a summation pattern indicating a pixel summing method for the area.
(127) (10)
(128) The imaging apparatus according to any of (7) to (9), further including a re-synthesis unit for re-synthesizing the frame on the basis of the image data, generated by the data output unit, in which the pixel data is packetized on a per-area basis.
(129) (11)
(130) An imaging method to be performed by an imaging apparatus including an image sensor unit for generating pixel data of a pixel constituting a frame by photoelectric conversion, the method including:
(131) calculating an amount of camera shake in each of areas obtained by dividing the frame; and
(132) controlling exposure time of the image sensor for each of the areas according to the amount of camera shake calculated for each of the areas.
(133) (12)
(134) A program for controlling an imaging apparatus including an image sensor unit for generating pixel data of a pixel constituting a frame by photoelectric conversion, the program being for causing a computer of the imaging apparatus to perform:
(135) calculating an amount of camera shake in each of areas obtained by dividing the frame; and
(136) controlling exposure time of the image sensor for each of the areas according to the amount of camera shake calculated for each of the areas.
(137) (13)
(138) A reproduction apparatus for reproducing image data that is output from an imaging apparatus including
(139) an image sensor unit for generating pixel data of a pixel constituting a frame by photoelectric conversion,
(140) a calculation unit for calculating an amount of camera shake in each of areas obtained by dividing the frame,
(141) a control unit for controlling exposure time of the image sensor for each of the areas according to the amount of camera shake calculated for each of the areas, and
(142) a data output unit for generating the image data by packetizing, for each area, pixel data that is read out from the image sensor unit,
(143) the reproduction apparatus including:
(144) an acquisition unit for acquiring the image data in units of a packet; and
(145) a restoration unit for analyzing a packet of the acquired image data, and restoring the frame on the basis of each area corresponding to the packet.
(146) (14)
(147) The reproduction apparatus according to (13), wherein the packet of the image data stores the pixel data that is read out from a same area, and an area number indicating a position of the area in a frame is described in the packet.
(148) (15)
(149) The reproduction apparatus according to (14), wherein at least one of a frame number indicating a place in a chronological order of a frame to which the area belongs, a subframe number indicating a time of performance among several times, in a case where exposure and readout have been performed several times for the area in one frame period, and a summation pattern indicating a pixel summing method for the area is further described in the packet of the image data.
REFERENCE SIGNS LIST
(150) 10 Imaging apparatus 11 Imaging unit 12 Angular velocity sensor 13 Motion information acquisition unit 14 Motion amount calculation unit 15 Exposure/readout control unit 16 Data output unit 17 Reproduction unit 18 Data acquisition unit 19 Image re-synthesis unit 20 Image display unit 31 Optical lens unit 32 Image sensor 41 Pixel unit 42 ADC 43 ADC group 44 Area 50 Packet 100 Computer 101 CPU