Apparatus and method for referring to motion status of image capture device to generate stereo image pair to auto-stereoscopic display for stereo preview
09838672 · 2017-12-05
CPC classification
H04N13/302
ELECTRICITY
Abstract
A stereo preview apparatus has an auto-stereoscopic display, an input interface, a motion detection circuit, and a visual transition circuit. The input interface receives at least an input stereo image pair including a left-view image and a right-view image generated from an image capture device. The motion detection circuit evaluates a motion status of the image capture device. The visual transition circuit generates an output stereo image pair based on the input stereo image pair, and outputs the output stereo image pair to the auto-stereoscopic display for stereo preview, wherein the visual transition circuit refers to the evaluated motion status to configure adjustment made to the input stereo image pair when generating the output stereo image pair.
Claims
1. A stereo preview apparatus, comprising: an auto-stereoscopic display; an input interface, configured to receive at least an input stereo image pair including a left-view image and a right-view image generated from an image capture device; a motion detection circuit, configured to evaluate a motion status of the image capture device; and a visual transition circuit, configured to generate an output stereo image pair from the input stereo image pair, and output the output stereo image pair to the auto-stereoscopic display for stereo preview, wherein the visual transition circuit refers to an image synthesis parameter α set according to the evaluated motion status to configure adjustment made to the input stereo image pair when generating the output stereo image pair; wherein when a left-view output image included in the output stereo image pair is adjusted to be different from the left-view image included in the input stereo image pair, the left-view output image is a synthesized image I′(1−α) generated according to the left-view image I.sub.L and the right-view image I.sub.R included in the input stereo image pair according to the equation I′(1−α)=αI.sub.L+(1−α)I.sub.R, and when a right-view output image included in the output stereo image pair is adjusted to be different from the right-view image included in the input stereo image pair, the right-view output image is a synthesized image I′(α) generated according to the right-view image I.sub.R and the left-view image I.sub.L included in the input stereo image pair according to the equation I′(α)=(1−α)I.sub.L+αI.sub.R.
2. The stereo preview apparatus of claim 1, wherein the stereo preview is displayed under a photo mode.
3. The stereo preview apparatus of claim 1, wherein the stereo preview is displayed under a video capture mode.
4. The stereo preview apparatus of claim 1, wherein the motion detection circuit comprises: a motion analysis unit, configured to receive an output of the image capture device, and perform a motion analysis operation upon the output of the image capture device to evaluate the motion status.
5. The stereo preview apparatus of claim 4, wherein the motion detection circuit further comprises: a motion sensor, configured to generate a sensor output to the motion analysis unit; wherein the sensor output is referenced by the motion analysis operation performed by the motion analysis unit.
6. The stereo preview apparatus of claim 1, wherein the adjustment is disparity adjustment.
7. The stereo preview apparatus of claim 6, wherein the visual transition circuit comprises: a disparity analysis unit, configured to estimate a disparity distribution possessed by the left-view image and the right-view image; an image synthesis control unit, configured to set the image synthesis parameter α according to the evaluated motion status and the estimated disparity distribution; and an image synthesis unit, configured to generate at least one synthesized image according to the left-view image, the right-view image and the at least one image synthesis parameter, wherein the output stereo image pair includes the at least one synthesized image.
8. The stereo preview apparatus of claim 7, wherein the at least one synthesized image includes a synthesized left-view image and a synthesized right-view image of the output stereo image pair.
9. The stereo preview apparatus of claim 7, wherein when the evaluated motion status indicates that the image capture device is intended to be still relative to a user, the image synthesis control unit adjusts the at least one image synthesis parameter to make the output stereo image pair approach the input stereo image pair.
10. The stereo preview apparatus of claim 7, wherein when the evaluated motion status indicates that the image capture device is not intended to be still relative to a user, the image synthesis control unit adjusts the at least one image synthesis parameter to make the output stereo image pair approach a zero-disparity image pair or have a disparity distribution fitted into a comfort zone specified by the auto-stereoscopic display.
11. A stereo preview method, comprising: receiving at least an input stereo image pair including a left-view image and a right-view image generated from an image capture device; evaluating a motion status of the image capture device; and generating an output stereo image pair from the input stereo image pair, and outputting the output stereo image pair to an auto-stereoscopic display for stereo preview, wherein the evaluated motion status is referenced to set an image synthesis parameter α which is used to configure adjustment made to the input stereo image pair during generation of the output stereo image pair; wherein when a left-view output image included in the output stereo image pair is adjusted to be different from the left-view image included in the input stereo image pair, the left-view output image is a synthesized image I′(1−α) generated according to the left-view image I.sub.L and the right-view image I.sub.R included in the input stereo image pair according to the equation I′(1−α)=αI.sub.L+(1−α)I.sub.R, and when a right-view output image included in the output stereo image pair is adjusted to be different from the right-view image included in the input stereo image pair, the right-view output image is a synthesized image I′(α) generated according to the right-view image I.sub.R and the left-view image I.sub.L included in the input stereo image pair according to the equation I′(α)=(1−α)I.sub.L+αI.sub.R.
12. The stereo preview method of claim 11, wherein the stereo preview is displayed under a photo mode.
13. The stereo preview method of claim 11, wherein the stereo preview is displayed under a video capture mode.
14. The stereo preview method of claim 11, wherein the step of evaluating the motion status of the image capture device comprises: receiving an output of the image capture device; and performing a motion analysis operation upon the output of the image capture device to evaluate the motion status.
15. The stereo preview method of claim 14, wherein the step of evaluating the motion status of the image capture device further comprises: receiving a sensor output from a motion sensor; wherein the sensor output is referenced by the motion analysis operation.
16. The stereo preview method of claim 11, wherein the adjustment is disparity adjustment.
17. The stereo preview method of claim 16, wherein the step of generating the output stereo image pair from the input stereo image pair comprises: estimating a disparity distribution possessed by the left-view image and the right-view image; setting the image synthesis parameter α according to the evaluated motion status and the estimated disparity distribution; and generating at least one synthesized image according to the left-view image, the right-view image and the at least one image synthesis parameter, wherein the output stereo image pair includes the at least one synthesized image.
18. The stereo preview method of claim 17, wherein the at least one synthesized image includes a synthesized left-view image and a synthesized right-view image of the output stereo image pair.
19. The stereo preview method of claim 17, wherein when the evaluated motion status indicates that the image capture device is intended to be still relative to a user, the at least one image synthesis parameter is adjusted to make the output stereo image pair approach the input stereo image pair.
20. The stereo preview method of claim 17, wherein when the evaluated motion status indicates that the image capture device is not intended to be still relative to a user, the at least one image synthesis parameter is adjusted to make the output stereo image pair approach a zero-disparity image pair or have a disparity distribution fitted into a comfort zone specified by the auto-stereoscopic display.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
(6) Certain terms are used throughout the description and following claims to refer to particular components. As one skilled in the art will appreciate, manufacturers may refer to a component by different names. This document does not intend to distinguish between components that differ in name but not function. In the following description and in the claims, the terms “include” and “comprise” are used in an open-ended fashion, and thus should be interpreted to mean “include, but not limited to . . . ”. Also, the term “couple” is intended to mean either an indirect or direct electrical connection. Accordingly, if one device is coupled to another device, that connection may be through a direct electrical connection, or through an indirect electrical connection via other devices and connections.
(7) The key idea of the present invention is to exploit the nature of the human visual system to reduce crosstalk and vergence-accommodation conflict resulting from the inherent characteristics of the auto-stereoscopic display, as well as visual discomfort resulting from movement of the stereo camera. For example, zero-disparity, lower-contrast and smoothed images can be used to avoid/mitigate the aforementioned problems. The user behavior of using the stereo camera is considered to generate an improved and friendlier graphical user interface for stereo preview under a photo mode and a video capture mode. Further details of the proposed self-adapted stereo preview mechanism for three-dimensional (3D) photography on an auto-stereoscopic display are described below.
(9) The input interface 104 is coupled between the preceding image capture device 101 and the following motion detection circuit 106 and visual transition circuit 108, and is configured to receive each input stereo image pair IMG_IN generated from the image capture device 101. Therefore, the output of the image capture device 101 is accessible to the motion detection circuit 106 and visual transition circuit 108 through the input interface 104. By way of example, but not limitation, the input interface 104 may be a Camera Serial Interface (CSI) standardized by a Mobile Industry Processor Interface (MIPI).
(10) The motion detection circuit 106 is configured to evaluate a motion status MS of the image capture device 101. In one exemplary design, the motion detection circuit 106 includes a motion analysis unit 112. The motion analysis unit 112 is configured to receive the output of the image capture device 101 through the input interface 104, and then perform a motion analysis operation upon the output of the image capture device 101 to evaluate the motion status MS. In other words, the motion analysis unit 112 employs an image processing based algorithm to analyze image contents of the output of the image capture device 101 when performing the motion analysis operation. Please refer to
(11) In another exemplary design, the motion detection circuit 106 includes the aforementioned motion analysis unit 112 and an optional motion sensor 114. The motion sensor 114 is configured to generate a sensor output S_OUT to the motion analysis unit 112. Thus, the sensor output S_OUT is referenced by the motion analysis operation performed by the motion analysis unit 112. More specifically, still objects captured by the image capture device 101 in motion may be erroneously regarded as moving objects due to relative movement of the image capture device 101. Hence, the sensor output S_OUT provides motion information which can be used by the motion analysis unit 112 to distinguish between moving objects and still objects within a scene captured by the image capture device 101 in motion. With the assistance of the motion sensor 114, the motion analysis unit 112 can produce a more accurate evaluation of the camera motion.
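The motion analysis operation described above can be sketched as follows; this is a minimal illustration only, and the function name `evaluate_motion_status`, the frame-difference thresholds, and the gyroscope input are all hypothetical stand-ins rather than anything specified by the patent:

```python
import numpy as np

def evaluate_motion_status(prev_frame, curr_frame, gyro_rate=None,
                           diff_thresh=8.0, still_ratio=0.02, gyro_thresh=0.05):
    """Classify the camera as 'still' or 'moving' from consecutive frames.

    prev_frame/curr_frame: 2-D grayscale arrays from the capture device.
    gyro_rate: optional angular rate (rad/s) from a motion sensor; when
    available it helps separate camera motion from object motion, as the
    sensor output S_OUT does for the motion analysis unit.
    """
    diff = np.abs(curr_frame.astype(np.float32) - prev_frame.astype(np.float32))
    changed_ratio = np.mean(diff > diff_thresh)   # fraction of changed pixels
    if gyro_rate is not None and abs(gyro_rate) > gyro_thresh:
        return "moving"          # the sensor says the device itself is moving
    return "still" if changed_ratio < still_ratio else "moving"
```

In this sketch the sensor reading overrides the image-based vote, mirroring how still objects captured by a moving camera could otherwise be mistaken for moving ones.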
(12) The visual transition circuit 108 is configured to generate an output stereo image pair IMG_OUT based on the input stereo image pair IMG_IN, and output the output stereo image pair IMG_OUT to the auto-stereoscopic display 110 for stereo preview. The visual transition circuit 108 refers to the evaluated motion status MS to configure adjustment made to the input stereo image pair IMG_IN when generating the output stereo image pair IMG_OUT. In a preferred embodiment of the present invention, the adjustment made to the input stereo image pair IMG_IN is disparity adjustment used to avoid/mitigate crosstalk and vergence-accommodation conflict resulting from the auto-stereoscopic display 110 and visual discomfort resulting from camera motion. In this embodiment, the visual transition circuit 108 with disparity adjustment capability includes a disparity analysis unit 116, an image synthesis control unit 118 and an image synthesis unit 120. The disparity analysis unit 116 is configured to estimate a disparity distribution DD possessed by the left-view image I.sub.L and the right-view image I.sub.R of one input stereo image pair IMG_IN. The disparity analysis unit 116 may use any existing method to perform the disparity analysis.
(13) For example, the disparity analysis unit 116 may employ one of a stereo matching algorithm, a feature point extraction and matching algorithm, and a region-based motion estimation algorithm to obtain the statistical analysis of the disparity distribution DD. Please refer to
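A coarse disparity distribution of the kind the disparity analysis unit estimates can be sketched with per-block SAD (sum of absolute differences) matching, one of the simplest stereo matching schemes. The function `disparity_histogram` and its block/search parameters are hypothetical illustrations, not the patent's method:

```python
import numpy as np

def disparity_histogram(left, right, block=8, max_disp=16):
    """Estimate a coarse disparity distribution DD by per-block SAD matching.

    left/right: rectified grayscale views (2-D arrays of equal shape).
    Returns a histogram counting how many blocks voted for each candidate
    disparity in [0, max_disp].
    """
    h, w = left.shape
    hist = np.zeros(max_disp + 1, dtype=int)
    for y in range(0, h - block, block):
        for x in range(max_disp, w - block, block):
            patch = left[y:y+block, x:x+block].astype(np.float32)
            # SAD cost against the right view at each candidate disparity
            costs = [np.abs(patch - right[y:y+block, x-d:x-d+block]).sum()
                     for d in range(max_disp + 1)]
            hist[int(np.argmin(costs))] += 1
    return hist
```

The peak of the histogram indicates the dominant disparity, and the spread of the histogram is the kind of statistic the image synthesis control unit could compare against the display's comfort zone.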
(14) In general, the user feels most comfortable when the vergence is on the screen of the auto-stereoscopic display 110, i.e., a zero-disparity image pair is displayed on the auto-stereoscopic display 110. Besides, there is a comfort zone when an image pair with non-zero disparity is displayed on the auto-stereoscopic display 110. The comfort zone depends on the specification of the 3D display panel. For example, the auto-stereoscopic display 110 is a 3D display panel with a defined 3D vergence angle θ=typ(2D)±1°(3D), where typ(2D) is the typical 2D vergence angle represented by
typ(2D)=2 tan.sup.−1(B/2d)
B represents the distance between the right eye and the left eye, and d represents the distance between the panel and the user's eyes. When the disparity distribution of the image pair is fitted into the comfort zone, the image pair with non-zero disparity can be perceived by the user within the defined 3D vergence angle θ of the auto-stereoscopic display 110. In this way, the user can have comfortable 3D perception. Further, the user's eyes are less sensitive to the crosstalk when the image pair presents a lower-contrast 3D image or a smoothed 3D image. Based on these inherent characteristics of the human visual system, the present invention proposes using the image synthesis control unit 118 and the image synthesis unit 120 to dramatically reduce crosstalk, vergence-accommodation conflict and visual discomfort.
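The comfort-zone test can be illustrated numerically. The sketch below assumes the standard viewing-geometry relation typ(2D)=2·tan⁻¹(B/2d) for the typical 2-D vergence angle (the patent's figure equation is elided, so this formula is an assumption), and checks a screen disparity against the ±1° tolerance the patent gives as an example:

```python
import math

def vergence_angle_deg(B, d, screen_disparity=0.0):
    """Vergence angle in degrees for a point shown with the given on-screen
    disparity. B: interocular distance, d: viewing distance (same units).
    screen_disparity > 0 means uncrossed disparity (behind the screen)."""
    return math.degrees(2.0 * math.atan((B - screen_disparity) / (2.0 * d)))

def in_comfort_zone(B, d, screen_disparity, tolerance_deg=1.0):
    """Check the example comfort zone theta = typ(2D) +/- 1 degree."""
    typ_2d = vergence_angle_deg(B, d, 0.0)   # zero-disparity (on-screen) angle
    return abs(vergence_angle_deg(B, d, screen_disparity) - typ_2d) <= tolerance_deg
```

For example, with B = 6.5 cm and d = 50 cm, a zero-disparity point sits at the typical 2-D angle and is comfortable by definition, while a 5 cm on-screen disparity falls well outside the ±1° zone.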
(16) The image synthesis control unit 118 is configured to set at least one image synthesis parameter α according to the evaluated motion status MS and the estimated disparity distribution DD. Regarding the image synthesis unit 120, it is configured to generate at least one synthesized image according to the left-view image I.sub.L, the right-view image I.sub.R and the at least one image synthesis parameter α, wherein the output stereo image pair IMG_OUT includes the at least one synthesized image.
(17) In one exemplary design, one of the left-view image I.sub.L and the right-view image I.sub.R remains intact, and the other of the left-view image I.sub.L and the right-view image I.sub.R is replaced by a synthesized image derived from the left-view image I.sub.L and the right-view image I.sub.R. For example, the output stereo image pair IMG_OUT includes the left-view image I.sub.L and the synthesized image I′(α) (which acts as a right-view image). By way of example, the one-view synthesis scheme may be implemented using view interpolation expressed by following equation.
I′(α)=I′(αu.sub.L+(1−α)u.sub.R,αv.sub.L+(1−α)v.sub.R)=(1−α)I.sub.L(u.sub.L,v.sub.L)+αI.sub.R(u.sub.R,v.sub.R) (1)
(18) In above equation (1), a pixel I.sub.L(u.sub.L,v.sub.L) with coordinate (u.sub.L,v.sub.L) in the left-view image I.sub.L and a pixel I.sub.R(u.sub.R,v.sub.R) with coordinate (u.sub.R,v.sub.R) in the right-view image I.sub.R are corresponding points, and blended to form a pixel I′(αu.sub.L+(1−α)u.sub.R,αv.sub.L+(1−α)v.sub.R) with coordinate (αu.sub.L+(1−α)u.sub.R,αv.sub.L+(1−α)v.sub.R) in the synthesized image I′.
(19) Alternatively, the one-view synthesis scheme may be implemented using photometric view interpolation expressed by following equation.
I′(α)=(1−α)I.sub.L+αI.sub.R (2)
(20) In above equation (2), a pixel I.sub.L(x,y) with coordinate (x,y) in the left-view image I.sub.L and a pixel I.sub.R(x,y) with the same coordinate (x,y) in the right-view image I.sub.R are blended to form a pixel I′(x,y) with the same coordinate (x,y) in the synthesized image I′.
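The photometric view interpolation of equation (2) is a direct per-pixel blend, which can be sketched as follows (the function name `synthesize_view` is a hypothetical label, not from the patent):

```python
import numpy as np

def synthesize_view(I_L, I_R, alpha):
    """Photometric view interpolation per equation (2): blend the two views
    at identical pixel coordinates with weight alpha in [0, 1]."""
    I_L = I_L.astype(np.float32)
    I_R = I_R.astype(np.float32)
    return (1.0 - alpha) * I_L + alpha * I_R
```

In the one-view scheme the output pair is then (I.sub.L, `synthesize_view(I_L, I_R, alpha)`): alpha = 0 reproduces the left view (a zero-disparity pair), while alpha = 1 reproduces the original right view.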
(21) In another exemplary design, both of the left-view image I.sub.L and the right-view image I.sub.R are replaced by synthesized images each derived from the left-view image I.sub.L and the right-view image I.sub.R. For example, the output stereo image pair IMG_OUT includes one synthesized image I′(1−α) (which acts as a left-view image) and another synthesized image I′(α) (which acts as a right-view image).
(22) By way of example, the two-view synthesis scheme may be implemented using view interpolation expressed by following equations.
I′(α)=I′(αu.sub.L+(1−α)u.sub.R,αv.sub.L+(1−α)v.sub.R)=(1−α)I.sub.L(u.sub.L,v.sub.L)+αI.sub.R(u.sub.R,v.sub.R) (3)
I′(1−α)=I′((1−α)u.sub.L+αu.sub.R,(1−α)v.sub.L+αv.sub.R)=αI.sub.L(u.sub.L,v.sub.L)+(1−α)I.sub.R(u.sub.R,v.sub.R) (4)
(23) Alternatively, the two-view synthesis scheme may be implemented using photometric view interpolation expressed by following equations.
I′(α)=(1−α)I.sub.L+αI.sub.R (5)
I′(1−α)=αI.sub.L+(1−α)I.sub.R (6)
(24) In above equations (1) and (2), 0≤α≤1 and α∈R; and in above equations (3)-(6), 0≤α≤0.5 and α∈R. Thus, the disparity distribution of the output stereo image pair IMG_OUT can be adaptively adjusted by setting the image synthesis parameter α based on the disparity distribution DD and the motion status MS. More specifically, when the image synthesis parameter α is set to a smaller value, the output stereo image pair IMG_OUT would be more like a zero-disparity image pair for reduction of crosstalk and vergence-accommodation conflict; and when the image synthesis parameter α is set to a larger value, the output stereo image pair IMG_OUT would be more like the input stereo image pair IMG_IN for stronger 3D perception. In this embodiment, when the evaluated motion status MS indicates that the image capture device 101 is intended to be still relative to the user, the image synthesis control unit 118 adjusts the at least one image synthesis parameter α to make the output stereo image pair IMG_OUT approach the input stereo image pair IMG_IN; and when the evaluated motion status MS indicates that the image capture device 101 is not intended to be still relative to the user, the image synthesis control unit 118 adjusts the at least one image synthesis parameter α to make the output stereo image pair IMG_OUT approach a zero-disparity image pair or have a disparity distribution fitted into a comfort zone specified by the auto-stereoscopic display 110.
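The two-view photometric scheme of equations (5) and (6) can be sketched as below; `synthesize_pair` is a hypothetical helper name. Note that at α = 0.5 both synthesized views collapse onto the same blended image, i.e. a zero-disparity pair:

```python
import numpy as np

def synthesize_pair(I_L, I_R, alpha):
    """Two-view photometric synthesis per equations (5) and (6).

    Returns (left_out, right_out) = (I'(1-alpha), I'(alpha)). At
    alpha = 0.5 both outputs equal the same average image, so the output
    pair has zero disparity.
    """
    I_L = I_L.astype(np.float32)
    I_R = I_R.astype(np.float32)
    right_out = (1.0 - alpha) * I_L + alpha * I_R   # equation (5)
    left_out = alpha * I_L + (1.0 - alpha) * I_R    # equation (6)
    return left_out, right_out
```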
(26) Step 402: Receive at least one input stereo image pair including a left-view image and a right-view image generated from an image capture device (e.g., a stereo camera). Go to steps 404 and 406.
(27) Step 404: Estimate a disparity distribution of the at least one input stereo image pair. Go to step 408.
(28) Step 406: Evaluate a motion status of the image capture device.
(29) Step 408: Check if the evaluated motion status indicates that the image capture device is intended to be still relative to a user. If yes, go to step 410; otherwise, go to step 412.
(30) Step 410: Configure at least one image synthesis parameter to have a visual transition to approach the input stereo image pair for stronger 3D perception. Go to step 414.
(31) Step 412: Configure at least one image synthesis parameter to have a visual transition to approach a zero-disparity image pair for reduced crosstalk, vergence-accommodation conflict and visual discomfort.
(32) Step 414: Generate an output stereo image pair to an auto-stereoscopic display for stereo preview. The output stereo image pair may have one synthesized image or two synthesized images, depending upon actual design consideration.
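The flow of steps 402-414 can be sketched as hypothetical glue code; the extreme parameter values chosen per motion status, and the use of the one-view photometric blend of equation (2), are illustrative simplifications rather than the patent's specified control policy:

```python
def stereo_preview_step(I_L, I_R, motion_status):
    """One pass of steps 402-414 (illustrative only): pick the synthesis
    parameter from the motion status, then build the output pair with the
    one-view photometric blend of equation (2)."""
    if motion_status == "still":
        alpha = 1.0   # approach the input pair: stronger 3D perception (step 410)
    else:
        alpha = 0.0   # approach zero disparity: reduced crosstalk/discomfort (step 412)
    right_out = (1.0 - alpha) * I_L + alpha * I_R   # equation (2)
    return I_L, right_out                            # output pair for the display (step 414)
```

A practical implementation would also consult the estimated disparity distribution (step 404) and transition α smoothly rather than switching between the extremes.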
(33) It should be noted that if a capture event is triggered by the user pressing a physical/virtual shutter button while the stereo preview is displayed on the auto-stereoscopic display, a stereo image pair corresponding to the capture event is stored as one captured stereo image output for the photo mode. In other words, the capture event is not triggered before the stereo preview is displayed. As a person skilled in the art can readily understand details of each step in
(35) Step 502: Receive a capture event.
(36) Step 504: Start video recording of an output of the image capture device.
(37) It should be noted that, after a capture event is triggered by the user pressing a physical/virtual shutter button, the stereo preview of each output stereo image pair IMG_OUT is displayed on the auto-stereoscopic display and a video recording operation of each input stereo image pair IMG_IN is started, simultaneously. The output stereo image pair IMG_OUT is not necessarily the same as the input stereo image pair IMG_IN. In other words, due to the fact that the capture event is triggered before the stereo preview is displayed, an input stereo image pair IMG_IN with an original disparity distribution is recorded while an output stereo image pair IMG_OUT with an adjusted disparity distribution is displayed for stereo preview. As a person skilled in the art can readily understand details of other steps in
(38) In summary, the present invention provides a novel graphical user interface (GUI) for stereo preview on a mobile device equipped with a stereo camera and an auto-stereoscopic display. More specifically, based on the motion status of the stereo camera, a self-adapted stereo preview for 3D photography on an auto-stereoscopic display under a photo mode or a video recording mode is provided, such that the user can perceive a more comfortable stereo preview while the stereo camera is moving.
(39) Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.