Method and apparatus for creating an adaptive Bayer pattern
11656722 · 2023-05-23
CPC classification: G06F3/0425, G06F3/017, G06F3/0488, G06T3/4053 (Physics)
International classification: G06F3/0488, G06T3/40 (Physics)
Abstract
A method and apparatus for creating an adaptive mosaic pixel-wise virtual Bayer pattern. The method may include receiving a plurality of monochromatic images from an array of imaging elements, creating a reference ordered set at infinity from the plurality of monochromatic images, running a demosaicing process on the reference ordered set, and creating a color image from the demosaiced ordered set. One or more offset artifacts resulting from the demosaicing process may be computed at a distance other than infinity, and the ordered set may be modified in accordance with the computed offsets.
Claims
1. A method for enabling gesture recognition of a gesture command on a mobile device, comprising: receiving a plurality of images of an object to be used for performing a gesture command from an array of imaging elements included within a camera of the mobile device; aligning the plurality of images; creating an ordered set at infinity comprising Red, Clear, Blue (R, C, B) pixel elements from the plurality of images; defining as an infinity image a low-resolution G image generated by subtracting the R, C, B pixel data of the plurality of images; performing a demosaicing process on the ordered set at infinity; computing offsets between the images from one or more determined artifacts generated from imaging the object performing the gesture command; modifying the ordered set to adapt to the computed offsets; regenerating the demosaiced image and computing an associated depth thereof; and interpreting the gesture command performed by the object.
2. The method of claim 1, wherein the gesture recognition is updated by matching row-wise and column-wise disparity values.
3. The method of claim 1, further comprising: realigning the reference ordered set such that objects that are not at infinity are subtended by a modified Bayer pattern for better image quality; creating a second pattern based upon the modified Bayer pattern; and generating a second demosaiced image in accordance with the second pattern.
4. The method of claim 3, further comprising generating a depth map in accordance with the second pattern.
5. The method of claim 1, wherein a hand of a user is captured in the image as the object performing the gesture command.
6. The method of claim 5, wherein the hand of the user is used to control the mobile device.
7. The method of claim 6, wherein the mobile device comprises a camera.
8. The method of claim 7, wherein the hand of the user is positioned within a field of view of the camera.
9. The method of claim 7, wherein the array of imaging elements comprises a backward facing camera and the hand of the user is positioned within a field of view of the backward facing camera.
10. The method of claim 9, wherein a pinch gesture by the hand of the user positioned within the field of view of the backward facing camera adjusts zoom on the display of one or more images captured by a front facing camera of the mobile device without requiring contact with the display of the camera.
11. The method of claim 9, wherein a thumbs up gesture by the hand of the user positioned within the field of view of the backward facing camera results in the taking of a picture captured by a front facing camera of the mobile device.
12. The method of claim 9, wherein movement of the hand of the user positioned within the field of view of the backward facing camera towards and away from the display of the camera adjusts zoom of the front facing camera of the mobile device.
13. The method of claim 6, wherein the one or more mobile devices comprise one or more electronic devices.
14. The method of claim 13, wherein the one or more electronic devices are controlled by a gestural interface, each gesture being determined based upon one or more images acquired by the array of imaging elements comprising a camera of the electronic device for acquiring the gesture command.
15. The method of claim 13, wherein the one or more electronic devices comprises a television.
16. The method of claim 13, wherein the one or more devices comprises a game console.
17. The method of claim 1, wherein the step of computing one or more offset artifacts is performed only on pixels in the reference image that have changed from a prior image.
18. The method of claim 1, further comprising the steps of: highlighting on a touchscreen, by the user, one or more regions corresponding to one or more objects in the demosaiced image for which a distance to the mobile device from the object is to be determined; extracting the coordinates of the highlighted region from one or more of the demosaiced image and the received plurality of images; segmenting the region in the one of the received plurality of images and the demosaiced image from which the coordinates were extracted; determining a distance to the one or more objects corresponding to the highlighted region on the display; and determining one or more dimensions of the one or more objects corresponding to the highlighted region in accordance with the demosaicing process.
19. A system for enabling gesture recognition of a gesture command on a mobile device, comprising: an array of imaging elements comprising a camera of a mobile device for acquiring a plurality of images; and a processor for: receiving a plurality of images of an object to be used for performing a gesture command from an array of imaging elements included within the camera of the mobile device; aligning the plurality of images; creating an ordered set at infinity comprising Red, Clear, Blue (R, C, B) pixel elements from the plurality of images; defining as an infinity image a low resolution G image generated by subtracting the R, C, B pixel data of the plurality of images; performing a demosaicing process on the ordered set at infinity; amplifying a response of blue and red pixel level elements based upon the C pixel data; computing offsets between the images from one or more determined artifacts generated from imaging the object performing a gesture command; modifying the ordered set to adapt to the computed offsets; regenerating the demosaiced image; and interpreting the gesture command performed by the object.
20. A non-transitory storage medium having a computer program stored thereon, the computer program causing a general purpose computer to perform the steps of: receiving a plurality of monochromatic images of an object to be used for performing a gesture command from an array of imaging elements included within a camera of a mobile device; aligning the plurality of images; creating an ordered set at infinity comprising Red, Clear, Blue (R, C, B) pixel elements from the plurality of images; defining as an infinity image a low resolution G image generated by subtracting the R, C, B pixel data of the plurality of images; performing a demosaicing process on the ordered set at infinity; amplifying a response of blue and red pixel level elements based upon the C pixel data; computing offsets between the images from one or more determined artifacts generated from imaging the object performing a gesture command; modifying the ordered set to adapt to the computed offsets; regenerating the demosaiced image; and interpreting the gesture command performed by the object.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) This patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
(2) For a more complete understanding of the invention, reference is made to the following description and accompanying drawings, in which:
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
(22) One or more embodiments of the invention will now be described, making reference to the following drawings in which like reference numbers indicate like structure between the drawings.
(23) The inventors of the present invention have determined that, using imaging data from the noted image sensor arrays, the following observations apply:
(24) 1. If one looks from far enough away in a field-of-view of an array camera or of a microlens array sensor such as the one described in accordance with the various embodiments of this invention, then all of the pixel elements from any of the sensor arrays will view the same scene. In other words, at infinity, all the pixel-element data from the various sensors subtend the same set of locations in the scene. This allows some practical conclusions that define an infinity distance, and that allow for setting up a reference frame for use in determining parallax and depth information in a scene.
(25) 2. Providing a set of data that is subdivided into four monochromatic images also helps reduce cross talk between the channels and provides cleaner pixels.
(26) 3. While there have been many super resolution approaches to resolving the problem of depth, as well as the convergence of a Bayer pattern with depth to produce an acceptable demosaicing result, most of the approaches dramatically lower the resolution of the final data set, i.e., relative to the total number of starting pixels, and are computationally very taxing.
(27) 4. If a Bayer subimage pattern is used for generation of monochrome subimages, then an ordered set of pixels can be generated to represent the background image, by transforming the Bayer pattern into an ordered set at the pixel level of interest.
(28) 5. Defining a set of four monochromatic images comprised of primary colors (for instance, two green, one blue, and one red) creates the ordered set that will be used as a composite pixel-level Bayer pattern in accordance with the various embodiments of the invention.
(29) 6. Lining up the green images for disparity offsets effectively helps in lining up the red and blue images as well, since they are epipolar with one of the two existing images.
(30) 7. If one starts looking at objects that are located a little closer to the sensor as compared to the image produced at the infinity distance, the demosaicing process produces artifacts that are apparent and easy to extract.
(31) 8. Re-aligning the pixel-wise synthetic Bayer pattern can help get rid of such artifacts and aid in producing a crisp image.
(32) 9. One can then define a new ordered set that varies row-wise, and column-wise, comprised of subsets of the four monochromatic images.
(33) 10. If one follows these observations, then one has no need for a high-resolution coordinate grid, as defined in prevalent SR and image restoration techniques. In fact, such a grid becomes cumbersome to build and maintain.
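The pixel-wise composite Bayer pattern described in observations 4 and 5 above can be sketched in a few lines. This is a minimal illustration only, assuming an RGGB layout and numpy arrays; the function and layout names are not taken from the patent.

```python
import numpy as np

def build_virtual_bayer(r, g1, g2, b):
    """Interleave four monochrome LR images (each h x w) into a
    2h x 2w pixel-wise virtual Bayer mosaic (assumed RGGB layout)."""
    h, w = r.shape
    mosaic = np.empty((2 * h, 2 * w), dtype=r.dtype)
    mosaic[0::2, 0::2] = r    # R at even rows, even columns
    mosaic[0::2, 1::2] = g1   # G1 at even rows, odd columns
    mosaic[1::2, 0::2] = g2   # G2 at odd rows, even columns
    mosaic[1::2, 1::2] = b    # B at odd rows, odd columns
    return mosaic
```

Note that the mosaic has roughly four times the pixel count of any single monochromatic source image, consistent with observation 3's concern about resolution loss.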
(34) The various embodiments of the present invention also present gesture functionality with a mobile device using such a camera or a pair of front-facing cameras. The device can be a smartphone or a tablet. The user is able to use gestures to enable the device with multiple zoom functionality. As an example, in a camera application, the user is able to use a three-dimensional pinch gesture to zoom, or a thumbs-up gesture to snap a picture. A number of applications are presented in this work as well as illustrations for such applications. Novel three-dimensional eye tracking with multiple degrees of freedom is also proposed.
(35) Therefore, in accordance with the elements determined by the inventors of the present invention noted above, a method and system for generating one or more adaptive Bayer patterns from one or more monochrome images is provided.
(36) Given a set of monochromatic images that are light sensitive to different components of the wavelength, a well-ordered set of pixel-element patterns that minimizes the error between the current constructed set, and another set generated while viewing the scene through the same complex sensor pattern at infinity is generated in accordance with one or more embodiments of the present invention. Specifically, one can generate a set of patterns that mitigate that error, and if so, use this information to determine depth information to be employed in an adaptive demosaicing process. In order to employ the inventive system and method, one must first construct a reference image. As noted above, for construction of the reference image, as well as during later use of the system to generate depth information, four different images are preferably obtained from a micro lens (or other camera) array.
(37) Referring next to the drawings, the process begins by receiving and aligning the plurality of monochromatic images, creating the ordered set at infinity, performing demosaicing on that set, and computing offsets between the images from the resulting artifacts.
(38) This step is then followed by the process of modifying the ordered set to adapt to the computed offsets in step 460, and is achieved by segmentation/disparity decomposition, in which regions in each of the monochrome images are first segmented, and then their disparities are computed. Temporal stability analysis as well as other salient features extraction is then attempted. Once the disparity that is associated with every pixel in the scene is known, one can then modify the initial infinity-based ordered set to produce a new demosaiced ordered set at step 470, based on the disparity computation. This new ordered set will then constitute a final version of the image for that particular frame. The whole process is then attempted again for each frame at step 480. This is desirable, since objects move in the field-of-view, and hence, it is important to take that aspect into account in an adaptive demosaicing Bayer pattern.
(39) Details of each of the steps noted above will now be described.
(40) Generating the Reference Image
(41) As noted above, the reference ordered set at infinity may be written as:
I.sub.∞(x,y,∞)={I.sub.R(x,y),I.sub.G1(x,y),I.sub.G2(x,y),I.sub.B(x,y)}
(42) Once this ordered set is generated, demosaicing can be performed and the individual R, G, and B channel images may then be generated.
(43) Consider then this generated image at infinity, an example of which is depicted in the accompanying drawings.
(44) In a way, this reference background image is one of a set of images whose dimensions approximately equate to those of the four images put together, i.e., 4× the original dimensions. This image also represents the point beyond which all images look fairly identical to each other, once demosaicing takes place after sufficient white balancing between the different images. One is then able to generate an ordered set at infinity; more importantly, one has the ability to use this image as a reference image, since it represents the ordered set at infinity, and hence every other image taken at other than infinity will have components offset from it. Once demosaicing is performed on the infinity-ordered set, every image generated at other than infinity will exhibit depth-related artifacts from any subsequent demosaicing processes.
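A minimal demosaicing pass over such an RGGB pixel-wise mosaic can be sketched as below. This is a deliberately simple nearest-neighbor/averaging scheme for illustration, assuming the RGGB layout used earlier; it is far cruder than the adaptive demosaicing the patent describes, and the names are hypothetical.

```python
import numpy as np

def demosaic_simple(mosaic):
    """Crude demosaic of an RGGB pixel-wise mosaic: split the channels,
    average the two greens, and replicate each sample into its 2x2
    neighborhood to recover a full-resolution RGB image."""
    r = mosaic[0::2, 0::2]
    g = 0.5 * (mosaic[0::2, 1::2] + mosaic[1::2, 0::2])
    b = mosaic[1::2, 1::2]
    # np.kron with a 2x2 block of ones performs pixel replication
    up = lambda c: np.kron(c, np.ones((2, 2)))
    return np.stack([up(r), up(g), up(b)], axis=-1)
```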
(45) Identifying Depth from Demosaicing Artifacts
(46) If a reference image, I, is well-ordered and cleanly generated in RGB at infinity, it can serve as the baseline against which depth-related demosaicing artifacts in subsequent images are measured.
(47) Defining a “Discernible” Depth
(48) In accordance with the one or more embodiments of the invention, the inventors have determined that an artifact is generated when object boundaries do not line up between two or more of the four component images when these images are taken at a depth other than infinity. For instance, a hand imaged close to the sensor exhibits a monochromatic fringe around the fingers.
(50) Such artifacts inherently point to depth discontinuities, as well as misalignments between the various monochrome images during demosaicing. By measuring the magnitude of the misalignment, in accordance with the various embodiments of the invention, it is possible to measure disparity between the reference image and another image generated from monochromatic images taken at a depth other than infinity. The monochromatic (green) lining around the fingers, for instance, visually encodes depth, making it discernible and relatively easy to extract, and hence making it easy to extract depth information as well.
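Measuring the magnitude of the misalignment can be sketched as a bounded brute-force search, for example a row-wise sum-of-absolute-differences (SAD) between two of the monochrome images. This is an illustrative stand-in for disparity estimation, not the patent's disparity-decomposition method; the function name and SAD criterion are assumptions.

```python
import numpy as np

def row_disparity(ref, other, max_shift=8):
    """Integer horizontal disparity that best aligns `other` to `ref`,
    found by minimizing mean absolute difference over a bounded
    range of column shifts."""
    best_d, best_err = 0, np.inf
    for d in range(-max_shift, max_shift + 1):
        err = np.abs(ref - np.roll(other, d, axis=1)).mean()
        if err < best_err:
            best_d, best_err = d, err
    return best_d
```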
(51) Since this image represents a perfectly aligned Bayer pattern, it can be used to extract a perfectly demosaiced image with the three primary color channels.
(52) Encoded Artifacts from Depth-Misaligned Image Sets
(53) As objects present in images taken at a depth other than infinity violate the infinity reference image, and its associated criteria, a new set is preferably used, comprised of modified, row-wise pixel offsets, to adjust the demosaicing algorithm. These artifacts have certain advantages, allowing for the further features of the present invention to be utilized:
(54) 1. As noted above, the artifacts appear at the boundaries of objects located at depths other than infinity.
(55) 2. These artifacts are clearly delineated, and can easily be extracted, since they characteristically are comprised of monochromatic color offsets.
(56) 3. The width and height of these artifacts indicate disparity values in the horizontal and vertical directions respectively.
(57) 4. Building on this last point, being able to discern depth in multiple dimensions is another advantage of this lens configuration.
(58) However, the main difference between prior art super resolution techniques in existence and the embodiments of the present invention is that the present invention does not strive to “fill in the blanks” of missing data. Instead, it is assumed that there exists a set of locations for every (x,y) value in the image, as will be described below in greater detail.
(59) Adaptive Demosaicing and Generation of a Synthetic Bayer Pattern
(60) The process for adaptive demosaicing will now be described in greater depth, in accordance with the various embodiments of the present invention. Given four monochromatic source images, an l.sub.1×l.sub.2 set of images can be generated, each with a resolution of approximately n×m, approximately four times the resolution of the original set of images, where l.sub.1 represents the total set of horizontal disparity images and l.sub.2 represents the total set of vertical disparity images. A total of approximately l.sub.1×l.sub.2×n×m pixels is generated.
(62) This is accomplished by first creating a single n×m ordered set at infinity, as described above with respect to
(63) Consider the set S, representing the entire set of demosaiced images produced at various depths. One can represent S as the union of all of these candidate images' pixels, such that:
S={I.sub.1,1(x,y)∪I.sub.1,2(x,y)∪ . . . ∪I.sub.l1,l2(x,y)}, where I.sub.i,j denotes the candidate demosaiced image at horizontal shift i and vertical shift j.
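Enumerating this bounded set of candidates can be sketched as follows: one virtual Bayer mosaic per candidate (row, column) shift of the non-reference channels. This is a minimal numpy illustration under an assumed RGGB layout with R as the fixed reference channel; the helper names are not from the patent, and `np.roll`'s wrap-around at the borders is a simplification.

```python
import numpy as np

def candidate_set(r, g1, g2, b, max_shift=1):
    """Enumerate the bounded set S: one 2h x 2w virtual Bayer mosaic
    per candidate (row, column) shift of the non-reference channels."""
    def mosaic(r, g1, g2, b):
        h, w = r.shape
        m = np.empty((2 * h, 2 * w), dtype=r.dtype)
        m[0::2, 0::2], m[0::2, 1::2] = r, g1
        m[1::2, 0::2], m[1::2, 1::2] = g2, b
        return m

    s = {}
    shifts = range(-max_shift, max_shift + 1)
    for dy in shifts:
        for dx in shifts:
            # shift every non-reference channel by the same candidate offset
            sh = lambda img: np.roll(img, (dy, dx), axis=(0, 1))
            s[(dy, dx)] = mosaic(r, sh(g1), sh(g2), sh(b))
    return s
```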
(64) Visualizing the Set, S
(65) By combining horizontal and vertical displacements, one can see how combinations of shifts within an image can create intermediate versions of the images presented above, by aligning the various disparities across all four images. All of these images, however, are contained in the set above. Provided the images are aligned row-wise, one can visualize S in three dimensions, such that multiple pixels occupy the same location, each corresponding to different shifts in the monochromatic sensors.
(66) Note on the Inventive Demosaicing Process
(67) S comprises a bounded set (Gaughan, 1997). Hence, the search space for demosaicing is also bounded, limited by the total number of pixels that one or a combination of the monochromatic images can be shifted by. In accordance with one or more embodiments of the invention, first define the set of monochromatic images, M. Then define a candidate demosaiced image I, such that:
(69) I∈S, the set of all possible demosaiced images.
(70) The candidate demosaicing scheme can belong to one of the images presented above, or a combination of these images.
(71) So, the set of all demosaiced images is known, and hence so is the set of solutions to the demosaicing problem.
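Selecting a candidate I∈S can then be sketched as an argmin over the bounded shift range. Following observation 6 above (aligning the green images also aligns the epipolar red and blue channels), a simple criterion is the SAD between the two green channels; this criterion and the function name are illustrative assumptions, not the patent's selection rule.

```python
import numpy as np

def select_alignment(g1, g2, max_shift=4):
    """Pick the (row, column) shift in the bounded set that best aligns
    the two green channels, by exhaustive SAD minimization."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = np.abs(g1 - np.roll(g2, (dy, dx), axis=(0, 1))).mean()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best
```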
(72) Computation of the Difference Image Through Disparity Decomposition
(73) Taking advantage of the computation capabilities available on both the CPU and the GPU of a computer or mobile device (although any appropriate available processor may be employed), one can generate disparity decomposition in a manner similar to that described in U.S. patent application Ser. Nos. 12/784,123; 12/784,022; 13/025,038; 13/025,055; 13/025,070; 13/297,029; 13/297,144; 13/294,481; and Ser. No. 13/316,606, the entire contents of each of these applications being incorporated herein by reference. Note that disparity decomposition may be performed along the vertical, horizontal, or diagonal directions, as well as any combinations of such directions. The invention also contemplates an alternative approach in which image data is first masked for skin tone, providing an initial demosaiced image, and then run through disparity decomposition on the masked images.
(74) Note that in accordance with the invention, it is possible to shift along any direction. So, using the shift along the diagonal allows taking advantage of two LR imagers that are sensitive to the same light color. For instance, one can shift along the diagonals for two green images, thus providing an additional means of computing depth from two identical channels. An example of shifting along the diagonals is presented in
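The diagonal shift between the two like-colored (green) imagers can be sketched as a one-parameter search in which the same offset is applied along both axes. As before, the SAD criterion and function name are illustrative assumptions.

```python
import numpy as np

def diagonal_disparity(g1, g2, max_shift=8):
    """Depth cue from two identical (green) channels on the array
    diagonal: search shifts applied equally along rows and columns."""
    best_d, best_err = 0, np.inf
    for d in range(-max_shift, max_shift + 1):
        err = np.abs(g1 - np.roll(g2, (d, d), axis=(0, 1))).mean()
        if err < best_err:
            best_d, best_err = d, err
    return best_d
```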
(75) Putting it all Together—Adaptive Real-Time Demosaicing
(76) A new form of demosaicing is then defined in accordance with the various embodiments of the invention, one which adaptively maintains a constantly changing virtual (or synthetic) pixel-wise Bayer pattern. By realigning depth per pixel, an updated, well-aligned image is always generated that addresses the demosaicing artifacts associated with pixel-wise sets as defined in this work.
(77) Operation Under Low Lighting Conditions
(78) The concept of a synthetic Bayer pattern can be extended to the operation of the system under low lighting conditions. This can be accomplished by interleaving a clear sensor, an IR sensor, or both, with the monochromatic LR sensors. Such an embodiment of the invention is presented in the accompanying drawings.
(79) The standard demosaicing approach can be modified to add a scale factor, based on the response of the pixel-level elements from the clear or IR version. So, the pixel-wise element set, described earlier, can be rewritten as:
I.sub.∞(x,y,∞)={I.sub.R(x,y),I.sub.G(x,y),I.sub.B(x,y),I.sub.C(x,y)}
(80) where I.sub.C represents the contribution from the clear channel image. Note that one of the other channels can be replaced by a clear channel or an IR channel. This is very similar to what has recently been suggested in (Aptina's Clarity+ Solution, 2013). Although this approach moves away from the standard Bayer pattern, it is in keeping with the Color Filter Array (CFA) configuration that is standard for demosaicing algorithms, and hence the approach of a synthetic, reconfigurable Bayer pattern remains applicable. With a clear LR sensor integrated, the green image is extracted subtractively. More importantly, because the clear sensor is panchromatic, it is able not only to capture a significant component of the green channel, but also to capture lower lux values and integrate such values with significantly greater influence, producing quality HR and SR images under darker lighting conditions.
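The subtractive green extraction and the clear-channel scale factor can be sketched as below. The relation C ≈ R + G + B and the mean-normalized scaling are simplifying assumptions for illustration, not the patent's exact formulation.

```python
import numpy as np

def green_subtractive(clear, r, b):
    """Low-resolution G extracted subtractively from a panchromatic
    clear channel, assuming C approximates R + G + B; clipped at zero."""
    return np.clip(clear - r - b, 0.0, None)

def scale_by_clear(chan, clear, eps=1e-6):
    """Amplify a color channel by the per-pixel clear-channel response,
    a simple stand-in for the scale factor described in the text."""
    return chan * (clear / (clear.mean() + eps))
```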
(81) A block diagram of this process is presented in the accompanying drawings.
(82) This step is then followed by the process of modifying the ordered set to adapt to the computed offsets in step 1580, and is achieved by segmentation/disparity decomposition, in which regions in each of the monochrome images are first segmented, and then their disparities are computed. Temporal stability analysis as well as other salient features extraction is then attempted. Once the disparity that is associated with every pixel in the scene is known, one can then modify the initial infinity-based ordered set to produce a new demosaiced ordered set at step 1590, based on the disparity computation. This new ordered set will then constitute a final version of the image for that particular frame. The whole process is then attempted again for each frame at step 1595. Temporal stability can also be used to minimize computational demands, i.e. keeping track of only changes in the field of view, per one or more of the applications incorporated by reference noted above. This is preferable, since objects move in the field-of-view, and hence, it is important to take that aspect into account in an adaptive demosaicing Bayer pattern.
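The temporal-stability optimization mentioned above (keeping track of only changes in the field of view; compare claim 17) can be sketched as a masked update. The change-mask criterion, tolerance, and function names are illustrative assumptions.

```python
import numpy as np

def update_disparity(disp_prev, frame_prev, frame_curr, recompute, tol=0.01):
    """Recompute disparity only where the image changed since the prior
    frame, keeping the previously computed values elsewhere.
    `recompute` is any callable mapping a frame to a disparity map."""
    mask = np.abs(frame_curr - frame_prev) > tol
    disp = disp_prev.copy()
    if mask.any():
        disp[mask] = recompute(frame_curr)[mask]
    return disp
```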
(83) Relevance to Super Resolution Techniques
(84) Although the proposed approach defines disparity as per-image, extrapolated patterns of disparity can be achieved from the disparity decomposition image that was defined earlier. Disparity decomposition goes beyond row-wise and column-wise decompositions, and can take the path of any curve that can be defined and traced along the curve's path. This is made possible because once all the decomposition images are created, a complete and ordered set of disparity decompositions can be used to extract the depth map. This is first performed by defining row-wise extracted differences, and then extended to extract intra-row information as well. This is relatively easy to accomplish, so long as the row-wise disparities are well-defined in the prior step of disparity decomposition, as described above. In a sense, a three-dimensional map is generated from the set of LR images, not just a single SR or HR image. Rather, a set of HR images is generated, most of which are incomplete due to visual occlusions in the field-of-view.
(85) Comparison with Existing Super Resolution (SR) Techniques
(86) Although the technique set forth in accordance with the various embodiments of the present invention is not considered a conventional technique for super resolution, it does produce an image that is of significantly higher resolution than the set of observation images that are associated with the problem at hand. Image super resolution is a discipline of image processing that attempts to generate high-quality, high-resolution images from a set of low-resolution and/or low-quality images (Nguyen, 2000). Most super resolution techniques employ multiframe super resolution, using temporal information to glean and extract further spatial details. Ultimately, the goal of super resolution is to provide for sub-pixel resolution, relative to the coordinate system that is associated with the original dataset, from the set of low-resolution observations/images. The approach set forth in accordance with the various embodiments of the present invention will be contrasted with these prior art super resolution techniques, and the uniqueness of the inventive approach will be highlighted. In (Nguyen, 2000), more conventional super resolution techniques answer the question: given a set of M×N observations and a resolution enhancement factor of r, how does one reconstruct an enhanced image of rM×rN?
(87) The problem is formulated as that of interpolating between a set of data that have been sampled on a theoretical higher-resolution grid. For instance, a low resolution frame, f.sub.k, is given by:
f.sub.k=DC.sub.kE.sub.kx+n.sub.k,1≤k≤p
(88) where D is the down-sampling operator, C.sub.k represents the blurring/averaging operator, and E.sub.k represents the affine transforms that map the HR grid coordinate system to LR. x is the unknown and ideal HR image, and n.sub.k is additive white noise. We note also that (Nguyen, 2000):
f=Hx+n
(89) where H would be a complete system matrix with dimensions defined as pMN×r.sup.2MN. The dimensions of H are directly related to the number of data samples and unknowns, which is usually a large and computationally cumbersome number. Most approaches of super resolution techniques employ variants of the idea of understanding H, along with what can be extracted and interpolated on a HR grid coordinate system. This computational complexity reflects attempts aimed at extracting structure, redundancy, as well as irregularities, among other salient features, to try and reduce this highly complex problem into a more manageable problem set.
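The LR observation model f.sub.k = D C.sub.k E.sub.k x + n.sub.k can be simulated in a few lines. This numpy sketch makes simplifying assumptions: E.sub.k is reduced to an integer translation, and C.sub.k followed by D is fused into 2×2 block averaging; it illustrates the model's structure, not any specific SR system.

```python
import numpy as np

def lr_frame(x, shift, sigma=0.0, rng=None):
    """Simulate one LR observation f_k = D C_k E_k x + n_k, where
    E_k is an integer translation, C_k a 2x2 box blur, and D
    decimation by 2 (blur and decimation fused as block averaging)."""
    y = np.roll(x, shift, axis=(0, 1))                     # E_k x
    h, w = y.shape
    y = y.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))  # D C_k
    if sigma > 0.0:
        rng = rng or np.random.default_rng()
        y = y + rng.normal(0.0, sigma, y.shape)            # + n_k
    return y
```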
(90) Multichannel Super Resolution
(91) For reasons that are related to the assumed input LR set to the inventive system, multichannel SR will be discussed briefly as well. Multichannel SR is an area of SR that is often referenced, and employs the utilization of various light-sensitive frames with lower resolution imagers, such as low-resolution frames of primary color images that are subtending the same scene. These lower resolution frames generate observations that are slightly offset from one another and can be used to generate a higher resolution image. This is mentioned because one set of inputs to the various approaches in accordance with the present invention consists of such low-resolution primary color frames (red, green, blue).
(92) As noted above, the inventive approach similarly operates on such sets of low-resolution primary color frames.
(93) Image Restoration
(94) A special case of SR is image restoration, in which an ordered linear set is created from the lower resolution observations through regularly sampled data from the LR images. Image restoration strives to "restore" or rather, rearrange the lower resolution pixels onto a fixed pattern of the high-resolution coordinate grid. An illustration of image restoration is presented in the accompanying drawings.
(95) Application—Manipulating an image with front-facing camera while taking a picture: Pinch and Zoom Functionality
(96) A smartphone with two forward-facing cameras or a microlens array camera can be enabled to allow the user to control the zoom action of the camera application by using a pinch gesture in a 3D environment, thus allowing the user to achieve the appropriate zoom level without having to move the phone from its current location (i.e., without having to touch the screen while actively aiming) or obscuring the screen/objects in the FOV.
(97) Hence, a method is presented for enabling a smartphone with a microlens array, or alternatively, two forward-facing cameras to allow the user to take pictures by using a set of predefined gestures in a 3D environment, such as a pinch gesture or a thumbs-up gesture. This would allow the user to take photos faster without obscuring the objects in the field of view. Alternatively, users can change the zoom by moving their hand back and forth in the direction of and away from the phone.
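The pinch-to-zoom mapping can be sketched as a simple linear transfer function from the measured 3-D thumb-to-index aperture to a zoom level. All constants below are hypothetical calibration values chosen for illustration; none come from the patent.

```python
def zoom_from_pinch(aperture_m, a_min=0.02, a_max=0.15,
                    z_min=1.0, z_max=4.0):
    """Map the 3-D pinch aperture (thumb-to-index distance in meters,
    as recovered from depth) linearly onto a camera zoom level,
    clamped to the [z_min, z_max] range."""
    t = (aperture_m - a_min) / (a_max - a_min)
    t = min(1.0, max(0.0, t))  # clamp to the calibrated aperture range
    return z_min + t * (z_max - z_min)
```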
(98) Another implementation in accordance with the present invention involves enabling a smartphone camera with gesture controls so that the phone can also be used with a tripod, allowing the user to take self-portraits from a 1 m-3 m distance; a telescoping tripod attachment for the phone could thus be sold as an accessory with the gesture application. Dual sensors can also be used instead of microlens array sensors on the back-facing cameras, and the user can take group portraits by gesturing a thumbs-up to the camera.
(99) Controlling a Mobile Device and Enabling Convergence Applications
(100) Another use case involves using the cell phone as a communication device for convergence applications in the living room. In such a case, the phone is used as the gestural interface/compute engine. A smartTV is used as the main display/render device for running a game or another application. Hence, convergence becomes defined as distributing the workload between the smartphone and the smartTV, in which the smartphone acts as the main compute device for detecting gesture recognition, and the smartTV acts as the main compute device for running various applications as well as cloning the screen. See
(101) All applications described can be applied to a phone, tablet or slate, as well as a watch or other mobile computing device.
(102) Application—Eye tracking in three dimensions and face mapping
(103) Another application involves face mapping. In such an application, eyes can be tracked by a number of lower resolution monochrome sensors. The sensors are then matched for color and gray value by an algorithm such as the one defined in the applications noted above and incorporated herein by reference, with which depth can be extracted. This is then used to get the pitch and yaw that is associated with the eye movement, as well as temporal tracking. With temporal stability as a feature that is extracted from the depth analysis that is suggested in the applications noted above, more degrees of freedom are associated with the eye movement.
(104) Another variant on this approach identifies the white components of the eyes and associates these components with monochromatic responses in multiple monochromatic LR sensors. Matching is then attempted on the redundant information in a number of the channels, such that these redundancies are exploited to further extract features that can be used for matching. These include shape, as well as the temporal stability mentioned above. Given the location of the camera and information about the mobile device, as well as accurate 3D eye tracking with six degrees of freedom, a person's gaze can be tracked quite accurately by mapping the gaze onto the screen.
(105) Application— Face Mapping
(106) Full face mapping can also be approached in a manner similar to what has been described for eye tracking; specifically, face tracking is done by exploiting gray-scale redundancies across monochrome lower-resolution images of the face. These redundancies, according to the process described in the applications incorporated herein by reference, noted above, can then be segmented, disparity decomposed, and matched, with the matches then propagated to the higher-resolution images as well.
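The matching of gray-scale redundancies can be sketched with a simple sum-of-absolute-differences search along one image row between two monochrome lower-resolution images. The patch size, search range, and function name are assumptions for illustration, not the algorithm of the incorporated applications.

```python
def best_disparity(left_row, right_row, x, patch=3, max_disp=16):
    """Match a 1-D patch centered at x in the left image row against
    leftward-shifted positions in the right row, returning the
    disparity with the lowest sum-of-absolute-differences cost."""
    half = patch // 2
    ref = left_row[x - half:x + half + 1]
    best, best_cost = 0, float("inf")
    for d in range(max_disp + 1):
        if x - d - half < 0:      # candidate patch would fall off the image
            break
        cand = right_row[x - d - half:x - d + half + 1]
        cost = sum(abs(a - b) for a, b in zip(ref, cand))
        if cost < best_cost:
            best, best_cost = d, cost
    return best
```

A feature that appears four pixels further left in the second image is recovered as a disparity of four, from which depth follows as in standard stereo.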
(107) Note that in all of these applications, stereo imaging, which can be considered a special form of array imaging, can be used to replace the suggested configuration. In all of these applications, it is therefore important to note the many similarities between stereo imaging and array image sensors.
(108) Application— Mapping the Surrounding Environment
(109) An image sensor array can be mounted on eye glasses, such as Google® glass, but with or without the augmented reality portion. A pedestrian may point at an object of interest and get information about the object, its location, as well as other information directly previewed on a wearable display, such as a smart watch.
(110) Application— Measurement Device
(111) A measuring device presented in accordance with an alternative embodiment of the invention further takes advantage of the three-dimensional measurement capabilities of the sensor. Specifically, an interactive device may be developed that allows the user to highlight a region in the field of view of the camera. For instance, a coffee table can be highlighted by the user through the touchscreen on their phone or tablet, or through pointing or the like as noted above with respect to the other embodiments of this invention. The coordinates of the highlighted region are then preferably extracted from the touchscreen, along with the associated segmented region of interest. The region may then be further segmented in all the lower-resolution component images, be they monochromatic, saturation-based, or having any other color or imaging attribute. Measurement features, such as distance from the sensor, x-y-z dimensions, and resolution, may then be extracted and used in an app or other program.
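The measurement features above can be sketched under a pinhole camera model, assuming the region's depth follows from its average disparity between two imaging elements. The parameter names and the single-disparity simplification are illustrative assumptions, not the disclosed implementation.

```python
def region_measurements(bbox_px, disparity_px, focal_px, baseline_m):
    """Distance and metric x/y extent of a highlighted region.

    bbox_px      -- (x0, y0, x1, y1) of the touch-selected region, in pixels
    disparity_px -- average disparity of the region between two
                    low-resolution component images
    focal_px     -- focal length in pixels
    baseline_m   -- spacing between imaging elements in metres
    """
    z = focal_px * baseline_m / disparity_px        # stereo depth: Z = f*B/d
    width_m = (bbox_px[2] - bbox_px[0]) * z / focal_px
    height_m = (bbox_px[3] - bbox_px[1]) * z / focal_px
    return {"distance_m": z, "width_m": width_m, "height_m": height_m}
```

With a 1000-pixel focal length and a 10 mm baseline, a region showing 10 pixels of disparity sits at 1 m, and a 500-pixel-wide selection at that depth corresponds to half a metre.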
(112) Application— 3D Stitching and 3D Mosaicing
(113) Once a three-dimensional representation of a scene is extracted, a user can move around and images with three-dimensional information built into them can then be stitched together, effectively creating a rendering of the surroundings.
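The stitching step can be sketched as follows, assuming each captured view has already been reduced to a list of 3D points with a known camera pose (rotation and translation). Pose estimation itself is outside this sketch, and all names below are illustrative, not part of the disclosed method.

```python
def transform(points, rotation, translation):
    """Map (x, y, z) points from a local depth frame into a shared
    world frame via a 3x3 rotation and a 3-vector translation."""
    out = []
    for p in points:
        out.append(tuple(
            sum(rotation[i][j] * p[j] for j in range(3)) + translation[i]
            for i in range(3)))
    return out

def stitch(frames):
    """Concatenate per-frame point clouds, already expressed in the
    world frame, into one rendering of the surroundings."""
    cloud = []
    for pts in frames:
        cloud.extend(pts)
    return cloud
```

As the user moves around, each new frame is transformed by its pose and appended, so overlapping views accumulate into a single three-dimensional mosaic.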
(114) Environmental Awareness
(115) All of these different feature-extraction tools enable a system to be more environmentally aware. A system that can track all of these different features can also enable other aspects of environmental awareness. For instance, a system that is used as a measuring tool can also be used to map the environment around it, by combining image mosaicing with the tools that have been described above.
(116) The method and apparatus of the invention may be implemented on one or more computing devices including appropriate image acquisition sensors as noted in accordance with the application, one or more processors, and associated storage devices, such as one or more known non-transitory storage media for storing images, computer code and the like. Additional computing elements may be employed on one or more local devices, one or more cloud computing environments, or both. It is anticipated that one or more computer programs for implementing the method may be stored to the non-transitory storage medium and cause a general-purpose CPU, processor, GPU or other computing element to perform one or more elements or steps in accordance with one or more embodiments of the present invention.
(117) It will thus be seen that the objects set forth above, among those made apparent from the preceding descriptions, are efficiently attained and, because certain changes may be made in carrying out the above method and in the construction(s) set forth without departing from the spirit and scope of the invention, it is intended that all matter contained in the above description and shown in the accompanying drawings shall be interpreted as illustrative and not in a limiting sense.
(118) It is also to be understood that this description is intended to cover all of the generic and specific features of the invention herein described and all statements of the scope of the invention which, as a matter of language, might be said to fall there between.