ELECTRONIC DEROTATION OF PICTURE-IN-PICTURE IMAGERY
20210295471 · 2021-09-23
Assignee
Inventors
- Marcos Bird (McKinney, TX, US)
- Liam Skoyles (McKinney, TX, US)
- Christopher J. Beardsley (McKinney, TX)
CPC classification
H04N5/45
ELECTRICITY
International classification
Abstract
Electronically derotating a picture-in-picture video source can be used to independently derotate a secondary video source separate from a primary video source. A method of electronically derotating a picture-in-picture image is described herein, the method comprising processing a first image having a first image primary axis; processing a second image having a second image primary axis; derotating the second image around the second image primary axis to align the second image primary axis substantially parallel with the first image primary axis; and displaying the first image and the second image on a display.
Claims
1. A method of electronically derotating a picture-in-picture image comprising: processing a first image having a first image primary axis; processing a second image having a second image primary axis; derotating the second image around the second image primary axis to align the second image primary axis substantially parallel with the first image primary axis; and displaying the first image and the second image on a display.
2. The method of electronically derotating a picture-in-picture image in claim 1 wherein derotating the second image around the second image primary axis comprises interpolating pixel values based on neighboring pixels.
3. The method of electronically derotating a picture-in-picture image in claim 2 wherein interpolating pixel values based on neighboring pixels comprises four by four bicubic interpolation.
4. The method of electronically derotating a picture-in-picture image in claim 2 wherein interpolating pixel values based on neighboring pixels comprises computing an average pixel value of other nearby pixels.
5. The method of electronically derotating a picture-in-picture image in claim 2 wherein interpolating pixel values based on neighboring pixels comprises inputting a pixel rotation angle.
6. The method of electronically derotating a picture-in-picture image in claim 2 further comprising storing the pixel values in memory.
7. The method of electronically derotating a picture-in-picture image in claim 1 wherein displaying the first image and second image on a display comprises overlaying the second image on top of the first image.
8. The method of electronically derotating a picture-in-picture image in claim 1 further comprising multiplexing the first image and second image.
9. The method of electronically derotating a picture-in-picture image in claim 1 further comprising derotating the first image around the first image primary axis.
10. The method of electronically derotating a picture-in-picture image in claim 1 further comprising processing a programmable image center for rotation.
11. A method of electronically derotating a picture-in-picture image comprising: processing a picture-in-picture image, the picture-in-picture image comprising pixels; interpolating a pixel of the picture-in-picture image to derotate the interpolated pixel to form a derotated interpolated pixel; compiling derotated interpolated pixels to form a derotated picture-in-picture image; and presenting the derotated picture-in-picture image simultaneously with a primary image.
12. The method of electronically derotating a picture-in-picture image in claim 11 wherein interpolating a pixel of the picture-in-picture image comprises computing an average pixel value of other nearby pixels.
13. The method of electronically derotating a picture-in-picture image in claim 11 wherein interpolating a pixel of the picture-in-picture image comprises inputting the rotation angle of other nearby pixels, the rotation angle relative to a primary image axis.
14. The method of electronically derotating a picture-in-picture image in claim 11 wherein interpolating a pixel of the picture-in-picture image comprises inputting the intensity of the other nearby pixels.
15. The method of electronically derotating a picture-in-picture image in claim 11 wherein interpolating a pixel of the picture-in-picture image comprises inputting the position of the other nearby pixels.
16. The method of electronically derotating a picture-in-picture image in claim 12 wherein computing an average pixel value of other nearby pixels comprises assigning a higher weight to the most proximate nearby pixels of a pixel to be interpolated.
17. The method of electronically derotating a picture-in-picture image in claim 12 wherein computing an average pixel value of other nearby pixels comprises computing an average of sixteen nearby pixels.
18. The method of electronically derotating a picture-in-picture image in claim 11 further comprising compiling derotated picture-in-picture images to form a derotated output picture-in-picture video source.
19. A method of electronically resizing a picture-in-picture image comprising: processing a first image having a first image primary axis; processing a second image having a second image primary axis; resizing the second image with respect to the second image primary axis to align the second image primary axis substantially parallel with the first image primary axis; displaying the first image and the second image on a display.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0025] So that those having ordinary skill in the art to which the disclosed system pertains will more readily understand how to make and use the same, reference may be had to the following drawings.
[0026]
[0027]
[0028]
[0029]
DETAILED DESCRIPTION
[0030] The subject technology overcomes many of the prior art problems associated with derotating multiple video sources. In brief summary, the subject technology provides for a method that electronically derotates imagery, to be displayed as a picture-in-picture within a primary image display. The advantages, and other features of the systems and methods disclosed herein, will become more readily apparent to those having ordinary skill in the art from the following detailed description of certain preferred embodiments taken in conjunction with the drawings which set forth representative embodiments of the present invention. Like reference numerals are used herein to denote like parts. Further, words denoting orientation such as “upper”, “lower”, “distal”, and “proximate” are merely used to help describe the location of components with respect to one another. For example, an “upper” surface of a part is merely meant to describe a surface that is separate from the “lower” surface of that same part. No words denoting orientation are used to describe an absolute orientation (i.e. where an “upper” part must always be on top).
[0031] Referring now to
[0032] An operator may select which video source is to be distinguished as the picture-in-picture source and which source is to be distinguished as the primary source. A picture-in-picture video source may be collected from a camera, sensor, or the like. The picture-in-picture video source, primary video source, or both may require accurate rotation relative to the operator due to the movement or rotation of the video source.
[0033] Initially, according to an aspect of the subject technology, the picture-in-picture video source may be written onto a memory unit 103. The memory unit 103 may be dynamic random-access memory, static access memory, serial access memory, direct access memory, cache memory, auxiliary memory, serial ATA storage, solid-state storage, a computer system interface, a parallel advanced technology attachment drive, an electro-mechanical data storage device, or the like. The memory unit may comprise any organization, for example 2M×36-bit or the like, or comprise any operating mode, for example QDR II or the like. The picture-in-picture video source may be written into or read out of the memory unit at its existing frequency, which may be faster or slower than the primary video source frequency. In one embodiment of the subject technology, the picture-in-picture video source may write onto the memory unit at a 120 Hertz rate or read out of the memory unit at a 120 Hertz rate, equal to or different from its existing frequency, independent of the primary video source frequency. The picture-in-picture video source may also be read out of the memory unit 103 at a frequency equal to the primary video source rate so as to allow an operator to downsample or upsample the picture-in-picture video source to match the primary video source frequency.
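The readout described above can be sketched as a rate-matching step. The following is a minimal illustration, not the patented implementation: frame contents, rates, and the function name `resample_frames` are assumptions for demonstration only.

```python
# Hypothetical sketch of matching a picture-in-picture (PiP) frame rate to
# the primary source rate by index-mapped readout (dropping or repeating
# frames). Frames are stand-in integers; a real system reads image buffers.

def resample_frames(pip_frames, pip_rate_hz, primary_rate_hz):
    """Read PiP frames out of memory at the primary rate by index mapping."""
    n_out = int(len(pip_frames) * primary_rate_hz / pip_rate_hz)
    # Each output slot maps back to the nearest earlier source frame in time.
    return [pip_frames[int(i * pip_rate_hz / primary_rate_hz)]
            for i in range(n_out)]

# Downsample a 120 Hz PiP source for a 60 Hz primary readout:
pip = list(range(8))                    # eight stand-in frames
out = resample_frames(pip, 120, 60)
print(out)                              # → [0, 2, 4, 6]
```

The same mapping upsamples when the primary rate is higher, repeating source frames as needed.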
[0034] For each pixel in each frame of the picture-in-picture video source, a corresponding neighboring pixel or several neighboring pixels are read out of memory unit 103 in bursts. In one embodiment of the subject technology, the picture-in-picture video source may be a 720p source comprising 1280 by 720 pixels per frame. Thus, for each of the 921,600 pixels in each picture-in-picture video source frame, a burst of a corresponding neighboring pixel or several neighboring pixels may be read out. In a preferred embodiment, for each pixel in each picture-in-picture video source frame, 16 neighboring pixels are read out of memory unit 103. In other embodiments, 1 neighboring pixel, 4 neighboring pixels, 9 neighboring pixels, or a higher order of neighboring pixels may be read from memory unit 103 corresponding to each pixel in each frame of the picture-in-picture video source. The neighboring pixels are used to interpolate the initial pixel value at a new location, rotation, color or intensity, or any combination of location, rotation, color and intensity.
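The 16-neighbor readout above corresponds to fetching a 4×4 window around each target pixel. A minimal sketch follows; the edge-clamping behavior and the function name `neighborhood_4x4` are illustrative assumptions, not details taken from the disclosure.

```python
def neighborhood_4x4(frame, row, col):
    """Return the 16 pixels in the 4x4 window around (row, col),
    clamping coordinates at the frame edges (an assumed border policy)."""
    h, w = len(frame), len(frame[0])
    pixels = []
    for dr in range(-1, 3):             # rows -1..+2 around the target pixel
        for dc in range(-1, 3):         # cols -1..+2 around the target pixel
            r = min(max(row + dr, 0), h - 1)
            c = min(max(col + dc, 0), w - 1)
            pixels.append(frame[r][c])
    return pixels

# A 4x4 test frame with distinct values 0..15:
frame = [[r * 4 + c for c in range(4)] for r in range(4)]
assert len(neighborhood_4x4(frame, 1, 1)) == 16
```

For an interior pixel the burst covers 16 distinct memory locations; at the border, clamping repeats edge pixels.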
[0035] The neighboring pixels are read out into an interpolation filter 104. Therein, for each pixel in each frame of the picture-in-picture video source, neighboring pixels, 16 for example, are interpolated to provide a new pixel value of the initial pixel at a different location, rotation, color or intensity, or any combination of location, rotation, color and intensity. For each frame of the picture-in-picture video source, a programmable image center, or primary axis, may be retrieved. Alternatively, the primary axis may be the optical image center. For each frame and corresponding primary axis, a rotation relative to the primary axis may be measured by a resolver, gyroscope, or other measurement device. Interpolation may comprise computing an average pixel rotation value relative to the primary axis to predict a pixel rotation value for the initial pixel at a different rotation. Interpolation may also comprise computing an average pixel value of a neighboring pixel or pixels corresponding to, at least in part, the color, intensity, or position of the picture-in-picture source frame. In computing an average pixel value, the closest neighboring pixels to the initial pixel may be assigned a higher weight.
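The weighted averaging described above can be illustrated with a simple inverse-distance scheme. Note this is a stand-in for the 4×4 bicubic kernel named in the claims: the inverse-distance weighting, the epsilon constant, and the function name `weighted_interpolate` are assumptions made for a compact example.

```python
import math

def weighted_interpolate(samples, target):
    """Estimate a pixel value at a fractional (row, col) target position.
    samples: list of ((row, col), value) neighboring pixels.
    Nearer neighbors receive higher weight (inverse-distance weighting)."""
    def weight(pos):
        d = math.hypot(pos[0] - target[0], pos[1] - target[1])
        return 1.0 / (d + 1e-6)         # epsilon avoids division by zero
    total = sum(weight(p) for p, _ in samples)
    return sum(weight(p) * v for p, v in samples) / total

# Four equidistant neighbors around a sub-pixel target average evenly:
samples = [((0, 0), 10.0), ((0, 1), 20.0), ((1, 0), 30.0), ((1, 1), 40.0)]
value = weighted_interpolate(samples, (0.5, 0.5))   # 25.0
```

In a 16-sample (4×4) case the same weighting applies, with the nearest four pixels dominating the estimate.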
[0036] This process may be repeated across each frame of the picture-in-picture video source. Using the new pixel values of each frame of the picture-in-picture source, a new, derotated or resized image is processed and can be stored in memory unit 105. The memory unit 105 may be dynamic random-access memory, static access memory, serial access memory, direct access memory, cache memory, auxiliary memory, serial ATA storage, solid-state storage, a computer system interface, a parallel advanced technology attachment drive, electro-mechanical data storage device, or the like. The memory unit may comprise any organization, for example 32M×64-bit or the like, or comprise any operating mode, for example QDR II or the like. The derotated picture-in-picture video source may be written into or read out of the memory unit 105 at a rate equal to the primary video source rate so as to allow an operator to downsample or upsample the derotated picture-in-picture video source to match the primary video source rate.
[0037] Simultaneous to derotating the picture-in-picture video source, the primary video source may or may not be derotated. The primary video source may not require derotating if the primary video source is derived from a stationary optical source collection unit as opposed to a moving optical source collection unit. Alternatively, the primary video source may require derotating if the primary video source is derived from a moving optical source collection unit as opposed to a stationary optical source collection unit. Examples of moving optical source collection units may include, but are not limited to, cameras or sensors mounted to an airplane or rocking boat.
[0038] Based on the timing counter 106 associated with the primary video source readout, i.e. 120 Hertz, and the desired location of the derotated picture-in-picture video source relative to the primary video source, i.e. in the upper-most right-hand corner of the primary video source, the derotated picture-in-picture video source is then read out of memory unit 105 and multiplexed with the primary video source accordingly. The derotated picture-in-picture video source may be multiplexed with the primary video source by space-division multiplexing, frequency-division multiplexing, time-division multiplexing, polarization-division multiplexing, orbital angular momentum multiplexing, code-division multiplexing, or the like.
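The placement step above amounts to compositing the derotated picture-in-picture frame into a region of the primary frame, a spatial analogue of the space-division multiplexing listed. This sketch assumes NumPy arrays as the frame representation and an arbitrary 8-pixel margin; neither is specified by the disclosure.

```python
import numpy as np

def overlay_pip(primary, pip, margin=8):
    """Composite the derotated PiP frame into the upper-right corner of the
    primary frame (a simple space-division multiplex of the two sources)."""
    out = primary.copy()
    ph, pw = pip.shape[:2]
    out[margin:margin + ph, -pw - margin:-margin] = pip
    return out

# 720p primary frame with a quarter-size PiP frame in the corner:
primary = np.zeros((720, 1280), dtype=np.uint8)
pip = np.full((180, 320), 255, dtype=np.uint8)
frame = overlay_pip(primary, pip)
```

A hardware implementation would instead select between the two pixel streams per output coordinate against the timing counter, but the resulting frame is the same.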
[0039] The multiplexed video source may then be transmitted 108 to a display.
[0040] Referring now to
[0041] Referring now to
[0042] It should be appreciated by those of ordinary skill in the pertinent art that the hardware embodiment of the subject technology may comprise a single or several external input devices, a single or several memory units, a single or several processors, a single or several displays, or a single or several field-programmable gate arrays.
[0043] Referring now to
[0044] The selected source is transmitted to a communications link 405. The communications link 405 may standardize the connection between the external device input and a subsequent frame grabber. A non-uniformity correction unit (NUC) 406 may be employed depending on the type of corresponding external device source. Generally, a non-uniformity correction unit is not required for visible light sensor sources since visible light sensor detector responses are relatively uniform. Though, a non-uniformity correction unit may be employed when a corresponding external device transmits radio, microwave, infrared, ultraviolet, x-ray, or gamma ray signals to the field-programmable gate array. Thus, a mid-wave infrared sensor may require a non-uniformity correction unit within the field-programmable gate array. The non-uniformity correction unit may be employed on any source path, and as such may be employed prior to transmission to the Serializer/Deserializer (SERDES) pair of functional blocks 410.
[0045] The selected source may thereafter be transmitted to the SERDES pair of functional blocks 410 to compensate for potential limited input/output. The SERDES function architecture may comprise parallel clock SERDES, embedded clock SERDES, 8b/10b SERDES, bit interleaved SERDES, or the like. The selected source is multiplexed 411 and each frame of the source may be written into the memory unit 412. The memory unit 412 may be dynamic random-access memory, static access memory, serial access memory, direct access memory, cache memory, auxiliary memory, serial ATA storage, solid-state storage, a computer system interface, a parallel advanced technology attachment drive, an electro-mechanical data storage device, or the like. In the illustrative embodiment, the memory unit 412 is QDR SRAM to provide high pixel throughput. The memory unit may comprise any organization, for example 2M×36-bit or the like, or comprise any operating mode, for example QDR II or the like.
[0046] For each frame of the selected source, the memory controller 413 may receive the programmable image center for rotation, or image primary axis, thus providing a flexible architecture when selected source images are not optically centered. Though, the memory controller may alternatively receive the optical image center for rotation, or image primary axis. In addition, the memory controller 413 may receive the rotation angle for each frame or each pixel of each frame of the selected source relative to the primary axis of the image. The rotation angle, or roll angle, may be sensed by a measurement device, whether the measurement device is a resolver, gyroscope, or other measurement device, the measurement device capable of transmitting the rotation angle for each frame of the selected source to the memory controller 413. The measurement device may be located internally or externally relative to the single or various field-programmable gate arrays.
[0047] The interpolation filter 414 may interpolate the selected source image using the rotation angle of the selected source frame or each pixel of each frame relative to the primary axis. Thus, for each pixel in each frame of the selected source, neighboring pixels, 16 for example, are interpolated to provide a new pixel value of the initial pixel at a different rotation to provide for a derotated output pixel position. Interpolation is repeated until a derotated output pixel position is calculated for every pixel in the output frame. An algorithm of the user's choice, such as a trigonometric function, may be implemented to calculate the derotated output pixel position for every pixel in the output frame.
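The trigonometric mapping referred to above can be written as a standard two-dimensional rotation about the image center. The sketch below uses inverse mapping (output pixel back to source position), a common choice for this kind of filter but an assumption here; the rotation sign convention is likewise illustrative.

```python
import math

def source_position(out_row, out_col, center, roll_deg):
    """Inverse-map an output pixel back to its source position by rotating
    about the programmable image center by the measured roll angle.
    The interpolation filter then samples the 4x4 neighborhood around the
    returned (generally fractional) position."""
    theta = math.radians(roll_deg)
    dy, dx = out_row - center[0], out_col - center[1]
    # Standard 2-D rotation about the center point:
    src_row = center[0] + dy * math.cos(theta) - dx * math.sin(theta)
    src_col = center[1] + dy * math.sin(theta) + dx * math.cos(theta)
    return src_row, src_col

# A point 10 columns right of the center, under a 90-degree roll, maps
# back to a point 10 rows above the center:
r, c = source_position(0, 10, (0, 0), 90.0)
```

Repeating this mapping for every output pixel, then interpolating at each fractional source position, yields the derotated output frame.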
[0048] Interpolation may also comprise computing a new pixel value of an initial pixel corresponding to, at least in part, the color, intensity, or position of the selected source frame or each pixel of each frame, to provide a new pixel value of the initial pixel with a different color, intensity, or position. In interpolating each pixel in each frame of the selected source, the closest neighboring pixels to the initial pixel may be assigned a higher weight.
[0049] The output pixel is then written into a memory unit 416 at its computed rotation. The memory unit 416 may be dynamic random-access memory, static access memory, serial access memory, direct access memory, cache memory, auxiliary memory, serial ATA storage, solid-state storage, a computer system interface, a parallel advanced technology attachment drive, electro-mechanical data storage device, or the like. In the illustrative embodiment, the memory unit 416 is DDR2 SDRAM. The memory unit may comprise any organization, for example 32M×64-bit or the like, or comprise any operating mode, for example QDR II or the like.
[0050] The output pixel may be written into the memory unit 416 corresponding to its computed color, intensity, or position also. A filler pixel may be written into the memory unit 416 when the output frame exceeds the input image pixel size, the filler pixel comprising an intensity, color, or position. The filler pixel may comprise an average intensity, color, or position corresponding to neighboring output pixels.
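The filler-pixel rule above can be sketched as an average over the valid neighboring output pixels. Using `None` to mark uncovered output positions and a zero fallback when no neighbor is valid are both assumptions of this example.

```python
def filler_value(neighbors):
    """Filler pixel intensity for an output position the derotated input
    does not cover: the average of nearby valid output pixel values.
    None marks neighboring positions that are themselves uncovered."""
    valid = [v for v in neighbors if v is not None]
    return sum(valid) / len(valid) if valid else 0

# Average of the three covered neighbors; the uncovered one is skipped:
value = filler_value([10, None, 20, 30])    # 20.0
```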
[0051] The output frame may be manipulated electronically through inversion, reversion, e-boresight, or the like using the memory controller 415. The memory controller 415 may then be employed to read out a series of interpolated frames to create a derotated video source which may be altered by a peaking filter 417, the peaking filter comprising the functionality to peak, autofocus, or video mux the derotated video source. The derotated video source thereafter may be multiplexed and displayed with another video source to create picture-in-picture imagery, as described in
[0052] In some situations a sensor video source 403, whether the sensor video source is a mid-wave infrared sensor, a visible and near infrared sensor, or another external device, may not require derotation. In the illustrative embodiment, this video source may similarly be transmitted to a communications link 405 and subsequently a non-uniformity correction unit 406, depending on the external device. This video source may similarly be transmitted to a SERDES pair of functional blocks 408, and may similarly be transmitted to a peaking filter 409. This video source thereafter may be multiplexed and displayed with another video source to create picture-in-picture imagery, as described in
[0053] All orientations and illustrative embodiments of the components shown herein are used by way of example only. Further, it will be appreciated by those of ordinary skill in the pertinent art that the functions of several elements may, in alternative embodiments, be carried out by fewer elements or a single element. Similarly, in some embodiments, any functional element may perform fewer, or different, operations than those described with respect to the illustrated embodiment. Also, functional elements (e.g. memory, processors, displays and the like) shown as distinct for purposes of illustration may be incorporated within other functional elements in a particular implementation.
[0054] While the subject technology has been described with respect to preferred embodiments, those skilled in the art will readily appreciate that various changes and/or modifications can be made to the subject technology without departing from the spirit or scope of the subject technology. For example, each claim may depend from any or all claims in a multiple dependent manner even though such has not been originally claimed.