Device, system and method for content-adaptive resolution-enhancement
09736442 · 2017-08-15
Assignee
Inventors
- Alexander Wong (Waterloo, CA)
- Yaguang Li (Milton, CA)
- Mark Lamm (Mississauga, CA)
- Hicham Sekkati (Longueuil, CA)
CPC classification
International classification
- H04N7/01 (ELECTRICITY)
- H04N11/20 (ELECTRICITY)
Abstract
A device, system and method for content-adaptive resolution-enhancement is provided. A plurality of subframe streams are generated from a video stream, each of the plurality of subframe streams comprising a lower resolution version of the video stream, pixel-shifted from one another. A plurality of output subframe streams are generated from the plurality of subframe streams in a one-to-one relationship by: applying a plurality of video enhancement filters to each of the plurality of subframe streams, each of the plurality of video enhancement filters for enhancing different features of the video stream; and, combining one or more resulting enhanced subframe streams into a respective output subframe stream based on data in one or more regions of the video stream. One or more projectors are controlled to project the plurality of output subframe streams to combine the plurality of output subframe streams into a higher resolution projected video stream.
Claims
1. A device comprising: a controller and a communication interface configured to communicate with one or more projectors, the controller configured to: generate a plurality of subframe streams from a video stream; each of the plurality of subframe streams comprising a lower resolution version of the video stream, pixel-shifted from one another; generate a plurality of output subframe streams from the plurality of subframe streams in a one-to-one relationship by: applying a plurality of video enhancement filters to each of the plurality of subframe streams by: converting each of the plurality of subframe streams from a spatial domain to a frequency domain; applying a respective video enhancement filter in the frequency domain; and converting the respective output subframe stream back to the spatial domain, each of the plurality of video enhancement filters for enhancing different features of the video stream; and, combining one or more resulting enhanced subframe streams into a respective output subframe stream based on data in one or more regions of the video stream; and, control, using the communication interface, the one or more projectors to project the plurality of output subframe streams, thereby combining the plurality of output subframe streams into a higher resolution projected video stream.
2. The device of claim 1, wherein the controller is further configured to generate the plurality of subframe streams by one or more of resampling, upsampling and downsampling the video stream.
3. The device of claim 1, wherein the controller is further configured to apply one or more of the plurality of video enhancement filters to each of the plurality of subframe streams by: applying a further respective video enhancement filter in a spatial domain.
4. The device of claim 1, wherein the plurality of video enhancement filters comprises: a first video enhancement filter for enhancing moving objects in the video stream, and a second video enhancement filter for enhancing static objects in the video stream.
5. The device of claim 4, wherein the one or more resulting enhanced subframe streams comprises a first enhanced subframe stream enhanced for the moving objects, and a second enhanced subframe stream enhanced for the static objects.
6. The device of claim 5, wherein the controller is further configured to combine the first enhanced subframe stream and the second enhanced subframe stream into the respective output subframe stream based on the data in the one or more regions of the video stream by: determining respective regions where the moving objects and the static objects are located in the video stream; and including corresponding portions of the first enhanced subframe stream in moving object regions and including corresponding portions of the second enhanced subframe stream in static object regions.
7. The device of claim 1, wherein the controller is further configured to determine the data in the one or more regions of the video stream by comparing successive frames of the video stream.
8. The device of claim 1, wherein the plurality of video enhancement filters comprises one or more of: a moving object video enhancement filter, a static object video enhancement filter, a text enhancement filter, a texture enhancement filter, and a color enhancement filter.
9. The device of claim 1, wherein the controller is further configured to apply a compensation filter to each of respective enhanced subframe streams, the compensation filter for compensating for optical aberrations of the one or more projectors.
10. A method comprising: at a device configured to communicate with one or more projectors, generating, at the device, a plurality of subframe streams from a video stream; each of the plurality of subframe streams comprising a lower resolution version of the video stream, pixel-shifted from one another; generating, at the device, a plurality of output subframe streams from the plurality of subframe streams in a one-to-one relationship by: applying a plurality of video enhancement filters to each of the plurality of subframe streams by: converting each of the plurality of subframe streams from a spatial domain to a frequency domain; applying a respective video enhancement filter in the frequency domain; and converting the respective output subframe stream back to the spatial domain, each of the plurality of video enhancement filters for enhancing different features of the video stream; and, combining one or more resulting enhanced subframe streams into a respective output subframe stream based on data in one or more regions of the video stream; and, controlling, using the device, the one or more projectors to project the plurality of output subframe streams, thereby combining the plurality of output subframe streams into a higher resolution projected video stream.
11. The method of claim 10, further comprising generating, at the device, the plurality of subframe streams by one or more of resampling, upsampling and downsampling the video stream.
12. The method of claim 10, further comprising applying, at the device, one or more of the plurality of video enhancement filters to each of the plurality of subframe streams by: applying a further respective video enhancement filter in a spatial domain.
13. The method of claim 10, wherein the plurality of video enhancement filters comprises: a first video enhancement filter for enhancing moving objects in the video stream, and a second video enhancement filter for enhancing static objects in the video stream.
14. The method of claim 13, wherein the one or more resulting enhanced subframe streams comprises a first enhanced subframe stream enhanced for the moving objects, and a second enhanced subframe stream enhanced for the static objects, and the method further comprising combining, at the device, the first enhanced subframe stream and the second enhanced subframe stream into the respective output subframe stream based on the data in the one or more regions of the video stream by: determining respective regions where the moving objects and the static objects are located in the video stream; and including corresponding portions of the first enhanced subframe stream in moving object regions and including corresponding portions of the second enhanced subframe stream in static object regions.
15. The method of claim 10, further comprising determining, at the device, the data in the one or more regions of the video stream by comparing successive frames of the video stream.
16. The method of claim 10, wherein the plurality of video enhancement filters comprises one or more of: a moving object video enhancement filter, a static object video enhancement filter, a text enhancement filter, a texture enhancement filter, and a color enhancement filter.
17. The method of claim 10, further comprising applying, at the device, a compensation filter to each of respective enhanced subframe streams, the compensation filter for compensating for optical aberrations of the one or more projectors.
18. A non-transitory computer-readable medium storing a computer program, wherein execution of the computer program is for: at a device configured to communicate with one or more projectors, generating, at the device, a plurality of subframe streams from a video stream; each of the plurality of subframe streams comprising a lower resolution version of the video stream, pixel-shifted from one another; generating, at the device, a plurality of output subframe streams from the plurality of subframe streams in a one-to-one relationship by: applying a plurality of video enhancement filters to each of the plurality of subframe streams by: converting each of the plurality of subframe streams from a spatial domain to a frequency domain; applying a respective video enhancement filter in the frequency domain; and converting the respective output subframe stream back to the spatial domain, each of the plurality of video enhancement filters for enhancing different features of the video stream; and, combining one or more resulting enhanced subframe streams into a respective output subframe stream based on data in one or more regions of the video stream; and, controlling, using the device, the one or more projectors to project the plurality of output subframe streams, thereby combining the plurality of output subframe streams into a higher resolution projected video stream.
Description
BRIEF DESCRIPTIONS OF THE DRAWINGS
(1) For a better understanding of the various implementations described herein and to show more clearly how they may be carried into effect, reference will now be made, by way of example only, to the accompanying drawings.
DETAILED DESCRIPTION
(14) Attention is directed to a system 100 for content-adaptive resolution-enhancement, comprising a controller 101 in communication with one or more projectors 107 (e.g. projectors 107-1, 107-2) that project onto a screen 109.
(15) As depicted, controller 101 comprises a memory 122 and a communication interface 124 (interchangeably referred to as interface 124) configured to communicate with one or more projectors 107. Controller 101 is configured to: generate a plurality of subframe streams from a video stream; each of the plurality of subframe streams comprising a lower resolution version of the video stream, pixel-shifted from one another; generate a plurality of output subframe streams from the plurality of subframe streams in a one-to-one relationship by: applying a plurality of video enhancement filters to each of the plurality of subframe streams, each of the plurality of video enhancement filters for enhancing different features of the video stream; and, combining one or more resulting enhanced subframe streams into a respective output subframe stream based on data in one or more regions of the video stream; and, control, using the communication interface 124, one or more projectors 107 to project the plurality of output subframe streams thereby combining the plurality of output subframe streams into a higher resolution projected video stream.
(16) While two projectors 107 are depicted, system 100 can comprise more than two projectors 107 and as few as one projector 107. Each projector 107 comprises a projector configured to project images, including but not limited to a digital projector, a cinema projector, an LCOS (Liquid Crystal on Silicon) based projector, a DMD (digital micromirror device) based projector and the like. In particular, one or more projectors 107 are configured to project pixel-shifted images and combine them into a higher resolution image. For example, each of projectors 107 can use different respective pixel registrations to project pixel-shifted images such that they are shifted and/or transformed with respect to one another at screen 109, such that similar regions in each of the pixel-shifted images are co-projected onto each other to form a higher resolution version thereof. When only one projector 107 is present in system 100, the one projector 107 can consecutively project the pixel-shifted images onto screen 109 such that an eye of a viewer combines the consecutively projected pixel-shifted images viewed on screen 109 into a higher resolution image; in some of these implementations, such a projector can include an opto-mechanical device configured to shift projected images that are, themselves, pixel-shifted from each other, thereby forming a higher resolution image. When two or more projectors 107 are present in system 100, the two or more projectors 107 can co-project the pixel-shifted images onto screen 109 thereby forming a higher resolution image.
(17) Controller 101 can comprise any suitable computing device, including but not limited to a graphics processing unit (GPU), a graphics processing device, a graphics processing engine, a video processing device, a personal computer (PC), a server, and the like, and generally comprises memory 122 and communication interface 124 (interchangeably referred to hereafter as interface 124) and optionally a display device (not depicted) and at least one input device (not depicted) which, when present, can be external to controller 101 and in communication with controller 101 via interface 124.
(18) Controller 101 further comprises a processor and/or a plurality of processors, including but not limited to one or more central processors (CPUs) and/or one or more processing units and/or one or more graphic processing units (GPUs); either way, controller 101 comprises a hardware element and/or a hardware processor. Indeed, in some implementations, controller 101 can comprise an ASIC (application-specific integrated circuit) and/or an FPGA (field-programmable gate array) specifically configured to implement the functionality of controller 101.
(19) In other words, controller 101 can be specifically adapted for content-adaptive resolution-enhancement. Hence, controller 101 is preferably not a generic computing device, but a device specifically configured to implement content-adaptive resolution-enhancement functionality. For example, controller 101 can specifically comprise a computer executable engine configured to implement specific content-adaptive resolution-enhancement, as described below.
(20) Memory 122 can comprise a non-volatile storage unit (e.g. Erasable Electronic Programmable Read Only Memory (“EEPROM”), Flash Memory) and a volatile storage unit (e.g. random access memory (“RAM”)). Programming instructions that implement the functional teachings of controller 101 as described herein are typically maintained, persistently, in memory 122 and used by controller 101 which makes appropriate utilization of volatile storage during the execution of such programming instructions. Those skilled in the art recognize that memory 122 is an example of computer readable media that can store programming instructions executable by controller 101. Furthermore, memory 122 is also an example of a memory unit and/or memory module and/or a non-volatile memory.
(21) Memory 122 generally stores an application 136 which, when processed by controller 101, enables controller 101 to: generate a plurality of subframe streams from a video stream; each of the plurality of subframe streams comprising a lower resolution version of the video stream, pixel-shifted from one another; generate a plurality of output subframe streams from the plurality of subframe streams in a one-to-one relationship by: applying a plurality of video enhancement filters to each of the plurality of subframe streams, each of the plurality of video enhancement filters for enhancing different features of the video stream; and, combining one or more resulting enhanced subframe streams into a respective output subframe stream based on data in one or more regions of the video stream; and, control, using the communication interface 124, one or more projectors 107 to project the plurality of output subframe streams, pixel-shifted from one another, thereby combining the plurality of output subframe streams into a higher resolution projected video stream.
(22) Memory 122 can further store data 137 which, when processed by controller 101, can be used to generate a high resolution video stream. Controller 101 can hence generally comprise an image generator and/or renderer, for example a computing device, a server and the like, configured to generate and/or render a video stream from data 137. Such data 137 can include, but is not limited to, still images, video and the like. Furthermore, controller 101 can be in communication with, and/or comprise, an image generator and/or a memory (which can include memory 122) storing data from which data 137 can be generated and/or rendered. Alternatively, controller 101 can generate data 137 (e.g. image data and/or video data) using algorithms, and the like, for generating a video stream.
(23) In general, a resolution of each projector 107 is lower than a resolution of a video stream generated from data 137, and the like. Hence, it is appreciated that, herein, the terms “high resolution” and/or “higher resolution” and/or “low resolution” and/or “lower resolution” refer to a relative resolution of an image modulator(s) at projector(s) 107 as compared to a resolution of a video stream produced from data 137 and/or by controller 101. Hence, before a video stream produced by controller 101 can be projected by projectors 107, the video stream is modified to a resolution compatible with projector(s) 107, as discussed in detail below.
(24) Interface 124 comprises any suitable wired and/or wireless communication interface configured to communicate with projectors 107 in a wired and/or wireless manner as desired. Hence, communication links (represented as lines) between controller 101 and projectors 107 can be wired and/or wireless communication links.
(25) While not depicted, controller 101 can further comprise a power source, including but not limited to a battery and/or a power pack, and/or a connection to an external power supply, or any other suitable power source, as well as a housing and the like.
(26) In any event, it should be understood that a wide variety of configurations for controller 101 are contemplated.
(27) While not depicted, system 100 and/or controller 101 can further comprise an alignment system, one or more cameras, a warping engine, and the like. Such components can be used to warp and/or align video streams and/or images for projection onto screen 109. Furthermore, while present implementations are described with respect to projecting video streams onto screen 109, in other implementations video streams can be projected onto other objects, including, but not limited to, three-dimensional objects, for example in projection mapping applications. Similarly, in yet further implementations, a plurality of projectors 107 can project a plurality of video streams onto screen 109 (and/or three-dimensional objects, and the like) in image tiling applications.
(28) Attention is now directed to a method 200 for content-adaptive resolution-enhancement. In order to assist in the explanation of method 200, it will be assumed that method 200 is performed using system 100, and specifically by controller 101, for example when controller 101 processes application 136.
(29) Regardless, it is to be emphasized that method 200 need not be performed in the exact sequence as shown, unless otherwise indicated; and likewise various blocks may be performed in parallel rather than in sequence; hence the elements of method 200 are referred to herein as “blocks” rather than “steps”. It is also to be understood, however, that method 200 can be implemented on variations of system 100 as well. Furthermore, while controller 101 is described as implementing and/or performing each block of method 200, it is appreciated that each block of method 200 occurs with controller 101 processing application 136.
(30) At block 201, controller 101 generates a plurality of subframe streams from a video stream; each of the plurality of subframe streams comprising a lower resolution version of the video stream, pixel-shifted from one another.
(31) At block 203, controller 101 generates a plurality of output subframe streams from the plurality of subframe streams in a one-to-one relationship by: applying a plurality of video enhancement filters to each of the plurality of subframe streams, each of the plurality of video enhancement filters for enhancing different features of the video stream; and, combining one or more resulting enhanced subframe streams into a respective output subframe stream based on data in one or more regions of the video stream.
(32) At an optional block 205, controller 101 applies a compensation filter to each of respective output subframe streams, the compensation filter for compensating for optical aberrations of the one or more projectors. Block 205 can occur in conjunction with block 203, as described below.
(33) At block 207, controller 101 controls, using communication interface 124, one or more projectors 107 to project the plurality of output subframe streams thereby combining the plurality of output subframe streams into a higher resolution projected video stream. For example, the plurality of output subframe streams can be projected shifted from one another, and/or using respective shifted pixel registrations as described below.
(34) Method 200 will now be described with reference to a block diagram of a signal processing pipeline implemented by controller 101, for example when controller 101 processes application 136.
(35) Specifically, the pipeline comprises a sampler module 301, a video enhancement module 303, a compensation module 305 and a content estimation module 309, each described in further detail below.
(36) In particular, in the pipeline, a video stream I(t) is processed into a plurality of output subframe streams for projection by one or more projectors 107.
(37) At sampler module 301, as depicted, controller 101 optionally generates a video stream I(t), for example using data 137; alternatively, video stream I(t) can be received from an external image generator. As depicted, video stream I(t) is represented as a function of time “t”. In particular, video stream I(t) comprises frames and/or video frames, each of which can comprise images.
(38) At sampler module 301, controller 101 generates (e.g. at block 201 of method 200), a plurality of subframe streams I.sub.1(t), I.sub.2(t) from video stream I(t); each of the plurality of subframe streams I.sub.1(t), I.sub.2(t) comprising a lower resolution version of video stream I(t), pixel-shifted from one another. Furthermore, each of the plurality of subframe streams I.sub.1(t), I.sub.2(t) has a similar and/or the same aspect ratio as video stream I(t).
(39) As depicted, controller 101 generates two subframe streams I.sub.1(t), I.sub.2(t). For example, in the pipeline, video stream I(t) is split into two branches, one branch for each of subframe streams I.sub.1(t), I.sub.2(t).
(40) However, in other implementations, controller 101 can generate more than two, or “m”, subframe streams I.sub.1(t), I.sub.2(t) . . . I.sub.m(t), and a number of branches from video stream I(t) can correspond to the “m” number of subframe streams I.sub.1(t), I.sub.2(t) . . . I.sub.m(t), each pixel-shifted from each other and having a similar and/or same aspect ratio as video stream I(t).
(41) For example, a number of subframe streams I.sub.1(t), I.sub.2(t) . . . I.sub.m(t) generated can correspond to a number of projectors 107 (e.g. in system 100, m=2).
(42) Alternatively, a number of subframe streams I.sub.1(t), I.sub.2(t) . . . I.sub.m(t) generated can correspond to a number of subframes that can be consecutively projected by a single projector 107 within a given time period, for example within a frame time period. For example, when only two subframes can be consecutively projected by a single projector within a frame time period, a number of subframe streams I.sub.1(t), I.sub.2(t) . . . I.sub.m(t) generated can comprise two subframe streams (e.g. m=2).
(43) Alternatively, a number of subframe streams I.sub.1(t), I.sub.2(t) . . . I.sub.m(t) generated can be determined from a resolution of video stream I(t) (and/or a number of pixels in a frame of video stream I(t)) as compared to a resolution of projectors 107 (and/or a number of pixels in an image produced by a projector 107). For example, when video stream I(t) has a resolution that is twice that of projectors 107, two subframe streams can be generated (e.g. m=2); similarly, when video stream I(t) has a resolution that is three times that of projectors 107, three subframe streams can be generated (e.g. m=3). In other words, a number of subframe streams generated can be determined from a number of pixels in a frame of video stream I(t) as divided by a number of pixels in an image produced by a projector 107 (and/or a number of pixels of an image modulator of a projector 107).
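As a minimal illustration of this pixel-count comparison, consider the following Python sketch; the helper name, the rounding policy, and the example resolutions are illustrative assumptions, not part of the present specification:

```python
# Hypothetical helper: estimate the number "m" of subframe streams from the
# pixel count of a frame of video stream I(t) and the pixel count of a
# projector's image modulator. Rounding to the nearest integer is an assumption.
def num_subframe_streams(stream_pixels: int, modulator_pixels: int) -> int:
    return max(1, round(stream_pixels / modulator_pixels))

# Example: a UHD frame (3840x2160) and a WQXGA modulator (2560x1600)
# give m = 2, matching the two-subframe example assumed below.
m = num_subframe_streams(3840 * 2160, 2560 * 1600)  # m == 2
```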
(44) For the remainder of the present specification, however, it will be assumed that m=2, and that controller 101 generates two subframe streams I.sub.1(t), I.sub.2(t) (e.g. at block 201 of method 200).
(45) Furthermore, subframe streams I.sub.1(t), I.sub.2(t) are pixel-shifted from one another. Such pixel shifting is represented in sampler module 301 by a shift operator Z.sup.−n, where “n” is a number of pixels by which subframe streams I.sub.1(t), I.sub.2(t) are shifted. For example, for n=1, each of subframe streams I.sub.1(t), I.sub.2(t) is pixel-shifted from the other by one pixel (e.g. as described below).
(46) Furthermore, controller 101 can be further configured to generate the plurality of subframe streams I.sub.1(t), I.sub.2(t) by one or more of resampling, upsampling and downsampling video stream I(t). For example, as depicted, sampler module 301 comprises an optional upsampling function 311 and a downsampling function 313. Optional upsampling function 311 can be used by controller 101 to determine pixel data located between existing pixels of frames of video stream I(t). In other words, upsampling function 311 can comprise an interpolation function to determine image data between pixels of frames of video stream I(t). Downsampling function 313 can select pixels of video stream I(t), for example according to shift operator Z.sup.−n. In particular, subframe stream I.sub.1(t) can be generated by upsampling video stream I(t) and applying shift operator Z.sup.−n, followed by downsampling. In some implementations, an output linear resolution of each of subframe streams I.sub.1(t), I.sub.2(t) can be about 1/√2 of an input linear resolution of video stream I(t) and pixel shifting between each of subframe streams I.sub.1(t), I.sub.2(t) can be about ½ pixel of the output linear resolution. As a result, upscaling by a factor of √2, shifting by 1 pixel and downscaling by a factor of 2 can occur.
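The following Python sketch illustrates one way such a sampler could be realized with off-the-shelf resampling functions; it is a minimal sketch assuming grayscale frames as 2-D NumPy arrays, and the interpolation order and function names are assumptions rather than the specification's sampler module 301:

```python
# Minimal sketch of an upsample/shift/downsample sampler (cf. sampler
# module 301): upscale by sqrt(2), shift one branch by "n" pixels (the
# Z^-n operator), then downscale by 2. Each output subframe is at about
# 1/sqrt(2) of the input linear resolution, and the two subframes end up
# pixel-shifted by about 1/2 pixel of the output resolution.
import numpy as np
from scipy import ndimage

def generate_subframes(frame: np.ndarray, n: float = 1.0):
    up = ndimage.zoom(frame, np.sqrt(2), order=1)    # upsampling (cf. function 311)
    shifted = ndimage.shift(up, (n, n), order=1)     # shift operator Z^-n
    sub1 = ndimage.zoom(shifted, 0.5, order=1)       # downsampling (cf. function 313)
    sub2 = ndimage.zoom(up, 0.5, order=1)
    return sub1, sub2
```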
(47) However, shift operator Z.sup.−n, upsampling function 311 and downsampling function 313 are appreciated to be examples only and any sampling functions, and the like, can be used to sample video stream I(t) to generate each of subframe streams I.sub.1(t), I.sub.2(t).
(48) Attention is briefly directed to an example frame of video stream I(t), which includes a moving object 401 that changes position across successive frames of video stream I(t); it is assumed that moving object 401 can be enhanced to reduce and/or remove motion artifacts associated therewith.
(49) Similarly, it is assumed that edges of region 403 of video stream I(t) are static and can be enhanced to emphasize features thereof.
(50) A background region 405 can represent sky, and the like, which can also be non-moving and/or still. In other words, in a successive frame of video stream I(t), moving object 401 will have changed position, while regions 403, 405 will have not changed. In particular, moving object 401 changing position between frames of video stream I(t) refers to pixels in video stream I(t) changing state to represent moving object 401 moving across successive frames and/or successive images of video stream I(t).
(51) While the frame of video stream I(t) is represented as being of a very low resolution (e.g. 10×12), it is appreciated that such a resolution is depicted merely to describe aspects of method 200 and that methods described herein can be applied to very high resolution video streams, including, but not limited to, 4K video streams, UHD (ultra-high definition) video streams, 8K video streams, and higher. Indeed, present implementations can be used to control one or more projectors, having a resolution lower than 4K resolution, to provide 4K video streams.
(52) In any event, after applying at least shift operator Z.sup.−n, with n=1, followed by downsampling function 313, to video stream I(t) in a first instance, and applying downsampling function 313 alone to video stream I(t) in a second instance, two subframe streams I.sub.1(t), I.sub.2(t) are generated, each comprising a different subset of the pixels of video stream I(t).
(53) In particular, subframe stream I.sub.1(t) comprises every second pixel of video stream I(t), with pixels of each successive row of subframe stream I.sub.1(t) offset from a previous row by one pixel, the top row starting from the second pixel of the top row of the frame of video stream I(t) described above; subframe stream I.sub.2(t) similarly comprises every second pixel of video stream I(t), offset from the pixels of subframe stream I.sub.1(t) by one pixel.
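To make the integer-shift (n=1) case concrete, the following hypothetical NumPy sketch splits a frame into the two interleaved subframes described above; the checkerboard construction and the even frame width it assumes are illustrative choices:

```python
# Hypothetical sketch of the n = 1 case: each subframe takes every second
# pixel of the frame in a checkerboard (quincunx) pattern, with successive
# rows offset by one pixel. Assumes an even frame width.
import numpy as np

def quincunx_subframes(frame: np.ndarray):
    h, w = frame.shape
    rows = np.arange(h)[:, None]
    cols = np.arange(w)[None, :]
    mask = (rows + cols) % 2 == 1           # True on odd diagonals
    sub1 = frame[mask].reshape(h, w // 2)   # top row starts at the second pixel
    sub2 = frame[~mask].reshape(h, w // 2)  # top row starts at the first pixel
    return sub1, sub2
```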
(54) For example, each of subframe streams I.sub.1(t), I.sub.2(t) includes portions of each of moving object 401 and regions 403, 405.
(55) While as depicted, each of subframe streams I.sub.1(t), I.sub.2(t) are pixel-shifted from one another by integer values, when upsampling function 311 is applied to video stream I(t), subframe streams I.sub.1(t), I.sub.2(t) can be pixel-shifted by fractions of a pixel as well.
(56) As depicted, such pixel-shifting is similar for each pixel of each of subframe streams I.sub.1(t), I.sub.2(t); however, pixel-shifting can include one or more of: different pixel shifts for each pixel of each of subframe streams I.sub.1(t), I.sub.2(t); different pixel shifts for different regions of each of subframe streams I.sub.1(t), I.sub.2(t); rotation of one or more pixels of each of subframe streams I.sub.1(t), I.sub.2(t); and a transformation of one or more pixels of each of subframe streams I.sub.1(t), I.sub.2(t). Hence, in some implementations, the term “pixel-shifted” can be understood to mean that every pixel is translated and/or rotated and/or transformed and/or shifted and/or scaled.
(57) Furthermore, the generation of each of subframe streams I.sub.1(t), I.sub.2(t) described here is an example only and, as described above, any suitable sampling functions can be used to generate subframe streams I.sub.1(t), I.sub.2(t).
(58) Regardless, each of the pixels in each of subframe streams I.sub.1(t), I.sub.2(t) is understood to have a different pixel registration that can be used by projectors 107 to project output subframe streams (produced from each of subframe streams I.sub.1(t), I.sub.2(t)) shifted from one another at screen 109, as described below. For example, the pixel registration of pixels of subframe stream I.sub.2(t) is shifted from the pixel registration of pixels of subframe stream I.sub.1(t).
(59) Furthermore, a resolution of each of subframe streams I.sub.1(t), I.sub.2(t) can correspond to a resolution of an image modulator of projectors 107. However, each of subframe streams I.sub.1(t), I.sub.2(t) has a similar and/or a same aspect ratio as video stream I(t).
(60) The process of generating subframe streams I.sub.1(t), I.sub.2(t) can also lead to motion artifacts. For example, with reference to subframe stream I.sub.1(t), a motion artifact 409 has been erroneously added to moving object 401.
(61) Returning to the pipeline, at video enhancement module 303, controller 101 applies (e.g. at block 203 of method 200) a plurality of video enhancement filters Filter.sub.1, Filter.sub.2 . . . Filter.sub.p to each of the plurality of subframe streams I.sub.1(t), I.sub.2(t), each of the plurality of video enhancement filters for enhancing different features of video stream I(t).
(62) For example, the plurality of video enhancement filters can comprise one or more of: a moving object video enhancement filter, a static object video enhancement filter, a text enhancement filter, a texture enhancement filter, a color enhancement filter, and the like. In other words, each of the plurality of video enhancement filters are applied to each subframe stream I.sub.1(t), I.sub.2(t) to enhance content thereof, regardless of the actual content.
(63) Hence, for example, the plurality of video enhancement filters can comprise: a first video enhancement filter (e.g. Filter.sub.1) for enhancing moving objects in video stream I(t), and a second video enhancement filter (e.g. Filter.sub.2) for enhancing static objects and/or still regions in video stream I(t). Such a first video enhancement filter (e.g. Filter.sub.1) can comprise a high frequency suppression filter which can remove motion artifacts associated with moving objects in each of subframe streams I.sub.1(t), I.sub.2(t); similarly, such a second video enhancement filter (e.g. Filter.sub.2) can comprise a high frequency sharpening filter which can enhance edges of still objects and/or non-moving objects in each of subframe streams I.sub.1(t), I.sub.2(t).
(64) Furthermore, as depicted, in some implementations, given video enhancement filters can be applied in a frequency domain. However, as also depicted, other video enhancement filters can be applied in a spatial domain, as described in further detail below.
(65) Hence, controller 101 can be further configured (e.g. at block 203 of method 200) to apply one or more of the plurality of video enhancement filters (e.g. Filter.sub.1, Filter.sub.2) to each of the plurality of subframe streams I.sub.1(t), I.sub.2(t) by: converting each of the plurality of subframe streams I.sub.1(t), I.sub.2(t) from a spatial domain to a frequency domain (e.g. using FFT function 315); applying a respective video enhancement filter (e.g. Filter.sub.1, Filter.sub.2) in the frequency domain; and converting the respective output subframe stream back to the spatial domain (e.g. using IFFT function 317).
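A minimal Python sketch of this FFT/filter/IFFT path follows; the Gaussian transfer functions standing in for Filter.sub.1 (high frequency suppression) and Filter.sub.2 (high frequency sharpening), and the sigma parameter, are assumptions for illustration only:

```python
# Minimal sketch of a frequency-domain enhancement filter: convert a
# subframe to the frequency domain (cf. FFT function 315), scale its
# spectrum, and convert back (cf. IFFT function 317).
import numpy as np

def apply_frequency_filter(subframe: np.ndarray, sharpen: bool,
                           sigma: float = 0.15) -> np.ndarray:
    F = np.fft.fftshift(np.fft.fft2(subframe))
    h, w = subframe.shape
    fy = np.linspace(-0.5, 0.5, h)[:, None]
    fx = np.linspace(-0.5, 0.5, w)[None, :]
    lowpass = np.exp(-(fx ** 2 + fy ** 2) / (2.0 * sigma ** 2))
    if sharpen:
        gain = 1.0 + (1.0 - lowpass)   # boost high frequencies (static objects)
    else:
        gain = lowpass                 # suppress high frequencies (moving objects)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * gain)))
```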
(66) Alternatively, video enhancement filters can be applied in the spatial domain and not in the frequency domain; in such implementations, neither FFT function 315 nor IFFT function 317 is applied to subframe streams I.sub.1(t), I.sub.2(t). Hence, controller 101 can be further configured to apply one or more of the plurality of video enhancement filters to each of the plurality of subframe streams I.sub.1(t), I.sub.2(t) by: applying a respective video enhancement filter (e.g. Filter) in a spatial domain. For example, color enhancement filters and the like could be applied in the spatial domain.
(67) In any event, as represented by arrows extending from each of video enhancement filters Filter.sub.1, Filter.sub.2 . . . Filter.sub.p, “p” enhanced subframe streams are generated for each of subframe streams I.sub.1(t), I.sub.2(t). For example, when only two video enhancement filters are applied, two enhanced subframe streams are generated for each of subframe streams I.sub.1(t), I.sub.2(t).
(68) As depicted, controller 101 is further configured to apply (e.g. at block 205 of method 200, and at compensation module 305) a compensation filter to each of respective output subframe streams, the compensation filter for compensating for optical aberrations of the one or more projectors. Furthermore, such compensation filters can be applied in a frequency domain or a spatial domain. Furthermore, each compensation filter applied is particular to projector 107-1 or projector 107-2. For example, optical aberrations of each of projectors 107-1, 107-2 can be determined and a corresponding compensation filter can be configured to correct optical aberrations thereof in a corresponding subframe stream.
(69) For example, as depicted, compensation module 305 comprises a compensation filter 1-1 configured to compensate a subframe stream (for example, subframe stream I.sub.1(t)) for optical aberrations of projector 107-1, compensation filter 1-1 further configured to be applied in a frequency domain (e.g. after FFT function 315 is applied to a subframe stream but before IFFT function 317 is applied to the subframe stream). Similarly, compensation module 305 further comprises a compensation filter 2-1 configured to compensate a subframe stream (for example, subframe stream I.sub.1(t)) for optical aberrations of projector 107-1, compensation filter 2-1 further configured to be applied in a spatial domain.
(70) Similarly, compensation module 305 further comprises a compensation filter 1-2 configured to compensate a subframe stream (for example, subframe stream I.sub.2(t)) for optical aberrations of projector 107-2, compensation filter 1-2 further configured to be applied in a frequency domain (e.g. after FFT function 315 is applied to a subframe stream but before IFFT function 317 is applied to the subframe stream). Similarly, compensation module 305 further comprises a compensation filter 2-2 configured to compensate a subframe stream (for example, subframe stream I.sub.2(t)) for optical aberrations of projector 107-2, compensation filter 2-2 further configured to be applied in a spatial domain.
(71) Hence, subframe stream I.sub.1(t), once enhanced and compensated by compensation filters 1-1, 2-1, is specifically configured for projection by projector 107-1. Similarly, subframe stream I.sub.2(t), once enhanced and compensated by compensation filters 1-2, 2-2, is specifically configured for projection by projector 107-2. However, when only one projector 107 is used in system 100, the compensation filters applied to each of subframe streams I.sub.1(t), I.sub.2(t) can be similar. Data representing optical aberrations of each projector 107 can be provisioned at respective compensation filters 1-1, 2-1, 1-2, 2-2 in a provisioning process (not depicted), for example by measuring such optical aberrations in each projector 107 and configuring each respective compensation filter 1-1, 2-1, 1-2, 2-2 to compensate for such optical aberrations.
(72) Furthermore, while a frequency-domain compensation filter and a spatial-domain compensation filter are both depicted for each of projectors 107-1, 107-2, in other implementations one or the other, or other combinations of compensation filters, can be applied for each projector 107.
(73) In some implementations one or more of compensation filters 1-1, 2-1, 1-2, 2-2 can comprise a Wiener Deconvolution filter, but other types of compensation filters and/or deconvolution filters are within the scope of present implementations. For example, filters can be used that compensate for color differences between projectors 107.
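For illustration, a Wiener deconvolution compensation filter could be sketched as follows, assuming a projector's optical aberration has been measured as a point spread function (PSF); the PSF and the noise-to-signal ratio here are assumed inputs from a provisioning process:

```python
# Hypothetical sketch of a per-projector compensation filter using Wiener
# deconvolution: pre-filter a subframe so that blurring by the projector
# optics (modeled by the measured PSF) is approximately cancelled.
import numpy as np

def wiener_compensate(subframe: np.ndarray, psf: np.ndarray,
                      nsr: float = 0.01) -> np.ndarray:
    H = np.fft.fft2(psf, s=subframe.shape)   # optical transfer function
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)  # regularized inverse (Wiener) kernel
    return np.real(np.fft.ifft2(np.fft.fft2(subframe) * G))
```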
(74) In any event, as depicted, applying the plurality of video enhancement filters to each of subframe streams I.sub.1(t), I.sub.2(t) results in a respective plurality of enhanced subframe streams, as follows.
(75) In particular, for subframe streams I.sub.1(t), “p” enhanced subframe streams are depicted: enhanced subframe streams I.sub.E1-1(t), I.sub.E1-2(t) . . . I.sub.E1-p(t), one enhanced subframe stream I.sub.E1 for each of filters Filter.sub.1, Filter.sub.2, . . . Filter.sub.p. Similarly, for subframe streams I.sub.2(t), “p” enhanced subframe streams are depicted: enhanced subframe streams I.sub.E2-1(t), I.sub.E2-2(t) . . . I.sub.E2-p(t), one enhanced subframe stream I.sub.E2 for each of filters Filter.sub.1, Filter.sub.2, . . . Filter.sub.p.
(76) In particular, assuming that a first video enhancement filter Filter.sub.1 is for enhancing moving objects in video stream I(t), and that a second video enhancement filter Filter.sub.2 is for enhancing static objects in video stream I(t), the one or more resulting enhanced subframe streams comprises a first enhanced subframe stream for the moving objects (e.g. enhanced subframe stream I.sub.E1-1(t) for subframe stream I.sub.1(t) and enhanced subframe stream I.sub.E2-1(t) for subframe stream I.sub.2(t)), and a second enhanced subframe stream enhanced for the static objects (e.g. enhanced subframe stream I.sub.E1-2(t) for subframe stream I.sub.1(t) and enhanced subframe stream I.sub.E2-2(t) for subframe stream I.sub.2(t)).
(77) For example, attention is next directed to the enhanced subframe streams generated from subframe stream I.sub.1(t) by applying video enhancement filters Filter.sub.1, Filter.sub.2 thereto.
(78) It is further assumed in this example that first video enhancement filter Filter.sub.1 comprises a high frequency suppression filter for enhancing moving objects, and that second video enhancement filter Filter.sub.2 comprises a high frequency sharpening filter for enhancing static objects, as described above.
(79) As resulting enhanced subframe stream I.sub.E1-1(t) has been enhanced for moving objects, high frequency portions of moving object 401 and region 403 have been filtered out of enhanced subframe stream I.sub.E1-1(t). This results in high frequency motion artifacts in the lower left hand corner of moving object 401 (including, but not limited to, motion artifact 409) being filtered out and/or removed from enhanced subframe stream I.sub.E1-1(t). However, as first video enhancement filter Filter.sub.1 is applied to the entirety of subframe stream I.sub.1(t), high frequency portions of region 403 of enhanced subframe stream I.sub.E1-1(t) are also filtered out and/or removed.
(80) Similarly, as resulting enhanced subframe stream I.sub.E1-2(t) has been enhanced for static objects, high frequency portions of moving object 401 and region 403 have been enhanced in enhanced subframe stream I.sub.E1-2(t). This results in high frequency portions of region 403 of enhanced subframe stream I.sub.E1-2(t) being enhanced and/or edges thereof being enhanced, which can further increase contrast between region 403 and region 405. However, as second video enhancement filter Filter.sub.2 is applied to the entirety of subframe stream I.sub.1(t), this also results in high frequency motion artifacts in the lower left hand corner of moving object 401 being enhanced in enhanced subframe stream I.sub.E1-2(t).
(81) Attention is next directed to the enhanced subframe streams generated from subframe stream I.sub.2(t) by applying video enhancement filters Filter.sub.1, Filter.sub.2 thereto.
(82) It is further assumed in this example that the same first video enhancement filter Filter.sub.1 and second video enhancement filter Filter.sub.2 are applied to subframe stream I.sub.2(t) as were applied to subframe stream I.sub.1(t).
(83) As resulting enhanced subframe stream I.sub.E2-1(t) has been enhanced for moving objects, high frequency portions of moving object 401 and region 403 have been filtered out of enhanced subframe stream I.sub.E2-1(t). This results in high frequency motion artifacts in the lower left hand corner of moving object 401 being filtered out and/or removed from enhanced subframe stream I.sub.E2-1(t). However, as first video enhancement filter Filter.sub.1 is applied to the entirety of subframe stream I.sub.2(t), high frequency portions of region 403 of enhanced subframe stream I.sub.E2-1(t) are also filtered out and/or removed.
(84) Similarly, as resulting enhanced subframe stream I.sub.E2-2(t) has been enhanced for static objects, high frequency portions of moving object 401 and region 403 have been enhanced in enhanced subframe stream I.sub.E2-2(t). This results in high frequency portions of region 403 of enhanced subframe stream I.sub.E2-2(t) being enhanced and/or edges thereof being enhanced. However, as second video enhancement filter Filter.sub.2 is applied to the entirety of subframe stream I.sub.2(t), this also results in high frequency motion artifacts in the lower left hand corner of moving object 401 being enhanced in enhanced subframe stream I.sub.E2-2(t).
(85) While other enhanced subframe streams are not depicted in either of these examples, when further video enhancement filters Filter.sub.3 . . . Filter.sub.p are applied, corresponding further enhanced subframe streams are generated for each of subframe streams I.sub.1(t), I.sub.2(t).
(86) Returning to the pipeline, the resulting enhanced subframe streams for each of subframe streams I.sub.1(t), I.sub.2(t) are received at a respective content selection function 319-1, 319-2 (interchangeably referred to, collectively, as content selection functions 319), each of which also receives output from content estimation module 309.
(87) In any event, each content selection function 319 is generally configured to (e.g. at block 203 of method 200) combine one or more resulting enhanced subframe streams into a respective output subframe stream based on data in one or more regions of video stream I(t), as described hereafter.
(88) In particular, controller 101, at content estimation module 309, can be configured to determine the data in the one or more regions of video stream I(t) by comparing successive frames of video stream I(t), for example a frame at a time “t” (e.g. a frame of video stream I(t)) to a previous frame at a time “t−1” (e.g. a frame of video stream I(t−1)). For example, controller 101 can subtract successive frames and compare regions of the subtracted image to one or more thresholds to determine content of regions.
(89) For example, such a threshold-based comparison can result in controller 101 determining that the region of video stream I(t) that includes moving object 401 comprises a moving object region, while region 403 includes static objects and/or non-moving objects and hence comprises a static object region. Such a comparison can hence occur on a pixel-by-pixel basis, and a content map can result.
(90) However, as depicted, the determined regions that include moving objects or static objects can be dilated using a “Dilation” function 320 to expand the regions, both for efficiency and, for example, so that edges of moving objects are not erroneously excluded from moving object regions. For example, such a dilation function 320 can cause regions with moving objects, as determined from the comparison, to be expanded (e.g. “dilated”) by a given percentage (e.g. 10%, and the like, though a size of such dilation can be provisioned at controller 101, e.g. at application 136).
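The frame comparison, thresholding, and dilation described above could be sketched in Python as follows; the threshold value and dilation radius are assumptions, standing in for values provisioned at application 136, and frames are assumed normalized to [0, 1]:

```python
# Hypothetical sketch of content estimation (cf. content estimation module
# 309): subtract successive frames, threshold the difference into a
# moving-object map, and dilate the detected regions (cf. Dilation
# function 320) so edges of moving objects are not excluded.
import numpy as np
from scipy import ndimage

def estimate_content_map(frame_t: np.ndarray, frame_prev: np.ndarray,
                         threshold: float = 0.05, dilate_px: int = 2):
    diff = np.abs(frame_t - frame_prev)   # frame at time t vs. frame at time t-1
    moving = diff > threshold             # candidate moving-object pixels
    return ndimage.binary_dilation(moving, iterations=dilate_px)
```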
(91) However, other processes for determining content of video stream I(t) are within the scope of present implementations. For example, while a threshold-based approach to comparing successive frames of video stream I(t) can be used to determine regions where moving objects are located, each frame of video stream I(t) could alternatively be compared to text functions to determine regions of text without reference to other frames. In yet further alternative implementations, each frame of video stream I(t) could alternatively be compared to color functions to determine color regions of video stream I(t).
(92) Furthermore, where conflicts occur, controller 101 (e.g. at content estimation module 309) can be configured to resolve such conflicts, for example using a weighting scheme, and the like. For example, moving objects can be given a highest weight such that, when regions that include moving objects are also identified as regions of a particular color, such regions can be identified as moving object regions such that motion artifacts can be removed from such regions, as described below, rather than enhancing color. Alternatively, overlapping regions can be identified.
(93) In any event, a content map(t) can be output from content estimation module 309 to each of content selection functions 319 such that each content selection function 319 can select regions of enhanced subframe streams to combine into respective output subframe streams.
(94) For example, attention is next directed to an example content map(t) produced by content estimation module 309, in which a region 701 has been identified as a moving object region (corresponding to moving object 401), and a region 703 has been identified as a static object region (corresponding to region 403).
(95) Hence, content map(t) can be received at each of content selection functions 319 to select regions of enhanced subframe streams to combine into respective output subframe streams. As depicted, only regions that include moving objects and static objects have been identified, hence each of content selection functions 319 selects content from respective enhanced subframe streams based on whether regions of content map(t) include moving objects or static objects, as described hereafter.
(96) In particular, controller 101 can be further configured to combine a first enhanced subframe stream and a second enhanced subframe stream into a respective output subframe stream based on the data in the one or more regions of video stream I(t) by: determining respective regions where the moving objects and the static objects are located in the video stream I(t); and including corresponding portions of the first enhanced subframe stream in moving object regions and including corresponding portions of the second enhanced subframe stream in static object regions.
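Given a binary moving-object map, the region-wise combination described above reduces to a per-pixel selection; a minimal sketch, with illustrative names, follows:

```python
# Minimal sketch of a content selection function (cf. content selection
# functions 319): moving-object regions are taken from the motion-enhanced
# stream, and all remaining (static) regions from the sharpened stream.
import numpy as np

def select_content(enh_moving: np.ndarray, enh_static: np.ndarray,
                   moving_map: np.ndarray) -> np.ndarray:
    return np.where(moving_map, enh_moving, enh_static)
```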
(97) For example, attention is next directed to content selection function 319-1, at which enhanced subframe streams I.sub.E1-1(t), I.sub.E1-2(t) are combined into a respective output subframe stream I.sub.O1(t).
(98) Furthermore, as depicted, only the pixels of enhanced subframe stream I.sub.E1-1(t) are shown that correspond to region 701 of content map(t), which represents a selection of a moving object portion of enhanced subframe stream I.sub.E1-1(t). In other words, as enhanced subframe stream I.sub.E1-1(t) has been enhanced for moving objects, content selection function 319-1 uses content map(t) to select, from enhanced subframe stream I.sub.E1-1(t), portions thereof that include moving objects, but not static objects.
(99) Similarly, only the pixels of enhanced subframe stream I.sub.E1-2(t) are shown that correspond to region 703 of content map(t), which represents a selection of a static object portion of enhanced subframe stream I.sub.E1-2(t). In other words, as enhanced subframe stream I.sub.E1-2(t) has been enhanced for static objects, content selection function 319-1 uses content map(t) to select, from enhanced subframe stream I.sub.E1-2(t), portions thereof that include static objects, but not moving objects.
(100) The selected portions of each of enhanced subframe stream I.sub.E1-1(t) and enhanced subframe stream I.sub.E1-2(t) are combined into a respective output subframe stream I.sub.O1(t) having a similar resolution and aspect ratio to subframe stream I.sub.1(t), with similar pixel registrations. Hence, a moving object region of respective output subframe stream I.sub.O1(t) is enhanced for moving objects, and a still object region of respective output subframe stream I.sub.O1(t) is enhanced for still objects. While not depicted, respective output subframe stream I.sub.O1(t) can also be compensated for optical aberrations of projector 107-1 using compensation module 305.
(101) Similarly, attention is next directed to content selection function 319-2, at which enhanced subframe streams I.sub.E2-1(t), I.sub.E2-2(t) are combined into a respective output subframe stream I.sub.O2(t).
(102) Furthermore, as depicted, only the pixels of enhanced subframe stream I.sub.E2-1(t) are shown that correspond to region 701 of content map(t), which represents a selection of a moving object portion of enhanced subframe stream I.sub.E2-1(t). In other words, as enhanced subframe stream I.sub.E2-1(t) has been enhanced for moving objects, content selection function 319-2 uses content map(t) to select, from enhanced subframe stream I.sub.E2-1(t), portions thereof that include moving objects, but not static objects.
(103) Similarly, only the pixels of enhanced subframe stream I.sub.E2-2(t) are shown that correspond to region 703 of content map(t), which represents a selection of a static object portion of enhanced subframe stream I.sub.E2-2(t). In other words, as enhanced subframe stream I.sub.E2-2(t) has been enhanced for static objects, content selection function 319-2 uses content map(t) to select, from enhanced subframe stream I.sub.E2-2(t), portions thereof that include static objects, but not moving objects.
(104) The selected portions of each of enhanced subframe stream I.sub.E2-1(t) and enhanced subframe stream I.sub.E2-2(t) are combined into a respective output subframe stream I.sub.O2(t) having a similar resolution and aspect ratio to subframe stream I.sub.2(t), with similar pixel registrations. Hence, a moving object region of respective output subframe stream I.sub.O2(t) is enhanced for moving objects, and a still object region of respective output subframe stream I.sub.O2(t) is enhanced for still objects. While not depicted, respective output subframe stream I.sub.O2(t) can also be compensated for optical aberrations of projector 107-2 using compensation module 305.
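Putting the preceding sketches together, one branch of the pipeline (from subframe generation through compensation) might be exercised as follows; this is a sketch under the assumptions stated with each helper above, not the specification's implementation:

```python
# Hypothetical end-to-end flow for one projector branch, reusing the
# illustrative helpers sketched earlier; all names are assumptions.
def process_branch(frame_t, frame_prev, psf):
    sub, _ = quincunx_subframes(frame_t)                      # block 201
    prev_sub, _ = quincunx_subframes(frame_prev)
    enh_moving = apply_frequency_filter(sub, sharpen=False)   # e.g. Filter 1
    enh_static = apply_frequency_filter(sub, sharpen=True)    # e.g. Filter 2
    moving_map = estimate_content_map(sub, prev_sub)          # content map(t)
    out = select_content(enh_moving, enh_static, moving_map)  # block 203
    return wiener_compensate(out, psf)                        # block 205
```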
(105) While not depicted, in some implementations content map(t) can include overlapping regions; for example, a region identified as including a moving object can overlap with a region identified as being of a particular color. In these implementations, the output subframe streams can again be filtered using one or more respective video enhancement filters of video enhancement module 303 and enhanced color regions can be selected from the resulting enhanced subframe streams using content selection functions 319. In other words, aspects of method 200 can be repeated to enhance different types of features in subframe streams I.sub.1(t), I.sub.2(t) which overlap.
(106) Attention is next directed to the projection of output subframe streams I.sub.O1(t), I.sub.O2(t) (e.g. at block 207 of method 200).
(107) In particular, controller 101 controls, using communication interface 124, the one or more projectors 107 to project the plurality of output subframe streams I.sub.O1(t), I.sub.O2(t).
(108) Hence, output subframe streams I.sub.O1(t), I.sub.O2(t) are projected pixel-shifted from one another, using their respective pixel registrations, such that they combine at screen 109 into a higher resolution projected video stream.
(109) Attention is next directed to how the one or more projectors 107 can project output subframe streams I.sub.O1(t), I.sub.O2(t).
(110) Hence, in implementations where two projectors 107 are used, projector 107-1 projects output subframe stream I.sub.O1(t) and projector 107-2 projects output subframe stream I.sub.O2(t); the two output subframe streams are co-projected onto screen 109 such that similar regions are co-projected onto each other to form a higher resolution version thereof.
(111) However, in implementations where one projector 107 is used to project both of output subframe streams I.sub.O1(t), I.sub.O2(t), respective frames of output subframe streams I.sub.O1(t), I.sub.O2(t) can be projected successively by the one projector 107 such that an eye blends output subframe streams I.sub.O1(t), I.sub.O2(t) together. In some of these implementations, to achieve shifting of output subframe streams I.sub.O1(t), I.sub.O2(t) with respect to one another, the one projector 107 can comprise an opto-mechanical shifter to shift output subframe streams I.sub.O1(t), I.sub.O2(t).
(112) As described above, implementations of block 201 described with respect to the example frame of video stream I(t) generate the subframe streams by selecting alternating pixels of video stream I(t); however, other implementations of block 201 are within the scope of the present specification.
(113) For example, attention is next directed to alternative implementations in which a plurality of subframe streams I′.sub.1(t), I′.sub.2(t) are generated from a video stream I′(t) by averaging pixel values, each of subframe streams I′.sub.1(t), I′.sub.2(t) sharing a common reference border 1210 with video stream I′(t).
(114) Furthermore, the value of each pixel of each of subframe streams I′.sub.1(t), I′.sub.2(t) is determined using average values of corresponding pixels of video stream I′(t), including pixels at edges of features of video stream I′(t). For example, values of pixels for each of subframe streams I′.sub.1(t), I′.sub.2(t) can be determined by averaging and/or linear averaging the values of respective corresponding pixels of video stream I′(t). Hence, while more detail of the features of video stream I′(t) is retained in subframe streams I′.sub.1(t), I′.sub.2(t) (at least compared to subframe streams I.sub.1(t), I.sub.2(t) of the earlier example), edges of the features can be softened by the averaging.
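The averaging-based generation of subframe streams I′.sub.1(t), I′.sub.2(t) could be sketched as follows; the 2×2 averaging window and the one-pixel grid offset between the two subframes are illustrative assumptions:

```python
# Hypothetical sketch of averaging-based subframe generation: each subframe
# pixel is a linear average of a 2x2 neighborhood of the input frame, with
# the two sample grids offset by one input pixel. Assumes even frame
# dimensions; edge pixels of the offset grid reuse the nearest border value.
import numpy as np

def averaged_subframes(frame: np.ndarray):
    h, w = frame.shape
    sub1 = (frame[0:h:2, 0:w:2] + frame[0:h:2, 1:w:2] +
            frame[1:h:2, 0:w:2] + frame[1:h:2, 1:w:2]) / 4.0
    padded = np.pad(frame, ((0, 1), (0, 1)), mode="edge")
    sub2 = (padded[1:h + 1:2, 1:w + 1:2] + padded[1:h + 1:2, 2:w + 2:2] +
            padded[2:h + 2:2, 1:w + 1:2] + padded[2:h + 2:2, 2:w + 2:2]) / 4.0
    return sub1, sub2
```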
(115) Once subframe streams I′.sub.1(t), I′.sub.2(t) are generated (e.g. at block 201 of method 200), the remainder of method 200 occurs as described above. Furthermore, when the resulting plurality of output subframe streams are projected at block 207 of method 200, they are projected using corresponding pixel registrations to overlap and combine the plurality of output subframe streams into a higher resolution projected video stream. In other words, the resulting plurality of output subframe streams are projected shifted with respect to one another such that each enhances and fills in details of the others. In particular, a common reference border (corresponding to the common reference border 1210) of each of the plurality of output subframe streams would be aligned such that the features of each overlap to form the higher resolution projected video stream.
(116) Described herein is a system that can increase an apparent displayed and/or projected resolution of high resolution video when rendered by one or more projectors with lower resolutions, while managing the resulting motion artifacts. The system can decompose each high resolution video frame into two or more lower resolution subframes that, when superimposed by the projector(s) during a projection, appear with a perceptually higher resolution and a contrast gain closer to the original high resolution video content. The subframes are generated by incorporating content-adaptive filter mechanisms which can be based on motion characteristics of the content being displayed to reduce motion artifacts, particularly for moving content with high frequency and fine-grained characteristics, as well as on a projector's optical characteristics. For example, in a particular non-limiting implementation, two WQXGA (wide quad extended graphics array) subframe streams can be produced from a UHD (ultra-high-definition) video stream, and the two WQXGA subframe streams can be projected by one or more projectors to produce a video stream that appears, to the human eye, at a resolution similar to the original UHD video stream. Furthermore, the present specification can provide resolution enhancement for one or more projectors, while accounting for motion characteristics of the content as well as the optical properties of the projector(s). Resolution enhancement can be achieved with video content that includes both static objects and moving objects; indeed, as the video enhancement filters are applied frame-by-frame, the motion of such objects can be arbitrary and the enhancement thereof can still be achieved. As the video enhancement filters can include a moving object video enhancement filter and a still object video enhancement filter, the outputs of which can be combined into one output subframe stream, motion artifacts can be reduced while maintaining and/or enhancing contrast, and the like, of other portions of the output subframe stream.
(117) Those skilled in the art will appreciate that in some implementations, the functionality of controller 101 can be implemented using pre-programmed hardware or firmware elements (e.g., application specific integrated circuits (ASICs), electrically erasable programmable read-only memories (EEPROMs), etc.), or other related components. In other implementations, the functionality of controller 101 can be achieved using a computing apparatus that has access to a code memory (not shown) which stores computer-readable program code for operation of the computing apparatus. The computer-readable program code could be stored on a computer readable storage medium which is fixed, tangible and readable directly by these components (e.g., removable diskette, CD-ROM, ROM, fixed disk, USB drive, flash storage and the like, including any hardware component configured to store computer-readable program code in a fixed, tangible, readable manner). Furthermore, it is appreciated that the computer-readable program can be stored as a computer program product comprising a computer usable medium. Further, a persistent storage device can comprise the computer readable program code. It is yet further appreciated that the computer-readable program code and/or computer usable medium can comprise a non-transitory computer-readable program code and/or non-transitory computer usable medium. Alternatively, the computer-readable program code could be stored remotely but transmittable to these components via a modem or other interface device connected to a network (including, without limitation, the Internet) over a transmission medium. The transmission medium can be either a non-mobile medium (e.g., optical and/or digital and/or analog communications lines) or a mobile medium (e.g., microwave, infrared, free-space optical or other transmission schemes) or a combination thereof.
(118) Persons skilled in the art will appreciate that there are yet more alternative implementations and modifications possible, and that the above examples are only illustrations of one or more implementations. The scope, therefore, is only to be limited by the claims appended hereto.