METHOD AND APPARATUS FOR AUTOMATED PLACEMENT OF A SEAM IN A PANORAMIC IMAGE DERIVED FROM MULTIPLE CAMERAS
20180007263 · 2018-01-04
Inventors
- Basavaraja Vandrotti (San Jose, CA, US)
- Hoseok Chang (Sunnyvale, CA, US)
- Per-Ola Robertsson (Sunnyvale, CA, US)
- Devon Copley (Sunnyvale, CA, US)
- Maneli Noorkami (Sunnyvale, CA, US)
- Hui Zhou (Sunnyvale, CA, US)
CPC Classification
- H04N5/2628 (ELECTRICITY)
- H04N23/45 (ELECTRICITY)
- G06T3/4038 (PHYSICS)
Abstract
A method, apparatus and computer program product are provided to generate a panoramic view derived from multiple cameras and to automatically place a seam in that panoramic view in a computationally efficient manner. With regard to a method, images captured by at least two cameras are received. Each camera has a different, but partially overlapping, field of view. The method determines a seam location and scale factor to be used when combining the images in order to minimize errors at the seam between the two images. In some example implementations, the seam location and scale factor may be recalculated in response to a manual or automatic trigger. In some additional example implementations, motion associated with an image element near a seam location is detected, and the seam location is moved in a direction opposite that of the motion.
Claims
1. A method comprising: receiving an image frame captured by a first camera and an image frame captured by a second camera, wherein the first camera and the second camera have different fields of view, wherein the different fields of view have a mutually overlapping portion; combining the image frame captured by the first camera and the image frame captured by the second camera into a composite image; determining a seam location and a scale factor for the image frame captured by the first camera and the image frame captured by the second camera; and applying the seam location and the scale factor to a plurality of subsequent image frames captured by the first camera and a plurality of subsequent image frames captured by the second camera.
2. A method according to claim 1, wherein determining a seam location and a scale factor comprises: generating a plurality of scaled image frames by applying a plurality of scale factors to the image frame captured by at least one of the first camera or the second camera; computing an error measurement associated with the seam location for each scaled image frame from amongst the plurality of scaled image frames; and identifying the scale factor based upon the computed error measurement.
3. A method according to claim 1, wherein determining a seam location and a scale factor comprises: generating a plurality of scaled image frames by applying a plurality of scale factors to the image frame captured by at least one of the first camera or the second camera; dividing the mutually overlapping portion of the fields of view of the image frame captured by the first camera and the image frame captured by the second camera into a plurality of overlapping sections; computing an error measurement associated with each overlapping section of each scaled image frame from amongst the plurality of scaled image frames; and identifying the section and the scale factor based upon the computed error measurement.
4. A method according to claim 1, further comprising: receiving a control signal associated with a trigger for determining a second seam location and a second scale factor for a second image frame captured by the first camera and a second image frame captured by the second camera; and in response to receiving the control signal: determining the second seam location and the second scale factor for the second image frame captured by the first camera and the second image frame captured by the second camera; and applying the second seam location and the second scale factor to a second plurality of subsequent image frames captured by the first camera and a second plurality of subsequent image frames captured by the second camera.
5. A method according to claim 1, further comprising: detecting a set of data associated with motion of an image element, wherein the image element is located within a predetermined distance of the seam location.
6. A method according to claim 5, further comprising: in response to detecting the set of data associated with motion of the image element, determining a direction associated with the motion.
7. A method according to claim 6, further comprising shifting the seam location in a direction opposite the direction associated with the motion.
8. An apparatus comprising at least one processor and at least one memory storing computer program code, the at least one memory and the computer program code configured to, with the processor, cause the apparatus to at least: receive an image frame captured by a first camera and an image frame captured by a second camera, wherein the first camera and the second camera have different fields of view, wherein the different fields of view have a mutually overlapping portion; combine the image frame captured by the first camera and the image frame captured by the second camera into a composite image; determine a seam location and a scale factor for the image frame captured by the first camera and the image frame captured by the second camera; and apply the seam location and the scale factor to a plurality of subsequent image frames captured by the first camera and a plurality of subsequent image frames captured by the second camera.
9. An apparatus according to claim 8, wherein the at least one memory and the computer program code are configured to, with the processor, cause the apparatus to determine a seam location and a scale factor by: generating a plurality of scaled image frames by applying a plurality of scale factors to the image frame captured by at least one of the first camera or the second camera; computing an error measurement associated with the seam location for each scaled image frame from amongst the plurality of scaled image frames; and identifying the scale factor based upon the computed error measurement.
10. An apparatus according to claim 8, wherein the at least one memory and the computer program code are configured to, with the processor, cause the apparatus to determine a seam location and a scale factor by: generating a plurality of scaled image frames by applying a plurality of scale factors to the image frame captured by at least one of the first camera or the second camera; dividing the mutually overlapping portion of the fields of view of the image frame captured by the first camera and the image frame captured by the second camera into a plurality of overlapping sections; computing an error measurement associated with each overlapping section of each scaled image frame from amongst the plurality of scaled image frames; and identifying the section and the scale factor based upon the computed error measurement.
11. An apparatus according to claim 8, wherein the at least one memory and the computer program code are configured to, with the processor, cause the apparatus to: receive a control signal associated with a trigger for determining a second seam location and a second scale factor for a second image frame captured by the first camera and a second image frame captured by the second camera; and in response to receiving the control signal: determine the second seam location and the second scale factor for the second image frame captured by the first camera and the second image frame captured by the second camera; and apply the second seam location and the second scale factor to a second plurality of subsequent image frames captured by the first camera and a second plurality of subsequent image frames captured by the second camera.
12. An apparatus according to claim 8, wherein the at least one memory and the computer program code are configured to, with the processor, cause the apparatus to: detect a set of data associated with motion of an image element, wherein the image element is located within a predetermined distance of the seam location.
13. An apparatus according to claim 12, wherein the at least one memory and the computer program code are configured to, with the processor, cause the apparatus to: in response to detecting the set of data associated with motion of the image element, determine a direction associated with the motion.
14. An apparatus according to claim 13, wherein the at least one memory and the computer program code are configured to, with the processor, cause the apparatus to shift the seam location in a direction opposite the direction associated with the motion.
15. A computer program product comprising at least one non-transitory computer-readable storage medium having computer-executable program code instructions stored therein, the computer-executable program code instructions comprising program code instructions configured to: receive an image frame captured by a first camera and an image frame captured by a second camera, wherein the first camera and the second camera have different fields of view, wherein the different fields of view have a mutually overlapping portion; combine the image frame captured by the first camera and the image frame captured by the second camera into a composite image; determine a seam location and a scale factor for the image frame captured by the first camera and the image frame captured by the second camera; and apply the seam location and the scale factor to a plurality of subsequent image frames captured by the first camera and a plurality of subsequent image frames captured by the second camera.
16. A computer program product according to claim 15, wherein the program code instructions configured to determine a seam location and a scale factor comprise program code instructions configured to: generate a plurality of scaled image frames by applying a plurality of scale factors to the image frame captured by at least one of the first camera or the second camera; compute an error measurement associated with the seam location for each scaled image frame from amongst the plurality of scaled image frames; and identify the scale factor based upon the computed error measurement.
17. A computer program product according to claim 15, wherein the program code instructions configured to determine a seam location and a scale factor comprise program code instructions configured to: generate a plurality of scaled image frames by applying a plurality of scale factors to the image frame captured by at least one of the first camera or the second camera; divide the mutually overlapping portion of the fields of view of the image frame captured by the first camera and the image frame captured by the second camera into a plurality of overlapping sections; compute an error measurement associated with each overlapping section of each scaled image frame from amongst the plurality of scaled image frames; and identify the section and the scale factor based upon the computed error measurement.
18. A computer program product according to claim 15, wherein the computer-executable program code instructions further comprise program code instructions configured to: receive a control signal associated with a trigger for determining a second seam location and a second scale factor for a second image frame captured by the first camera and a second image frame captured by the second camera; and in response to receiving the control signal: determine the second seam location and the second scale factor for the second image frame captured by the first camera and the second image frame captured by the second camera; and apply the second seam location and the second scale factor to a second plurality of subsequent image frames captured by the first camera and a second plurality of subsequent image frames captured by the second camera.
19. A computer program product according to claim 15, wherein the computer-executable program code instructions further comprise program code instructions configured to: detect a set of data associated with motion of an image element, wherein the image element is located within a predetermined distance of the seam location.
20. A computer program product according to claim 19, wherein the computer-executable program code instructions further comprise program code instructions configured to: in response to detecting the set of data associated with motion of the image element, determine a direction associated with the motion; and shift the seam location in a direction opposite the direction associated with the motion.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0027] Having thus described certain example embodiments of the present disclosure in general terms, reference will hereinafter be made to the accompanying drawings, which are not necessarily drawn to scale, and wherein:
DETAILED DESCRIPTION
[0034] Some embodiments will now be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all, embodiments of the invention are shown. Indeed, various embodiments of the invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. Like reference numerals refer to like elements throughout. As used herein, the terms “data,” “content,” “information,” and similar terms may be used interchangeably to refer to data capable of being transmitted, received and/or stored in accordance with embodiments of the present invention. Thus, use of any such terms should not be taken to limit the spirit and scope of embodiments of the present invention.
[0035] Additionally, as used herein, the term ‘circuitry’ refers to (a) hardware-only circuit implementations (e.g., implementations in analog circuitry and/or digital circuitry); (b) combinations of circuits and computer program product(s) comprising software and/or firmware instructions stored on one or more computer readable memories that work together to cause an apparatus to perform one or more functions described herein; and (c) circuits, such as, for example, a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation even if the software or firmware is not physically present. This definition of ‘circuitry’ applies to all uses of this term herein, including in any claims. As a further example, as used herein, the term ‘circuitry’ also includes an implementation comprising one or more processors and/or portion(s) thereof and accompanying software and/or firmware. As another example, the term ‘circuitry’ as used herein also includes, for example, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a server, a cellular network device, other network device, and/or other computing device.
[0036] As defined herein, a “computer-readable storage medium,” which refers to a non-transitory physical storage medium (e.g., volatile or non-volatile memory device), can be differentiated from a “computer-readable transmission medium,” which refers to an electromagnetic signal.
[0037] A method, apparatus and computer program product are provided in accordance with an example embodiment in order to efficiently generate a panoramic view, such as for use in conjunction with virtual reality or other applications. In this regard, a panoramic view is generated by combining images captured by a plurality of cameras arranged in an array, such that portions of images captured by adjacent cameras within the array overlap with each other and may be stitched or otherwise combined together. Through the application of several techniques, the coordination between two adjacent images can be improved, resulting in an enhanced viewing experience for a viewer. Moreover, the panoramic view may be generated in an efficient manner, both in terms of the processing resources consumed during the generation of the panoramic view and the time required to generate the panoramic view. As a result, the panoramic view may, in some instances, be generated in real time or near real time relative to the capture of the images that at least partially comprise the panoramic view.
[0038] In some example implementations, the coordination of two adjacent images is achieved by searching a two-dimensional space of possible options to identify a configuration of seam location, scale factor, or both, that results in a minimum error for a given frame. In some contexts, including but not limited to contexts where the location of a seam between two adjacent images is fixed, a variety of convergence depths between two adjacent cameras are evaluated by applying a plurality of scale factors to an image and calculating an error associated with each scale factor. The scale factor associated with the minimum error is then selected, and applied to subsequent frames captured by the particular adjacent cameras. In some contexts, including but not limited to contexts where the location of a seam between two adjacent images is not fixed, the overlapping area of images captured by the adjacent cameras is divided into a series of columns or other sections, and an error associated with each column is calculated. The column or other section with the minimum error is then selected as the seam location, and applied to subsequent frames captured by the particular adjacent cameras. In some contexts, including but not limited to contexts where motion is detected in content near a seam location, the seam location can be moved on a per-frame basis in response to the motion. Regardless of the context in which the seam location and/or scale factor is selected and applied, the selection and application of a seam location and/or scale factor may be performed and/or re-performed in response to a manual and/or automatic trigger.
[0039] Some example implementations contemplate the use of devices suitable for capturing images used in virtual reality and other immersive content environments, such as Nokia's OZO system, where multiple cameras are placed in an array such that each camera is aimed in a particular direction to capture a particular field of view. Particularly in contexts involving live stitching, it is necessary to stitch the images received from each camera in real time or near real time. In such a scenario, solutions for real time or near real time stitching involve the use of camera calibration data. The camera calibration data can be used to generally determine the placement of each camera, and generate a transformation matrix that can be used to stitch multiple images together to form a panoramic view, such as a 360° image. Camera calibration is typically performed in a manner directed toward infinite scene location. As a result, objects located relatively near the camera(s) will be subject to parallax effects, which compound the difficulty associated with stitching the images together. While such stitching may be accomplished using time-intensive and processor-resource intensive techniques, such techniques are incompatible with the timing requirements associated with live stitching and/or other practical considerations associated with the camera array and its processing capabilities. In contrast, the techniques disclosed herein are viable in live stitching contexts, particularly in resource-constrained situations. Moreover, implementations of the techniques disclosed herein have provided for a significant reduction of visible stitching errors under a wider range of input than conventional techniques used to achieve real time or near real time performance at typical resolutions on reasonable hardware.
[0040] The panoramic view that is generated in accordance with an example embodiment of the present invention is based upon images captured by at least two cameras. In the embodiment depicted in
[0041] As shown in
[0042] While the embodiment depicted in
[0043] Based upon the images captured by the cameras 10, a panoramic view is generated. In this regard, the panoramic view may be generated by an apparatus 20 as depicted in
[0044] Regardless of the manner in which the apparatus 20 is embodied, the apparatus of an example embodiment is configured to include or otherwise be in communication with a processor 22 and a memory device 24 and optionally the user interface 26 and/or a communication interface 28. In some embodiments, the processor (and/or co-processors or any other processing circuitry assisting or otherwise associated with the processor) may be in communication with the memory device via a bus for passing information among components of the apparatus. The memory device may be non-transitory and may include, for example, one or more volatile and/or non-volatile memories. In other words, for example, the memory device may be an electronic storage device (e.g., a computer readable storage medium) comprising gates configured to store data (e.g., bits) that may be retrievable by a machine (e.g., a computing device like the processor). The memory device may be configured to store information, data, content, applications, instructions, or the like for enabling the apparatus to carry out various functions in accordance with an example embodiment of the present invention. For example, the memory device could be configured to buffer input data for processing by the processor. Additionally or alternatively, the memory device could be configured to store instructions for execution by the processor.
[0045] As described above, the apparatus 20 may be embodied by a computing device. However, in some embodiments, the apparatus may be embodied as a chip or chip set. In other words, the apparatus may comprise one or more physical packages (e.g., chips) including materials, components and/or wires on a structural assembly (e.g., a baseboard). The structural assembly may provide physical strength, conservation of size, and/or limitation of electrical interaction for component circuitry included thereon. The apparatus may therefore, in some cases, be configured to implement an embodiment of the present invention on a single chip or as a single “system on a chip.” As such, in some cases, a chip or chipset may constitute means for performing one or more operations for providing the functionalities described herein.
[0046] The processor 22 may be embodied in a number of different ways. For example, the processor may be embodied as one or more of various hardware processing means such as a coprocessor, a microprocessor, a controller, a digital signal processor (DSP), a processing element with or without an accompanying DSP, or various other processing circuitry including integrated circuits such as, for example, an ASIC (application specific integrated circuit), an FPGA (field programmable gate array), a microcontroller unit (MCU), a hardware accelerator, a special-purpose computer chip, or the like. As such, in some embodiments, the processor may include one or more processing cores configured to perform independently. A multi-core processor may enable multiprocessing within a single physical package. Additionally or alternatively, the processor may include one or more processors configured in tandem via the bus to enable independent execution of instructions, pipelining and/or multithreading.
[0047] In an example embodiment, the processor 22 may be configured to execute instructions stored in the memory device 24 or otherwise accessible to the processor. Alternatively or additionally, the processor may be configured to execute hard coded functionality. As such, whether configured by hardware or software methods, or by a combination thereof, the processor may represent an entity (e.g., physically embodied in circuitry) capable of performing operations according to an embodiment of the present invention while configured accordingly. Thus, for example, when the processor is embodied as an ASIC, FPGA or the like, the processor may be specifically configured hardware for conducting the operations described herein. Alternatively, as another example, when the processor is embodied as an executor of software instructions, the instructions may specifically configure the processor to perform the algorithms and/or operations described herein when the instructions are executed. However, in some cases, the processor may be a processor of a specific device (e.g., a pass-through display or a mobile terminal) configured to employ an embodiment of the present invention by further configuration of the processor by instructions for performing the algorithms and/or operations described herein. The processor may include, among other things, a clock, an arithmetic logic unit (ALU) and logic gates configured to support operation of the processor.
[0048] In some embodiments, the apparatus 20 may optionally include a user interface 26 that may, in turn, be in communication with the processor 22 to provide output to the user and, in some embodiments, to receive an indication of a user input. As such, the user interface may include a display and, in some embodiments, may also include a keyboard, a mouse, a joystick, a touch screen, touch areas, soft keys, a microphone, a speaker, or other input/output mechanisms. Alternatively or additionally, the processor may comprise user interface circuitry configured to control at least some functions of one or more user interface elements such as a display and, in some embodiments, a speaker, ringer, microphone and/or the like. The processor and/or user interface circuitry comprising the processor may be configured to control one or more functions of one or more user interface elements through computer program instructions (e.g., software and/or firmware) stored on a memory accessible to the processor (e.g., memory device 24, and/or the like).
[0049] The apparatus 20 may optionally also include the communication interface 28. The communication interface may be any means such as a device or circuitry embodied in either hardware or a combination of hardware and software that is configured to receive and/or transmit data from/to a network and/or any other device or module in communication with the apparatus. In this regard, the communication interface may include, for example, an antenna (or multiple antennas) and supporting hardware and/or software for enabling communications with a wireless communication network. Additionally or alternatively, the communication interface may include the circuitry for interacting with the antenna(s) to cause transmission of signals via the antenna(s) or to handle receipt of signals received via the antenna(s). In some environments, the communication interface may alternatively or also support wired communication. As such, for example, the communication interface may include a communication modem and/or other hardware/software for supporting communication via cable, digital subscriber line (DSL), universal serial bus (USB) or other mechanisms.
[0050] Referring now to
[0051] The apparatus includes means, such as the processor 22, the memory 24, the communication interface 28 or the like, for receiving an image frame captured by a first camera and an image frame captured by a second camera, wherein the first camera and the second camera have different fields of view, wherein the different fields of view have a mutually overlapping portion. For example, and with reference to block 32 of
[0052] The apparatus also includes means, such as the processor 22, the memory 24, the communication interface 28 or the like, for combining the image frame captured by the first camera and the image frame captured by the second camera into a composite image. For example, and with reference to block 34 of
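As an illustrative sketch of this combining step, two horizontally overlapping frames may be cut at a seam column within the overlap and concatenated. The Python/NumPy function below is an assumption for illustration only: the name `composite_at_seam`, the hard cut with no blending, and the premise that both frames have already been warped to a common height are not the claimed implementation.

```python
import numpy as np

def composite_at_seam(left, right, overlap_w, seam_col):
    """Combine two horizontally overlapping frames into one composite,
    cutting at seam_col (counted in columns from the start of the
    overlap region). Both frames are assumed to share a height; a real
    pipeline would first warp them using camera calibration data."""
    wl = left.shape[1]
    # Keep left-frame pixels up to the seam, right-frame pixels after it.
    cut_left = wl - overlap_w + seam_col
    return np.concatenate([left[:, :cut_left], right[:, seam_col:]], axis=1)
```

The composite width is the sum of the two frame widths minus the overlap width, regardless of where the seam is placed within the overlap.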
[0053] When the neighboring images are selected and combined, such as in example implementations of block 34, differences between the images in their mutually overlapping portions may be visible to a viewer and/or otherwise undesirable. Consequently, establishing and positioning a seam between the two images that minimizes such differences is desirable, and can improve the experience of a viewer, particularly a viewer who is seeking an immersive experience associated with a virtual reality viewing system. In some contexts, the seam established between two neighboring images will be fixed in a predetermined position with respect to the images for all such neighboring frames. However, in other contexts, the location of the seam will not be fixed in a particular location for all neighboring frames, and can be set, such as by apparatus 20, on a frame-by-frame basis or in accordance with any other protocol. Regardless of whether the seam is in a fixed location or not, the seam itself may take any of a number of configurations. For example, a seam may be configured as a vertical line. In other examples the seam may take the form of an arc or any other shape, including but not limited to an optimized shape.
[0054] The apparatus 20 also includes means, such as the processor 22, the memory 24, the communication interface 28 or the like, for determining a seam location and a scale factor for the image frame captured by the first camera and the image frame captured by the second camera. For example, and with reference to
[0055] In some example implementations, the apparatus 20 also includes means, such as the processor 22, memory 24, the communication interface 28, or the like, for generating a plurality of scaled image frames by applying a plurality of scale factors to the image frame captured by at least one of the first camera or the second camera. For example, and with reference to block 38 of
[0056]
[0057] While camera 52 and camera 54 are arranged such that there is an overlapping portion of their respective fields of view and the images captured by camera 52 and camera 54 have mutually overlapping portions, the orientation and/or configuration of camera 52 and/or camera 54 may be such that the appearance of image elements common to images captured by camera 52 and camera 54 may be subject to parallax, differences in size, and other visibly perceptible differences. As shown in
[0058] With reference again to block 38 in
[0059] The apparatus 20 also includes means, such as the processor 22, memory 24, the communication interface 28, or the like, for identifying the scale factor based upon the computed error measurement. For example, and with reference to blocks 38 and 40 of
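The fixed-seam scale search of blocks 38 and 40 may be sketched as follows. The nearest-neighbour rescaling, the sum-of-absolute-differences error over a narrow band around the seam, and the candidate scale set are illustrative assumptions; a production implementation would use an interpolating warp and may use a different error measurement.

```python
import numpy as np

def rescale_about_center(img, s):
    """Nearest-neighbour rescale by factor s about the image center,
    standing in for a proper interpolating warp."""
    h, w = img.shape[:2]
    ys = np.clip(((np.arange(h) - h / 2) / s + h / 2).astype(int), 0, h - 1)
    xs = np.clip(((np.arange(w) - w / 2) / s + w / 2).astype(int), 0, w - 1)
    return img[np.ix_(ys, xs)]

def seam_band_error(a, b, seam_col, band=8):
    """Sum of absolute differences in a narrow band of columns
    around the fixed seam location."""
    lo, hi = max(seam_col - band // 2, 0), seam_col + band // 2
    return float(np.abs(a[:, lo:hi].astype(np.int64)
                        - b[:, lo:hi].astype(np.int64)).sum())

def best_scale(left_ov, right_ov, seam_col, scales=(0.95, 1.0, 1.05, 1.1)):
    """Evaluate each candidate scale factor at the fixed seam and
    return the one with the minimum error."""
    errs = [seam_band_error(left_ov, rescale_about_center(right_ov, s),
                            seam_col) for s in scales]
    return scales[int(np.argmin(errs))]
```

The selected scale factor would then be applied to subsequent frames from the same camera pair rather than recomputed per frame.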
[0060] As shown in block 48, the apparatus 20 also includes means, such as the processor 22, memory 24, the communication interface 28, or the like, for determining whether there are any additional cameras in a camera array for which a scale factor has not been calculated. As shown in
[0061] The apparatus 20 includes means, such as the processor 22, memory 24, the communication interface 28, or the like, for applying the seam location and the scale factor to a plurality of subsequent image frames captured by the first camera and a plurality of subsequent image frames captured by the second camera. For example, and with reference to block 50 of
[0062] In contexts and/or example implementations where the seam location is not fixed, process 30 in
[0063] Unlike example implementations that arise in contexts where the seam location between two images is fixed in advance, the computation of error associated with each scaled image need not be tied to a single region associated with the seam. The apparatus 20 also includes means, such as the processor 22, memory 24, the communication interface 28, or the like, for dividing the mutually overlapping portion of the fields of view of the image frame captured by the first camera and the image frame captured by the second camera into a plurality of overlapping sections. The apparatus 20 also includes means, such as the processor 22, memory 24, the communication interface 28, or the like, for computing an error measurement associated with each overlapping section of each scaled image frame from amongst the plurality of scaled image frames. For example, and as shown in block 44 of
[0064] The apparatus 20 also includes means, such as the processor 22, memory 24, the communication interface 28, or the like, for identifying the section and the scale factor based upon the computed error measurement. For example, and as shown in block 46 of
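The two paragraphs above describe dividing the mutually overlapping portion into sections, computing an error for each section of each scaled frame, and selecting the section and scale factor from the computed errors. A hedged sketch of one way to do this, assuming vertical sections, a sum-of-squared-differences metric, and a small set of candidate scales (the seam would then be placed within the winning section):

```python
import numpy as np

def choose_seam_and_scale(ov_a, ov_b, n_sections=4, scales=(0.9, 1.0, 1.1)):
    """Split the overlap into vertical sections, score every
    (section, scale) pair, and return the lowest-error pair."""
    h, w = ov_a.shape[:2]
    bounds = np.linspace(0, w, n_sections + 1).astype(int)
    best = (None, None, float("inf"))  # (section index, scale, error)
    for s in scales:
        # Nearest-neighbour rescale of the second overlap (illustrative).
        ys = np.clip((np.arange(h) / s).astype(int), 0, h - 1)
        xs = np.clip((np.arange(w) / s).astype(int), 0, w - 1)
        scaled_b = ov_b[np.ix_(ys, xs)].astype(float)
        diff2 = (ov_a.astype(float) - scaled_b) ** 2
        # Per-section error: sum the squared differences in each
        # vertical strip, keeping the best pair seen so far.
        for i in range(n_sections):
            err = float(diff2[:, bounds[i]:bounds[i + 1]].sum())
            if err < best[2]:
                best = (i, s, err)
    return best
```

Because the error is evaluated per section rather than over one fixed region, the seam is free to land wherever the two scaled images agree best.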
[0065] Referring now to
[0066] The apparatus also includes means, such as the processor 22, the memory 24, the communication interface 28 or the like, for, in response to receiving the control signal, determining the second seam location and the second scale factor for the second image frame captured by the first camera and the second image frame captured by the second camera. For example, and as depicted in block 64 of
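The control-signal behavior described here, reuse the current seam location and scale factor until a manual or automatic trigger arrives, then recompute them for the current frame pair, can be sketched as a small stateful wrapper. The `solve` callable stands in for the full seam/scale determination and is an assumption, as is the class itself:

```python
class SeamStitcher:
    """Hold the current seam/scale parameters and recompute them only
    when a trigger (manual request or automatic event) is received."""

    def __init__(self, solve):
        self.solve = solve   # callable(frame_a, frame_b) -> parameters
        self.params = None   # no parameters until the first frame pair

    def frame_pair(self, fa, fb, trigger=False):
        # Recompute on the first pair or whenever a control signal fires;
        # otherwise reuse the previously determined parameters.
        if self.params is None or trigger:
            self.params = self.solve(fa, fb)
        return self.params
```

Between triggers every frame pair is composited with the stored parameters, which is what makes the approach computationally efficient for a live 360° stream.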
[0067] The apparatus 20 also includes means, such as the processor 22, the memory 24, the communication interface 28 or the like, for applying the second seam location and the second scale factor to a second plurality of subsequent image frames captured by the first camera and a second plurality of subsequent image frames captured by the second camera. For example, as shown in
[0068] Implementations of process 60 may be particularly advantageous in situations where the position of a camera within an array and/or an entire camera array changes, such that a previously calculated seam location and/or scale factor may cease to be optimal or acceptable. Moreover, in situations where the trigger may be generated by a viewer of a 360° video stream, the recalculation of the seam location and/or scale factor may be particularly beneficial where the user is focused on content at or near the seam location, such that the recalculation and/or relocation of the seam may improve the viewer's experience.
[0069] Referring now to
[0070] The apparatus 20 may also include means, such as the processor 22, the memory 24, the communication interface 28, or the like, for, in response to detecting the set of data associated with motion of the image element, determining a direction associated with the motion and, in some instances, shifting the seam location in a direction opposite the direction associated with the motion. For example, and with reference to block 78 of
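The motion-responsive behavior above, when an image element near the seam is detected as moving, shift the seam in the direction opposite the motion so the element does not cross it, can be sketched as follows. The proximity threshold, shift step, and one-dimensional horizontal treatment are illustrative assumptions, not values from the specification:

```python
def shift_seam(seam_x, elem_x, motion_dx, width, near=24, step=16):
    """If a moving element is near the seam, move the seam a fixed step
    opposite the element's horizontal motion, clamped to the image."""
    if abs(elem_x - seam_x) > near or motion_dx == 0:
        return seam_x  # element not near the seam, or not moving
    # Element moving right -> shift seam left, and vice versa.
    new_x = seam_x - step if motion_dx > 0 else seam_x + step
    return max(0, min(width, new_x))
```

For example, an element at x=90 moving rightward toward a seam at x=100 causes the seam to retreat leftward to x=84, while a distant element leaves the seam untouched.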
[0071] As shown in
[0072] As described above,
[0073] Accordingly, blocks of the flowcharts support combinations of means for performing the specified functions and combinations of operations for performing the specified functions. It will also be understood that one or more blocks of the flowcharts, and combinations of blocks in the flowcharts, can be implemented by special purpose hardware-based computer systems which perform the specified functions, or combinations of special purpose hardware and computer instructions.
[0074] In some embodiments, certain ones of the operations above may be modified or further amplified. Furthermore, in some embodiments, additional optional operations may be included. Modifications, additions, or amplifications to the operations above may be performed in any order and in any combination.
[0075] Many modifications and other embodiments of the inventions set forth herein will come to mind to one skilled in the art to which these inventions pertain having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the inventions are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Moreover, although the foregoing descriptions and the associated drawings describe example embodiments in the context of certain example combinations of elements and/or functions, it should be appreciated that different combinations of elements and/or functions may be provided by alternative embodiments without departing from the scope of the appended claims. In this regard, for example, different combinations of elements and/or functions than those explicitly described above are also contemplated as may be set forth in some of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.