Distortion Compensation in Video Image Stabilization
20220394180 · 2022-12-08
Inventors
- Zejing Wang (Mountain View, CA, US)
- Florian Ciurea (Campbell, CA, US)
- Benjamin A. Darling (Cupertino, CA, US)
CPC Classification
H04N23/81
ELECTRICITY
H04N23/683
ELECTRICITY
H04N25/61
ELECTRICITY
International Classification
Abstract
Devices, methods, and non-transitory program storage devices are disclosed to provide techniques for distortion compensation in images that are subject to video image stabilization (VIS), e.g., electronic image stabilization (EIS). In some embodiments, the techniques comprise: obtaining a first image captured by a first image capture device; determining a first set of image parameters configured to apply a first distortion operation (e.g., an un-distortion operation) to the first image; determining a second set of image parameters configured to apply a first stabilization operation to a first version of the first image that has already had the first distortion operation applied; determining a third set of image parameters configured to apply a second distortion operation (e.g., an idealized, field of view-preserving, re-distortion operation) to the first image; and applying the determined first, second, and third sets of image parameters to the first image to generate a distortion-compensated and stabilized output image.
Claims
1. A device, comprising: a first image capture device, wherein the first image capture device comprises a first lens having a first set of lens characteristics; a display; a memory; and one or more processors operatively coupled to the memory, wherein the one or more processors are configured to execute instructions causing the one or more processors to: obtain a first image captured by the first image capture device; determine a first set of image parameters configured to apply a first distortion operation to the first image; determine a second set of image parameters configured to apply a first stabilization operation to a first version of the first image that has already had the first distortion operation applied; and apply the determined first set of image parameters and the determined second set of image parameters to the first image to generate a first output image, wherein the first output image is distortion-compensated and stabilized.
2. The device of claim 1, wherein the first distortion operation is configured to remove distortion in images captured by the first image capture device.
3. The device of claim 2, wherein the first distortion operation is based, at least in part, on the first set of lens characteristics.
4. The device of claim 3, wherein the first distortion operation is further based, at least in part, on one or more of: a focus position of the first lens during the capture of the first image; a focal length of the first lens during the capture of the first image; or an Optical Image Stabilization (OIS) system position of the first lens during the capture of the first image.
5. The device of claim 1, wherein the first set of lens characteristics are determined as part of a factory calibration process or based on design specifications of the first lens.
6. The device of claim 1, wherein the instructions causing the one or more processors to apply the determined first set of image parameters and second set of image parameters to the first image to generate a first output image further comprise instructions configured to cause the one or more processors to: apply the determined first set of image parameters and second set of image parameters to the first image in a single pass operation.
7. The device of claim 1, wherein the instructions further comprise instructions causing the one or more processors to: determine a third set of image parameters configured to apply a second distortion operation to a second version of the first image that has already had the first distortion operation and the first stabilization operation applied, wherein the instructions to apply the determined first set of image parameters and second set of image parameters to the first image further comprise instructions to: apply the determined first set of image parameters, second set of image parameters, and third set of image parameters to the first image to generate the first output image.
8. The device of claim 7, wherein the first distortion operation is configured to remove distortion in images captured by the first image capture device, and wherein the second distortion operation is configured to re-apply an idealized distortion pattern aligned to the first output image.
9. The device of claim 8, wherein the instructions causing the one or more processors to apply the determined first set of image parameters, second set of image parameters, and third set of image parameters to the first image to generate a first output image further comprise instructions configured to cause the one or more processors to: apply the determined first set of image parameters, second set of image parameters, and third set of image parameters to the first image in a single pass operation.
10. The device of claim 1, wherein the instructions causing the one or more processors to determine a first set of image parameters configured to apply a first distortion operation to the first image further comprise instructions configured to cause the one or more processors to: determine a first two-dimensional (2D) mesh of vertex points distributed over the first image; and determine a set of one or more distortion parameters for each vertex point in the first 2D mesh of vertex points, wherein the first set of image parameters comprises the determined set of one or more distortion parameters for each vertex point in the first 2D mesh of vertex points.
11. The device of claim 10, wherein the instructions causing the one or more processors to determine a first two-dimensional (2D) mesh of vertex points distributed over the first image further comprise instructions configured to cause the one or more processors to: determine a density for the vertex points in the 2D mesh of vertex points, wherein the determined density is based, at least in part, on an estimated amount of distortion in images captured by the first image capture device.
12. The device of claim 11, wherein the determined density comprises a non-uniform density across the 2D mesh of vertex points.
13. The device of claim 1, wherein the instructions causing the one or more processors to determine a first set of image parameters configured to apply a first distortion operation to the first image further comprise instructions configured to cause the one or more processors to: determine a set of one or more distortion parameters for each pixel in the first output image, wherein the first set of image parameters comprises the determined set of one or more distortion parameters for each pixel in the first output image.
14. The device of claim 1, wherein the first distortion operation is based on approximating a rectilinear projection for the first image.
15. The device of claim 1, wherein the first distortion operation is based on approximating a curvilinear projection for the first image, and wherein the first distortion operation is further configured to remove curvilinear distortion in images captured by the first image capture device.
16. The device of claim 1, wherein the instructions further comprise instructions causing the one or more processors to: crop the first output image to have a first set of desired dimensions or a first desired aspect ratio.
17. A non-transitory computer readable medium comprising computer readable instructions executable by one or more processors to: obtain a first image captured by a first image capture device, wherein the first image capture device comprises a first lens having a first set of lens characteristics; determine a first set of image parameters configured to apply a first distortion operation to the first image; determine a second set of image parameters configured to apply a first stabilization operation to a first version of the first image that has already had the first distortion operation applied; and apply the determined first set of image parameters and the determined second set of image parameters to the first image to generate a first output image, wherein the first output image is distortion-compensated and stabilized.
18. The non-transitory computer readable medium of claim 17, wherein the instructions further comprise instructions executable by the one or more processors to: determine a third set of image parameters configured to apply a second distortion operation to a second version of the first image that has already had the first distortion operation and the first stabilization operation applied, wherein the instructions to apply the determined first set of image parameters and second set of image parameters to the first image further comprise instructions to: apply the determined first set of image parameters, second set of image parameters, and third set of image parameters to the first image to generate the first output image.
19. The non-transitory computer readable medium of claim 18, wherein the first distortion operation is configured to remove distortion in images captured by the first image capture device, and wherein the second distortion operation is configured to re-apply an idealized distortion pattern aligned to the first output image.
20. An image processing method, comprising: obtaining a first image captured by a first image capture device, wherein the first image capture device comprises a first lens having a first set of lens characteristics; determining a first set of image parameters configured to apply a first distortion operation to the first image; determining a second set of image parameters configured to apply a first stabilization operation to a first version of the first image that has already had the first distortion operation applied; and applying the determined first set of image parameters and the determined second set of image parameters to the first image to generate a first output image, wherein the first output image is distortion-compensated and stabilized.
21. The method of claim 20, further comprising: determining a third set of image parameters configured to apply a second distortion operation to a second version of the first image that has already had the first distortion operation and the first stabilization operation applied, wherein applying the determined first set of image parameters and second set of image parameters to the first image further comprises: applying the determined first set of image parameters, second set of image parameters, and third set of image parameters to the first image to generate the first output image.
22. The method of claim 21, wherein the first distortion operation is configured to remove distortion in images captured by the first image capture device, and wherein the second distortion operation is configured to re-apply an idealized distortion pattern aligned to the first output image.
23. An image processing method, comprising: obtaining a first image captured by a first image capture device, wherein the first image capture device comprises a first lens having a first set of lens characteristics; generating a first output image from the first image by performing the following operations on the first image: a first distortion operation; a first stabilization operation using an output of the first distortion operation; and a second distortion operation using an output of the first stabilization operation.
24. The method of claim 23, wherein the output of the first distortion operation comprises an undistorted version of the first image.
25. The method of claim 24, wherein the output of the first stabilization operation comprises a stabilized and undistorted version of the first image.
26. The method of claim 25, wherein the second distortion operation is configured to re-apply an idealized distortion pattern that is aligned to the first output image to the stabilized and undistorted version of the first image.
27. The method of claim 23, wherein the output of the first distortion operation comprises a first set of image parameters configured to apply the first distortion operation to the first image.
28. The method of claim 27, wherein the output of the first stabilization operation comprises a second set of image parameters configured to apply the first stabilization operation to a first version of the first image that has already had the first distortion operation applied.
29. The method of claim 28, wherein performing the second distortion operation comprises: determining a third set of image parameters configured to apply the second distortion operation to a second version of the first image that has already had the first distortion operation and the first stabilization operation applied.
30. The method of claim 29, wherein performing the second distortion operation further comprises: applying the determined first set of image parameters, second set of image parameters, and third set of image parameters to the first image to generate the first output image.
31. The method of claim 29, wherein the second distortion operation is configured to re-apply an idealized distortion pattern that is aligned to the first output image to the second version of the first image that has already had the first distortion operation and the first stabilization operation applied.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
[0016] In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the inventions disclosed herein. It will be apparent, however, to one skilled in the art that the inventions may be practiced without these specific details. In other instances, structure and devices are shown in block diagram form in order to avoid obscuring the inventions. References to numbers without subscripts or suffixes are understood to reference all instances of subscripts and suffixes corresponding to the referenced number. Moreover, the language used in this disclosure has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, and, thus, resort to the claims may be necessary to determine such inventive subject matter. Reference in the specification to “one embodiment” or to “an embodiment” (or similar) means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least one embodiment of one of the inventions, and multiple references to “one embodiment” or “an embodiment” should not be understood as necessarily all referring to the same embodiment.
[0017] Exemplary Distortion Compensation and Electronic Image Stabilization Operations
[0018] Turning now to
[0019] Following arrow 103.sub.1 to the right from image 102.sub.0, an exemplary mesh 104 for applying a distortion operation (in this case, an un-distortion operation) that could be applied to images captured by the first lens of the image capture device is illustrated. It is to be understood that the distortion operations described herein may be implemented as a set of per-pixel warping operations, a set of pixel warping operations specified at the vertex points of a mesh (with values interpolated for image pixels not located at the mesh vertex points), estimated global homography matrices, and the like. In some cases, the values embodied in mesh 104 may be determined as part of a factory calibration process for a given lens or simply be determined based on provided lens design specifications for the given lens. The use of exemplary mesh 104 in this example is merely intended to be illustrative of the potential effects a distortion operation (e.g., an “un-distortion” operation) could have on a captured image's scene content and/or FOV.
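For illustration only (this sketch is not taken from the disclosure), the per-vertex mesh representation described above may be sampled for pixels that fall between vertex points using ordinary bilinear interpolation, e.g.:

```python
import numpy as np

def sample_mesh(mesh, x, y):
    """Bilinearly interpolate an (H, W, 2) mesh of per-vertex warp
    values at a fractional vertex-space location (x, y).

    Pixels that do not fall exactly on a mesh vertex receive values
    interpolated from the four surrounding vertices."""
    h, w = mesh.shape[:2]
    x0 = int(np.clip(np.floor(x), 0, w - 2))
    y0 = int(np.clip(np.floor(y), 0, h - 2))
    fx, fy = x - x0, y - y0
    top = (1 - fx) * mesh[y0, x0] + fx * mesh[y0, x0 + 1]
    bot = (1 - fx) * mesh[y0 + 1, x0] + fx * mesh[y0 + 1, x0 + 1]
    return (1 - fy) * top + fy * bot
```

In an actual implementation, such interpolation would typically be performed in hardware by the texture units of a GPU rather than in scalar code.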
[0020] Following arrow 103.sub.2 to the right from exemplary mesh 104, an exemplary undistorted version 106.sub.0 of the image 102.sub.0, captured at a time T=0, is shown. In this example, as illustrated by arrows 107, the un-distortion operation applied by exemplary mesh 104 has removed the aforementioned unnatural elongated look of the human subject's face, making the overall height of the face somewhat smaller—and more natural-looking. However, as a consequence of the warping applied by exemplary mesh 104 in this example, and as illustrated by arrows 117, the undistorted version 106.sub.0 had to be cropped slightly to return to a rectangular shape, thus resulting in a slightly reduced FOV from that of image 102.sub.0. As will be appreciated, depending on the lens distortion pattern of a given lens, the amount of FOV lost in an undistorted image frame may or may not be objectionable in a given implementation.
[0021] Turning now to arrow 108.sub.1, pointing down from image 102.sub.0, a second image, image 102.sub.1, captured at a subsequent time, T=1, is shown. In this example, as indicated by arrow 109, a user of the image capture device has apparently rotated the image capture device by 10 degrees to the right between the capture of image 102.sub.0 at T=0 and image 102.sub.1 at T=1. Thus, input image 102.sub.1 represents a distorted and unstabilized image frame captured at T=1. As is illustrated by distortion axis arrow 101.sub.1, the rotation of the image capture device at T=1 has also caused the direction of the distortion to rotate with respect to the captured scene content. In other words, although the lens distortion pattern in this example continues to manifest in an elongated or stretched look through the center of the captured image frame, because the image capture device has been rotated, the distortion axis now no longer aligns exactly with the central axis of the nose of the human subject, and instead is rotated by the aforementioned ten degrees with respect to the human subject's face as captured in image 102.sub.1.
[0022] Next, following arrow 113.sub.1 to the right from image 102.sub.1, in the example 100, a set of electronic image stabilization operations 110.sub.1 are determined, i.e., in order to stabilize the scene content of image 102.sub.1 with respect to image 102.sub.0. As illustrated in
[0023] Following arrow 113.sub.2 to the right from image stabilization operations 110.sub.1, a distorted, but stabilized, output image frame at T=1 (112.sub.1) is shown that exhibits one or more so-called wobble artifacts 116, which may present themselves visually as areas of “wobbling” in the stabilized video sequences, wherein, e.g., portions of rigid structures in the captured video image frames appear to be pulsing or moving in localized and seemingly arbitrary patterns during playback of the stabilized video. As shown in output image 112.sub.1, the distortion axis arrow 101.sub.0, introduced above with respect to image 102.sub.0, and the distortion axis arrow 101.sub.1, are offset from each other by a rotation angle 127, due to the rotation of the image capture device between time T=0 and time T=1 in this example. As may now be appreciated, in this example 100, the lack of an un-distortion operation being applied to input image 102.sub.1 results in a lens distortion pattern remaining present in the individual captured images in the video sequence, which will move around a small amount from frame to frame with respect to the scene content as the video is stabilized (e.g., as illustrated by rotation angle 127), thereby producing the wobble artifacts. The wobble artifacts may also be caused, in part, by the stabilization operation attempting to stabilize portions of the capturing lens having differing physical lens characteristics from one another. In other words, while the EIS operation is stabilizing the captured scene content in the image, it is also simultaneously causing the lens distortion pattern to become de-stabilized—resulting in the potential for the unwanted wobble artifacts 116 in the stabilized video sequence.
[0024] One solution to this issue of the lens distortion pattern becoming destabilized with respect to the lens's optical center may be to perform the distortion compensation operations on captured images prior to the performance of electronic image stabilization operations. In this way, because there is no longer any distortion in the captured images, there are no longer slight movements in the distortion pattern from frame to frame in the video sequence, which has been found to reduce or eliminate the presence of wobble artifacts.
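Why the ordering matters can be seen from the fact that the two warps do not commute. The following toy sketch (purely illustrative; the centers, angle, and distortion coefficient are arbitrary stand-in values, not taken from the disclosure) models stabilization as a rotation about one center and lens distortion as a radial scaling about a different center, and shows that applying them in the two possible orders lands the same point in different places:

```python
import numpy as np

def rotate(p, degrees, center):
    """Rotate point p about a center (a stand-in for a stabilization warp)."""
    t = np.deg2rad(degrees)
    rot = np.array([[np.cos(t), -np.sin(t)],
                    [np.sin(t),  np.cos(t)]])
    return rot @ (p - center) + center

def radial(p, k, center):
    """Simple radial scaling about a center (a stand-in for lens distortion)."""
    d = p - center
    return center + d * (1.0 + k * d.dot(d))

lens_center = np.array([0.0, 0.0])    # hypothetical optical center
image_center = np.array([10.0, 0.0])  # hypothetical stabilization pivot
p = np.array([5.0, 5.0])

undistort_then_stabilize = rotate(radial(p, 0.001, lens_center), 10, image_center)
stabilize_then_undistort = radial(rotate(p, 10, image_center), 0.001, lens_center)
# The two orderings differ, which is why stabilizing first would
# de-stabilize the lens distortion pattern relative to the scene.
```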
[0025] Turning now to
[0026] As explained above, in example 150, following arrow 153.sub.1 to the right from input image 152.sub.1, rather than first applying an electronic image stabilization operation, an exemplary distortion warping mesh 118, aligned with the input image 152.sub.1, may be applied to the input image 152.sub.1. In some embodiments, the warp parameters applied by mesh 118 may be computed for each pixel that will exist in the output image, but using distortion parameters from the input image. In some cases, this may be done by using EIS warp parameters (e.g., as determined by electronic image stabilization operations 160.sub.1, discussed below) to map between output image pixels and input image pixels, thereby allowing distortion parameters to be computed at each exact pixel location in the output image. As illustrated, regions 154.sub.1 and 154.sub.2 reflect regions of the distorted image frame that will not be present in the un-distorted image frame, i.e., as a result of the un-distortion operation being applied. Also present in this mesh 118 is an annotation of a dashed-line crop region 155, which illustrates the extent to which the FOV of the undistorted and stabilized output image frame at T=1 (162.sub.1) may be reduced with respect to the FOV of the original or “unprocessed” distorted and unstabilized image frame at T=1 (152.sub.1), e.g., if no subsequent “re-distortion” operation is applied to the image data, as will be discussed in greater detail below.
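The output-referred computation described in this paragraph may be sketched as follows. This is a hypothetical illustration: `eis_map` and `undistort` are placeholder callables standing in for the actual EIS warp and the lens un-distortion model, neither of which is specified in code form in the disclosure:

```python
import numpy as np

def source_coords(out_shape, eis_map, undistort):
    """For every output pixel, use the EIS warp to find its location in
    the (undistorted) input frame, then evaluate the un-distortion model
    at that exact location, yielding one source coordinate per pixel.

    eis_map:   (x, y) arrays -> (x, y) arrays in the undistorted input frame
    undistort: (x, y) arrays -> (x, y) arrays in the raw (distorted) frame
    """
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    ux, uy = eis_map(xs, ys)   # output pixel -> undistorted input frame
    return undistort(ux, uy)   # undistorted location -> raw sensor location
```

Mapping from output pixels back to source coordinates in this way is what allows the distortion parameters to be evaluated at exact output-pixel locations, as the paragraph describes.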
[0027] Next, following arrow 153.sub.2 to the right from mesh 118, in the example 150, a set of electronic image stabilization operations 160.sub.1 are determined to stabilize the scene content of image 152.sub.1 with respect to image 102.sub.0. As with the example 100 illustrated in
[0028] As mentioned above, the FOV of the undistorted and stabilized output image frame at T=1 (162.sub.1) may be reduced with respect to the FOV of the original or “unprocessed” distorted and unstabilized image frame at T=1 (152.sub.1) if no subsequent “re-distortion” operation is applied to the image data, i.e., reduced to the FOV indicated by crop region 155. If maintaining an equivalent FOV in the undistorted and stabilized image frame is important to a given implementation, then, following arrow 153.sub.3 to the right from electronic image stabilization operations 160.sub.1, in some embodiments, a re-distortion operation (e.g., as illustrated by mesh 164) may optionally be applied to the stabilized version of image 152.sub.1. In some embodiments, the re-distortion operation may be configured to add back in an idealized lens distortion that is centered at (and aligned with) the output image frame 162.sub.1's center, i.e., as opposed to at the input image frame's center. This re-distortion operation has the result of effectively shifting the distortion's optical center from the center of the input image to the center of the output image. Because the lens distortion pattern is fixed to the stabilized output image, i.e., rather than to the unstabilized input image, the appearance of the wobble artifacts may be mitigated or eliminated entirely. The re-distortion operation may also have the effect of restoring the FOV of the output image frame to the original input image frame's FOV. (Note: The optional nature of the application of the re-distortion operation applied by mesh 164 is indicated by the dashed line box around mesh 164 in
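An idealized re-distortion of the kind described above may be sketched as a simple radial model centered at the output image center. This is an assumption for illustration: the disclosure does not specify the distortion model, and `k1` is a hypothetical coefficient, not a value from the patent:

```python
import numpy as np

def redistort(x, y, cx, cy, k1):
    """Apply an idealized radial distortion centered at the OUTPUT image
    center (cx, cy), rather than at the input frame's optical center.
    k1 is a hypothetical distortion coefficient; k1 = 0 is the identity."""
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    scale = 1.0 + k1 * r2
    return cx + dx * scale, cy + dy * scale
```

Because the pattern produced this way is fixed to the stabilized output frame rather than to the unstabilized input frame, it no longer moves with respect to the scene content from frame to frame, which is the property that mitigates the wobble artifacts.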
[0029] Finally, following arrow 153.sub.4 to the right from mesh 164, a re-distorted and stabilized image frame at T=1 (162.sub.1) may be generated, with an FOV similar (or equal) to that of the original or “unprocessed” distorted and unstabilized input image frame at T=1 (152.sub.1). Due, at least in part, to the removal of the lens distortion prior to the stabilization operations 160.sub.1, when the idealized distortion is re-applied by mesh 164, the direction of distortion axis arrow 114.sub.1B remains the same as that of distortion axis arrow 101.sub.1, with respect to both the input image frame and the captured scene content (which has been stabilized between T=0 and T=1). As may now be appreciated, while the output image 112.sub.1 in the example 100 of
[0030] Exemplary Methods for Performing Distortion Compensation in Electronic Image Stabilization Operations
[0032] Turning first to
[0033] Next, at Step 212, the method 200 may determine a second set of image parameters configured to apply a first stabilization operation to a first version of the first image that has already had the first distortion operation applied. In some cases, the first stabilization may comprise one or more EIS operations, e.g., operations wherein a warping is determined for each pixel (or groups of pixels) in a given image frame in such a way that the resulting sequence of video image frames produces a stabilized video (i.e., subject to the availability of a sufficient amount of overscan pixels around the periphery of the displayed portion of the captured image to accommodate the determined warpings).
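The overscan constraint mentioned in Step 212 can be illustrated with a minimal sketch. This is a deliberate simplification: real EIS warps are per-pixel (or per-vertex) rather than a single translation, and the names below are illustrative only:

```python
def clamp_shift(shift_x, shift_y, overscan_x, overscan_y):
    """Clamp a per-frame stabilization translation (in pixels) so that
    the warped crop window stays inside the available overscan margin
    around the displayed portion of the captured image."""
    clamped_x = max(-overscan_x, min(shift_x, overscan_x))
    clamped_y = max(-overscan_y, min(shift_y, overscan_y))
    return clamped_x, clamped_y
```

When the motion to be corrected exceeds the overscan budget, the warp must be limited in this fashion (or the correction spread over neighboring frames), since no pixel data exists beyond the overscan region.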
[0034] Finally, at Step 214, the method 200 may apply the determined first set of image parameters and the determined second set of image parameters to the first image (e.g., in a single pass operation) to generate a first output image, wherein the first output image is distortion-compensated and stabilized. Although not necessary, applying the first and second sets of parameters in a single pass operation, e.g., by submitting the first and second sets of image parameters to a processing unit (e.g., a central processing unit (CPU) or graphics processing unit (GPU)) together, such as in the form of a single 2D mesh of values for a set of vertex points distributed across the extent of the first output image (or a value for each pixel in the first output image), may save additional processing resources. For example, in some embodiments, the values submitted to the processing unit may reflect the resulting manipulations that need to be applied to the corresponding first output image pixels based on the application of the first distortion operation and then the first stabilization operation in sequence—while only requiring a single pass of modifications to the first output image's actual pixel data.
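The single pass operation of Step 214 amounts to composing the two warp stages into one mesh before any pixel data is touched. The following sketch (an illustrative assumption about representation: each mesh is an (H, W, 2) array of source coordinates on a shared vertex grid) folds a stabilization mesh and an un-distortion mesh into a single combined mesh:

```python
import numpy as np

def bilerp(mesh, x, y):
    """Bilinearly sample an (H, W, 2) mesh at arrays of coordinates."""
    h, w = mesh.shape[:2]
    x0 = np.clip(np.floor(x).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, h - 2)
    fx = (x - x0)[..., None]
    fy = (y - y0)[..., None]
    top = (1 - fx) * mesh[y0, x0] + fx * mesh[y0, x0 + 1]
    bot = (1 - fx) * mesh[y0 + 1, x0] + fx * mesh[y0 + 1, x0 + 1]
    return (1 - fy) * top + fy * bot

def compose(stabilize_mesh, undistort_mesh):
    """Fold two warp stages into one: follow the stabilization mesh from
    each output vertex into the intermediate frame, then look up the
    un-distortion mesh there.  The combined mesh can be submitted to a
    CPU or GPU for a single remap pass over the actual pixel data."""
    return bilerp(undistort_mesh,
                  stabilize_mesh[..., 0], stabilize_mesh[..., 1])
```

Resampling the image once through the combined mesh avoids an intermediate buffer and a second round of interpolation losses, which is the processing-resource saving the paragraph describes.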
[0035] The techniques of method 200 should provide a completely undistorted and stabilized image (at least to the extent possible, based on the image capture device's characteristics and the scene being captured) that does not exhibit any so-called ‘wobble’ stabilization artifacts (which, as described above, may be caused by attempting to undistort an image after it has been stabilized, resulting in the un-distortion operations not being applied at the actual optical center of the captured first image). One tradeoff, however, may be a reduced field of view (FOV) in the first output image as compared to the first image, owing to the fact that the un-distortion operations may cause the first output image to have a non-rectangular shape, which may, in some implementations, need to be cropped down to a rectangular shape before being displayed or sent to another application for display or use. Thus, according to some embodiments, e.g., as discussed below with reference to
[0036] Turning now to
[0037] Next, at Step 256, the method 250 may apply the determined first set of image parameters, second set of image parameters, and third set of image parameters to the first image (again, e.g., in a single pass operation, in order to conserve processing resources) to generate a first output image, wherein the first output image is distortion-compensated and stabilized. As with the discussion of Step 214 of
[0038] Turning now to
[0039] Next, at Step 266, the operations of Step 204 may further comprise determining a set of one or more distortion parameters for each vertex point in the first 2D mesh of vertex points, wherein the first set of image parameters determined as part of Step 204 comprises the determined set of one or more distortion parameters for each vertex point in the first 2D mesh of vertex points. These parameters may then be passed on for the performance of Step 212 in
[0040] Turning now to
[0041] It is to be understood that the optional implementations details described above in reference to
[0042] While it may be preferable to have the same number of image parameter values in each of the determined first, second, and third image parameter sets—and to have those image parameter values be aligned at the same locations across the extent of the first output image—if the numbers and/or locations of the determined image parameters for any of the first, second, or third sets do not initially align, appropriate interpolation (or other estimation) methods may be employed to ensure that aligning image parameter values are determined for each of the first, second, and third sets, such that a single mesh (or other desired array of values) of combined warping values, i.e., values reflecting the combined performance of the first distortion operation (e.g., an un-distortion operation), the first stabilization operation and, if desired, the second distortion operation (e.g., a re-distortion operation), may be submitted to a selected processing unit (e.g., a CPU or GPU) together for the performance of the image pixel warpings in a single pass operation.
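The interpolation step described above, for bringing differently sized parameter sets onto a common grid, may be sketched as a separable linear resampling of a parameter mesh (an illustrative approach; the disclosure leaves the exact interpolation or estimation method open):

```python
import numpy as np

def resample_mesh(mesh, new_h, new_w):
    """Linearly resample an (h, w, 2) parameter mesh onto a new grid so
    that parameter sets of different sizes can be aligned and combined
    vertex-for-vertex before single-pass submission to a CPU or GPU."""
    h, w = mesh.shape[:2]
    xs = np.linspace(0.0, w - 1.0, new_w)
    ys = np.linspace(0.0, h - 1.0, new_h)
    tmp = np.empty((h, new_w, 2))
    for j in range(h):                      # interpolate along rows first
        for c in range(2):
            tmp[j, :, c] = np.interp(xs, np.arange(w), mesh[j, :, c])
    out = np.empty((new_h, new_w, 2))
    for i in range(new_w):                  # then along columns
        for c in range(2):
            out[:, i, c] = np.interp(ys, np.arange(h), tmp[:, i, c])
    return out
```

Once all three parameter sets share the same grid, the per-vertex values can be combined into the single mesh of warping values described in the paragraph above.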
[0043] Exemplary Electronic Computing Devices
[0044] Referring now to
[0045] Processor 305 may execute instructions necessary to carry out or control the operation of many functions performed by electronic device 300 (e.g., such as the generation and/or processing of images in accordance with the various embodiments described herein). Processor 305 may, for instance, drive display 310 and receive user input from user interface 315. User interface 315 can take a variety of forms, such as a button, keypad, dial, a click wheel, keyboard, display screen and/or a touch screen. User interface 315 could, for example, be the conduit through which a user may view a captured video stream and/or indicate particular image frame(s) that the user would like to capture (e.g., by clicking on a physical or virtual button at the moment the desired image frame is being displayed on the device's display screen). In one embodiment, display 310 may display a video stream as it is captured while processor 305 and/or graphics hardware 320 and/or image capture circuitry contemporaneously generate and store the video stream in memory 360 and/or storage 365. Processor 305 may be a system-on-chip (SOC) such as those found in mobile devices and include one or more dedicated graphics processing units (GPUs). Processor 305 may be based on reduced instruction-set computer (RISC) or complex instruction-set computer (CISC) architectures or any other suitable architecture and may include one or more processing cores. Graphics hardware 320 may be special purpose computational hardware for processing graphics and/or assisting processor 305 in performing computational tasks.
In one embodiment, graphics hardware 320 may include one or more programmable graphics processing units (GPUs) and/or one or more specialized SOCs, e.g., an SOC specially designed to implement neural network and machine learning operations (e.g., convolutions) in a more energy-efficient manner than either the main device central processing unit (CPU) or a typical GPU, such as Apple's Neural Engine processing cores.
[0046] Image capture device 350 may comprise one or more camera units configured to capture images, e.g., images which may be processed to generate distortion-compensated and stabilized versions of said captured images, e.g., in accordance with this disclosure. Output from image capture device 350 may be processed, at least in part, by video codec(s) 355 and/or processor 305 and/or graphics hardware 320, and/or a dedicated image processing unit or image signal processor incorporated within image capture device 350. Images so captured may be stored in memory 360 and/or storage 365. Memory 360 may include one or more different types of media used by processor 305, graphics hardware 320, and image capture device 350 to perform device functions. For example, memory 360 may include memory cache, read-only memory (ROM), and/or random access memory (RAM). Storage 365 may store media (e.g., audio, image and video files), computer program instructions or software, preference information, device profile information, and any other suitable data. Storage 365 may include one or more non-transitory storage media including, for example, magnetic disks (fixed, floppy, and removable) and tape, optical media such as CD-ROMs and digital video disks (DVDs), and semiconductor memory devices such as Electrically Programmable Read-Only Memory (EPROM), and Electrically Erasable Programmable Read-Only Memory (EEPROM). Memory 360 and storage 365 may be used to retain computer program instructions or code organized into one or more modules and written in any desired computer programming language. When executed by, for example, processor 305, such computer program code may implement one or more of the methods or processes described herein.
Power source 375 may comprise a rechargeable battery (e.g., a lithium-ion battery, or the like) or other electrical connection to a power supply, e.g., to a mains power source, that is used to manage and/or provide electrical power to the electronic components and associated circuitry of electronic device 300.
[0047] It is to be understood that the above description is intended to be illustrative, and not restrictive. For example, the above-described embodiments may be used in combination with each other. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. The scope of the invention therefore should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.