Method and apparatus for managing robot system
11577400 · 2023-02-14
Assignee
Inventors
CPC classification
B25J9/161
PERFORMING OPERATIONS; TRANSPORTING
B25J9/1664
PERFORMING OPERATIONS; TRANSPORTING
G05B2219/39057
PHYSICS
B25J13/089
PERFORMING OPERATIONS; TRANSPORTING
International classification
B25J13/08
PERFORMING OPERATIONS; TRANSPORTING
Abstract
Embodiments of the present disclosure provide methods for managing a robot system. In one method, orientations for links in the robot system may be obtained when the links are arranged in at least one posture, where each of the orientations indicates a direction pointed by one of the links. At least one image of an object placed in the robot system may be obtained from a vision device equipped on one of the links. Based on the orientations and the at least one image, a first mapping may be determined between a vision coordinate system of the vision device and a link coordinate system of the link. Further, embodiments of the present disclosure provide apparatuses, systems, and computer readable media for managing a robot system. The vision device may be calibrated by the first mapping and may be used to manage operations of the robot system.
Claims
1. A method for managing a robot system, comprising: obtaining orientations for links in the robot system when the links are arranged in a first posture, each of the orientations indicating a direction pointed by one of the links; obtaining, from a vision device equipped on one of the links, at least one image of an object placed in the robot system; and determining, based on the orientations and the at least one image, a first mapping between a vision coordinate system of the vision device and a link coordinate system of the link; wherein the first posture includes any posture in the robot system; wherein the first posture is not limited to a fixed position.
2. The method of claim 1, wherein determining the first mapping comprises: determining, based on the orientations, a second mapping between the link coordinate system and a world coordinate system of the robot system; determining, based on the at least one image, a third mapping between the vision coordinate system and the world coordinate system; and determining the first mapping based on a transformation relationship between the second and third mappings.
3. The method of claim 2, wherein determining the second mapping further comprises: determining the second mapping based on positions of the links; wherein the positions of the links include a height and a width of the link.
4. The method of claim 2, wherein determining the third mapping comprises: obtaining a first measurement and a second measurement for a first point and a second point in an image of the at least one image respectively, the first and second points located in a first axis of the vision coordinate system; obtaining a third measurement of a third point in the image, the third point located in a second axis of the vision coordinate system; and determining the third mapping based on the obtained first, second and third measurements.
5. The method of claim 1, further comprising: calibrating the vision device with a calibration board of the vision device before obtaining the at least one image.
6. The method of claim 1, wherein, obtaining the orientations for the links comprises: obtaining a first group of orientations for the links when the links are arranged in the first posture; obtaining a second group of orientations for the links when the links are arranged in a second posture; and obtaining the at least one image of the object comprises: obtaining a first image when the links are arranged in the first posture, and obtaining a second image when the links are arranged in the second posture.
7. The method of claim 6, wherein determining the first mapping comprises: constructing a transformation relationship between the first mapping, the first and second groups of orientations and the first and second images, the first mapping being an unknown variable in the transformation relationship; and solving the transformation relationship so as to determine the first mapping.
8. The method of claim 1, further comprising: obtaining, from the vision device, an image of a target object to be processed by the robot system; determining a source coordinate of the target object in the obtained image, the source coordinate represented in the vision coordinate system; determining a destination coordinate of the target object based on the source coordinate and the first mapping, the destination coordinate represented in a world coordinate system; and processing the target object based on the destination coordinate.
9. An apparatus for managing a robot system, comprising: an orientation obtaining unit configured to obtain orientations for links in the robot system when the links are arranged in a first posture, each of the orientations indicating a direction pointed by one of the links; an image obtaining unit configured to obtain, from a vision device equipped on one of the links, at least one image of an object placed in the robot system; and a determining unit configured to determine, based on the orientations and the at least one image, a first mapping between a vision coordinate system of the vision device and a link coordinate system of the link; wherein the first posture includes any posture in the robot system; wherein the first posture is not limited to a fixed position.
10. The apparatus of claim 9, wherein the determining unit comprises: a second mapping determining unit configured to determine, based on the orientations, a second mapping between the link coordinate system and a world coordinate system of the robot system; a third mapping determining unit configured to determine, based on the at least one image, a third mapping between the vision coordinate system and the world coordinate system; and a first mapping determining unit configured to determine the first mapping based on a transformation relationship between the second and third mappings.
11. The apparatus of claim 10, wherein the second mapping determining unit is further configured to: determine the second mapping based on positions of the links; wherein the positions of the links are based on a height and width of the link.
12. The apparatus of claim 10, wherein the third mapping determining unit comprises: a measurement obtaining unit configured to: obtain a first measurement and a second measurement for a first point and a second point in an image of the at least one image respectively, the first and second points located in a first axis of the vision coordinate system; obtain a third measurement of a third point in the image, the third point located in a second axis of the vision coordinate system; and a mapping determining unit configured to determine the third mapping based on the obtained first, second and third measurements.
13. The apparatus of claim 9, further comprising: a calibrating unit configured to calibrate the vision device with a calibration board of the vision device before obtaining the at least one image.
14. The apparatus of claim 9, wherein, the orientation obtaining unit is further configured to: obtain a first group of orientations for the links when the links are arranged in the first posture, and obtain a second group of orientations for the links when the links are arranged in a second posture; and the image obtaining unit is further configured to: obtain a first image when the links are arranged in the first posture, and obtain a second image when the links are arranged in the second posture.
15. The apparatus of claim 14, wherein the determining unit comprises: a constructing unit configured to construct a transformation relationship between the first mapping, the first and second groups of orientations and the first and second images, the first mapping being an unknown variable in the transformation relationship; and a solving unit configured to solve the transformation relationship so as to determine the first mapping.
16. The apparatus of claim 9, wherein, the image obtaining unit is further configured to obtain, from the vision device, an image of a target object to be processed by the robot system; and further comprising: a source determining unit configured to determine a source coordinate of the target object in the obtained image, the source coordinate represented in the vision coordinate system; a destination determining unit configured to determine a destination coordinate of the target object based on the source coordinate and the first mapping, the destination coordinate represented in a world coordinate system; and a processing unit configured to process the target object based on the destination coordinate.
17. A system for managing a robot system, comprising: a computer processor coupled to a non-transitory computer-readable memory unit, the memory unit comprising instructions that when executed by the computer processor: obtain orientations for links in the robot system when the links are arranged in a first posture, each of the orientations indicating a direction pointed by one of the links; obtain, from a vision device equipped on one of the links, at least one image of an object placed in the robot system; and determine, based on the orientations and the at least one image, a first mapping between a vision coordinate system of the vision device and a link coordinate system of the link; wherein the first posture includes any posture in the robot system; wherein the first posture is not limited to a fixed position.
18. A non-transitory computer readable medium having instructions stored thereon, the instructions, when executed on at least one processor, cause the at least one processor to: obtain orientations for links in a robot system when the links are arranged in a first posture, each of the orientations indicating a direction pointed by one of the links; obtain, from a vision device equipped on one of the links, at least one image of an object placed in the robot system; and determine, based on the orientations and the at least one image, a first mapping between a vision coordinate system of the vision device and a link coordinate system of the link; wherein the first posture includes any posture in the robot system; wherein the first posture is not limited to a fixed position.
Description
DESCRIPTION OF DRAWINGS
(9) Throughout the drawings, the same or similar reference symbols are used to indicate the same or similar elements.
DETAILED DESCRIPTION OF EMBODIMENTS
(10) Principles of the present disclosure will now be described with reference to several example embodiments shown in the drawings. Though example embodiments of the present disclosure are illustrated in the drawings, it is to be understood that the embodiments are described only to facilitate those skilled in the art in better understanding and thereby achieving the present disclosure, rather than to limit the scope of the disclosure in any manner.
(11) 1. Environment
(12) For the sake of description, reference will be made to
(14) In the system as shown in
(15) Solutions have been proposed for managing a robot system equipped with a vision device. According to one solution, a camera may be deployed at a fixed position associated with the robot system. For example, the camera may be arranged at a fixed base. However, a field of view of the camera is limited by the position of the base and the orientation of the camera, and thus images taken by the camera cannot trace movements of the arms of the robot system.
(16) In another solution, the camera may be mounted on the arm of the robot system. However, this solution requires calibrating the camera at a predetermined position during an initial stage of the robot system, and the camera is allowed to take images only when the arm is at the predetermined position during operation. Although the camera is fixed to the arm, it can operate normally only when the arm is in the predetermined position; the field of view of the camera is thus still limited to the predetermined position and cannot trace movements of the arm.
(17) 2. General Principles
(18) In order to at least partially solve the above and other potential problems, a new method for managing a robot system equipped with a vision device is disclosed according to embodiments of the present disclosure. In general, according to embodiments of the present disclosure, orientations for links in the robot system 110 are obtained when the links 112 are arranged in at least one posture. Each of these orientations indicates a direction pointed by one of the links. Additionally, at least one image of an object 220 placed in the robot system 110 is obtained from a vision device 114 equipped on one of the links 112. Based on the orientations and the at least one image, a first mapping between a vision coordinate system 254 of the vision device 114 and a link coordinate system 252 of the link 112 can be determined.
(19) In this way, the position and orientation of the vision device 114 are not limited to a fixed position. Instead, the vision device 114 may be arranged at any position in the robot system 110 (for example, on any of the arms in the robot system 110). Further, the method of the present disclosure only needs to be performed once, before normal operations of the robot system. Once the vision device 114 is calibrated according to the first mapping, the vision device 114 may be used to take images when the arm is moved to any position during the operations of the robot system 110. Therefore, the vision device 114 may move along with the link 112 and trace movements of the link 112.
(20) 3. Example Process
(21) Details of the present disclosure will be provided with reference to
(22) At block 310, orientations for links 112 in the robot system 110 may be obtained when the links 112 are arranged in at least one posture. Each of the orientations may indicate a direction pointed by one of the links 112. For example, in
(23) Although only two links are depicted in the robot system 110, this is merely for illustration without suggesting any limitations as to the scope of the present disclosure. In other embodiments of the present disclosure, more links may be deployed in the robot system 110. Generally, if there are N links, where N is a natural number, then N orientations may be obtained at block 310. At this point, the obtained orientations may be represented by an array of orientations, where the i-th element in the array may indicate the i-th orientation associated with the i-th link.
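As a minimal illustrative sketch (not part of the claimed method), the array of orientations described above might be represented as follows; the unit-vector representation and the value N = 3 are hypothetical choices for illustration only:

```python
import numpy as np

# Hypothetical example: N = 3 links, each orientation stored as a unit
# direction vector; orientations[i] is the direction pointed by the i-th link.
orientations = np.array([
    [1.0, 0.0, 0.0],   # link 0 points along the x axis
    [0.0, 1.0, 0.0],   # link 1 points along the y axis
    [0.0, 0.0, 1.0],   # link 2 points along the z axis
])

# One orientation per link, each of unit length.
assert orientations.shape == (3, 3)
assert np.allclose(np.linalg.norm(orientations, axis=1), 1.0)
```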
(24) At block 320, at least one image of the object 220 placed in the robot system 110 may be obtained from the vision device 114 equipped on one of the links 112. Here, the object 220 may be placed within an area reachable by the link 112 in the robot system 110. In some embodiments, the object 220 may be a calibration board which can be used for calibrating the vision device 114.
(25) At block 330, based on the orientations and the at least one image, a mapping (referred to as the “first mapping”) between the vision coordinate system 254 of the vision device 114 and the link coordinate system 252 of the link 112 is determined.
(26) In some embodiments, the first mapping may be represented by a transformation between the coordinate systems 252 and 254. For example, a transformation matrix T_vision^linkN may be used to represent the first mapping. With the first mapping T_vision^linkN, points in the object 220 which are captured by the vision device 114 in the vision coordinate system 254 can be transformed from the vision coordinate system 254 into the link coordinate system 252, and further, in combination with the link-to-world mapping described below, into the world coordinate system 250.
(27) 4. Determination of First Mapping
(28) In various embodiments, the first mapping T_vision^linkN can be determined in a variety of manners. In some embodiments, the first mapping may be determined from one group of orientations and one image. Alternatively, in other embodiments, the first mapping may be determined from two groups of orientations and two images. Details of these two ways of determining the first mapping will now be discussed.
(29) 4.1. Example Implementation I
(30) In some embodiments, for example, in order to determine the first mapping, the links 112 may be arranged in a first posture. At this point, the orientations of the links 112 and the image of the object 220 may be obtained according to the above blocks 310 and 320, respectively, and then the obtained orientations and image may be used to determine the first mapping. In these embodiments, based on the orientations obtained at block 310, a mapping (referred to as the “second mapping” and represented by a transformation matrix T_linkN^world) between the link coordinate system 252 and the world coordinate system 250 of the robot system 110 may be determined. The second mapping can be used to transform coordinates from the link coordinate system 252 into the world coordinate system 250. Then, based on the image obtained at block 320, a mapping (referred to as the “third mapping” and represented by a transformation matrix T_vision^world) between the vision coordinate system 254 and the world coordinate system 250 may be determined. The third mapping can be used to transform coordinates from the vision coordinate system 254 into the world coordinate system 250.
(31) According to a geometry relationship in various portions of the robot system 110, the first mapping can be determined based on the second and third mappings as below:
T_vision^world = T_linkN^world * T_vision^linkN (1)
(32) Compared with the first mapping, the second and third mappings are relatively easy to determine. With these embodiments, an effective conversion method is provided for determining the first mapping between the vision coordinate system 254 of the vision device 114 and the link coordinate system 252 of the link 112.
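As an illustrative sketch (not part of the claimed method), Equation (1) can be rearranged to recover the first mapping once the second and third mappings are known, using 4x4 homogeneous transformation matrices; the numeric transforms below are hypothetical placeholders:

```python
import numpy as np

def first_mapping(T_linkN_world: np.ndarray, T_vision_world: np.ndarray) -> np.ndarray:
    """Rearrange Equation (1): T_vision^linkN = (T_linkN^world)^-1 * T_vision^world."""
    return np.linalg.inv(T_linkN_world) @ T_vision_world

def make_T(R: np.ndarray, t) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def Rz(a: float) -> np.ndarray:
    """Rotation about the z axis by angle a (radians)."""
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

# Hypothetical second mapping (from joint readings) and third mapping (from the image).
T_linkN_world = make_T(Rz(0.3), [0.5, 0.2, 1.0])
T_vision_world = make_T(Rz(0.5), [0.6, 0.25, 1.1])

T_vision_linkN = first_mapping(T_linkN_world, T_vision_world)

# Equation (1) must hold for the recovered first mapping.
assert np.allclose(T_linkN_world @ T_vision_linkN, T_vision_world)
```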
(33) In some embodiments, the second mapping may be determined based on the orientations and positions of the links 112. As the orientations and positions of the links 112 are easy to obtain, these embodiments provide a convenient and effective manner for determining the second mapping between the link coordinate system 252 and the world coordinate system 250. Now example embodiments for determining the second mapping T_linkN^world will be described with reference to
(34) In
(35) Based on a geometry relationship as shown in
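Independent of the omitted figure, one common way to obtain the second mapping is to chain one homogeneous transform per link; the sketch below assumes, purely for illustration, a planar chain where each link rotates about the z axis and has a known length (real robots would use their actual kinematic parameters):

```python
import numpy as np

def link_transform(angle: float, length: float) -> np.ndarray:
    """Homogeneous transform of one link: rotate about z, then translate
    the link's length along the rotated x axis."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0, length * c],
                     [s,  c, 0.0, length * s],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def second_mapping(angles, lengths) -> np.ndarray:
    """Chain the per-link transforms: T_linkN^world = T_1 * T_2 * ... * T_N."""
    T = np.eye(4)
    for a, l in zip(angles, lengths):
        T = T @ link_transform(a, l)
    return T
```

For example, two links of length 1 with zero joint angles place the last link's frame at (2, 0, 0) in the world coordinate system.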
(36) In some embodiments, the vision device 114 may be calibrated with a calibration board of the vision device 114 before obtaining the at least one image. Details will be provided with reference to
(37) With these embodiments, the vision device 114 may be calibrated with the calibration board 510, and therefore the further processing in the present disclosure may be performed on an accurate basis. Moreover, the subsequent procedure for determining the mappings between various coordinate systems and for controlling the robot system 110 may be implemented in a precise way.
(38) In some embodiments, the third mapping may be determined based on the image obtained at block 320. Now example embodiments for determining the third mapping T_vision^world will be described with reference to
(39) As shown in
(40) With these embodiments, the above three points may be selected from the image taken by the vision device 114. As the coordinates of the three points in the vision coordinate system 254 may be measured from the image, and the coordinates of the three points in the world coordinate system 250 may be read from the robot system 110, the third mapping may be determined in an automatic manner without manual intervention. Based on the second mapping and the third mapping, the first mapping may be determined according to the above Equation (1).
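As a hedged sketch of the three-point idea (the exact measurement procedure depends on the robot and camera, which the omitted figures describe), a rigid transform from the vision coordinate system to the world coordinate system can be built from three non-collinear points whose coordinates are known in both systems; all numeric values below are hypothetical:

```python
import numpy as np

def frame_from_points(P: np.ndarray):
    """Orthonormal frame (origin, rotation) built from three non-collinear points."""
    origin = P[0]
    x = P[1] - P[0]
    x = x / np.linalg.norm(x)
    z = np.cross(x, P[2] - P[0])
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)
    return origin, np.column_stack([x, y, z])

def third_mapping(P_vision, P_world) -> np.ndarray:
    """T_vision^world: maps vision coordinates of the three points to world coordinates."""
    o_v, R_v = frame_from_points(np.asarray(P_vision, dtype=float))
    o_w, R_w = frame_from_points(np.asarray(P_world, dtype=float))
    R = R_w @ R_v.T
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = o_w - R @ o_v
    return T

# Hypothetical check: recover a known transform from three points.
P_vision = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
T_true = np.eye(4)
T_true[:3, 3] = [0.5, -0.2, 1.0]            # pure translation for simplicity
P_world = (T_true[:3, :3] @ P_vision.T).T + T_true[:3, 3]
```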
(41) 4.2. Example Implementation II
(42) The above paragraphs have described some embodiments for determining the first mapping based on one group of orientations and one image. Hereinafter, other embodiments for determining the first mapping based on two groups of orientations and two images will be described.
(43) In some embodiments, in order to obtain the orientations for the links, the links 112 may be arranged in a first and a second posture, respectively. When the links are arranged in the first posture, a first group of orientations of the links and a first image of the object 220 may be obtained for the links 112. Further, when the links are arranged in the second posture, a second group of orientations of the links and a second image of the object 220 may be obtained. Here, when the positions of the links 112 are changed from the first posture to the second posture, the position of the object 220 should remain unchanged.
(44) In these embodiments, the first mapping may be determined based on the following Equation (2), which represents transformations between the world coordinate system 250 and the vision coordinate system 254:
F_object^world = T_linkN^world * T_vision^linkN * F_object^vision (2)
(45) In the above Equation (2), F_object^world indicates a point in the object 220 represented in the world coordinate system 250, and F_object^vision indicates the same point represented in the vision coordinate system 254. In this embodiment, the first group of orientations may be used to determine T_linkN^world associated with the first posture, and the first image may be used to determine F_object^vision. Therefore, the first mapping T_vision^linkN can be determined accordingly. With these embodiments, a further way is provided for determining the first mapping by placing the links 112 of the robot system 110 into a first posture and a second posture. By collecting the orientations and the images associated with the first posture and the second posture, respectively, the first mapping may be determined in an automatic manner without any manual intervention.
(46) In some embodiments, based on the above Equation (2), a transformation relationship may be constructed between the first mapping, the first and second groups of orientations, and the first and second images, where the first mapping is an unknown variable in the transformation relationship. Continuing the above example, values determined from the first and second groups of orientations and the first and second images may replace the corresponding variables in Equation (2) to obtain the following Equations (3) and (4):
F_object1^world = T_linkN1^world * T_vision^linkN * F_object1^vision (3)
F_object2^world = T_linkN2^world * T_vision^linkN * F_object2^vision (4)
(47) Then, Equations (3) and (4) may be solved so as to determine the first mapping. In the above Equation (3), the value T_linkN1^world may be determined from the first group of orientations, and the value F_object1^vision may be determined from the first image. Similarly, in the above Equation (4), the value T_linkN2^world may be determined from the second group of orientations, and the value F_object2^vision may be determined from the second image. Further, due to the fact that F_object1^world and F_object2^world are equal (the object 220 remains stationary between the two postures), Equation (5) may be derived from Equations (3) and (4):
(T_linkN2^world)^-1 * T_linkN1^world * T_vision^linkN = T_vision^linkN * F_object2^vision * (F_object1^vision)^-1 (5)
(48) In the above Equation (5), the first mapping T_vision^linkN is an unknown value while all the other values in Equation (5) are known. By solving the above Equation (5), the first mapping T_vision^linkN may be determined. With these embodiments, the first mapping may be easily determined by solving the transformation relationship, which is constructed based on the measurements collected in the robot system in real time. Here, the collected measurements correctly reflect the association between the various coordinate systems, and therefore the first mapping may be determined in an accurate and automatic manner without manual intervention.
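Equation (5) has the form A * X = X * B, which is the classical hand-eye calibration equation; dedicated solvers (for example, the Tsai-Lenz method) exist for it. The sketch below does not solve the equation; it only verifies, on synthetic data, that consistent poses from two postures satisfy the relationship, with all transforms generated randomly for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_transform() -> np.ndarray:
    """Random rigid transform: rotation from a QR decomposition plus a translation."""
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] = -Q[:, 0]          # ensure a proper rotation (det = +1)
    T = np.eye(4)
    T[:3, :3] = Q
    T[:3, 3] = rng.normal(size=3)
    return T

X = random_transform()                            # the unknown T_vision^linkN
T1 = random_transform()                           # T_linkN1^world (first posture)
T2 = random_transform()                           # T_linkN2^world (second posture)

A = np.linalg.inv(T2) @ T1                        # left-hand side of Equation (5)
B = np.linalg.inv(X) @ A @ X                      # F_object2^vision * (F_object1^vision)^-1
                                                  # implied by consistent observations

# Equation (5): A * X == X * B holds for consistent data.
assert np.allclose(A @ X, X @ B)
```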
(49) 5. Management of Robot System based on First Mapping
(50) The above paragraphs have described how to determine the first mapping. Once the first mapping is determined, it may be used to facilitate operations of the robot system 110. Specifically, the first mapping may be used to convert points in a target object that is to be processed by the robot system 110 from the vision coordinate system 254 into the world coordinate system 250. Alternatively, the first mapping may be used to convert points in a command of the controller 120 for the robot system 110 from the world coordinate system 250 into the vision coordinate system 254. Further, the various coordinate systems may be converted into a unified one, and thus the robot system 110 may be assisted in performing the desired operation.
(51) In some embodiments, an image of a target object to be processed by the robot system 110 may be obtained from the vision device 114. A source coordinate of the target object may be determined from the obtained image; here the source coordinate is represented in the vision coordinate system 254, and it may be directly measured by the vision device 114. Further, based on the source coordinate and the first mapping, a destination coordinate of the target object may be determined. In this example, the destination coordinate is represented in the world coordinate system 250, and it may be determined based on the above Equation (2). Further, the target object may be processed based on the destination coordinate.
(52) In one example, the tool 210 is connected to the link 112, and it is desired to drill a hole at the center of the target object with the tool 210. First, size and position information about the target object may be obtained from an image taken by the vision device 114. The information may be converted from the vision coordinate system 254 into the world coordinate system 250 so as to assist the controller 120 in determining a movement path of the link 112. Then, the movement path may lead the tool 210 to the center of the target object to drill a hole there. With these embodiments, coordinates of the target object may be easily converted between the vision coordinate system 254 and the world coordinate system 250 during the subsequent operation of the robot system 110.
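The source-to-destination conversion described above can be sketched as a single chained transform per Equation (2); the transforms and the point below are hypothetical values chosen only to make the example concrete:

```python
import numpy as np

def to_world(p_vision, T_linkN_world: np.ndarray, T_vision_linkN: np.ndarray):
    """Map a source coordinate in the vision coordinate system to a destination
    coordinate in the world coordinate system via Equation (2)."""
    p = np.append(np.asarray(p_vision, dtype=float), 1.0)   # homogeneous coordinates
    return (T_linkN_world @ T_vision_linkN @ p)[:3]

# Hypothetical translation-only transforms for illustration.
T_linkN_world = np.eye(4)
T_linkN_world[:3, 3] = [1.0, 0.0, 0.0]      # link frame sits 1 m along world x
T_vision_linkN = np.eye(4)
T_vision_linkN[:3, 3] = [0.0, 0.0, 0.1]     # camera offset 0.1 m along link z

# Source coordinate measured in the image, destination in the world frame.
destination = to_world([0.02, 0.03, 0.5], T_linkN_world, T_vision_linkN)
```

Here the destination coordinate is simply the source coordinate shifted by both translations, so the controller can plan a movement path toward it in the world coordinate system.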
(53) 6. Example Apparatus and System
(54) In some embodiments of the present disclosure, an apparatus 700 for managing a robot system 110 is provided.
(55) In some embodiments, the determining unit 730 comprises: a second mapping determining unit configured to determine, based on the orientations, a second mapping between the link coordinate system 252 and a world coordinate system 250 of the robot system 110; a third mapping determining unit configured to determine, based on the at least one image, a third mapping between the vision coordinate system 254 and the world coordinate system 250; and a first mapping determining unit configured to determine the first mapping based on a transformation relationship between the second and third mappings.
(56) In some embodiments, the second mapping determining unit is further configured to: determine the second mapping based on the orientations and positions of the links 112.
(57) In some embodiments, the third mapping determining unit comprises: a measurement obtaining unit configured to: obtain a first measurement and a second measurement for a first point and a second point in an image of the at least one image respectively, the first and second points located in a first axis of the vision coordinate system 254; obtain a third measurement of a third point in the image, the third point located in a second axis of the vision coordinate system 254; and a mapping determining unit configured to determine the third mapping based on the obtained first, second and third measurements.
(58) In some embodiments, the apparatus 700 further comprises a calibrating unit configured to calibrate the vision device 114 with a calibration board of the vision device 114 before obtaining the at least one image.
(59) In some embodiments, the orientation obtaining unit 710 is further configured to: obtain a first group of orientations for the links 112 when the links 112 are arranged in a first posture; and obtain a second group of orientations for the links 112 when the links 112 are arranged in a second posture; and the image obtaining unit is further configured to: obtain a first image when the links 112 are arranged in the first posture, and obtain a second image when the links 112 are arranged in the second posture.
(60) In some embodiments, the determining unit 730 comprises: a constructing unit configured to construct a transformation relationship between the first mapping, the first and second groups of orientations and the first and second images, the first mapping being an unknown variable in the transformation relationship; and a solving unit configured to solve the transformation relationship so as to determine the first mapping.
(61) In some embodiments, the image obtaining unit 720 is further configured to obtain, from the vision device 114, an image of a target object to be processed by the robot system 110.
(62) In some embodiments, the apparatus 700 further comprises: a source determining unit configured to determine a source coordinate of the target object in the obtained image, the source coordinate represented in the vision coordinate system; a destination determining unit configured to determine a destination coordinate of the target object based on the source coordinate and the first mapping, the destination coordinate represented in the world coordinate system 250; and a processing unit configured to process the target object based on the destination coordinate.
(63) In some embodiments of the present disclosure, a system 800 for managing a robot system is provided.
(64) In some embodiments of the present disclosure, a computer readable medium for managing a robot system is provided. The computer readable medium has instructions stored thereon, and the instructions, when executed on at least one processor, may cause at least one processor to perform the method for managing a robot system as described in the preceding paragraphs, and details will be omitted hereinafter.
(65) Generally, various embodiments of the present disclosure may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. Some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device. While various aspects of embodiments of the present disclosure are illustrated and described as block diagrams, flowcharts, or using some other pictorial representation, it will be appreciated that the blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
(66) The present disclosure also provides at least one computer program product tangibly stored on a non-transitory computer readable storage medium. The computer program product includes computer-executable instructions, such as those included in program modules, being executed in a device on a target real or virtual processor, to carry out the process or method as described above with reference to
(67) Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on a machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
(68) The above program code may be embodied on a machine readable medium, which may be any tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine readable medium may be a machine readable signal medium or a machine readable storage medium. A machine readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of the machine readable storage medium would include an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
(69) Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are contained in the above discussions, these should not be construed as limitations on the scope of the present disclosure, but rather as descriptions of features that may be specific to particular embodiments. Certain features that are described in the context of separate embodiments may also be implemented in combination in a single embodiment. On the other hand, various features that are described in the context of a single embodiment may also be implemented in multiple embodiments separately or in any suitable sub-combination.
(70) Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.