Eye movement in response to visual stimuli for assessment of ophthalmic and neurological conditions
11684256 · 2023-06-27
Inventors
- Wolfgang Fink (Montrose, CA, US)
- John Cerwin (Gurnee, IL, US)
- Christopher P Adams (Somerville, MA, US)
CPC classification (all HUMAN NECESSITIES)
- A61B3/024
- A61B5/7282
- A61B3/0025
- A61B5/165
- A61B5/4094
- A61B3/032
- A61B5/4088
- A61B5/6803
- A61B5/4845
- A61B3/12
International classification (all HUMAN NECESSITIES)
- A61B3/024
- A61B3/08
- A61B3/117
- A61B3/12
- A61B5/00
- A61B5/16
Abstract
The present invention generally relates to apparatus, software and methods for assessing ocular, ophthalmic, neurological, physiological, psychological and/or behavioral conditions. As disclosed herein, the conditions are assessed using eye-tracking technology that beneficially eliminates the need for a subject to fixate and maintain focus during testing or to produce a secondary (non-optical) physical movement or audible response, i.e., feedback. The subject is only required to look at a series of individual visual stimuli, which is generally an involuntary reaction. The reduced need for cognitive and/or physical involvement of a subject allows the present modalities to achieve greater accuracy, due to reduced human error, and to be used with a wide variety of subjects, including small children, patients with physical disabilities or injuries, patients with diminished mental capacity, elderly patients, animals, etc.
Claims
1. A non-transitory computer-readable medium for assessing at least one of an ocular, ophthalmic, neurological, physiological, psychological or behavioral condition comprising instructions stored on the computer-readable medium that when executed on a processor cause the processor to: instruct a light-emitting device to display a series of individual visual stimuli to a subject, wherein at least one stimulus in the series of individual visual stimuli is placed opportunistically; acquire data from at least one sensor that tracks eye movement of the subject in response to each of the individual visual stimuli; analyze the data indicative of the tracked eye movement; and assess the presence, absence, type and/or extent of the ocular, ophthalmic, neurological, physiological, psychological and/or behavioral condition.
2. The non-transitory computer-readable medium of claim 1, wherein the opportunistic placement is based on the subject's response to at least one prior stimulus, the subject's response to only one stimulus, the subject's response to only the immediately preceding stimulus, or the subject's response to a plurality of prior stimuli.
3. The non-transitory computer-readable medium of claim 1, wherein the instructions cause the processor to execute at least one of a deterministic algorithm, a non-deterministic algorithm, a stochastic algorithm, a machine learning algorithm, a deep learning algorithm or a combination thereof.
4. The non-transitory computer-readable medium of claim 1, wherein the light-emitting device displays the visual stimuli on a parallel plane, a warped plane, an irregular plane, a convex surface, a concave surface, or in 3D space.
5. The non-transitory computer-readable medium of claim 1, wherein the light-emitting device displays the visual stimuli on a physical surface or a generated surface.
6. The non-transitory computer-readable medium of claim 1, wherein the data indicative of eye movement comprises gaze coordinates or gaze coordinates as a function of time.
7. The non-transitory computer-readable medium of claim 1, wherein analyzing the data indicative of eye movement comprises extracting one or more observables selected from the group consisting of visual detection, gaze trajectory, response time, visual acuity, ability to fixate, overshoot/undershoot, saccadic movement, micro-saccadic movement, field of view, quality of the subject's central vision, quality of the subject's peripheral vision, eye coordination, strabismus, color vision, contrast sensitivity, object perception, shape perception, texture perception, flicker frequency perception and combinations thereof.
8. The non-transitory computer-readable medium of claim 1, wherein the condition is selected from the group consisting of retinal defects, optic nerve defects, cortical defects, blindness, color blindness, macular degeneration, concussion, traumatic brain injury (TBI), brain damage, diabetic retinopathy, glaucoma, cataract, epilepsy, post traumatic stress disorder (PTSD), strabismus, lazy eye tendencies, metamorphopsia, saccades, micro-saccades, influence of intoxicants, legal/illegal drugs, biohazards, biological substances, chemical substances, biochemical substances or radiation, general eye coordination, alertness, sleep apnea and combinations thereof.
9. A method of assessing at least one of an ocular, ophthalmic, neurological, physiological, psychological or behavioral condition of a subject using a computing device programmed to execute a plurality of programmatic instructions, comprising: presenting a series of individual visual stimuli to a subject with a light-emitting device coupled to the computing device, wherein at least one stimulus in the series of individual visual stimuli is placed opportunistically; tracking the subject's gaze in response to each of the individual visual stimuli with a sensor configured to acquire eye movement data; analyzing the data indicative of the tracked eye movement; and assessing the presence, absence, type and/or extent of the ocular, ophthalmic, neurological, physiological, psychological and/or behavioral condition.
10. The method of claim 9, wherein the opportunistic placement is based on the subject's response to at least one prior stimulus, the subject's response to only one stimulus, the subject's response to only the immediately preceding stimulus, or the subject's response to a plurality of prior stimuli.
11. The method of claim 9, wherein presenting the series of individual visual stimuli comprises presenting a first stimulus in a first location and presenting a second stimulus in a second location different from the first location or the same as the first location.
12. The method of claim 9, wherein the processor executes at least one of a deterministic algorithm, a non-deterministic algorithm, a stochastic algorithm, a machine learning algorithm, a deep learning algorithm or a combination thereof.
13. The method of claim 9, wherein the light-emitting device displays the visual stimuli on a parallel plane, a warped plane, an irregular plane, a convex surface, a concave surface, or in 3D space.
14. The method of claim 9, wherein the light-emitting device displays the visual stimuli on a physical surface or a generated surface.
15. The method of claim 9, wherein the data indicative of eye movement comprises gaze coordinates or gaze coordinates as a function of time.
16. The method of claim 9, wherein analyzing the data indicative of eye movement comprises extracting one or more observables selected from the group consisting of visual detection, gaze trajectory, response time, visual acuity, ability to fixate, overshoot/undershoot, saccadic movement, micro-saccadic movement, field of view, quality of the subject's central vision, quality of the subject's peripheral vision, eye coordination, strabismus, color vision, contrast sensitivity, object perception, shape perception, texture perception, flicker frequency perception and combinations thereof.
17. The method of claim 9, wherein the condition is selected from the group consisting of retinal defects, optic nerve defects, cortical defects, blindness, color blindness, macular degeneration, concussion, traumatic brain injury (TBI), brain damage, diabetic retinopathy, glaucoma, cataract, epilepsy, post traumatic stress disorder (PTSD), strabismus, lazy eye tendencies, metamorphopsia, saccades, micro-saccades, influence of intoxicants, legal/illegal drugs, biohazards, biological substances, chemical substances, biochemical substances or radiation, general eye coordination, alertness, sleep apnea and combinations thereof.
18. An apparatus for assessing at least one of an ocular, ophthalmic, neurological, physiological, psychological or behavioral condition comprising: a light-emitting device configured for displaying a series of individual visual stimuli to a subject; at least one sensor for tracking eye movement of the subject in response to each of the individual visual stimuli and generating data indicative of the tracked eye movement; and a processor for (i) analyzing the data indicative of the tracked eye movement; (ii) instructing the light-emitting device to display the individual visual stimuli, wherein at least one stimulus in the series of individual visual stimuli is placed opportunistically; and (iii) assessing the presence, absence, type and/or extent of the ocular, ophthalmic, neurological, physiological, psychological and/or behavioral condition.
19. The apparatus of claim 18, wherein the opportunistic placement is based on the subject's response to at least one prior stimulus, the subject's response to only one stimulus, the subject's response to only the immediately preceding stimulus, or the subject's response to a plurality of prior stimuli.
20. The apparatus of claim 18, wherein the apparatus comprises a virtual reality headset, an augmented reality headset or a mixed reality headset.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Illustrative embodiments of the present invention are described in detail below with reference to the attached drawings, wherein:
DETAILED DESCRIPTION
(6) In general, the terms and phrases used herein have their art-recognized meaning, which can be found by reference to standard texts, journal references and contexts known to those skilled in the art. The following definitions are provided to clarify their specific use in the context of this description.
(7) As used herein, “virtual reality (VR)” refers to an immersive computer-simulated fully artificial digital environment, or the computer-generated simulation of a three-dimensional image or environment that can be interacted with in a seemingly real or physical way by a person using special electronic equipment, such as a helmet with a screen inside and/or gloves fitted with sensors.
(8) As used herein, “augmented reality (AR)” refers to technology that superimposes a computer-generated image on a user's view of the real world, thus providing a composite view.
(9) As used herein, “mixed reality (MR)” refers to a hybrid reality where virtual objects are overlaid upon and anchored to the real world such that the user can interact with the virtual objects.
(10) As used herein, “opportunistic” describes something that is dependent upon an earlier result. Therefore, an “opportunistic visual stimulus” or “an opportunistically placed visual stimulus” is dependent upon a subject's response or non-response to at least one prior visual stimulus. For example, a subsequent visual stimulus that is opportunistically placed may be positioned to confirm a prior result or map the boundaries of an identified weakness or deficiency, e.g., a visual field defect or scotoma.
(11) As used herein, a “series” includes two or more items, and a “series of individual visual stimuli” includes two or more visual stimuli that are individually displayed (i.e., the stimuli in the series are presented one at a time).
(12) As used herein, the terms “eye movement” and “gaze movement” are used interchangeably to refer to a physiological response to a stimulus from a subject's eye(s). In addition, the terms “eye tracking” and “gaze tracking” are used interchangeably to refer to sensor output that is representative of the eye/gaze movement. In some embodiments, the sensor output is converted into “eye coordinates” or “gaze coordinates”, which may be used interchangeably herein.
(13) As used herein, “eccentricity” refers to the location of a light/visual stimulus with respect to a subject's current gaze location.
(14) As used herein, a “deterministic algorithm” is an algorithm which, given a particular input, will always produce the same output, with the underlying machine always passing through the same sequence of states.
(15) As used herein, a “non-deterministic algorithm” or “stochastic algorithm” or “probabilistic algorithm” is an algorithm which, given a particular input, will in general not produce the same output, with the underlying machine in general not passing through the same sequence of states. Such algorithm usually includes an element of randomness.
(16) As used herein, a “machine learning algorithm” refers to an algorithm that builds or learns a mathematical model based on sample data in order to make predictions or decisions without being explicitly programmed to perform the task. Machine learning algorithms comprise, but are not limited to: logistic regression algorithms; decision trees; ensemble methods; level-set methods; cognitive maps; generalized linear models; and clustering algorithms.
(17) As used herein, “deep learning” refers to a class of machine learning algorithms that use a cascade of multiple layers of nonlinear processing units for feature extraction and transformation. Each successive layer uses the output from the previous layer as input. For example, deep learning algorithms comprise, but are not limited to: feedforward networks, multi-layer perceptrons, convolutional networks, recurrent neural networks, extreme learning machines, long/short term memory networks, auto encoders, modular neural networks, Hopfield attractor networks.
(18) “Proximal” and “distal” refer to the relative positions of two or more objects, planes or surfaces. For example, an object that is close in space to a reference point relative to the position of another object is considered proximal to the reference point, whereas an object that is further away in space from a reference point relative to the position of another object is considered distal to the reference point.
(19) The terms “direct and indirect” describe the actions or physical positions of one object relative to another object. For example, an object that “directly” acts upon or touches another object does so without intervention from an intermediary. Contrarily, an object that “indirectly” acts upon or touches another object does so through an intermediary (e.g., a third component).
(21) Step 116 is a query to determine whether the subject's gaze position end point was sufficiently close to the visual stimulus location. If the answer to query 116 is yes, the method returns to step 108 to determine whether all eccentricities/locations have been tested. If the answer to query 116 is no, meaning that the gaze position end point was not sufficiently close to the visual stimulus location or there was no subject response to the visual stimulus at all, a second query 118 determines whether a scotoma contingency protocol is finished. If the answer to query 118 is yes, the scotoma contingency protocol is finished even though the last gaze position end point was not sufficiently close to the visual stimulus location (e.g., which may occur when a visual defect or scotoma is identified and confirmed), and the test continues with step 108. If the answer to query 118 is no, meaning the scotoma contingency protocol is not finished, a visual stimulus is displayed according to the scotoma contingency protocol (step 120) and eye tracking data associated with the subject's response to the visual stimulus are evaluated (step 114). The scotoma contingency protocol may, for example, retest the exact same eccentricity that caused a negative response to query 116 to confirm the result, retest the same eccentricity that caused the negative response to query 116 with a different color, contrast, shape, brightness, texture, flicker frequency or other characteristic, singularly or in any combination, to determine the cause of the result, test one or more eccentricities near the eccentricity that caused a negative response to query 116 thereby mapping the boundaries of a potential defect (e.g., visual field defect or scotoma), or a combination of the above. 
The notion of the subject's gaze position end point being sufficiently close to a visual stimulus location can be based, e.g., on a distance measure, such as, but not limited to the Euclidean distance between the subject's gaze position end point and the visual stimulus location, or on an angle measure, such as the visual field angle between the subject's gaze position end point and the visual stimulus location. In some embodiments, sufficiently close end points are less than or equal to 3 degrees of visual field from the visual stimulus location, or less than or equal to 2 degrees of visual field from the visual stimulus location, or less than or equal to 1 degree of visual field from the visual stimulus location, or less than or equal to 0.5 degree of visual field from the visual stimulus location. In an embodiment, a sufficiently close end point may be defined by a Euclidean distance within any of the immediately preceding ranges.
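For illustration, the "sufficiently close" criterion described above could be sketched in code. This is a minimal sketch, not part of the disclosed apparatus: the function name, the millimeter units and the 50 mm viewing distance are illustrative assumptions, while the angular thresholds follow the ranges given above.

```python
import math

def is_sufficiently_close(gaze_end, stimulus, threshold_deg=2.0,
                          viewing_distance_mm=50.0):
    """Decide whether the gaze end point landed close enough to the stimulus.

    gaze_end, stimulus: (x, y) positions on the display plane, in mm.
    threshold_deg: maximum allowed visual-field angle between the two points.
    viewing_distance_mm: assumed eye-to-display distance (hypothetical value).
    """
    # Euclidean distance between gaze end point and stimulus location.
    dx = gaze_end[0] - stimulus[0]
    dy = gaze_end[1] - stimulus[1]
    euclid = math.hypot(dx, dy)
    # Convert the on-screen offset into a visual-field angle.
    angle_deg = math.degrees(math.atan2(euclid, viewing_distance_mm))
    return angle_deg <= threshold_deg
```

A stricter or looser threshold (e.g., 0.5 or 3 degrees of visual field) can be passed per the ranges above; a pure Euclidean criterion would simply compare `euclid` against a distance threshold instead.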
(22) Flowchart 100 illustrates exemplary steps, which may be modified for different test purposes. For example, step 116 may include additional or alternative queries, such as did the subject overshoot/undershoot the visual stimulus location? Was the response time under or over a particular threshold? Has the visual stimulus location been tested a predetermined number of times? Was the eye/gaze trajectory sufficiently linear?
(27) The one or more processors may also operate to support performance of the relevant functionality in a “cloud computing” environment or as “software as a service” (SaaS). For example, at least some of the functions may be performed by a group of computers accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., application program interfaces (APIs)).
(28) The apparatus, software and methods disclosed herein are further illustrated by the following Example. This Example is for illustrative purposes only and is not intended to limit the invention.
EXAMPLE
(29) This Example illustrates a virtual opportunistic reaction perimetry (VORP) test using a virtual reality head mounted display (VR HMD).
(30) Equipment
(31) A VR HMD (head mounted display) or equivalent is used, typically with two video feeds, i.e., one for the left eye and one for the right eye. The VR HMD is equipped with real-time eye tracking (preferably 60 Hz or more) for the left and right eye, respectively. For example, useful specifications are: two infrared eye tracking systems (one per eye), tracking accuracy of less than 1 degree, and a frame rate of 120 fps.
DESCRIPTION
(32) To test the central and peripheral vision of subjects, i.e., perimetry based on campimetry, point-like light stimuli are presented as a series of individual stimuli. As opposed to standard automated perimetry (e.g., using a Zeiss Humphrey Visual Field Analyzer), the subject is NOT required to maintain fixation throughout the entire exam, nor is the subject required to push a button or provide verbal feedback to acknowledge the perception of a light stimulus.
(33) Overview of VORP Procedure
(34) The video feed for the eye not being tested is turned off or set to a black or dark state. For the eye being tested, the eye tracking sensor constantly reports the current eye/gaze location. The subject is asked to “chase” perceived light stimuli that are being presented at pseudo-random locations within the field of view of the VR HMD and subject. In other words, given any current eye/gaze location, the governing test program (i.e., code) can calculate the location of a light stimulus for subsequent presentation within the available real estate of the VR HMD (i.e., the video feed for that eye) for an eccentricity/location that has not yet been covered, or not sufficiently covered, in the current test session. If the subject sees the stimulus and moves his eye/gaze to the location of that last stimulus, the governing test program can exploit this to opportunistically present a stimulus at a farther, same, or closer distance/eccentricity from the eye's current location. For example, if the subject focuses on the center of the VR HMD hemi-screen, and the horizontal dimension of the screen is 20 degrees to either side from the center of fixation, a light stimulus presented at either 20 degree location would “force” the eye/gaze to go there. Once there, a light stimulus can be presented at the opposite end of the VR HMD hemi-screen, now yielding a 40 degree eccentricity for visual field testing. That way, over time, a meaningful visual field can be tested. The overall time of the testing procedure is not fixed; it depends on how quickly the governing test program completes a sufficient visual field screening, which in turn depends directly on the eye/gaze movement, i.e., the response of the subject.
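The placement logic just described can be illustrated with a short sketch. This is not the disclosed implementation: the coordinate convention (visual-field degrees relative to the screen center) and the 20 degree half-extents follow the example in the preceding paragraph, while the function name and signature are assumptions.

```python
import math

def place_stimulus(gaze_deg, eccentricity_deg, direction_rad,
                   half_width_deg=20.0, half_height_deg=20.0):
    """Propose a stimulus location at a given eccentricity from the current gaze.

    Coordinates are visual-field degrees relative to screen center; the
    +/-20 degree half-extents follow the example in the text. Returns None
    when the requested eccentricity is infeasible at the current gaze.
    """
    # Step the requested eccentricity away from the current gaze location.
    x = gaze_deg[0] + eccentricity_deg * math.cos(direction_rad)
    y = gaze_deg[1] + eccentricity_deg * math.sin(direction_rad)
    # Only propose locations that fall within the hemi-screen boundaries.
    if abs(x) <= half_width_deg and abs(y) <= half_height_deg:
        return (x, y)
    return None
```

With the gaze “forced” to one 20 degree edge, a stimulus at the opposite edge becomes feasible, reproducing the 40 degree eccentricity example above.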
(35) Detailed VORP Procedure
(36) (1) In an instantiation, generate a rectangular or concentric list/map of eccentricities to be tested. For VORP, planar polar coordinates, i.e., radius and angle, are an ideal choice.
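A concentric (polar) eccentricity map of the kind described in step (1) could be generated as follows. The default radii and angular sampling density are illustrative assumptions; the disclosure leaves both user- or computer-defined.

```python
import math

def polar_eccentricity_map(radii_deg=(5, 10, 15, 20), n_angles=8):
    """Generate a concentric list of (radius, angle) eccentricities to test.

    radii_deg: ring radii in visual-field degrees (illustrative defaults).
    n_angles: number of evenly spaced directions per ring.
    Returns (radius_deg, angle_rad) pairs covering every ring and direction.
    """
    step = 2 * math.pi / n_angles
    return [(r, k * step) for r in radii_deg for k in range(n_angles)]
```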
(37) (2) This list/map can be user-defined or computer-generated or randomly generated or opportunistically generated in a dynamic fashion (i.e., in real time during the test).
(38) (3) If the map/list is user-defined, it can be interactively generated, uploaded via an external file, communicated through another computer module, or provided by any other means for computer system communication known in the art. Initially, a simple configuration file can be used; eventually, the map/list could be dynamically generated and communicated through another computer module that sits on board the VR HMD.
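A simple configuration-file loader for a user-defined map might look as follows. The JSON format shown (a list of `[radius_deg, angle_deg]` pairs) is purely an assumption for illustration; the disclosure leaves the concrete file format and communication channel open.

```python
import json

def load_eccentricity_map(path):
    """Load a user-defined eccentricity list from a simple JSON file.

    Assumed (hypothetical) file format: a list of [radius_deg, angle_deg]
    pairs, e.g. [[5, 0], [5, 90], [10, 45]].
    """
    with open(path) as f:
        entries = json.load(f)
    # Normalize to float tuples for downstream use.
    return [(float(r), float(a)) for r, a in entries]
```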
(39) (4) A VR headset with binocular eye-tracking sensors is generally used because it allows for a determination of strabismus, eye-coordination, saccades and micro-saccades.
(40) (5) Monocular testing is recommended, i.e., the eye not being tested should look at a black or dark screen during the entire time the other eye is being tested.
(41) (6) The patient should ideally be dark adapted. Dark adaptation could be accomplished, e.g., with the VR HMD or in a darkened room prior to testing.
(42) (7) The patient is allowed to “look around” within the test space, i.e., he does not have to maintain any particular fixation.
(43) (8) Further the patient is asked to “chase” any light stimuli he might see by trying to focus his eye/gaze on the stimuli. The light stimulus, in an instantiation, would be a point of light that appears in a fixed position. In other instantiations, it could be a constantly present point of light that is moving, but this would constitute a different psychophysical test modality (i.e., object tracking).
(44) (9) Then the VORP testing procedure commences as follows: depending on where the patient eye focus is currently with respect to the boundaries of the tested area in the VR headset, one of the eccentricities (or closest approximation thereof) of the generated list/map that is (a) feasible with respect to the test area boundaries and (b) not yet tested for, is stimulated with a light stimulus of fixed, predefined or variable size, brightness, color, texture, and shape for a certain fixed, predefined or variable time. In addition, the stimulus may have a certain fixed, predefined or variable flicker frequency, or it could alternate between two or more colors and/or textures. VORP is an opportunistic testing procedure, meaning that wherever the current eye focus is NOT, is the preferred area to place the next light stimulus to see whether the patient can notice that light stimulus at that eccentricity with respect to where his current focus is. In addition, repeat testing of prior tested eccentricities and/or locations can occur with VORP in order to map out the boundaries and/or extents of defects, e.g., visual field defects or scotomas.
(45) With smart and/or opportunistic placement of light stimuli, VORP is capable of checking most eccentricities on the predefined list/map, or, in addition or alternatively, to test predominantly in areas where the patient has missed previous light stimuli so as to map out the location, shape and extent of potential scotomas (i.e., visual field defects).
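The opportunistic choice among eccentricities described in step (9) and this paragraph could be sketched as a simple prioritized selection. This is a sketch only: the pool ordering (misses first, to delineate potential defects, then untested locations) is one of the alternatives the text permits, and all names are illustrative.

```python
def choose_next(untested, missed, feasible):
    """Choose the next eccentricity to stimulate.

    untested: eccentricities not yet presented.
    missed: eccentricities where the subject failed to react (candidates
            for mapping the extent of a potential scotoma).
    feasible: predicate testing whether a location is reachable given the
              current gaze and the test-area boundaries.
    """
    # Prefer re-probing around misses to delineate a potential defect,
    # then fall back to untested locations.
    for pool in (missed, untested):
        for ecc in pool:
            if feasible(ecc):
                return ecc
    return None  # nothing feasible right now; wait for the gaze to move
```

The reverse priority (untested first) implements the "check most eccentricities" mode, while the order shown implements the defect-mapping mode.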
(46) (10) The reaction of the patient's eye with respect to a light stimulus is monitored. Stimulus parameters that may be specified in VORP include, but are not limited to, stimulus size/diameter, stimulus brightness, stimulus color, stimulus contrast, stimulus duration, stimulus texture (Gabor filters), stimulus shape, stimulus flicker frequency, presentation speed of light stimuli, i.e., the time between two consecutive light stimuli, hold time of light stimulus, i.e., how long a light stimulus is presented (ON-time) or combinations thereof.
(47) Stimulus size and brightness may help determine the threshold of contrast sensitivity of the retina at the particular location of the light stimulus. Stimulus color may help determine if the patient is more sensitive to certain colors than to others (e.g., green), and/or whether the patient suffers from a particular color blindness. Stimulus duration may help determine overall alertness of the patient (e.g., determining fatigue in sleep apnea patients, equipment or machinery operators, employees working extended shifts) and reaction speed of the patient (e.g., especially in the elderly). This may be used, e.g., by motor vehicle license issuers to determine the fitness of a subject for operating a vehicle (subsurface vehicle, surface vehicle, aerial vehicle or space vehicle) or machinery, or by safety regulators to determine when an employee is too fatigued to continue working. Stimulus texture may be used to determine particular deficiencies, e.g., deficiencies related to P-cells and/or M-cells. Stimulus shape may help identify metamorphopsia, e.g., distortions in the visual field.
(48) The stimulus parameters/characteristics described herein could be supplied by an external configuration or initialization file to be loaded into a VORP apparatus, interactively chosen by the user/operator during VORP test setup, opportunistically chosen by the VORP program throughout the test, or communicated through another computer module, or any other means for computer system communication known in the art. For each exam the chosen stimulus characteristics/parameters used throughout the testing are documented and reported.
(49) The background color may be changed, e.g., to enable a bright yellow background with large blue stimuli to emulate blue-yellow perimetry, also known as short wave automated perimetry (SWAP), for earlier glaucoma detection.
(50) At least the following observables may be recorded via eye/gaze tracking during the VORP testing procedure: tested eccentricity, direction/path/trajectory of gaze change, velocity of gaze change, overall reaction speed or alertness of a subject, achieved accuracy of focusing on the stimulus if at all, overshooting/undershooting the stimulus location, or meandering around the stimulus location. The direction, path, trajectory and velocity of gaze change may be indicative of brain damage, concussions, and traumatic brain injury (TBI).
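Some of the listed observables could be extracted from timed gaze samples as in the following sketch. The units, the sampling format, and the overshoot criterion (gaze traveling beyond the target along the start-to-target axis) are illustrative assumptions; the disclosure names the observables without prescribing formulas.

```python
import math

def gaze_observables(samples, stimulus):
    """Extract simple observables from timed gaze samples after stimulus onset.

    samples: list of (t_seconds, x, y) gaze positions.
    stimulus: (x, y) target location, same (hypothetical) units as samples.
    Returns the trajectory path length, mean gaze speed, and an overshoot flag.
    """
    t0, x0, y0 = samples[0]
    sx, sy = stimulus[0] - x0, stimulus[1] - y0
    target_dist = math.hypot(sx, sy)

    # Path length of the gaze trajectory (sum of segment lengths).
    path = 0.0
    for (_, xa, ya), (_, xb, yb) in zip(samples, samples[1:]):
        path += math.hypot(xb - xa, yb - ya)

    # Mean gaze speed over the response window.
    duration = samples[-1][0] - t0
    mean_speed = path / duration if duration > 0 else 0.0

    # Overshoot: some sample traveled beyond the target along the
    # start-to-target axis before the gaze settled.
    overshoot = False
    if target_dist > 0:
        for _, x, y in samples:
            proj = ((x - x0) * sx + (y - y0) * sy) / target_dist
            if proj > target_dist:
                overshoot = True
    return {"path_length": path, "mean_speed": mean_speed, "overshoot": overshoot}
```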
(51) (11) Points 9 and 10 continue until the entire generated list/map of eccentricities has been tested.
(52) (12) Retesting of eccentricities can be performed to avoid/reduce false positives and false negatives, and/or for statistical purposes. Retesting of eccentricities can also be performed to further map out and/or delineate defects, e.g., visual field defects or scotomas.
(53) (13) The choice of eccentricities from the generated list/map can be opportunistic, random within the confines of feasibility, or following a controlled protocol. An example of opportunistic eccentricity generation might include more targeted stimulus generation in areas of the visual field where an initial defect is detected early in the test procedure. With smart/opportunistic placement of light stimuli, VORP is capable of checking most eccentricities on the predefined list/map, or, in addition or alternatively, of testing predominantly in areas where the subject has missed previous light stimuli so as to map out the location, shape and extent of potential scotomas (i.e., visual field defects).
(54) (14) The possibility that a subject cannot see an eccentricity or eccentricities or all eccentricities (e.g., blind, color-blind, or too dim or small or fast of stimuli) is also taken into account and programmatically captured.
(55) (15) Because this is opportunistic reaction perimetry, the overall testing time per eye may vary. For example, the VORP test time may depend on how quickly VORP completes a visual field screening given the subject's eye movement during the test in response to the presented visual stimuli. As such, an overall maximum cutoff time for the test may be introduced.
(56) Ocular, ophthalmic, neurological, physiological, psychological or behavioral conditions that may be tested for by VORP include but are not limited to: visual field testing (retinal testing, optic nerve testing, and cortical testing), TBI or other brain damage, concussions, early onset of AMD or diabetic retinopathy, early onset of glaucoma and cataract, epilepsy, strabismus, lazy-eye determination/assessment, metamorphopsia, general eye coordination, how quickly (i.e., speed) does the subject focus on any presented light stimulus (potential insight into TBI, PTSD, potential glaucoma, etc.), alertness, sleep apnea, how coordinated does the subject move his eye towards any presented light stimulus (potential insight into TBI, PTSD, potential glaucoma, etc.), ability to detect multiple defects in a single test (visual field defects and color blindness), visual field defects and reaction speed, saccades and micro-saccades, i.e., eye movement, ability to focus on or track light stimuli, which may be an indication of being under the influence of, e.g., legal or illegal drugs, intoxicants, biohazards (e.g., bacteria and viruses), biological substances, chemical substances, biochemical substances or radiation.
Statements Regarding Incorporation by Reference and Variations
(57) All references cited throughout this application, for example patent documents including issued or granted patents or equivalents; patent application publications; and non-patent literature documents or other source material; are hereby incorporated by reference herein in their entireties, as though individually incorporated by reference, to the extent each reference is at least partially not inconsistent with the disclosure in this application (for example, a reference that is partially inconsistent is incorporated by reference except for the partially inconsistent portion of the reference).
(58) The terms and expressions which have been employed herein are used as terms of description and not of limitation, and there is no intention in the use of such terms and expressions of excluding any equivalents of the features shown and described or portions thereof, but it is recognized that various modifications are possible within the scope of the invention claimed. Thus, it should be understood that although the invention has been specifically disclosed by preferred embodiments, exemplary embodiments and optional features, modification and variation of the concepts herein disclosed can be resorted to by those skilled in the art, and that such modifications and variations are considered to be within the scope of this invention as defined by the appended claims. The specific embodiments provided herein are examples of useful embodiments of the invention and it will be apparent to one skilled in the art that the invention can be carried out using a large number of variations of the devices, device components, and method steps set forth in the present description. As will be apparent to one of skill in the art, methods, software and apparatus/devices can include a large number of optional elements and steps. All art-known functional equivalents of materials and methods are intended to be included in this disclosure. Nothing herein is to be construed as an admission that the invention is not entitled to antedate such disclosure by virtue of prior invention.
(59) When a group of substituents is disclosed herein, it is understood that all individual members of that group and all subgroups are disclosed separately. When a Markush group or other grouping is used herein, all individual members of the group and all combinations and subcombinations possible of the group are intended to be individually included in the disclosure.
(60) It must be noted that as used herein and in the appended claims, the singular forms “a”, “an”, and “the” include plural reference unless the context clearly dictates otherwise. Thus, for example, reference to “a processor” includes a plurality of such processors and equivalents thereof known to those skilled in the art, and so forth. As well, the terms “a” (or “an”), “one or more” and “at least one” can be used interchangeably herein. It is also to be noted that the terms “comprising”, “including”, and “having” can be used interchangeably. The expression “of any of claims XX-YY” (wherein XX and YY refer to claim numbers) is intended to provide a multiple dependent claim in the alternative form, and in some embodiments is interchangeable with the expression “as in any one of claims XX-YY.”
(61) Unless defined otherwise, all technical and scientific terms used herein have the same meanings as commonly understood by one of ordinary skill in the art to which this invention belongs. Although any methods and materials similar or equivalent to those described herein can be used in the practice or testing of the present invention, the preferred methods and materials are described.
(62) Whenever a range is given in the specification, for example, a range of integers, a temperature range, a time range, a composition range, or concentration range, all intermediate ranges and subranges, as well as all individual values included in the ranges given are intended to be included in the disclosure. As used herein, ranges specifically include the values provided as endpoint values of the range. As used herein, ranges specifically include all the integer values of the range. For example, a range of 1 to 100 specifically includes the end point values of 1 and 100. It will be understood that any subranges or individual values in a range or subrange that are included in the description herein can be excluded from the claims herein.
(63) As used herein, “comprising” is synonymous and can be used interchangeably with “including,” “containing,” or “characterized by,” and is inclusive or open-ended and does not exclude additional, unrecited elements or method steps. As used herein, “consisting of” excludes any element, step, or ingredient not specified in the claim element. As used herein, “consisting essentially of” does not exclude materials or steps that do not materially affect the basic and novel characteristics of the claim. In each instance herein any of the terms “comprising”, “consisting essentially of” and “consisting of” can be replaced with either of the other two terms. The invention illustratively described herein suitably can be practiced in the absence of any element or elements, limitation or limitations which is/are not specifically disclosed herein.