SYSTEMS AND METHODS FOR ANIMATING A SIMULATED FULL LIMB FOR AN AMPUTEE IN VIRTUAL REALITY
20230023609 · 2023-01-26
Inventors
CPC classification
A61M21/00
HUMAN NECESSITIES
G06F3/011
PHYSICS
A61M2205/3592
HUMAN NECESSITIES
A61M2205/3553
HUMAN NECESSITIES
A61M2205/505
HUMAN NECESSITIES
A61M2205/3569
HUMAN NECESSITIES
A63B71/0622
HUMAN NECESSITIES
International classification
A61M21/00
HUMAN NECESSITIES
A63B71/06
HUMAN NECESSITIES
Abstract
A system and method for generating simulated full limb animations in real time based on sensor and tracking data. A computing environment for receiving and processing tracking data from one or more sensors, for mapping tracking data onto a 3D model having a skeletal hierarchy and a surface topology, and for rendering an avatar for display in virtual reality. A method for animating a full-bodied avatar from tracking data collected from an amputee. A means for determining, predicting, or modulating movements an amputee intends to make with his or her simulated full limb. A modified inverse kinematics method for arbitrarily and artificially overriding a position and orientation of a tracked end effector. Synchronous virtual reality therapeutic activities with predefined movement patterns that may modulate animations.
Claims
1. A method of animating an avatar performing an activity in virtual reality, the method comprising: accessing avatar skeletal data; identifying a missing limb in the avatar skeletal data; accessing a set of movement rules corresponding to the activity; generating simulated full limb data based on the set of movement rules and the avatar skeletal data; and rendering the avatar skeletal data with the simulated full limb data.
2. The method of claim 1, wherein the set of movement rules comprises symmetry rules.
3. The method of claim 2, wherein generating the simulated full limb data based on the set of movement rules and the avatar skeletal data further comprises generating the simulated full limb skeletal data based on reflecting position data for a full limb over an axis.
4. The method of claim 1, wherein the set of movement rules comprises predefined position rules.
5. The method of claim 4, wherein the generating simulated full limb data based on the set of movement rules and the avatar skeletal data further comprises generating the simulated full limb skeletal data based on a predefined position for the activity.
6. The method of claim 1, wherein the set of movement rules comprises prop position rules.
7. The method of claim 6, wherein the generating simulated full limb data based on the set of movement rules and the avatar skeletal data further comprises generating the simulated full limb skeletal data based on a relational position for a full limb.
8. The method of claim 1, wherein the avatar skeletal data is based on received position and orientation data for a plurality of body parts.
9. The method of claim 1, wherein the rendering the avatar skeletal data with the simulated full limb data comprises overriding a portion of the avatar skeletal data with the simulated full limb data.
10. The method of claim 1, wherein the accessing the set of movement rules corresponding to the activity comprises determining a movement pattern associated with the activity and accessing the set of movement rules corresponding to the movement pattern.
11.-26. (canceled)
27. A method of providing virtual reality therapy for an amputee, comprising: receiving movement data of an intact limb and an amputated limb; predicting synchronous movements based on the movement data of the intact limb; and generating an avatar for the amputee based on the synchronous movements in place of the amputated limb.
28. The method of claim 27, wherein the predicting the synchronous movements is based on a relation between the intact limb and the amputated limb.
29. The method of claim 28, wherein the relation is a tether, a prop, or a symmetry between the two limbs that allows the position and orientation of one limb to determine the position and orientation of a partner limb.
30. The method of claim 27, wherein generating the avatar comprises generating a virtual image, a virtual reality image, or an augmented reality image.
31. A method for overriding an end effector for generating an avatar of a user, comprising: collecting position and orientation data for a first limb of the user; generating a virtual prop with a first contact region and a second contact region; determining a position and orientation of the first contact region with the first limb; and solving a position of a second limb based on the second contact region.
32. The method of claim 31, wherein the end effector of the second limb is overridden by the second contact region.
33. The method of claim 31, wherein the virtual prop extends in a direction that is perpendicular to the first limb.
34. The method of claim 31 further comprising assigning the position and orientation data of the first limb, or portion thereof, as an end effector, and solving a position and orientation of the first limb from the end effector.
35. The method of claim 31, wherein each contact region of the virtual prop is animated as a hand grip or a foot placement position.
36. The method of claim 31, wherein the first contact region and second contact region are connected by a tether that is at least one of the following: rigid, flexible, and stretchable.
37. The method of claim 36, wherein a constraint between both contact regions and the tether permits only an angle of between 0 and 45 degrees to form between the tether and at least one of the following: the first contact region and the second contact region.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
[0061]
[0062] The virtual mirror 100 of
[0063] Mirrored data that is duplicative may be used to inform the animations that are rendered. For instance, duplicative mirrored data may be combined with tracked data according to a weighting system, and the resulting combination, e.g., mixed data, is used to deform a 3D model that forms the basis of a rendered display. Mixed data results from weighted averages of tracked data and mirrored data for the same body part, adjacent body parts, or some combination thereof. The mixed data may be weighted evenly as 50% tracked data and 50% mirrored data. Alternatively, the weighting can be anywhere between 0-100% for either the tracked data or the mirrored data, with the remaining balance assigned to the other data set. This weighting system remedies issues that could arise if, for example, the tracked position of an elbow of a user's amputated arm did not align with the mirrored data for a forearm sourced from a user's intact partner arm. Rather than display an arm that is disconnected or inappropriately attached, the weighting system generates an intact and properly configured arm that is positioned according to a weighted combination of the tracking data and the mirrored data. This process may be facilitated by a 3D model, onto which tracked data, mirrored data, and mixed data are mapped, that is restricted by a skeletal structure that only allows anatomically correct positions and orientations for each limb and body part. Any position and orientation data that would position or orient the 3D model into an anatomically incorrect position may be categorically excluded or blended with other data until an anatomically correct position is achieved.
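For illustration only, the following sketch shows one way the per-joint weighting described above could be computed; the joint values, weight, and function name are hypothetical and are not taken from this specification.

```python
# Illustrative sketch: mixed data as a weighted average of tracked and mirrored
# data for the same body part. Names and values are hypothetical.
import numpy as np

def mix_joint_data(tracked_pos, mirrored_pos, tracked_weight=0.5):
    """Blend tracked and mirrored positions for one joint.

    tracked_weight may range from 0.0 (mirrored data only) to 1.0
    (tracked data only); 0.5 weights both data sets evenly.
    """
    tracked_pos = np.asarray(tracked_pos, dtype=float)
    mirrored_pos = np.asarray(mirrored_pos, dtype=float)
    return tracked_weight * tracked_pos + (1.0 - tracked_weight) * mirrored_pos

# Example: an elbow whose tracked position disagrees with mirrored data
# reflected from the intact partner arm.
tracked_elbow = [0.42, 1.10, 0.18]   # meters, from the amputated arm's sensor
mirrored_elbow = [0.40, 1.15, 0.20]  # reflected from the intact partner arm
blended = mix_joint_data(tracked_elbow, mirrored_elbow, tracked_weight=0.5)
```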
[0064] The manner in which duplicative data is compiled may vary with the activity a user is performing in virtual reality. During some activities, the VR engine may preferentially render for display one set of duplicative data over the other set rather than using a weighted average. In one example, the VR engine may use an alignment tool to determine how to parse duplicative data. For instance, the VR engine may receive tracking data for a first arm and tracking data for an elbow of a second arm, the virtual mirror may generate mirrored data for an elbow position and orientation of the first arm and mirrored data for a position and orientation of the second arm, and the VR engine may utilize an alignment tool to determine which set of duplicative data is used to render an avatar 101. The alignment tool may come in the form of a prop 106 that is held by two hands. In this example, a user may be physically gripping the prop 106 with their first arm, e.g., tracked arm 102. With this alignment tool, the VR engine may preferentially render an avatar with tracking data for the first arm and mirrored data for the second arm, e.g., virtual simulated full limb 103. The VR engine may disregard tracking data from the elbow of the second arm that would position the second arm such that it could not grip a virtual rendering of the prop 106 and may also disregard mirrored data for the first arm 102 that would do the same. This preferential rendering is especially useful when a user is performing an activity where they contact or grip an object.
[0065] Although previous examples have focused on the generation of mirrored data for limbs and the parsing between duplicative data for two limbs for simplicity's sake, it should be understood that the mirror may generate mirrored data for any body part for which tracking data is received. For instance, tracking data for the position and orientation of shoulders, torsos, and hips may be utilized by the virtual mirror 100 to generate mirrored data of those body parts. Alternatively, the virtual mirror 100 may be configured to only establish a symmetry between two specific portions, regions, or sections of a user. The virtual mirror 100 may only generate mirrored data for a specific limb, while not providing mirrored copies of any other body part. For example, the virtual mirror 100 may establish a symmetry between the two limbs, such that the position and orientation of one is always mirrored by its partner's position and orientation, while the remainder of an avatar is positioned from tracking data without the assistance of the virtual mirror 100.
[0066] The nature of the mirrored copies depends on the position and orientation of the virtual mirror 100. In the example illustrated by
[0067] A virtual mirror 100 that translates may translate across a pivot point 107, may translate across one or more axes of movement, or some combination thereof. In one example, the position and orientation of the virtual mirror 100 is controlled by a prop 106. As a user is tracked as moving the prop 106, the virtual mirror 100 moves as if it were attached to the prop 106 at the pivot point 107. The prop 106 may fix the distance between two arms, and the prop may fix the virtual mirror 100 at a set distance from the tracked limb that is adhered to the prop. In some embodiments, a prop may not be used, and the position and orientation of the mirror may depend on a tracked limb directly. In one example, a mirror is positioned at a center point, e.g., pivot point 107, that aligns with a midline of an avatar 101. If a limb is tracked as crossing the midline, the mirror may flip and animate a limb as crossing. The height of the pivot point may be at a mean between the heights of a user's limbs. The angle of the tracked limb may determine the relative orientation of the limbs as they cross, e.g., one on top of the other. In some instances, the mirrored data may be repositioned according to the orientation of the tracked limb. For instance, if tracking data for an arm indicates that the thumb is pointing upwards and the arm is crossing the chest, then the mirrored data for a virtual simulated full limb may be positioned such that it is above the tracked arm and shows no overlap. Likewise, if the thumb is pointed down, the mirrored data will be adjusted vertically and the angle adjusted accordingly, such that a simulated full limb is positioned beneath the tracked arm. In some instances, the VR engine may not only utilize tracking data to generate mirrored data but may also simply copy one or more features of the tracked limb's position or movement. In such cases, the VR engine may generate parallel data in addition to mirrored data, and an avatar may be rendered according to some combination of tracked data, mirrored data, and parallel data, along with anatomical adjustments that prevent unrealistic overlap, position, or orientation.
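The core operation a virtual mirror performs, reflecting a tracked point across a plane through the pivot point, can be sketched as follows. This is a minimal, assumed implementation for illustration; the plane-normal convention, coordinates, and function name are not drawn from the specification.

```python
import numpy as np

def mirror_point(tracked_point, pivot, normal):
    """Reflect a tracked position across a mirror plane.

    The plane passes through `pivot` (e.g., a pivot point at the avatar's
    midline) and has normal vector `normal`.
    """
    p = np.asarray(tracked_point, dtype=float)
    pivot = np.asarray(pivot, dtype=float)
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    distance = np.dot(p - pivot, n)   # signed distance from the mirror plane
    return p - 2.0 * distance * n     # reflected point on the other side

# Example: a tracked hand to the right of the midline yields mirrored data
# for the simulated full limb on the left.
mirrored_hand = mirror_point([0.35, 1.2, 0.4], pivot=[0.0, 1.0, 0.4], normal=[1, 0, 0])
```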
[0068]
[0069]
[0070]
[0071] Renderings of opposite movements of a tracked limb may be useful for rendering animations for a user performing a synchronized activity or an activity having synchronized control mechanisms. Although the positions are rendered as opposite along the Y-axis 211 in this example, the rotational orientation of the tracked arm 102 may be used to generate a rotational orientation of a virtual simulated full limb that is either mirrored or parallel. For instance, the palms of both arms may be rendered as facing towards the body in a mirrored fashion. Alternatively, the palms of the arms may be pointing in the same direction in a parallel fashion. The manner in which rotational orientation of a tracked limb 102 is used to determine rotational orientation of a virtual simulated full arm 103 may vary from one activity to another.
[0072]
[0073]
[0074] In the examples illustrated by
End Effector Overridden Inverse Kinematics (“EEOIK”)
[0075] In some embodiments, an inverse kinematics method that utilizes an overridden end effector is used to solve a position and orientation of a simulated full limb. An end effector may be overridden by arbitrarily and artificially altering its position and orientation. This may be useful when rendering a full body avatar for a user having an amputated limb or body part. For instance, tracking data corresponding to an end effector of the amputated limb may be overridden by lengthening or extending the end effector to a new position and orientation. The artificially and arbitrarily extended end effector allows the VR engine to render animations for a complete limb from an amputated limb's tracking data.
[0076] A position and orientation of an end effector may be overridden using a linkage, a tether, a bounding box, or some other type of accessed constraint. A linkage, tether, or bounding box may fix two limbs or body parts according to a distance, an angle, or some combination thereof, or may constrain two limbs or body parts within the boundaries of a bounding box, whereby the position and orientation of a tracked limb's end effector may determine what position and orientation a virtual simulated full limb's end effector is overridden to. For instance, a linkage or a tether may establish a minimum distance, a maximum distance, or some combination thereof between two limbs. As a tracked limb is tracked as moving relative to a virtual simulated full limb, the minimum and/or maximum distance thresholds may trigger a virtual simulated full limb to follow or be repelled by the tracked limb, whereby the tracked limb's end effector determines the overridden position and orientation of a simulated full limb's end effector. In another example, a linkage or tether establishes one or more catch angles between a tracked limb and a simulated full limb, whereby rotations of the tracked limb are translated into motion of a simulated full limb at the catch angles. In these examples, tracking data indicating movement of the tracked limb may not be translated to the animations of a virtual simulated full limb until the linkage or tether has reached its maximum distance or angle between the two limbs, after which point a simulated full limb may trail behind or be repelled by the movements of the tracked limb. In one example, a user is provided with a set of nunchucks in virtual reality, whereby the chain between the grips establishes a maximum distance between the hand of the tracked limb and the hand of a virtual simulated full limb, an interaction between the chain and the hand grips establishes a maximum angle, and the size of the hand grips establishes a minimum distance. In this example, the movements of the tracked limb are translated to movements of a virtual simulated full limb when any one of these thresholds is met, thereby enabling the position and orientation of the tracked limb's end effector to at least partially determine the overridden position and orientation of a virtual simulated full limb's end effector. A bounding box may establish a field of positions that a virtual simulated full limb can occupy relative to a tracked limb; for example, if movement of the tracked limb would otherwise carry the simulated full limb's end effector outside the bounding box, the end effector may be overridden to the nearest position within the box.
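As one hedged illustration of how a tether's minimum and maximum distances could override an end effector, the sketch below clamps the simulated full limb's end effector along the line joining the two end effectors. The thresholds, degenerate-case handling, and function name are assumptions made for illustration, not taken from the specification.

```python
import numpy as np

def tether_override(tracked_end, simulated_end, min_dist, max_dist):
    """Override the simulated limb's end effector using a tether constraint.

    The simulated end effector is left alone while it lies between the
    minimum and maximum tether distances from the tracked end effector;
    otherwise it is pulled toward (or pushed away from) the tracked end
    effector until the nearest threshold is satisfied.
    """
    tracked_end = np.asarray(tracked_end, dtype=float)
    simulated_end = np.asarray(simulated_end, dtype=float)
    offset = simulated_end - tracked_end
    dist = np.linalg.norm(offset)
    if dist == 0.0:
        # Degenerate case: push the simulated end effector out along an arbitrary axis.
        return tracked_end + np.array([min_dist, 0.0, 0.0])
    direction = offset / dist
    clamped = min(max(dist, min_dist), max_dist)   # follow or repel at the thresholds
    return tracked_end + direction * clamped
```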
[0077] A position and orientation of an end effector may be overridden using a physical prop, a virtual prop, or both. A prop may fix the relative position of two end effectors. For instance, a prop may have two grip or contact points, whereby tracking data indicating movements of one grip point or one contact point determines a position and orientation of the second grip or contact point. A prop such as this may beneficially provide the illusion that an amputee is in control of their virtual simulated full limb. For instance, an amputee contacting a first grip or contact point of the prop will be provided with a visual indication of where their amputated limb should be positioned and how it should be oriented, e.g., as gripping or contacting the second grip or contact point. As an amputee instructs their intact limb to move, the prop will move and alter the position and orientation of a virtual simulated full limb. Once an amputee understands how the prop moves the second grip or contact point, they will be able to predict the movement animations the VR engine provides for a virtual simulated full limb based on the movements they make with their intact limb. Once an amputee can predict the corresponding movements, they can instruct their amputated limb to make those same movements and the VR engine will beneficially provide animations of a virtual simulated full limb making those same movements. As such, the prop provides predictable animations for a virtual simulated full limb that allow an amputee to feel a sense of control over their simulated full limb.
[0078] A prop may provide animations for a virtual simulated full limb using a modified inverse kinematics method. The modified inverse kinematics method may utilize a tracked limb with complete tracking data including an end effector, a virtual simulated full limb with incomplete tracking data (e.g., tracking data available only from a remaining portion of a limb, if at all), and a prop having two grip or contact points. The method may assign the tracked end effector as gripping or contacting a first section of the prop. Movements of the tracked end effector may be translated into movements of the prop.
[0079] A second section of the prop may serve as an overridden end effector for the tracked limb's amputated partner. For example, tracking data for an amputated limb's end effector that is communicated to the VR engine may be arbitrarily and artificially overridden such that the end effector is reassigned to the second section of the prop. The position and orientation of a virtual simulated full limb may then be solved using the second section of the prop as an end effector, while the position and orientation of the tracked limb may be solved using the end effector indicated by the tracking data. This allows an intact limb to effectively control the position of an animated virtual simulated full limb by manipulating the position of the prop and thereby provides a sense of volition over the animated virtual simulated full limb that can help alleviate phantom limb pain. A modified inverse kinematics method such as this may be referred to as an end effector override inverse kinematics ("EEOIK") method. In one example, the VR engine receives tracking data indicating that a tracked limb is contacting a first contact point of an object and the VR engine then extends the end effector of the simulated full limb using the EEOIK method such that it artificially extends to a second contact point on the object. The tracking data may then directly drive animations for both the tracked arm and the prop, and the tracking data may indirectly drive the animations of a virtual simulated full limb through the prop.
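A minimal sketch of the EEOIK idea follows: the tracked hand's pose locates the prop's second contact region, which is then assigned as the overridden end effector target for the simulated full limb. The `ik_solver` callable, the grip offset, and the argument names are placeholders standing in for whatever inverse kinematics routine and prop geometry a given engine actually provides.

```python
import numpy as np

def second_grip_position(tracked_hand_pos, tracked_hand_rot, grip_offset_local):
    """Locate the prop's second contact region from the tracked hand.

    `tracked_hand_rot` is a 3x3 rotation matrix for the tracked hand, and
    `grip_offset_local` is the fixed offset from the first grip point to the
    second grip point in the prop's local frame (e.g., the length of a bat).
    """
    return (np.asarray(tracked_hand_pos, dtype=float)
            + np.asarray(tracked_hand_rot, dtype=float) @ np.asarray(grip_offset_local, dtype=float))

def animate_simulated_limb(skeleton, tracked_hand_pos, tracked_hand_rot, ik_solver):
    # Overridden end effector: the simulated full limb is solved as if its hand
    # were gripping the prop's second contact region.
    overridden_end_effector = second_grip_position(
        tracked_hand_pos, tracked_hand_rot, grip_offset_local=[0.0, 0.0, 0.8])
    # `ik_solver` is a placeholder for the engine's inverse kinematics routine;
    # it positions the simulated limb's joints to reach the target.
    return ik_solver(skeleton, end_effector_target=overridden_end_effector)
```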
[0080]
[0081]
[0082]
[0083] In an example illustrated by
[0084] Also in an example illustrated by
Synchronous Activities
[0085] The modified inverse kinematics method of the present disclosure may be customized for specific types of activity. Activities may require symmetrical movement, relational movement, aligned movement, tethered movement, patterned movements, item gripping, item manipulation, or specific limb placement. Each type of activity may utilize a different inverse kinematics method to animate a virtual simulated full limb that moves in a predictable and seemingly controlled manner to perform a given activity for rehabilitation. The efficacy of a particular method may vary from activity to activity. In some instances, multiple methods may be weighted and balanced to determine virtual simulated full limb animations.
[0086] Humans are adept at moving a single limb carefully and deliberately while its partner limb remains stationary. However, it is often difficult to move two partner limbs, e.g., two arms, two hands, two feet, two legs, etc., without some form of synchronization. This is one reason why it is often comically difficult to rub one's belly and pat one's head simultaneously. The specific type of synchronization with which each limb moves may depend on the activity being performed. When someone kicks a soccer ball, one foot plants itself for balance while the other kicks the ball; when someone shoots a basketball, two hands work in sync; and when someone rides a bike, flies a kite, paddles a kayak, claps, sutures, knits, or even dances, their limbs move in synchronization. Often, the movements of one partner limb can determine the corresponding movement required by the other partner limb, and at the very least, partner limbs can inform what movements the other limb ought to make.
[0087] The modified inverse kinematics solution disclosed herein may utilize information about the activity being performed, e.g., what kind of symmetry frequently occurs or is required to occur, to assist in positioning a virtual simulated full limb. In some instances, the type of symmetry may fix animations such that the tracked limb determines the movement of a virtual simulated full limb. Alternatively, the type of symmetry may only influence or inform the animations that are provided for a virtual simulated full limb. In some embodiments, each activity may feature a predefined movement pattern, whereby the animations provided for a user may be modulated by the predefined movement pattern. For example, tracking data that traverses near the predefined movement pattern may be partially adjusted to more closely align with the trajectory of the predefined movement pattern, or the tracking data may be completely overridden to the trajectory of the predefined movement pattern. This may be useful for increasing the confidence of a user and may also help nudge them towards consistently making the desired synchronous movements.
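One possible, assumed implementation of such modulation is sketched below: tracking data very close to a predefined movement pattern is overridden to it, data somewhat farther away is partially blended toward it, and data far from the pattern is left unchanged. The radii, the sampling of the pattern as discrete points, and the function name are illustrative only.

```python
import numpy as np

def modulate_toward_pattern(tracked_pos, pattern_points, snap_radius, blend_radius):
    """Nudge a tracked position toward a predefined movement pattern.

    Assumes blend_radius > snap_radius. Within `snap_radius` of the pattern the
    tracked point is overridden to the pattern; within `blend_radius` it is
    partially adjusted toward it; otherwise it is left unchanged.
    """
    tracked_pos = np.asarray(tracked_pos, dtype=float)
    pattern_points = np.asarray(pattern_points, dtype=float)
    dists = np.linalg.norm(pattern_points - tracked_pos, axis=1)
    nearest = pattern_points[np.argmin(dists)]
    d = dists.min()
    if d <= snap_radius:
        return nearest                                  # fully overridden to the pattern
    if d <= blend_radius:
        t = (blend_radius - d) / (blend_radius - snap_radius)
        return (1.0 - t) * tracked_pos + t * nearest    # partially adjusted
    return tracked_pos                                  # too far away; leave as tracked
```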
[0088]
[0089]
[0090]
[0091] When the VR engine receives tracking data indicating that the tracked arm is moving away from the body's midline or towards the body's midline, a simulated full arm is animated as moving in the same direction such that the accordion is stretched and compressed. This type of movement may traverse a linear axis 802. This type of rule-based symmetry is similar to the animations that would be generated with a virtual mirror at a user's midline, whereby an arm moving towards the mirror generates mirrored data of a virtual simulated full limb moving towards the mirror and vice versa. In addition to this linear axis 802, a user may move the accordion along a curved axis 803. For instance, if the VR engine receives tracking data indicating that the tracked limb is moving down and the thumb is rotating from an up position to an out position, then a mirrored copy of this movement may be animated for a simulated full limb, such that the accordion traverses a curved axis 803 such as illustrated in
[0092]
[0093] Some embodiments may utilize a VR engine to perform one or more parts of process 250, e.g., as part of a VR application, stored and executed by one or more of the processors and memory of a headset, server, tablet and/or other device. For instance, VR engine may be incorporated in one or more of head-mounted display 201 and clinician tablet 210 of
[0094] At input 252, headset sensor data may be captured and input into, e.g., a VR engine. Headset sensor data may be captured, for instance, by a sensor on the HMD, such as sensor 202A on HMD 201 as depicted in
[0095] At input 254, body sensor data—e.g., hand, arm, back, legs, ankles, feet, pelvis and other sensor data—may be captured and input in the VR engine. Hand and arm sensor data may be captured, for instance, by sensors affixed to a patient's hands and arms, such as sensors 202 as depicted in
[0096] At input 256, data from sensors placed on prosthetics and end effectors may be captured and input into the VR engine. Generally, sensors placed on prosthetics and end effectors may be the same as sensors affixed to a patient's body parts, such as sensors 202 and 202B as depicted in
[0097] With each of inputs 252, 254, and 256, data from sensors may be input into the VR engine. Sensor data may comprise location and rotation data in relation to a central sensor such as a sensor on the HMD or a sensor on the back in between the shoulder blades. For instance, each sensor may measure a three-dimensional location and measure rotations around three axes. Each sensor may transmit data at a predetermined frequency, such as 60 Hz or 200 Hz.
[0098] At step 260, the VR engine determines position and orientation (P&O) data from sensor data. For instance, data may include a location in the form of three-dimensional coordinates and rotational measures around each of the three axes. The VR engine may produce virtual world coordinates from these sensor data to eventually generate skeletal data for an avatar. In some embodiments, sensors may feed the VR engine raw sensor data. In some embodiments, sensors may input filtered sensor data into sensor engine 620. For instance, the sensors may process sensor data to reduce transmission size. In some embodiments, sensor 202 may pre-filter or clean “jitter” from raw sensor data prior to transmission. In some embodiments, sensor 202 may capture data at a high frequency (e.g., 200 Hz) and transmit a subset of that data, e.g., transmitting captured data at a lower frequency. In some embodiments, VR engine may filter sensor data initially and/or further.
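By way of illustration only, the fragment below shows one common jitter filter (an exponential moving average) and a simple decimation from a higher capture rate to a lower transmission rate. The specification does not prescribe any particular filter or decimation scheme, so both, along with the parameter values, are assumptions.

```python
def smooth_samples(samples, alpha=0.2):
    """Reduce jitter with a simple exponential moving average, per coordinate.

    `samples` is a sequence of coordinate tuples/lists; `alpha` controls how
    strongly each new sample influences the smoothed output.
    """
    smoothed = []
    previous = None
    for sample in samples:
        if previous is None:
            previous = list(sample)
        else:
            previous = [alpha * s + (1.0 - alpha) * p for s, p in zip(sample, previous)]
        smoothed.append(previous)
    return smoothed

def downsample(samples, capture_hz=200, transmit_hz=60):
    """Transmit a subset of captured samples at (approximately) a lower frequency."""
    step = max(1, round(capture_hz / transmit_hz))
    return samples[::step]
```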
[0099] At step 262, the VR engine generates avatar skeletal data from the determined P&O data. Generally, a solver employs inverse kinematics (IK) and a series of local offsets to constrain the skeleton of the avatar to the position and orientation of the sensors. The skeleton then deforms a polygonal mesh to approximate the movement of the sensors. An avatar includes virtual bones and comprises an internal anatomical structure that facilitates the formation of limbs and other body parts. Skeletal hierarchies of these virtual bones may form a directed acyclic graph (DAG) structure. Bones may have multiple children, but only a single parent, forming a tree structure. Two bones may move relative to one another by sharing a common parent.
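A minimal sketch of such a skeletal hierarchy follows; the bone names are illustrative, and the structure simply enforces the single-parent, multiple-children relationship described above, which yields a tree (a special case of a directed acyclic graph).

```python
from dataclasses import dataclass, field

@dataclass
class Bone:
    """A virtual bone in the avatar's skeletal hierarchy.

    Each bone has at most one parent and any number of children.
    """
    name: str
    parent: "Bone | None" = None
    children: list = field(default_factory=list)

    def add_child(self, child: "Bone") -> "Bone":
        child.parent = self
        self.children.append(child)
        return child

# Two bones move relative to one another by sharing a common parent.
spine = Bone("spine")
left_upper_arm = spine.add_child(Bone("upper_arm_l"))
right_upper_arm = spine.add_child(Bone("upper_arm_r"))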
[0100] At step 264, the VR engine identifies the missing limb, e.g., the amputated limb that will be rendered as a virtual simulated full limb. In some embodiments, identifying the missing limb may be performed prior to generating avatar skeletal data or even receiving data. For instance, a therapist (or patient) may identify a missing limb in a profile or settings prior to therapy or VR games and activities, e.g., when using an “amputee mode” of the VR application. In some embodiments, identifying the missing limb may be performed by analyzing skeletal data to identify missing sensors or unconventionally positioned sensors. In some embodiments, identifying the missing limb may be performed by analyzing skeletal movement data to identify unconventional movements.
[0101] At step 266, the VR engine determines which activity (e.g., game, task, etc.) is being performed and determines a corresponding movement pattern. An activity may require, e.g., synchronized movements, symmetrical movement, relational movement, aligned movement, tethered movement, patterned movements, item gripping, item manipulation, and/or specific limb placement. For instance, if the activity is a virtual mirror, like the activity depicted in
[0102] At step 270, the VR engine determines what rules the activity's movement pattern requires. Some synchronized movements and/or symmetrical movements may require symmetry rules. For example, generating simulated full limb movements with a virtual mirror, e.g., depicted in
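The dispatch from an activity's movement pattern to a rule set (steps 266 and 270) could be organized as a simple lookup, sketched below with hypothetical activity and rule names drawn from the examples in this disclosure (virtual mirror, steering wheel, baseball bat); the mapping itself is illustrative, not normative.

```python
# Hypothetical mapping from an activity's movement pattern to the rule set used
# to generate simulated full limb data.
MOVEMENT_PATTERN_RULES = {
    "virtual_mirror": "symmetry_rules",
    "steering_wheel": "predefined_position_rules",
    "baseball_bat": "prop_position_rules",
}

def rules_for_activity(activity: str) -> str:
    """Return the name of the rule set an activity's movement pattern requires."""
    try:
        return MOVEMENT_PATTERN_RULES[activity]
    except KeyError:
        raise ValueError(f"No movement rules defined for activity '{activity}'")
```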
[0103] If the VR engine determines that the activity's movement pattern requires symmetry at step 270, the VR engine accesses symmetry rules for a simulated full limb at step 272. Symmetry rules may describe rules to generate position and orientation data for a simulated full limb in terms of symmetrical movement of an opposite (full) limb. For example, the VR engine may determine that symmetry rules may be required to generate simulated full limb movements for an activity like a virtual mirror, e.g., depicted in
[0104] At step 273, the VR engine determines simulated full limb data based on symmetry rules. For example, the VR engine may generate simulated full limb movements for an activity like a virtual mirror, e.g., depicted in
[0105] If the VR engine determines that the activity's movement pattern requires a predefined position at step 270, the VR engine accesses predefined position rules for a simulated full limb at step 274. For example, the VR engine may determine that predefined position rules may be required to generate simulated full limb movements for, e.g., a steering wheel activity (depicted in
[0106] At step 275, the VR engine determines simulated full limb data based on predefined position rules. For example, the VR engine may generate simulated full limb movements for an activity like turning a steering wheel, e.g., depicted in
[0107] If the VR engine determines that the activity's movement pattern requires a prop at step 270, the VR engine accesses prop position rules for a simulated full limb at step 276. For instance, prop position rules may be required to generate simulated full limb movements for activities like swinging a baseball bat (depicted in
[0108] At step 277, the VR engine determines simulated full limb data based on prop position rules. For example, the VR engine may generate simulated full limb movements for an activity like swinging a baseball bat, e.g., depicted in
[0109] At step 280, after performance of step 273, 275, and/or 277, the VR engine overrides avatar skeletal data with simulated full limb data. In some embodiments, the VR engine may generate P&O data for a virtual simulated full limb based on the rule, which may be converted to skeletal data. For instance, simulated full limb position and orientation may be substituted for a body part with improper, abnormal, limited, or no sensor data or tracking data. For instance, using a symmetry rule, translated and adjusted left arm data may supplant right arm data. For example, using a predefined position rule, known position and orientation data for a left hand may supplant the received left hand P&O data. For instance, using a prop position rule, position and orientation data for an amputated left arm determined based on relation to P&O data for a full right arm, may supplant the received left arm P&O data. In some embodiments, the VR engine may generate skeletal data based on the rule and not generate P&O data for a simulated full limb. In some embodiments, the VR engine may generate skeletal data for a simulated full limb based on kinematics and/or inverse kinematics.
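A minimal sketch of the override at step 280 is shown below, assuming skeletal data keyed by bone name; the dictionary representation and function name are assumptions made purely for illustration.

```python
def override_skeletal_data(avatar_skeletal_data: dict, simulated_limb_data: dict) -> dict:
    """Supplant the avatar's skeletal data for the missing limb.

    Both arguments are assumed to map bone names to position-and-orientation
    entries; keys present in the simulated full limb data replace the
    corresponding avatar entries, leaving the rest of the avatar untouched.
    """
    overridden = dict(avatar_skeletal_data)
    overridden.update(simulated_limb_data)
    return overridden
```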
[0110] At step 282, the VR engine renders an avatar, with a simulated full limb, based on overridden skeletal data. For example, the VR engine may render and animate an avatar using both arms to kayak, or both legs to bicycle, or both hands to steer a car.
[0111]
[0112] Clinician tablet 210 may be configured to use a touch screen, a power/lock button that turns the component on or off, and a charger/accessory port, e.g., USB-C. For instance, pressing the power button on clinician tablet 210 may power on the tablet or restart the tablet. Once clinician tablet 210 is powered on, a therapist or supervisor may access a user interface and be able to log in; add or select a patient; initialize and sync sensors; select, start, modify, or end a therapy session; view data; and/or log out.
[0113] Headset 201 may comprise a power button that turns the component on or off, as well as a charger/accessory port, e.g., USB-C. Headset 201 may also provide visual feedback of virtual reality applications in concert with the clinician tablet and the small and large sensors.
[0114] Charging headset 201 may be performed by plugging a headset power cord into the storage dock or an outlet. To turn on headset 201 or restart headset 201, the power button may be pressed. A power button may be on top of the headset. Some embodiments may include a headset controller used to access system settings. For instance, a headset controller may be used only in certain troubleshooting and administrative tasks and not necessarily during patient therapy. Buttons on the controller may be used to control power, connect to headset 201, access settings, or control volume.
[0115] The large sensor 202B and small sensors 202 are equipped with mechanical and electrical components that measure position and orientation in physical space and then translate that information to construct a virtual environment. Sensors 202 are turned off and charged when placed in the charging station. Sensors 202 turn on and attempt to sync when removed from the charging station. The sensor charger acts as a dock to store and charge the sensors. In some embodiments, sensors may be placed in sensor bands on a patient. Sensor bands 205, as depicted in
[0116] As shown in illustrative
[0117] HMD 201 is a piece central to immersing a patient in a virtual world in terms of presentation and movement. A headset may allow, for instance, a wide field of view (e.g., 110°) and tracking along six degrees of freedom. HMD 201 may include cameras, accelerometers, gyroscopes, and proximity sensors. VR headsets typically include a processor, usually in the form of a system on a chip (SoC), and memory. In some embodiments, headsets may also use, for example, additional cameras as safety features to help users avoid real-world obstacles. HMD 201 may comprise more than one connectivity option in order to communicate with the therapist's tablet. For instance, an HMD 201 may use an SoC that features WiFi and Bluetooth connectivity, in addition to an available USB connection (e.g., USB Type-C). The USB-C connection may also be used to charge the built-in rechargeable battery for the headset.
[0118] A supervisor, such as a health care provider or therapist, may use a tablet, e.g., tablet 210 depicted in
[0119] In some embodiments, such as depicted in
[0120] A wireless transmitter module (WTM) 202B may be worn on a sensor band 205B that is laid over the patient's shoulders. WTM 202B sits between the patient's shoulder blades on their back. Wireless sensor modules 202 (e.g., sensors or WSMs) are worn just above each elbow, strapped to the back of each hand, and on a pelvis band that positions a sensor adjacent to the patient's sacrum on their back. In some embodiments, each WSM communicates its position and orientation in real-time with an HMD Accessory located on the HMD. Each sensor 202 may learn its relative position and orientation to the WTM, e.g., via calibration.
[0121] The HMD accessory may include a sensor 202A that may allow it to learn its position relative to WTM 202B, which then allows the HMD to know where in physical space all the WSMs and WTM are located. In some embodiments, each sensor 202 communicates independently with the HMD accessory which then transmits its data to HMD 201, e.g., via a USB-C connection. In some embodiments, each sensor 202 communicates its position and orientation in real-time with WTM 202B, which is in wireless communication with HMD 201.
[0122] A VR environment rendering engine on HMD 201 (sometimes referred to herein as a “VR application”), such as the Unreal Engine™, uses the position and orientation data to create an avatar that mimics the patient's movement.
[0123] A patient or player may “become” their avatar when they log in to a virtual reality game. When the player moves their body, they see their avatar move accordingly. Sensors in the headset may allow the patient to move the avatar's head, e.g., even before body sensors are placed on the patient. A system that achieves consistent high-quality tracking facilitates the patient's movements to be accurately mapped onto an avatar.
[0124] Sensors 202 may be placed on the body, e.g., of a patient by a therapist, in particular locations to sense and/or translate body movements. The VR engine can use measurements of position and orientation of sensors placed in key places to determine movement of body parts in the real world and translate such movement to the virtual world. In some embodiments, a VR system may collect data for therapeutic analysis of a patient's movements and range of motion.
[0125] In some embodiments, systems and methods of the present disclosure may use electromagnetic tracking, optical tracking, infrared tracking, accelerometers, magnetometers, gyroscopes, myoelectric tracking, other tracking techniques, or a combination of one or more of such tracking methods. The tracking systems may be parts of a computing system as disclosed herein. The tracking tools may exist on one or more circuit boards within the VR system (see
[0126]
[0127] Sensors 202 may be attached to body parts via band 205. In some embodiments, a therapist attaches sensors 202 to proper areas of a patient's body. For example, a patient may not be physically able to attach band 205 to herself. In some embodiments, each patient may have her own set of bands 205 to minimize hygiene issues. In some embodiments, a therapist may bring a portable case to a patient's room or home for therapy. The sensors may include contact ports for charging each sensor's battery while storing and transporting in the container, such as the container depicted in
[0128] As illustrated in
[0129] Once sensors 202 are placed in bands 205, each band may be placed on a body part, e.g., according to
[0130] Each of sensors 202 may be placed at any of the suitable locations, e.g., as depicted in
[0131] Generally, sensor assignment may be based on the position of each sensor 202. Sometimes, such as in cases where patients' heights vary considerably, assigning a sensor merely based on height is not practical. In some embodiments, sensor assignment may be based on relative position to, e.g., wireless transmitter module 202B.
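One way such relative-position assignment could work is sketched below: each sensor is matched to the body location whose expected offset from the wireless transmitter module is nearest to the sensor's measured offset. The offsets, location names, and function name are illustrative placeholders, not calibrated values from the specification.

```python
import numpy as np

# Hypothetical expected offsets (in meters) of each body location relative to the
# wireless transmitter module worn between the shoulder blades.
EXPECTED_OFFSETS = {
    "left_elbow":  np.array([-0.25, -0.15, 0.05]),
    "right_elbow": np.array([ 0.25, -0.15, 0.05]),
    "left_hand":   np.array([-0.30, -0.45, 0.20]),
    "right_hand":  np.array([ 0.30, -0.45, 0.20]),
    "pelvis":      np.array([ 0.00, -0.55, 0.00]),
}

def assign_sensor(sensor_pos, wtm_pos):
    """Assign a sensor to the body location whose expected offset from the WTM
    is closest to the sensor's measured offset."""
    offset = np.asarray(sensor_pos, dtype=float) - np.asarray(wtm_pos, dtype=float)
    return min(EXPECTED_OFFSETS, key=lambda loc: np.linalg.norm(offset - EXPECTED_OFFSETS[loc]))
```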
[0132]
[0133] The arrangement shown in
[0134] One or more system management controllers, such as system management controller 912 or system management controller 932, may provide data transmission management functions between the buses and the components they integrate. For instance, system management controller 912 provides data transmission management functions between bus 914 and sensors 902. System management controller 932 provides data transmission management functions between bus 934 and GPU 920. Such management controllers may facilitate the arrangement's orchestration of these components, which may each utilize separate instructions within defined time frames to execute applications. Network interface 980 may include an Ethernet connection or a component that forms a wireless connection, e.g., 802.11b, g, a, or n connection (WiFi), to a local area network (LAN) 987, wide area network (WAN) 983, intranet 985, or internet 981. Network controller 982 provides data transmission management functions between bus 984 and network interface 980.
[0135] Processor(s) 960 and GPU 920 may execute a number of instructions, such as machine-readable instructions. The instructions may include instructions for receiving, storing, processing, and transmitting tracking data from various sources, such as electromagnetic (EM) sensors 903, optical sensors 904, infrared (IR) sensors 907, inertial measurement units (IMUs) sensors 905, and/or myoelectric sensors 906. The tracking data may be communicated to processor(s) 960 by either a wired or wireless communication link, e.g., transmitter 910. Upon receiving tracking data, processor(s) 960 may execute an instruction to permanently or temporarily store the tracking data in memory 962 such as, e.g., random access memory (RAM), read only memory (ROM), cache, flash memory, hard disk, or other suitable storage component. Memory may be a separate component, such as memory 968, in communication with processor(s) 960 or may be integrated into processor(s) 960, such as memory 962, as depicted.
[0136] Processor(s) 960 may also execute instructions for constructing an instance of virtual space. The instance may be hosted on an external server and may persist and undergo changes even when a participant is not logged in to said instance. In some embodiments, the instance may be participant-specific, and the data required to construct it may be stored locally. In such an embodiment, new instance data may be distributed as updates that users download from an external source into local memory. In some exemplary embodiments, the instance of virtual space may include a virtual volume of space, a virtual topography (e.g., ground, mountains, lakes), virtual objects, and virtual characters (e.g., non-player characters “NPCs”). The instance may be constructed and/or rendered in 2D or 3D. The rendering may offer the viewer a first-person or third-person perspective. A first-person perspective may include displaying the virtual world from the eyes of the avatar and allowing the patient to view body movements from the avatar's perspective. A third-person perspective may include displaying the virtual world from, for example, behind the avatar to allow someone to view body movements from a different perspective. The instance may include properties of physics, such as gravity, magnetism, mass, force, velocity, and acceleration, which cause the virtual objects in the virtual space to behave in a manner at least visually similar to the behaviors of real objects in real space.
[0137] Processor(s) 960 may execute a program (e.g., the Unreal Engine or VR applications discussed above) for analyzing and modeling tracking data. For instance, processor(s) 960 may execute a program that analyzes the tracking data it receives according to algorithms described above, along with other related pertinent mathematical formulas. Such a program may incorporate a graphics processing unit (GPU) 920 that is capable of translating tracking data into 3D models. GPU 920 may utilize shader engine 928, vertex animation 924, and linear blend skinning algorithms. In some instances, processor(s) 960 or a CPU may at least partially assist the GPU in making such calculations. This allows GPU 920 to dedicate more resources to the task of converting 3D scene data to the projected render buffer. GPU 920 may refine the 3D model by using one or more algorithms, such as an algorithm trained on biomechanical movements, a cascading algorithm that converges on a solution by parsing and incrementally considering several sources of tracking data, an inverse kinematics (IK) engine 930, a proportionality algorithm, and other algorithms related to data processing and animation techniques. After GPU 920 constructs a suitable 3D model, processor(s) 960 executes a program to transmit data for the 3D model to another component of the computing environment (or to a peripheral component in communication with the computing environment) that is capable of displaying the model, such as display 950.
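For reference, linear blend skinning deforms each mesh vertex by a weighted combination of bone transforms. The following sketch shows the standard formulation of that technique in general terms; it is not the engine's actual implementation, and the array shapes are assumptions.

```python
import numpy as np

def linear_blend_skinning(rest_vertices, bone_matrices, weights):
    """Deform mesh vertices by a weighted blend of bone transforms.

    rest_vertices: (V, 3) vertex positions in the rest pose.
    bone_matrices: (B, 4, 4) transforms mapping the rest pose to the current pose.
    weights:       (V, B) skinning weights; each row sums to 1.
    """
    rest_vertices = np.asarray(rest_vertices, dtype=float)
    homogeneous = np.concatenate(
        [rest_vertices, np.ones((rest_vertices.shape[0], 1))], axis=1)  # (V, 4)
    # Transform every vertex by every bone, then blend with per-vertex weights.
    per_bone = np.einsum("bij,vj->vbi", np.asarray(bone_matrices, dtype=float), homogeneous)  # (V, B, 4)
    blended = np.einsum("vb,vbi->vi", np.asarray(weights, dtype=float), per_bone)             # (V, 4)
    return blended[:, :3]
```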
[0138] In some embodiments, GPU 920 transfers the 3D model to a video encoder or a video codec 940 via a bus, which then transfers information representative of the 3D model to a suitable display 950. The 3D model may be representative of a virtual entity that can be displayed in an instance of virtual space, e.g., an avatar. The virtual entity is capable of interacting with the virtual topography, virtual objects, and virtual characters within virtual space. The virtual entity is controlled by a user's movements, as interpreted by sensors 902 communicating with the VR engine. Display 950 may display a Patient View. The patient's real-world movements are reflected by the avatar in the virtual world. The virtual world may be viewed in the headset in 3D and monitored on the tablet in two dimensions. In some embodiments, the VR world is a game that provides feedback and rewards based on the patient's ability to complete activities. Data from the in-world avatar is transmitted from the HMD to the tablet to the cloud, where it is stored for later analysis. An illustrative architectural diagram of such elements in accordance with some embodiments is depicted in
[0139] A VR system may also comprise display 970, which is connected to the computing environment via transmitter 972. Display 970 may be a component of a clinician tablet. For instance, a supervisor or operator, such as a therapist, may securely log in to a clinician tablet, coupled to the VR engine, to observe and direct the patient to participate in various activities and adjust the parameters of the activities to best suit the patient's ability level. Display 970 may depict at least one of a Spectator View, Live Avatar View, or Dual Perspective View.
[0140] In some embodiments, HMD 201 may be the same as or similar to HMD 1010 in
[0141] The operator device, clinician tablet 1020, runs a native application (e.g., Android application 1025) that allows an operator such as a therapist to control a patient's experience. Cloud server 1050 includes a combination of software that manages authentication, data storage and retrieval, and hosts the user interface, which runs on the tablet. This can be accessed by tablet 1020. Tablet 1020 has several modules.
[0142] As depicted in
[0143] The second part is an application, e.g., Android Application 1025, configured to allow an operator to control the software of HMD 1010. In some embodiments, the application may be a native application. A native application, in turn, may comprise two parts, e.g., (1) socket host 1026 configured to receive native socket communications from the HMD and translate that content into web sockets, e.g., web sockets 1027, that a web browser can easily interpret; and (2) a web browser 1028, which is what the operator sees on the tablet screen. The web browser may receive data from the HMD via the socket host 1026, which translates the HMD's native socket communication 1018 into web sockets 1027, and it may receive UI/UX information from a file server 1052 in cloud 1050. Tablet 1020 comprises web browser 1028, which may incorporate a real-time 3D engine, such as Babylon.js, using a JavaScript library for displaying 3D graphics in web browser 1028 via HTML5. For instance, a real-time 3D engine, such as Babylon.js, may render 3D graphics, e.g., in web browser 1028 on clinician tablet 1020, based on received skeletal data from an avatar solver in the Unreal Engine 1016 stored and executed on HMD 1010. In some embodiments, rather than Android Application 1025, there may be a web application or other software to communicate with file server 1052 in cloud 1050. In some instances, an application of Tablet 1020 may use, e.g., Web Real-Time Communication (WebRTC) to facilitate peer-to-peer communication without plugins, native apps, and/or web sockets.
[0144] The cloud software, e.g., cloud 1050, has several different, interconnected parts configured to communicate with the tablet software: authorization and API server 1062, GraphQL server 1064, and file server (static web host) 1052.
[0145] In some embodiments, authorization and API server 1062 may be used as a gatekeeper. For example, when an operator attempts to log in to the VR engine, the tablet communicates with the authorization server. This server ensures that interactions (e.g., queries, updates, etc.) are authorized based on session variables such as operator's role, the health care organization, and the current patient. This server, or group of servers, communicates with several parts of the VR engine: (a) a key value store 1054, which is a clustered session cache that stores and allows quick retrieval of session variables; (b) a GraphQL server 1064, as discussed below, which is used to access the back-end database in order to populate the key value store, and also for some calls to the application programming interface (API); (c) an identity server 1056 for handling the user login process; and (d) a secrets manager 1058 for injecting service passwords (relational database, identity database, identity server, key value store) into the environment in lieu of hard coding.
[0146] When the tablet requests data, it will communicate with the GraphQL server 1064, which will, in turn, communicate with several parts: (1) the authorization and API server 1062; (2) the secrets manager 1058, and (3) a relational database 1053 storing data for the VR engine. Data stored by the relational database 1053 may include, for instance, profile data, session data, game data, and motion data.
[0147] In some embodiments, profile data may include information used to identify the patient, such as a name or an alias. Session data may comprise information about the patient's previous sessions, as well as, for example, a "free text" field into which the therapist can input unrestricted text, and a log 1055 of the patient's previous activity. Logs 1055 are typically used for session data and may include, for example, total activity time, e.g., how long the patient was actively engaged with individual activities; activity summary, e.g., a list of which activities the patient performed, and how long they engaged with each one; and settings and results for each activity. Game data may incorporate information about the patient's progression through the game content of the VR world. Motion data may include specific range-of-motion (ROM) data that may be saved about the patient's movement over the course of each activity and session, so that therapists can compare session data to previous sessions' data. In some embodiments, file server 1052 may serve the tablet software's website as a static web host.
[0148] While the foregoing discussion describes exemplary embodiments of the present invention, one skilled in the art will recognize from such discussion, the accompanying drawings, and the claims, that various modifications can be made without departing from the spirit and scope of the invention. Therefore, the illustrations and examples herein should be construed in an illustrative, and not a restrictive sense. The scope and spirit of the invention should be measured solely by reference to the claims that follow.