REHABILITATION SUPPORT APPARATUS, REHABILITATION SUPPORT SYSTEM, REHABILITATION SUPPORT METHOD, AND REHABILITATION SUPPORT PROGRAM
20210154431 · 2021-05-27
Assignee
Inventors
CPC classification
G06F3/011
PHYSICS
G01S7/10
PHYSICS
G02B2027/0187
PHYSICS
G09G2370/20
PHYSICS
G06F3/0346
PHYSICS
A61M21/02
HUMAN NECESSITIES
International classification
A61M21/02
HUMAN NECESSITIES
G01S7/10
PHYSICS
Abstract
A rehabilitation support apparatus that provides rehabilitation comfortable for a user includes a detector that detects a direction of a head of a user wearing a head mounted display, a first display controller that generates, in a virtual space, a rehabilitation target object to be visually recognized by the user and displays the target object on the head mounted display in accordance with the direction of the head of the user detected by the detector, and a second display controller that displays, on the head mounted display, a notification image used to notify the user of a position of the target object.
Claims
1. A rehabilitation support apparatus comprising: a detector that detects a direction of a head of a user wearing a head mounted display; a first display controller that generates, in a virtual space, a rehabilitation target object to be visually recognized by the user and displays the target object on the head mounted display in accordance with the direction of the head of the user detected by the detector; and a second display controller that displays, on the head mounted display, a notification image used to notify the user of a position of the target object.
2. The rehabilitation support apparatus according to claim 1, wherein the notification image is a radar screen image that shows a relative direction of the position of the target object with respect to a reference direction in the virtual space.
3. The rehabilitation support apparatus according to claim 1, wherein the notification image is a radar screen image that shows a relative direction of the position of the target object with respect to the direction of the head of the user.
4. The rehabilitation support apparatus according to claim 1, wherein the second display controller displays the notification image at a center portion of a display screen of the head mounted display independently of the direction of the head of the user.
5. The rehabilitation support apparatus according to claim 1, wherein the notification image includes a visual field region image representing a visual field of the user and a target position image representing the position of the target object.
6. The rehabilitation support apparatus according to claim 1, wherein the notification image includes a head image representing the direction of the head of the user viewed from above, and the target position image representing the position of the target object.
7. The rehabilitation support apparatus according to claim 1, wherein the notification image includes a block image generated by dividing a periphery of the user into a plurality of blocks.
8. The rehabilitation support apparatus according to claim 1, further comprising an operation unit used by an operator to operate display control by the first display controller and the second display controller, wherein the operation unit displays, for the operator, an operation screen including an image similar to the notification image.
9. The rehabilitation support apparatus according to claim 8, wherein if a predetermined operation is performed for the notification image in the operation screen, the first display controller generates the target object at a position in the virtual space corresponding to the position where the operation is performed.
10. The rehabilitation support apparatus according to claim 8, wherein the operation unit accepts from the operator, a reconstruction instruction to reconstruct the virtual space in accordance with a position of the user, and in accordance with the reconstruction instruction, the first display controller reconstructs the virtual space based on a position of the head mounted display at a point of time of accepting the reconstruction instruction.
11. (canceled)
12. A rehabilitation support method comprising: detecting a direction of a head of a user wearing a head mounted display; generating, in a virtual space, a rehabilitation target object to be visually recognized by the user and displaying the target object on the head mounted display in accordance with the direction of the head of the user detected in the detecting; and displaying, on the head mounted display, a notification image used to notify the user of a position of the target object.
13. A non-transitory computer readable medium storing a rehabilitation support program for causing a computer to execute a method, comprising: detecting a direction of a head of a user wearing a head mounted display; generating, in a virtual space, a rehabilitation target object to be visually recognized by the user and displaying the target object on the head mounted display in accordance with the direction of the head of the user detected in the detecting; and displaying, on the head mounted display, a notification image used to notify the user of a position of the target object.
Description
BRIEF DESCRIPTION OF DRAWINGS
DESCRIPTION OF EXAMPLE EMBODIMENTS
[0049] Example embodiments of the present invention will now be described in detail with reference to the drawings. It should be noted that the relative arrangement of the components, the numerical expressions and numerical values set forth in these example embodiments do not limit the scope of the present invention unless it is specifically stated otherwise.
First Example Embodiment
[0050] A rehabilitation support system 100 according to the first example embodiment of the present invention will be described with reference to
[0051] As shown in
[0052] The detector 101 detects the direction of the head of the user 110 who wears a head mounted display 111.
[0053] The display controller 102 generates, in a virtual space, a rehabilitation target object 122 to be visually recognized by the user 110, and displays the target object 122 on the head mounted display 111 in accordance with the direction of the head of the user 110 detected by the detector 101. The display controller 103 displays, on the head mounted display 111, a notification image 131 used to notify the user of the position of the target object 122.
[0054] According to this example embodiment, the notification image is displayed, and the user can predict in advance, based on visual information, the change in inner ear information that will occur next, thereby preventing sensory conflict.
Second Example Embodiment
[0055] A rehabilitation support system 200 according to the second example embodiment of the present invention will be described with reference to
[0056] As shown in
[0057] The two base stations 231 and 232 sense the motion of the head mounted display 233 and the motions of the controllers 234 and 235, and send these to the rehabilitation support apparatus 210. The rehabilitation support apparatus 210 performs display control of the head mounted display 233 while evaluating the rehabilitation action of the user 220.
[0058] Note that the head mounted display 233 can be of a non-transmissive type, a video see-through type, or an optical see-through type.
[0059] In this example embodiment, as an example of a sensor configured to detect the position of the hand of the user, the controllers 234 and 235 held in the hands of the user 220, and the base stations 231 and 232 have been described. However, the present invention is not limited to this. A camera (including a depth sensor) configured to detect the positions of the hands of the user by image processing, a sensor configured to detect the positions of the hands of the user by a temperature, a wristwatch-type wearable terminal put on an arm of the user, a motion capture device, and the like are also included in the concept of the sensor according to the present invention.
[0060] The rehabilitation support apparatus 210 includes an action detector 211, display controllers 212 and 213, an evaluator 214, an updater 215, a task set database 216, and an operation unit 217.
[0061] The action detector 211 acquires, via the base stations 231 and 232, the positions of the controllers 234 and 235 held in the hands of the user 220, and detects the rehabilitation action of the user 220 based on changes in the positions of the hands of the user 220.
[0062] The display controller 212 generates, in a virtual space, an avatar object 241 that moves in accordance with a detected rehabilitation action and a target object 242 representing the target of the rehabilitation action. The display controller 212 displays, on a display screen 240, the images of the avatar object 241 and the target object 242 in accordance with the direction and position of the head mounted display 233 detected by the action detector 211. The images of the avatar object 241 and the target object 242 are superimposed on a background image 243. Here, the avatar object 241 has the same shape as the controllers 234 and 235. However, the shape is not limited to this. The avatar object 241 moves in the display screen 240 in accordance with the motions of the controllers 234 and 235. As displayed on the avatar object 241, buttons are prepared on the controllers 234 and 235, and the controllers 234 and 235 are configured to perform various kinds of setting operations. The background image 243 includes a horizon 244 and a ground surface image 245.
[0063] The display controller 212 displays the target object 242 while gradually changing the display position and size such that it falls downward from above the user 220. The user 220 moves the controllers 234 and 235, thereby making the avatar object 241 in the screen approach the target object 242. When a sensor object (not shown here) included in the avatar object 241 hits the target object 242, the target object 242 disappears.
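The falling-target behavior and hit detection described in [0063] can be sketched as follows. This is a minimal illustration, not the patented implementation: the y-up coordinate system, the sphere-overlap test, and all function names are assumptions.

```python
import math

def update_target_position(pos, fall_speed, dt):
    """Move a falling target object downward by fall_speed * dt (y is up)."""
    x, y, z = pos
    return (x, y - fall_speed * dt, z)

def sensor_hits_target(sensor_pos, target_pos, sensor_radius, target_radius):
    """The target 'disappears' when the avatar's sensor sphere overlaps
    the target sphere, i.e. when the center distance is within the sum
    of the two radii."""
    return math.dist(sensor_pos, target_pos) <= sensor_radius + target_radius
```

Separating the sensor radius from the target radius mirrors the later description ([0076]-[0077]) in which the visual recognition size, evaluation size, and sensor size are set independently to tune difficulty.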
[0064] Also, the display controller 213 displays a radar screen image 250 on the display screen 240 of the head mounted display 233. The radar screen image 250 shows the relative direction of the position of the target object 242 (here, initially set to the front direction of the user sitting straight on the chair) with respect to the reference direction in the virtual space.
[0065] The display controller 213 displays the radar screen image 250 at the center (for example, within the range of −50° to 50°) of the display screen 240 of the head mounted display 233 independently of the direction of the head of the user 220.
[0066] The radar screen image 250 includes a head image 251 representing the head of the user viewed from above, a block image 252 generated by dividing the periphery of the head image 251 into a plurality of blocks, and a fan-shaped image 253 as a visual field region image representing the visual field region of the user. A target position image representing the position of the target object is shown by coloring one of the blocks of the block image 252. This allows the user 220 to know whether the target object exists on the left side or the right side with respect to the direction in which he/she faces. Note that in this example embodiment, the block image 252 is fixed, and the fan-shaped image 253 moves. However, the present invention is not limited to this, and the block image 252 may be moved in accordance with the direction of the head while the fan-shaped image 253 and the head image 251 are fixed. More specifically, if the head turns left, the block image 252 may rotate rightward.
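The radar screen logic in [0066], which colors the block containing the target, reduces to mapping the target's horizontal direction to a block index. A minimal sketch, assuming twelve blocks counted clockwise from the reference direction (the patent does not fix a block count):

```python
import math

NUM_BLOCKS = 12  # assumed block count; the patent does not specify one

def target_block_index(head_pos, target_pos, reference_yaw_deg=0.0):
    """Return which radar block (counted clockwise from the reference
    direction) contains the target, viewed from above in the x-z plane."""
    dx = target_pos[0] - head_pos[0]
    dz = target_pos[2] - head_pos[2]
    # Horizontal bearing of the target, measured from the forward (+z) axis.
    angle = math.degrees(math.atan2(dx, dz)) - reference_yaw_deg
    angle %= 360.0
    return int(angle // (360.0 / NUM_BLOCKS))
```

Passing the head yaw as `reference_yaw_deg` instead of a fixed reference would produce the claim 3 variant, in which the relative direction is taken with respect to the direction of the head.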
[0067] The evaluator 214 compares the rehabilitation action detected by the action detector 211 and a target position represented by the target object displayed by the display controller 212, and evaluates the rehabilitation capability of the user 220. More specifically, it is decided, by comparing the positions in the three-dimensional virtual space, whether the target object 242 and the avatar object 241 that moves in correspondence with the rehabilitation action detected by the action detector 211 overlap.
[0068] If these overlap, it is evaluated that one rehabilitation action is cleared, and a point is added. The display controller 212 can make the target object 242 appear at different positions (for example, positions of three stages) in the depth direction. The evaluator 214 gives different points (a high point to a far object, and a low point to a close object).
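The depth-dependent scoring in [0068], a high point for a far object and a low point for a close one, could be realized as a simple staged lookup. The stage boundaries and point values below are illustrative assumptions:

```python
def depth_stage_point(distance, stages=(0.3, 0.6, 0.9), points=(1, 2, 3)):
    """Award a higher point for a farther target (three depth stages
    assumed). `stages` are upper bounds in metres for each stage; a
    distance beyond the last bound scores the top point."""
    for bound, point in zip(stages, points):
        if distance <= bound:
            return point
    return points[-1]
```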
[0069] The updater 215 updates a target task in accordance with the integrated point. For example, the target task may be updated using a task achieving ratio (target achieving count/task count).
[0070] The task set database 216 stores a plurality of task sets. A task represents one rehabilitation action that the user should perform. More specifically, the task set database 216 stores, as information representing one task, information representing the size, position, and speed of a target object that was made to appear, and the size of the avatar object at that time. The task set database 216 stores task sets, each of which determines the order in which the plurality of tasks are provided to the user.
[0071] For example, the task sets may be stored as templates for each hospital, or a history of executed task sets may be stored for each user. The rehabilitation support apparatus 210 may be configured to be communicable with another rehabilitation support apparatus via the Internet. In this case, one task set may be executed by the same user in a plurality of places, or various templates may be shared by a plurality of users in remote sites.
[0072] The operation unit 217 is provided to operate display control in the display controller 212 and the display controller 213. More specifically, the operation unit 217 displays an operation screen 300 for an operator, as shown in
[0073] The user visual field screen 301 shows an image actually displayed on the head mounted display 233. A reference direction 311 in the virtual space is displayed in the user visual field screen 301. As described with reference to
[0074] The various parameter setting screen 302 is a screen configured to set a plurality of parameters for defining a task. The operation unit 217 accepts input to the various parameter setting screen 302 from an input device (not shown). The input device may be a mouse, a keyboard, or a touch panel, and can use any technical component.
[0075] The various parameter setting screen 302 includes an input region 321 that decides the sizes of left and right target objects, an input region 322 that decides the size of the avatar object 241, an input region 323 that decides the falling speed of the target object, and an input region 324 that decides the position of a target object that appears next. The various parameter setting screen 302 also includes a check box 325 that sets whether to accept an operation of a target object appearance position by a hot key.
[0076] The input region 321 can set, on each of the right and left sides, the radius (visual recognition size) of a visual recognition object that makes the target object position easy for the user to see, and the radius (evaluation size) of a target object that reacts with the avatar object 241. That is, in the example shown in
[0077] In the input region 322, the sensor size of the avatar object 241 (the size of the sensor object) can separately be set on the left and right sides. If the sensor size is large, a task is achieved even if the position of a hand largely deviates from the target object. Hence, the difficulty of the rehabilitation action is low. Conversely, if the sensor size is small, it is necessary to correctly move the hand to the center region of the target object (depending on the evaluation size). Hence, the difficulty of the rehabilitation action is high. In the example shown in
[0078] In the input region 323, the speed of the target object 242 moving in the virtual space can be defined on each of the left and right sides. In this example, the speed is set to 70 cm/s.
[0079] That is, in the example shown in
[0080] The input region 324 has the shape of the enlarged radar screen image 250. Since the check box 325 has a check mark, if an operation of clicking or tapping one of a plurality of blocks in the input region 324 is performed, the target object 242 is generated at a position in the virtual space corresponding to the position of the block for which the operation is performed.
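The hot-key/click behavior in [0080], generating a target at the virtual-space position corresponding to the clicked block, amounts to mapping a block index back to a spawn position. A sketch under assumed geometry (block 0 faces the +z reference direction and indices increase clockwise; the distance and height defaults are illustrative):

```python
import math

def block_to_spawn_position(block_index, num_blocks=12, distance=0.6,
                            height=1.2):
    """Map a clicked radar block to a target spawn position in the
    virtual space. The spawn direction is the angular center of the
    block; `distance` and `height` are assumed defaults in metres."""
    angle = math.radians((block_index + 0.5) * 360.0 / num_blocks)
    return (distance * math.sin(angle), height, distance * math.cos(angle))
```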
[0081] On the other hand, the score display screen 303 includes an enlarged radar screen image 331 and a score list 332. The enlarged radar screen image 331 is obtained by enlarging the radar screen image 250 displayed on the head mounted display 233, and changes in real time in accordance with the motion of the user and the position where the target object 242 appears next.
[0082] The score list 332 shows the total number of tasks at each position, and points representing how many times each task has been achieved. The point may be expressed as a fraction, a percentage, or a combination thereof. After a series of rehabilitation actions defined by one task set, the evaluator 214 derives a rehabilitation evaluation point using the values in the score list 332.
[0083] At this time, the evaluator 214 weights each point based on the target object appearance position, the accuracy of the touch by the avatar object 241, and the continuous achievement count. In this example embodiment, the falling speed and the task interval are not used as weighting factors. This is because exercise intensity does not always depend on speed, and also because factors that make the user hurry are excluded from the viewpoint of preventing accidents.
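The weighting in [0083] could take a multiplicative form like the following. The patent names the weighting factors (appearance position, touch accuracy, continuous achievement count) and the excluded ones (falling speed, task interval), but not a formula, so the formula and its constants here are assumptions:

```python
def weighted_task_point(base_point, position_weight, accuracy, streak):
    """Weight a task point by appearance position, touch accuracy, and
    the continuous achievement count. Falling speed and task interval
    are deliberately not weighted. The multiplicative form and the
    streak-bonus cap are illustrative assumptions."""
    streak_bonus = 1.0 + 0.1 * min(streak, 5)  # cap the streak bonus at +50%
    return base_point * position_weight * accuracy * streak_bonus
```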
[0084] The pause button 304 is a button used to pause a series of rehabilitation actions. The end button 307 is a button used to end a series of rehabilitation actions.
[0085] The re-center button 305 is a button that accepts, from the operator, a reconstruction instruction to reconstruct the virtual space in accordance with the position of the user 220. When the re-center button 305 is operated, the display controller 212 reconstructs the virtual space in which the position of the head mounted display 233 at the instant is set to the origin, and the direction of the head mounted display 233 at the instant is set to the reference direction.
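The re-center operation in [0085] reconstructs the virtual space with the HMD position as the origin and the HMD direction as the reference direction. In effect, every world-space point is translated and rotated into the new frame. A sketch assuming a y-up axis and yaw measured clockwise from +z (both conventions are assumptions):

```python
import math

def recenter(point, hmd_position, hmd_yaw_deg):
    """Re-express a world-space point in the reconstructed virtual space
    whose origin is the HMD position and whose reference direction is
    the HMD yaw at the instant the re-center button is operated."""
    px = point[0] - hmd_position[0]
    pz = point[2] - hmd_position[2]
    c = math.cos(math.radians(hmd_yaw_deg))
    s = math.sin(math.radians(hmd_yaw_deg))
    # Rotate the world so the HMD's facing direction becomes the new +z.
    return (px * c - pz * s, point[1] - hmd_position[1], pz * c + px * s)
```

For example, a point directly in front of an HMD that faces +x (yaw 90°) maps onto the new forward (+z) axis.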
[0092] As shown in
[0094] In step S501, as calibration processing, the target of the rehabilitation action is initialized in accordance with the user. More specifically, each patient is first made to perform a movement within his or her actionable range as calibration, and this range is set as the initial value used to initialize the target in accordance with the user.
[0095] Next, in step S503, a task is started. More specifically, in addition to a method of designating each task in real time by an operator, a task set called a template may be read out from the task set database, and a task request (that is, display of a target object by the display controller 212) may be issued sequentially for a plurality of preset tasks.
[0096] In step S505, it is determined whether the task is cleared. If the task is cleared, a point is recorded in accordance with the degree of clear (perfect or good).
[0097] In step S507, it is determined whether all tasks of a task set are ended, or the operator presses the end button 307.
[0098] If all the tasks are not ended, the process returns to step S503 to start the next task. If all the tasks are ended, the process advances to step S511 to add a weight by conditions to the point of each task and calculate a cumulative point.
[0099] In step S509, the degree of fatigue of the user is calculated based on a change in the points and the like. If the degree of fatigue exceeds a predetermined threshold, the processing may automatically be ended based on the "stop condition".
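The S501-S511 flow can be summarized as a loop over tasks with an early-exit stop condition. The fatigue estimate below is a toy placeholder; the patent says only that the degree of fatigue is calculated based on a change in the point and the like:

```python
def estimate_fatigue(points, window=3):
    """Toy fatigue estimate (an assumption): the fraction of the most
    recent tasks that scored zero."""
    recent = points[-window:]
    return recent.count(0) / len(recent) if recent else 0.0

def run_task_set(tasks, try_task, fatigue_threshold=0.5):
    """Sketch of the S501-S511 flow: run tasks in order, record the point
    earned for each (`try_task` returns 0 if the task is not cleared),
    and stop early when the fatigue estimate exceeds a threshold."""
    points = []
    for task in tasks:
        points.append(try_task(task))
        if estimate_fatigue(points) > fatigue_threshold:
            break  # automatic end based on the "stop condition"
    return points
```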
[0100] (Dual Task)
[0101] In daily life, a healthy person simultaneously performs two or more actions, such as walking while talking. This "capability of simultaneously performing two actions" declines with age; for example, an aged person may stop when spoken to while walking. It is considered that not only deterioration of the motor function but also this decline in the capability of simultaneously performing two actions is related to falls of aged persons. In fact, even when the motor function is judged to have sufficiently recovered through rehabilitation, many aged persons fall after returning home. One reason is that rehabilitation is performed in an environment and under conditions that allow concentration on the rehabilitation actions. In a living environment, by contrast, factors that impede concentration exist: many actions are performed under conditions of poor visibility, in the presence of obstacles, or while attention is paid to conversation.
[0102] It is therefore considered important to perform rehabilitation that distracts attention; more specifically, a dual task is preferably given during training. Dual task training is a program effective for preventing not only falls of aged persons but also dementia.
[0103] The dual task training includes not only a training that combines a cognitive task and a motion task but also a training that combines two motion tasks.
[0104] The task described above with reference to
[0105] As another cognition task+motion task, a training of stretching the hand while sequentially subtracting 1 from 100 or a training of stretching the hand while avoiding spilling of water from a glass is possible.
[0106] If the evaluation lowers by about 20% in a dual task motion inspection as compared to a simple motion, the evaluator 214 notifies the display controller 212 to repeat the dual task.
[0107] (Other Dual Task Training Examples)
[0108] As shown in
[0109] Alternatively, a number may simply be displayed on each object, and only acquisition of objects showing large numbers may be evaluated.
[0110] The evaluator 214 may compare the point of a single task and the point of a dual task and evaluate the cognitive function using the point difference.
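The evaluation rule in [0106] and [0110], comparing single-task and dual-task points and repeating the dual task when the score drops by about 20%, is a one-line comparison:

```python
def dual_task_decline(single_point, dual_point):
    """Relative drop of the dual-task score against the single-task score."""
    return (single_point - dual_point) / single_point

def should_repeat_dual_task(single_point, dual_point, threshold=0.20):
    """Repeat the dual task when the evaluation lowers by about 20%
    compared to the simple motion (paragraph [0106])."""
    return dual_task_decline(single_point, dual_point) >= threshold
```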
[0111] As a dual task simultaneously requiring two motor functions, for example, two foot pedals may be prepared, and collection of a target object may be enabled only while one of the foot pedals is stepped on. For a target object coming from the right side, the user needs to stretch the right hand while stepping on the foot pedal with the right foot, so double motor functions of synchronizing the hand and the foot are required. In addition, the user may be requested to make the avatar object on the opposite side of the avatar object on the object collecting side always touch a designated place.
[0112] As described above, according to this example embodiment, since the direction in which the user should turn the head can be recognized in advance from the radar screen image, no confusion occurs in inner ear information even if the visual field is narrow, and VR motion sickness can be prevented. That is, it is possible to provide rehabilitation that is comfortable for the user during rehabilitation using a head mounted display.
[0113] Note that the position of the next object may be notified using a stereoscopic sound.
[0114] Note that the target object need not always vertically fall, and the user may touch a target that horizontally moves to the near side.
Third Example Embodiment
[0115] A rehabilitation support apparatus according to the third example embodiment of the present invention will be described next with reference to
[0117] The operation screen 700 includes, on the left side, a score display region 732 that displays the positions of target objects that were made to appear in the past and their achieving counts (achieving ratios), a various parameter setting region 721, and a task history display region 741. The operation screen 700 also includes an input region 724 configured to decide the position of a target object that appears next, and a check box 725 that sets whether to accept an operation of a target object appearance position by a hot key.
[0119] The operation screen 800 does not include a various parameter setting region, and includes a task history display region 841, a score display region 832 that displays the positions of target objects that were made to appear in the past and their achieving counts (achieving ratios), and a task list display region 833 of task sets to be generated later.
[0121] Each task included in the task set 901 has, as parameters, a generation time 911, a task interval 912, a task position 913, a task angle 914, a task distance 915, a speed 916, a perfect determination criterion 917, a good determination criterion 918, and a catch determination criterion 919.
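The per-task parameters listed in [0121] map naturally onto a record type. The field types and units below (seconds, degrees, metres, m/s) are assumptions; the patent lists only the parameter names:

```python
from dataclasses import dataclass

@dataclass
class Task:
    """One row of a task set, mirroring the parameters in [0121]."""
    generation_time: float     # when the target appears (s, assumed)
    task_interval: float       # gap until the next task (s, assumed)
    task_position: str         # e.g. "left" / "right" (assumed encoding)
    task_angle: float          # appearance angle (degrees, assumed)
    task_distance: float       # appearance distance (m, assumed)
    speed: float               # falling speed (m/s, assumed)
    perfect_criterion: float   # tolerance for a "perfect" touch (assumed)
    good_criterion: float      # tolerance for a "good" touch (assumed)
    catch_criterion: float     # tolerance for a mere catch (assumed)
```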
[0123] In a totalization data display region 903, the total play time and the total numbers of tasks of the left and right hands are displayed.
[0124] In a new task registration region 904, a new task in which various kinds of parameters are set can be added to the task set 901. When all parameters such as the appearance position and speed are set, and a new registration button 941 is selected, one row is added to the task set 901 on the upper side. Here, the task is registered as task No. 39.
[0125] As described above, in this example embodiment, it is possible to provide a user-friendly rehabilitation support apparatus with higher operability.
Other Example Embodiments
[0126] While the invention has been particularly shown and described with reference to example embodiments thereof, the invention is not limited to these example embodiments. It will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present invention as defined by the claims. A system or apparatus including any combination of the individual features included in the respective example embodiments may be incorporated in the scope of the present invention.
[0127] The present invention is applicable to a system including a plurality of devices or a single apparatus. The present invention is also applicable even when a rehabilitation support program for implementing the functions of the example embodiments is supplied to the system or apparatus directly or from a remote site. Hence, the present invention also incorporates the program installed in a computer to implement the functions of the present invention by the computer, a medium storing the program, and a WWW (World Wide Web) server that causes a user to download the program. Especially, the present invention incorporates at least a non-transitory computer readable medium storing a program that causes a computer to execute the processing steps included in the above-described example embodiments.