DEVICE WITH A DETECTION UNIT FOR THE POSITION AND ORIENTATION OF A FIRST LIMB OF A USER

20200146618 · 2020-05-14

    Abstract

    The invention relates to a device comprising a detection unit (1) for the position and orientation of a first limb of a user (3) and a display device (4) for displaying a second limb (5). In order to expand the range of motion of the two limbs of the user while at the same time allowing interaction both between the two limbs and between one of the limbs and a virtual three-dimensional object, it is proposed that the display device be of stereoscopic design and share with the detection unit a common reference plane as a reference system; that a first generator be provided which generates, for display via the stereoscopic display device, a virtual second limb mirrored about the reference plane with respect to the position and orientation of the first limb; that a second generator be provided which generates a three-dimensional interaction object for display at a predetermined position and orientation with respect to the reference plane; and that the device comprise a collision detection unit for outputting a signal upon detection of a collision between the virtual second limb and the virtual three-dimensional interaction object.

    Claims

    1. A device comprising: a detection unit detecting a position and orientation of a first limb of a user; and a display device displaying a second limb; wherein the display device is of stereoscopic design and has a reference plane as a reference system in common with the detection unit; wherein the device further comprises a first generator generating a virtual second limb mirrored about the reference plane with respect to the position and orientation of the first limb, and a second generator generating a three-dimensional interaction object displayed at a predetermined position and orientation with respect to the reference plane via the stereoscopic display device; and wherein the device has a collision detection unit outputting a signal upon detection of a collision between the virtual second limb and the virtual three-dimensional interaction object.

    2. A device according to claim 1, wherein sensors detect an electromyogram of muscle groups of the second limb of the user, and said sensors are connected via a control unit to the first generator and change the position and orientation of the virtual second limb as a function of detected action potentials.

    3. A device according to claim 1 wherein the collision detection unit changes the position and orientation of the virtual second limb as a function of detected collisions and is connected to the first generator.

    4. A device according to claim 1, wherein the detection unit comprises a depth sensor connected to the first generator, the first generator generating, as the virtual second limb, a virtual model of the second limb at a position and orientation of the first limb mirrored about the reference plane.

    5. A device according to claim 1, wherein the first generator has an interaction element memory and is connected to a gesture recognition unit selecting one of a plurality of interaction elements from the interaction element memory as a function of a recognized gesture.

    6. A method for operating a device according to claim 1, said method comprising: detecting the position and orientation of the first limb of the user relative to the reference plane corresponding to a sagittal plane of the user; feeding the position and orientation of the first limb to the first generator; generating, with the first generator, the virtual second limb mirrored with respect to the position and orientation of the first limb about the reference plane; displaying the virtual second limb via a stereoscopic display device; generating, with the second generator, the three-dimensional interaction object at a predetermined position and orientation relative to the reference plane; and outputting, from the collision detection unit, a signal upon detection of a collision between the virtual second limb and the virtual three-dimensional interaction object.

    7. A method according to claim 6, wherein the second generator generates the three-dimensional interaction element only if the first or second limb falls below a predetermined minimum distance to the reference plane.

    8. A method according to claim 6, wherein a gesture recognition unit recognizes a gesture performed with the first or second limb and the second generator selects said three-dimensional interaction element from an interaction element memory as a function of the recognized gesture.

    9. A device according to claim 2 wherein the collision detection unit changes the position and orientation of the virtual second limb as a function of detected collisions and is connected to the first generator.

    10. A device according to claim 9, wherein the detection unit comprises a depth sensor connected to the first generator, the first generator generating, as the virtual second limb, a virtual model of the second limb at a position and orientation of the first limb mirrored about the reference plane.

    11. A device according to claim 2, wherein the detection unit comprises a depth sensor connected to the first generator, the first generator generating, as the virtual second limb, a virtual model of the second limb at a position and orientation of the first limb mirrored about the reference plane.

    12. A device according to claim 3, wherein the detection unit comprises a depth sensor connected to the first generator, the first generator generating, as the virtual second limb, a virtual model of the second limb at a position and orientation of the first limb mirrored about the reference plane.

    13. A device according to claim 2, wherein the first generator has an interaction element memory and is connected to a gesture recognition unit selecting one of a plurality of interaction elements from the interaction element memory as a function of a recognized gesture.

    14. A device according to claim 3, wherein the first generator has an interaction element memory and is connected to a gesture recognition unit selecting one of a plurality of interaction elements from the interaction element memory as a function of a recognized gesture.

    15. A device according to claim 4, wherein the first generator has an interaction element memory and is connected to a gesture recognition unit selecting one of a plurality of interaction elements from the interaction element memory as a function of a recognized gesture.

    16. A device according to claim 9, wherein the first generator has an interaction element memory and is connected to a gesture recognition unit selecting one of a plurality of interaction elements from the interaction element memory as a function of a recognized gesture.

    17. A device according to claim 10, wherein the first generator has an interaction element memory and is connected to a gesture recognition unit selecting one of a plurality of interaction elements from the interaction element memory as a function of a recognized gesture.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0012] The drawings show the subject matter of the invention by way of example, wherein:

    [0013] FIG. 1 shows a device according to the invention operated by a user in a schematic plan view, and

    [0014] FIG. 2 shows a schematic view of the display presented to the user in a two-dimensional simplified representation.

    DESCRIPTION OF THE PREFERRED EMBODIMENTS

    [0015] A device according to the invention comprises a detection unit 1 for the position and orientation of a first limb 2 of a user 3 and a stereoscopic display device 4 for displaying a virtual second limb 5. The detection unit 1 and the stereoscopic display device 4 have a common reference plane 6 as a reference system. The virtual second limb 5 is generated by a first generator 7, which is connected to the detection unit 1 for this purpose. In addition, a second generator 8 is provided for a three-dimensional interaction object 9. To detect a collision between the virtual second limb 5 and the virtual three-dimensional interaction object 9, a collision detection unit 10 is provided, which outputs a signal via a loudspeaker 11 when a collision is detected. For this purpose, the collision detection unit 10 is connected to both the first generator 7 and the second generator 8. For the stereoscopic display device 4, a display control unit 12 is provided which processes the output signals of the first generator 7 and the second generator 8 and thus controls the stereoscopic display device 4.
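    The core geometric step of this paragraph, mirroring the tracked first limb about the reference plane 6 and signalling collisions with the interaction object 9, can be illustrated with a minimal sketch. All function and variable names here are hypothetical, the limb is reduced to a few 3-D marker points, and the interaction object is simplified to a sphere; none of this is specified by the patent itself.

    ```python
    import numpy as np

    def mirror_about_plane(points, plane_point, plane_normal):
        """Reflect 3-D points about a plane given by a point on it and its normal."""
        n = plane_normal / np.linalg.norm(plane_normal)
        d = (points - plane_point) @ n        # signed distance of each point to the plane
        return points - 2.0 * np.outer(d, n)  # subtract twice the normal component

    def sphere_collision(limb_points, obj_center, obj_radius):
        """Report a collision when any limb point enters a spherical interaction object."""
        dists = np.linalg.norm(limb_points - obj_center, axis=1)
        return bool(np.any(dists < obj_radius))

    # Sagittal reference plane through the origin, normal along x.
    plane_point = np.zeros(3)
    plane_normal = np.array([1.0, 0.0, 0.0])

    # Tracked first limb (e.g. two fingertip markers), on the +x side of the plane.
    first_limb = np.array([[0.30, 0.10, 0.50],
                           [0.25, 0.15, 0.55]])

    # The "first generator" step: the virtual second limb is the mirror image.
    virtual_second_limb = mirror_about_plane(first_limb, plane_point, plane_normal)

    # The "collision detection unit" step: object placed on the mirrored side.
    collided = sphere_collision(virtual_second_limb,
                                obj_center=np.array([-0.30, 0.10, 0.50]),
                                obj_radius=0.10)
    ```

    In a real device the output `collided` would drive the acoustic signal via the loudspeaker 11 rather than a boolean flag.
    
    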

    [0016] In order to be able to change the position and orientation of the virtual second limb 5 as a function of the action potentials of individual muscle groups of the real second limb 13, the device according to the invention comprises, in a preferred embodiment, sensors 14 for detecting the electromyogram of muscle groups of the second limb 13, wherein the sensors 14 are connected via a control unit 15 to the first generator 7 for changing the position and orientation of the virtual second limb 5 as a function of the action potentials detected via the sensors 14.
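    One plausible way the control unit 15 could map detected action potentials to a change of the virtual limb's posture is a rectified-and-averaged EMG envelope driving a joint angle. This is only a sketch under that assumption; the patent does not specify the signal processing, and the names, thresholds, and the linear joint mapping are all hypothetical.

    ```python
    def emg_activation(samples, rest_level, gain):
        """Map raw EMG samples to an activation in [0, 1]:
        rectify, average, subtract the resting level, scale, and clamp."""
        envelope = sum(abs(s) for s in samples) / len(samples)
        return max(0.0, min(1.0, (envelope - rest_level) * gain))

    def flex_joint(angle_min, angle_max, activation):
        """Interpolate a joint angle of the virtual limb from muscle activation."""
        return angle_min + (angle_max - angle_min) * activation

    # Example window of raw EMG samples from one muscle group of limb 13.
    activation = emg_activation([0.2, -0.3, 0.25, -0.25], rest_level=0.05, gain=2.0)
    elbow_angle = flex_joint(0.0, 90.0, activation)  # degrees of virtual elbow flexion
    ```

    The first generator 7 would then re-pose the virtual second limb 5 using `elbow_angle` (and analogous angles for the other tracked muscle groups) on each update.
    
    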

    [0017] The collision detection unit 10 can also be connected to the first generator 7 to change the position and orientation of the virtual second limb 5 as a function of detected collisions.

    [0018] In the exemplary embodiment shown, the detection unit 1 also includes a depth sensor, not shown in detail, which is likewise connected to the first generator 7. The first generator 7 thus generates not only a virtual model of the second limb 5 but also a virtual model of the first limb 16, so that in the stereoscopic view the user perceives a complete virtual space with a first virtual limb 16, a second virtual limb 5, and a virtual three-dimensional interaction element 9.

    [0019] To select an interaction element from an interaction element memory (not shown in detail) in response to a gesture, the first generator 7 is connected to a gesture recognition unit 17, which recognizes the gesture performed by the user with the respective limb either via the detection unit 1 or directly from the generated models of the first or second limb 16, 5, whereupon the first generator selects a corresponding virtual interaction element 9 in response to the recognized gesture.
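    The selection step described above amounts to a lookup keyed by the recognized gesture. A minimal sketch, in which the gesture labels, the stored element descriptions, and the function name are all hypothetical placeholders rather than anything defined by the patent:

    ```python
    # Hypothetical interaction element memory keyed by recognized gesture label.
    INTERACTION_ELEMENTS = {
        "pinch":     {"shape": "sphere", "radius": 0.05},
        "open_hand": {"shape": "cube",   "edge":   0.10},
    }

    def select_element(gesture, memory, default=None):
        """Return the stored interaction element for a recognized gesture,
        or a default when the gesture is unknown to the memory."""
        return memory.get(gesture, default)

    chosen = select_element("pinch", INTERACTION_ELEMENTS)
    ```

    In the device, the gesture recognition unit 17 would supply the `gesture` label and the first generator 7 would render `chosen` as the interaction element 9 at its predetermined pose relative to the reference plane 6.
    
    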