SYSTEM AND METHOD FOR SYNCHRONIZED NEURAL MARKETING IN A VIRTUAL ENVIRONMENT
20190286234 · 2019-09-19
Inventors
CPC classification
A61B5/7285
HUMAN NECESSITIES
G06F3/011
PHYSICS
A61B2503/12
HUMAN NECESSITIES
A61B5/318
HUMAN NECESSITIES
A61B5/7445
HUMAN NECESSITIES
A61B5/11
HUMAN NECESSITIES
A61B5/0816
HUMAN NECESSITIES
A61B5/398
HUMAN NECESSITIES
G06F3/015
PHYSICS
A61B5/6803
HUMAN NECESSITIES
A61B5/7425
HUMAN NECESSITIES
A61B5/01
HUMAN NECESSITIES
International classification
A61B5/053
HUMAN NECESSITIES
A61B5/08
HUMAN NECESSITIES
A61B5/01
HUMAN NECESSITIES
A61B5/00
HUMAN NECESSITIES
Abstract
A system and method for determining a user reaction to images and/or sounds, for example in a video stream, for example as related to an advertisement. Optionally, the system and method are able to determine the user reaction to at least viewing and preferably handling a physical object, for example through an AR (augmented reality) headset.
Claims
1. A physiological parameter measurement and motion tracking system comprising: a VR or AR display system to display information to a user; a physiological parameter sensing system comprising one or more sensing means configured to sense electrical activity in a brain of a user and to generate brain electrical activity information; a synchronizer to provide timestamps of said information displayed to the user and said brain electrical activity information, said synchronizer comprising a clock for determining said timestamps; and an analyzer arranged to receive the brain electrical activity information and the displayed information with said timestamps, to determine a reaction of the user to the displayed information according to the brain electrical activity information.
2. The system of claim 1, wherein said display information comprises a plurality of images and/or sounds.
3. The system of claim 2, wherein said display information comprises a video stream.
4. The system of claim 3, further comprising an advertising module for providing the display information to the display system as advertising information, wherein said analyzer determines a reaction of the user to said advertising information.
5. The system of claim 4, wherein said display system comprises an AR HMD through which a physical object is viewable, and which includes a video camera for recording when and how the user views the physical object, said synchronizer is configured to apply a timestamp to video data for determining when and how the user views the physical object, and said analyzer determines said reaction of the user also according to said timestamp of video data of when and how the user views the physical object.
6. A physiological parameter measurement and motion tracking system comprising: a VR or AR display system to display information to a user; a physiological parameter sensing system comprising (i) one or more sensing means configured to sense electrical activity in a brain of a user and to generate brain electrical activity information and (ii) one or more of an EMG sensor, EOG sensor, ECG sensor, body temperature sensor, galvanic skin sensor, and respiration sensor; and (iii) a signal acquisition module configured to acquire a signal from at least one of the EMG sensor, EOG sensor, ECG sensor, body temperature sensor, galvanic skin sensor, and respiration sensor; a synchronizer to provide timestamps of said information displayed to the user, said brain electrical activity information, and said signal from the at least one of the EMG sensor, EOG sensor, ECG sensor, body temperature sensor, galvanic skin sensor, and respiration sensor, said synchronizer comprising a clock for determining said timestamps; and an analyzer arranged to receive said brain electrical activity information, said signal from the at least one of the EMG sensor, EOG sensor, ECG sensor, body temperature sensor, galvanic skin sensor, and respiration sensor, and the displayed information with said timestamps, to determine a reaction of the user to the displayed information according to the brain electrical activity information.
7. A method for physiological parameter measurement, comprising: receiving display information configured for an HMD; receiving an EEG sensor signal; synchronizing, using a synchronizer module, the display information and the EEG sensor signal to generate synchronized data; storing the synchronized data; and analyzing the synchronized data to determine a user reaction; wherein the synchronizing includes associating a timestamp with the display information and the EEG sensor signal, the timestamp generated from a single clock module.
8. The method of claim 7, further comprising: receiving a signal from at least one of an EMG sensor, EOG sensor, ECG sensor, body temperature sensor, galvanic skin sensor, and respiration sensor; and wherein the synchronizing further includes associating the timestamp with the signal from the at least one of the EMG sensor, EOG sensor, ECG sensor, body temperature sensor, galvanic skin sensor, and respiration sensor.
9. The method of claim 7, further comprising: generating the display information using an advertising module.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0055] For a better understanding of the invention, and to show how embodiments of the same may be carried into effect, reference will now be made, by way of example, to the accompanying diagrammatic drawings in which:
DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS
[0081] The sensing system 10 comprises one or more physiological sensors including at least brain electrical activity sensors, for instance in the form of electroencephalogram (EEG) sensors 22. The sensing system may comprise other physiological sensors selected from a group comprising electromyogram (EMG) sensors 24 connected to muscles in a user's body, electrooculography (EOG) sensors 25 (eye movement sensors), electrocardiogram (ECG) sensors 27, inertial sensors (INS) 29 mounted on the user's head and optionally on other body parts such as the user's limbs, body temperature sensor, and a galvanic skin sensor. The sensing system further comprises position and/or motion sensors to determine the position and/or the movement of a body part of the user. Position and motion sensors may further be configured to measure the position and/or movement of an object in the field of vision of the user. It may be noted that the notion of position and motion is related to the extent that motion can be determined from a change in position. In embodiments of the invention, position sensors may be used to determine both position and motion of an object or body part; or a motion sensor (such as an inertial sensor) may be used to measure movement of a body part or object without necessarily computing the position thereof. In an advantageous embodiment, at least one position/motion sensor comprises a camera 30 and optionally a distance sensor 28, mounted on a head set 18 (for example, as illustrated in
[0082] The stimulation system 17 comprises one or more stimulation devices including at least a visual stimulation system 32. The stimulation system may comprise other stimulation devices selected from a group comprising audio stimulation device 33, and functional electrical stimulation (FES) devices 31 connected to the user (for instance to stimulate nerves, or muscles, or parts of the user's brain e.g., to stimulate movement of a limb), and haptic feedback devices (for instance a robot arm that a user can grasp with his hand and that provides the user with haptic feedback). The stimulation system may further comprise Analogue to Digital Converters (ADC) 37a and Digital to Analogue Converters (DAC) 37b for transfer and processing of signals by a control module 51 of the control system. Devices of the stimulation system may further advantageously comprise means to generate content code signals 39 fed back to the control system 12 in order to timestamp said content code signals and to synchronize the stimulation signals with the measurement signals generated by the sensors of the sensing system.
[0083] The control system 12 comprises a clock module 106 and an acquisition module 53 configured to receive content code signals from the stimulation system and sensor signals from the sensing system and to time stamp these signals with a clock signal from the clock module 106. The control system 12 further comprises a control module 51 that processes the signals from the acquisition module and controls the output of the stimulation signals to devices of the stimulation system 17. The control module 51 further comprises a memory 55 to store measurement results, control parameters and other information useful for operation of the physiological parameter measurement and motion tracking system 10.
[0084] Generally, the visual/video content generated in the control system 12 is first pushed to a display register 35 (the final stage before the video content is activated on the display). Together with the video content, the controller writes a code to a part of the register (say, N bits) corresponding to one or more pixels; only a few pixels are used so that the user is not disturbed, and the corner pixels of the micro display are recommended as they may not be visible to the user. The code, defined by the controller, describes exactly what the display content is. Using a clock signal, the acquisition module 53 reads the code from the display register 35, attaches a time stamp, and passes the result to the next modules. At the same moment, EEG samples are acquired and tagged with the same time stamp. When the EEG samples and the video code samples arrive at the controller, they can therefore be interpreted together.
[0085] Note that all of these modules are implemented in one embedded system with a single clock, which minimizes both latency and jitter.
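By way of non-limiting illustration, the single-clock timestamping described above may be sketched as follows. All class and method names are hypothetical, and Python is used only for illustration; the sketch merely shows one clock source stamping both the display content code and the EEG samples, so that the analyzer can align the two streams without cross-device clock drift:

```python
import time
from collections import deque

class Synchronizer:
    """Sketch of the single-clock synchronizer: one monotonic clock
    stamps both the display content code and the EEG samples."""

    def __init__(self):
        self.records = deque()

    def _now(self):
        # Single clock source for every stream (monotonic, so the
        # timestamps never jump with wall-clock adjustments).
        return time.monotonic()

    def on_display_code(self, content_code):
        # Content code read back from the display register (the N-bit
        # code written into corner pixels alongside the video frame).
        self.records.append(("display", self._now(), content_code))

    def on_eeg_sample(self, sample):
        # EEG sample acquired at (nominally) the same instant.
        self.records.append(("eeg", self._now(), sample))

sync = Synchronizer()
sync.on_display_code(0b1011)        # e.g., "advertisement frame 11"
sync.on_eeg_sample([12.5, -3.1])    # illustrative per-channel readings
```

Because both streams carry timestamps from the same clock, a downstream analyzer can pair each EEG sample with the content that was on screen at that instant.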
[0086] The same principle may be used for an audio stimulation as illustrated in
[0087] More generally, any kind of stimulation, as illustrated in
[0089] The physiological parameter sensing system 14 comprises one or more sensors 20 configured to measure a physiological parameter of a user. In an advantageous embodiment the sensors 20 comprise one or more sensors configured to measure cortical activity of a user, for example, by directly measuring the electrical activity in a brain of a user. A suitable sensor is an electroencephalogram (EEG) sensor 22. EEG sensors measure electrical activity along the scalp; such voltage fluctuations result from ionic current flows within the neurons of the brain. An example of suitable EEG sensors is a g.tec Medical Engineering GmbH g.scarabeo.
[0090] In an advantageous embodiment, the sensors 22 are attached to a flexible cranial sensor support 27 which is made out of a polymeric material or other suitable material. The cranial sensor support 27 may comprise a plate 27a which is connected to a mounting strap 27b that extends around the head of the user, as shown in
[0091] In an advantageous embodiment, the size and/or arrangement of the cranial sensor support is adjustable to accommodate users with different head sizes. For example, the strap 27b may have adjustable portions, or the cap may have adjustable portions in a configuration such as the adjustable strap found on a baseball cap.
[0092] In an advantageous embodiment, one or more sensors 20 may additionally or alternatively comprise sensors 24 configured to measure movement of a muscle of a user, for example by measuring the electrical potential generated by muscle cells when the cells are electrically or neurologically activated. A suitable sensor is an electromyogram (EMG) sensor. The sensors 24 may be mounted on various parts of a body of a user to capture a particular muscular action. For example, for a reaching task, they may be arranged on one or more of the hand, arm and chest.
[0093] In an advantageous embodiment one or more sensors 20 may comprise sensors 25 configured to measure electrical potential due to eye movement. A suitable sensor is an electrooculography (EOG) sensor. In an advantageous embodiment, as shown in
[0094] The sensors 20 may alternatively or additionally comprise one or more of the following sensors: electrocorticogram (ECOG); electrocardiogram (ECG); galvanic skin response (GSR) sensor; respiration sensor; pulse-oximetry sensor; temperature sensor; single unit and multi-unit recording chips for measuring neuron response using a microelectrode system. It will be appreciated that sensors 20 may be invasive (for example ECOG, single unit and multi-unit recording chips) or non-invasive (for example EEG). A pulse-oximetry sensor, usually placed on a fingertip, measures a user's oxygen saturation and may be used to monitor the status of the user. It will be appreciated that for an embodiment with ECG and/or respiration sensors, the information provided by the sensors may be processed to enable tracking of the progress of a user. The information may also be processed in combination with EEG information to predict events corresponding to a state of the user, such as the movement of a body part of the user prior to movement occurring. It will be appreciated that for an embodiment with GSR sensors, the information provided by the sensors may be processed to give an indication of an emotional state of a user. For example, the information may be used during the appended example to measure the level of motivation of a user during the task.
[0095] In an advantageous embodiment the physiological parameter sensing system 14 comprises a wireless transceiver which is operable to wirelessly transfer sensory data to a wireless transceiver of the physiological parameter processing module 54. In this way the head set 18 is convenient to use since there are no obstructions caused by a wired connection.
[0096] Referring to
[0097] In an advantageous embodiment the sensors 26 comprise three cameras: two color cameras 28a, 28b and a depth sensor camera 30. However, in an alternative embodiment there is one color camera 28 and a depth sensor 30. A suitable color camera may have a resolution of VGA 640×480 pixels and a frame rate of at least 60 frames per second. The field of view of the camera may also be matched to that of the head mounted display, as will be discussed in more detail in the following. A suitable depth camera may have a resolution of QQVGA 160×120 pixels. For example, a suitable device which comprises a color camera and a depth sensor is the Microsoft Kinect. Suitable color cameras also include models from Aptina Imaging Corporation, such as the AR or MT series.
[0098] In an advantageous embodiment two color cameras 28a and 28b and the depth sensor 30 are arranged on a display unit support 36 of the head set 18 (which is discussed in more detail below) as shown in
[0099] In an advantageous embodiment the position/motion detection system 16 comprises a wireless transceiver which is operable to wirelessly transfer sensory data to a wireless transceiver of the skeletal tracking module 52. In this way the head set 18 is convenient to use since there are no obstructions caused by a wired connection.
[0100] Referring to
[0101] In the example of
[0102] In an alternative embodiment, the display unit 32 is separate from the head set. For example, the display means 34 comprises a monitor or TV display screen or a projector and projector screen.
[0103] In an advantageous embodiment part or all of the physiological parameter sensing system 14 and display unit 32 are formed as an integrated part of the head set 18. The cranial sensor support 27 may be connected to the display unit support 36 by a removable attachment (such as a stud and hole attachment, or spring clip attachment) or permanent attachment (such as an integrally molded connection, a welded connection or a sewn connection). Advantageously, the head mounted components of the system 10 are convenient to wear and can be easily attached and removed from a user. In the example of
[0104] In an advantageous embodiment the system 10 comprises a head movement sensing unit 40. The head movement sensing unit comprises a movement sensing unit 42 for tracking head movement of a user as they move their head during operation of the system 10. The head movement sensing unit 42 is configured to provide data in relation to the X, Y, Z coordinate location and the roll, pitch, and yaw of a head of a user. This data is provided to a head tracking module, which is discussed in more detail in the following, and processes the data such that the display unit 32 can update the displayed VR images in accordance with head movement. For example, as the user moves their head to look to the left the displayed VR images move to the left. While such an operation is not essential it is advantageous in providing a more immersive VR environment. In order to maintain realism, it has been found that the maximum latency of the loop defined by movement sensed by the head movement sensing unit 42 and the updated VR image is 20 ms.
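As a hedged illustration of how the six pose values (X, Y, Z coordinates and roll, pitch, yaw) reported by the head movement sensing unit 42 might drive the displayed view, the following hypothetical sketch converts a head pose into a gaze direction for the VR renderer; the names and angle conventions are assumptions for illustration, not part of the claimed system:

```python
import math

def head_pose_to_view(x, y, z, roll, pitch, yaw):
    """Sketch: turn a 6-DoF head pose into a view direction so that the
    display unit can pan the VR scene as the head moves.
    Angles in radians; roll would rotate the camera's up vector and is
    omitted here for brevity."""
    forward = (math.cos(pitch) * math.sin(yaw),
               -math.sin(pitch),
               math.cos(pitch) * math.cos(yaw))
    return {"position": (x, y, z), "forward": forward}

# Head turned 90 degrees about the vertical axis: the forward vector
# swings from +Z to +X, so the rendered scene pans accordingly.
view = head_pose_to_view(0.0, 1.7, 0.0, roll=0.0, pitch=0.0,
                         yaw=math.pi / 2)
```

In a real system this update would run inside the sub-20 ms latency loop noted above, with the renderer re-projecting the scene from the returned position and forward vector every frame.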
[0105] In an advantageous embodiment, the head movement sensing unit 42 comprises an acceleration sensing means 44, such as an accelerometer configured to measure acceleration of the head. In an advantageous embodiment, the sensor 44 comprises three in-plane accelerometers, wherein each in-plane accelerometer is arranged to be sensitive to acceleration along a separate perpendicular plane. In this way, the sensor is operable to measure acceleration in three dimensions. However, it will be appreciated that other accelerometer arrangements are possible. For example, there may be only two in-plane accelerometers arranged to be sensitive to acceleration along separate perpendicular planes such that two-dimensional acceleration is measured. Suitable accelerometers include piezoelectric, piezoresistive, and capacitive variants. An example of a suitable accelerometer is the Xsens Technologies BV MTi 10-series sensor.
[0106] In an advantageous embodiment, the head movement sensing unit 42 further comprises a head orientation sensing means 47 which is operable to provide data in relation to the orientation of the head. Examples of suitable head orientation sensing means include a gyroscope and a magnetometer 48 which are configured to measure the orientation of a head of a user.
[0107] In an advantageous embodiment, the head movement sensing unit 42 may be arranged on the headset 18. For example, the movement sensing unit 42 may be housed in a movement sensing unit support 50 that is formed integrally with or is attached to the cranial sensor support 27 and/or the display unit support 36 as shown in
[0108] In an advantageous embodiment, the system 10 comprises an eye gaze sensing unit 100. The eye gaze sensing unit 100 comprises one or more eye gaze sensors 102 for sensing the direction of gaze of the user. In an advantageous embodiment, the eye gaze sensor 102 comprises one or more cameras arranged in operative proximity to one or both eyes of the user. Each camera 102 may be configured to track eye gaze by using the center of the pupil and infrared/near-infrared non-collimated light to create corneal reflections (CR). However, it will be appreciated that other sensing means may be used, such as electrooculogram (EOG) or eye-attached tracking. The data from the eye gaze sensing unit 100 is provided to an eye tracking module, which is discussed in more detail in the following, and which processes the data such that the display unit 32 can update the displayed VR images in accordance with eye movement. For example, as the user moves their eyes to look to the left, the displayed VR images pan to the left. While such an operation is not essential, it is advantageous in providing a more immersive VR environment. In order to maintain realism, it has been found that the maximum latency of the loop defined by movement sensed by the eye gaze sensing unit 100 and the updated VR image is about 50 ms; however, in an advantageous embodiment it is 20 ms or lower.
[0109] In an advantageous embodiment, the eye gaze sensing unit 100 may be arranged on the headset 18. For example, the eye gaze sensing unit 100 may be attached to the display unit support 36 as shown in
[0110] The control system 12 processes data from the physiological parameter sensing system 14 and the position/motion detection system 16, and optionally one or both of the head movement sensing unit 40 and the eye gaze sensing module 100, together with operator input data supplied to an input unit, to generate VR (or AR) data which is displayed by the display unit 32. To perform such a function, in the advantageous embodiment shown in
[0111] The skeletal tracking module 52 processes the sensory data from the position/motion detection system 16 to obtain joint position/movement data for the VR generation module 58. In an advantageous embodiment, the skeletal tracking module 52, as shown in
[0112] The sensors 26 of the position/motion detection system 16 provide data in relation to the position/movement of a whole or part of a skeletal structure of a user to the data fusion unit 62. The data may also comprise information in relation to the environment, for example the size and arrangement of the room the user is in. In the exemplary embodiment, wherein the sensors 26 comprise a depth sensor 30 and color cameras 28a, 28b, the data comprises color and depth pixel information.
[0113] The data fusion unit 62 uses this data, and the calibration unit 62, to generate a 3D point cloud comprising a 3D point model of an external surface of the user and environment. The calibration unit 62 comprises data in relation to the calibration parameters of the sensors 26 and a data matching algorithm. For example, the calibration parameters may comprise data in relation to the deformation of the optical elements in the cameras, color calibration and hot and dark pixel discarding and interpolation. The data matching algorithm may be operable to match the color image from cameras 28a and 28b to estimate a depth map which is referenced with respect to a depth map generated from the depth sensor 30. The generated 3D point cloud comprises an array of pixels with an estimated depth such that they can be represented in a three-dimensional coordinate system. The color of the pixels is also estimated and retained.
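One conventional way to realize the 3D point cloud generation described above is pinhole back-projection of the depth map. The following minimal sketch illustrates the idea; the intrinsics fx, fy, cx, cy are hypothetical calibration parameters, and color handling and hot/dark pixel interpolation are omitted:

```python
def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) into camera-space 3D points
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:          # discard invalid (e.g. dark) pixels
                continue
            points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

# 2x2 toy depth map, unit focal lengths, principal point at the origin
cloud = depth_to_point_cloud([[1.0, 2.0], [0.0, 1.0]],
                             fx=1.0, fy=1.0, cx=0.0, cy=0.0)
```

Each retained pixel yields one 3D point, so the array of pixels with estimated depth can be represented in a three-dimensional coordinate system as the paragraph describes.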
[0114] The data fusion unit 62 supplies data comprising 3D point cloud information, with pixel color information, together with color images to the skeletal tracking unit 64. The skeletal tracking unit 64 processes this data to calculate the position of the skeleton of the user and therefrom estimate the 3D joint positions. In an advantageous embodiment, to achieve this operation, the skeletal tracking unit can be organized into several operational blocks, for example: 1) segment the user from the environment using the 3D point cloud data and color images; 2) detect the head and body parts of the user from the color images; 3) retrieve a skeleton model of the user from 3D point cloud data; and 4) use inverse kinematic algorithms together with the skeleton model to improve joint position estimation. The skeletal tracking unit 64 outputs the joint position data to the VR generation module 58 which is discussed in more detail in the following. The joint position data is time stamped by a clock module such that the motion of a body part can be calculated by processing the joint position data over a given time period.
[0115] Referring to
[0116] The cortical activity is measured and recorded as the user performs specific body part movements/intended movements, which are instructed in the VR environment. Examples of such instructed movements are provided in the appended examples. To measure the cortical activity, the EEG sensors 22 are used to extract event related electrical potentials and event related spectral perturbations, in response to the execution and/or observation of the movements/intended movements which can be viewed in VR as an avatar of the user.
[0117] For example, the following bands provide data in relation to various operations: slow cortical potentials (SCPs), which are in the range of 0.1-1.5 Hz and occur in motor areas of the brain, provide data in relation to preparation for movement; the mu-rhythm (8-12 Hz) in the sensory motor areas of the brain provides data in relation to the execution, observation and imagination of movement of a body part; and beta oscillations (13-30 Hz) provide data in relation to sensory motor integration and movement preparation. It will be appreciated that one or more of the above potentials or other suitable potentials may be monitored. Monitoring such potentials over a period of time can be used to provide information in relation to the recovery of a user.
[0118] Referring to
[0119] In an advantageous embodiment, the physiological parameter processing module 54 comprises a re-referencing unit 66 which is arranged to receive data from the physiological parameter sensing system 14 and configured to process the data to reduce the effect of external noise on the data. For example, it may process data from one or more of the EEG, EOG, or EMG sensors. The re-referencing unit 66 may comprise one or more re-referencing blocks: examples of suitable re-referencing blocks include mastoid electrode average reference, and common average reference. In the example embodiment a mastoid electrode average reference is applied to some of the sensors and common average reference is applied to all of the sensors. However, it will be appreciated that other suitable noise filtering techniques may be applied to various sensors and sensor groups.
[0120] In an advantageous embodiment, the processed data of the re-referencing unit 66 may be output to a filtering unit 68. In an embodiment wherein there is no re-referencing unit, the data from the physiological parameter sensing system 14 is instead fed directly to the filtering unit 68. The filtering unit 68 may comprise a spectral filtering module 70 which is configured to band-pass filter the data for one or more of the EEG, EOG, and EMG sensors. With respect to the EEG sensors, in an advantageous embodiment, the data is band-pass filtered for one or more of the sensors to obtain the activity on one or more of the bands: SCPs, delta, theta, alpha, mu, beta, and gamma. In an advantageous embodiment, the bands SCPs (0.1-1.5 Hz), alpha and mu (8-12 Hz), beta (18-30 Hz), delta (1.5-3.5 Hz), theta (3-8 Hz) and gamma (30-100 Hz) are filtered for all of the EEG sensors. With respect to EMG and EOG sensors, similar spectral filtering may be applied but with different spectral filtering parameters. For example, for EMG sensors a 30 Hz high-pass cut-off may be applied.
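A minimal sketch of the band extraction performed by the spectral filtering module 70 is given below. FFT masking is used here to keep the sketch short; a production implementation would more likely use IIR/FIR band-pass filters, and the sampling rate and test signal are illustrative assumptions:

```python
import numpy as np

def bandpass(signal, fs, low, high):
    """Zero-phase band-pass via FFT masking: keep only frequency bins
    inside [low, high] Hz and discard the rest."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[(freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

fs = 256                                     # assumed sampling rate, Hz
t = np.arange(fs) / fs                       # one second of samples
# Toy "EEG": 10 Hz mu-band activity plus 50 Hz mains interference
raw = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 50 * t)
mu = bandpass(raw, fs, 8, 12)                # isolate the mu band
```

The same routine, called with the band limits from paragraph [0120] (for example 18-30 Hz for beta or 30-100 Hz for gamma), would extract each of the other bands in turn.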
[0121] The filtering unit 68 may alternatively or additionally comprise a spatial filtering module 72. In an advantageous embodiment, a spatial filtering module 72 is applied to the SCPs band data from the EEG sensors (which is extracted by the spectral filtering module 70), however it may also be applied to other extracted bands. A suitable form of spatial filtering is spatial smoothing which comprises weighted averaging of neighboring electrodes to reduce spatial variability of the data. Spatial filtering may also be applied to data from the EOG and EMG sensors.
[0122] The filtering unit 68 may alternatively or additionally comprise a Laplacian filtering module 74, which is generally for data from the EEG sensors but may also be applied to data from the EOG and EMG sensors. In an advantageous embodiment, the Laplacian filtering module 74 is applied to each of the alpha, mu, and beta band data of the EEG sensors which is extracted by the spectral filtering module 70. However, it may be applied to other bands. The Laplacian filtering module 74 is configured to further reduce noise and increase spatial resolution of the data.
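The surface-Laplacian idea behind the Laplacian filtering module, subtracting from each electrode the mean of its neighbors, can be sketched as follows; the montage fragment, electrode names, and sample values are hypothetical:

```python
def laplacian_filter(samples, neighbors):
    """Surface-Laplacian sketch: each electrode value minus the mean of
    its neighboring electrodes, which sharpens spatial resolution.
    `neighbors` maps an electrode name to its neighboring electrodes."""
    out = {}
    for name, value in samples.items():
        nbrs = neighbors.get(name, [])
        if nbrs:
            out[name] = value - sum(samples[n] for n in nbrs) / len(nbrs)
        else:
            out[name] = value      # no neighbors known: pass through
    return out

# Hypothetical montage fragment around electrode C3 (values in microvolts)
samples = {"C3": 5.0, "FC3": 3.0, "CP3": 3.0, "C1": 4.0, "C5": 2.0}
neighbors = {"C3": ["FC3", "CP3", "C1", "C5"]}
filtered = laplacian_filter(samples, neighbors)
```

Activity common to C3 and its neighbors cancels out, so broadly distributed noise is attenuated while locally generated activity at C3 is retained.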
[0123] The physiological parameter processing module 54 may further comprise an event marking unit 76. In an advantageous embodiment, when the physiological parameter processing module 54 comprises a re-referencing unit and/or a filtering unit 68, the event marking unit 76 is arranged to receive processed data from either or both of these units when arranged in series (as shown in the embodiment of
[0124] In an advantageous embodiment, the event marking unit 76 is configured to perform one or more of the following operations: extract event-related potential data segments from the SCP band data; extract event-related spectral perturbation marker data segments from alpha and beta or mu or gamma band data; and extract spontaneous data segments from beta band data. In the aforementioned, spontaneous data segments correspond to EEG segments without an event marker, and differ from event-related potentials, the extraction of which depends on the temporal location of the event marker.
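Because the extraction of event-related segments depends on the temporal location of the event marker, it can be sketched as cutting a window of samples around each event time; the sketch below assumes timestamped samples and event times taken from the shared clock, with illustrative values:

```python
def extract_epochs(samples, timestamps, events, pre, post):
    """Cut event-related data segments: for each event time, gather the
    samples whose timestamps fall within [event - pre, event + post]."""
    epochs = []
    for ev in events:
        epoch = [s for s, ts in zip(samples, timestamps)
                 if ev - pre <= ts <= ev + post]
        epochs.append(epoch)
    return epochs

# Toy band-filtered samples at 10 Hz, one event marker at t = 0.2 s
timestamps = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
samples = [10, 11, 12, 13, 14, 15]
epochs = extract_epochs(samples, timestamps,
                        events=[0.2], pre=0.1, post=0.1)
```

Segments without any event marker in range would, in this scheme, be handled separately as the spontaneous data segments described above.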
[0125] The physiological parameter processing module 54 may further comprise an artefact detection unit 78 which is arranged to receive the extracted data segments from the event marking unit 76 and is operable to further process the data segments to identify specific artefacts in the segments. For example, the identified artefacts may comprise 1) movement artefacts: the effect of a user movement on a sensor/sensor group; 2) electrical interference artefacts: interference, typically 50 Hz, from the mains electrical supply; 3) eye movement artefacts: such artefacts can be identified by the EOG sensors 25 of the physiological parameter sensing system 14; and the like. In an advantageous embodiment, the artefact detection unit 78 comprises an artefact detector module 80 which is configured to detect specific artefacts in the data segments, for example an erroneous segment which requires deletion, or an erroneous portion which requires removal from the segment. The advantageous embodiment further comprises an artefact removal module 82, which is arranged to receive the data segments from the event marking unit 76 and the artefact detection output from the artefact detector module 80, and to perform an operation of removing the detected artefact from the data segment. Such an operation may comprise a statistical method, such as a regression model, which is operable to remove the artefact from the data segment without loss of the segment. The resulting data segment is thereafter output to the VR generation module 58, wherein it may be processed to provide real-time VR feedback which may be based on movement intention, as will be discussed in the following. The data may also be stored to enable the progress of a user to be tracked.
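The regression-based artefact removal mentioned above can be sketched as a least-squares subtraction of a reference channel (for example EOG, for eye movement artefacts) from the contaminated EEG segment, so the segment is cleaned rather than discarded. The propagation coefficient 0.8 and the signals below are purely illustrative:

```python
def regress_out(eeg, eog):
    """Regression-sketch artefact removal: estimate the propagation
    coefficient b of the EOG artefact into the EEG channel by least
    squares, then subtract b * EOG, retaining the rest of the segment."""
    num = sum(x * y for x, y in zip(eeg, eog))
    den = sum(x * x for x in eog)
    b = num / den if den else 0.0
    return [y - b * x for y, x in zip(eeg, eog)]

eog = [1.0, -1.0, 2.0, -2.0]          # ocular reference channel
brain = [0.5, 0.5, 0.5, 0.5]          # underlying cortical signal
eeg = [s + 0.8 * x for s, x in zip(brain, eog)]  # contaminated channel
cleaned = regress_out(eeg, eog)
```

Because the artefact is subtracted rather than the segment rejected, the data segment survives intact, consistent with the "without loss of the segment" property described above.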
[0126] In embodiments comprising other sensors, such as ECG, respiration sensors and GSR sensors, it will be appreciated that the data from such sensors can be processed using one or more of the above-mentioned techniques where applicable, for example: noise reduction; filtering; event marking to extract event-related data segments; artefact removal from extracted data segments; and the like.
[0127] The head tracking module 56 is configured to process the data from the head movement sensing unit 40 to determine the degree of head movement. The processed data is sent to the VR generation module 58, wherein it is processed to provide real-time VR feedback to recreate the associated head movement in the VR environment. For example, as the user moves their head to look to the left the displayed VR images move to the left.
[0128] The eye gaze tracking module 104 is configured to process the data from the eye gaze sensing unit 100 to determine a change in gaze of the user. The processed data is sent to the VR generation module 58, wherein it is processed to provide real-time VR feedback to recreate the change in gaze in the VR environment.
[0129] Referring now to
[0130] In an advantageous embodiment the VR generation module 58 may be organized into several units: an exercise logic unit 84; a VR environment unit 86; a body model unit 88; an avatar posture generation unit 90; a VR content integration unit 92; an audio generation unit 94; and a feedback generation unit 96. The operation of these units will now be discussed.
[0131] In an advantageous embodiment, the exercise logic unit 84 is operable to interface with a user input, such as a keyboard or other suitable input device. The user input may be used to select a particular task from a library of tasks and/or set particular parameters for a task. The appended example provides details of such a task.
[0132] In an advantageous embodiment, a body model unit 88 is arranged to receive data from the exercise logic unit 84 in relation to the particular part of the body required for the selected task. For example, this may comprise the entire skeletal structure of the body or a particular part of the body such as an arm. The body model unit 88 thereafter retrieves a model of the required body part, for example from a library of body parts. The model may comprise a 3D point cloud model, or other suitable model.
[0133] The avatar posture generation unit 90 is configured to generate an avatar based on the model of the body part from the body model unit 88.
[0134] In an advantageous embodiment, the VR environment unit 86 is arranged to receive data from the exercise logic unit 84 in relation to the particular objects which are required for the selected task. For example, the objects may comprise a disk or ball to be displayed to the user.
[0135] The VR content integration unit 92 may be arranged to receive the avatar data from the avatar posture generation unit 90 and the environment data from the VR environment unit 86 and to integrate the data in a VR environment. The integrated data is thereafter transferred to the exercise logic unit 84 and also output to the feedback generation unit 96. The feedback generation unit 96 is arranged to output the VR environment data to the display means 34 of the headset 18.
[0136] During operation of the task the exercise logic unit 84 receives data comprising joint position information from the skeletal tracking module 64, data comprising physiological data segments from the physiological parameter processing module 54, data from the body model unit 88, and data from the VR environment unit 86. The exercise logic unit 84 is operable to process the joint position information data, which is in turn sent to the avatar posture generation unit 90 for further processing and subsequent display. The exercise logic unit 84 may optionally manipulate the data so that it may be used to provide VR feedback to the user. Examples of such processing and manipulation include amplification of erroneous movement; auto correction of movement to induce positive reinforcement; mapping of movements of one limb to another; and the like.
[0137] As the user moves, interactions and/or collisions with the objects in the VR environment, as defined by the VR environment unit 86, are detected by the exercise logic unit 84 to further update the feedback provided to the user.
[0138] The exercise logic unit 84 may also provide audio feedback. For example, an audio generation unit (not shown) may receive audio data from the exercise logic unit, which is subsequently processed by the feedback generation unit 96 and output to the user, for example by headphones (not shown) mounted to the headset 18. The audio data may be synchronized with the visual feedback, for example to better indicate collisions with objects in the VR environment and to provide a more immersive VR environment.
[0139] In an advantageous embodiment, the exercise logic unit 84 may send instructions to the physiological parameter sensing system 14 to provide feedback to the user via one or more of the sensors 20 of the physiological parameter sensing system 14. For example, the EEG 22 and/or EMG 24 sensors may be supplied with an electrical potential that is transferred to the user. With reference to the appended example, such feedback may be provided during the task. For example, at stage 5, wherein there is no arm movement, an electrical potential may be sent to EMG 24 sensors arranged on the arm and/or EEG sensors to attempt to stimulate the user into moving their arm. In another example, such feedback may be provided before initiation of the task, for instance, a set period of time before the task, to attempt to enhance a state of memory and learning.
[0140] In an advantageous embodiment, the control system comprises a clock module 106. The clock module may be used to assign time information to the data at various stages of input, output and processing. The time information can be used to ensure the data is processed correctly, for example that data from the various sensors is combined at the correct time intervals. This is particularly advantageous to ensure accurate real-time processing of multimodal inputs from the various sensors and to generate real-time feedback to the user. The clock module 106 may be configured to interface with one or more modules of the control system to time stamp data. For example: the clock module 106 interfaces with the skeletal tracking module 52 to time stamp data received from the position/motion detection system 16; the clock module 106 interfaces with the physiological parameter processing module 54 to time stamp data received from the physiological parameter sensing system 14; the clock module 106 interfaces with the head tracking module 56 to time stamp data received from the head movement sensing unit 40; and the clock module 106 interfaces with the eye gaze tracking module 104 to time stamp data received from the eye gaze sensing unit 100. Various operations of the VR generation module 58 may also interface with the clock module 106 to time stamp data, for example data output to the display means 34.
[0141] Unlike complex conventional systems that connect several independent devices together, in the present invention synchronization occurs at the source of the data generation (for both sensing and stimulation), thereby ensuring accurate synchronization with minimal latency and, importantly, low jitter. For example, for a stereo head-mounted display with a refresh rate of 60 Hz, the delay would be as small as 16.7 ms. This is not presently possible with a combination of conventional stand-alone or independent systems. An important feature of the present invention is that it is able to combine a heterogeneous ensemble of data, synchronizing it at source into a dedicated system architecture to ensure multimodal feedback with minimal latencies. The compact wearable head-mounted device allows easy recording of physiological data from the brain and other body parts.
[0142] Synchronization Concept:
[0143] Latency or Delay (T): the time difference between the moment of the user's actual action or brain state and the moment of its corresponding feedback/stimulation. It is a positive constant in a typical application. Jitter (ΔT) is the trial-to-trial deviation in Latency or Delay. For applications that require, for instance, immersive VR or AR, both latency T and jitter ΔT should be minimized as far as possible. In brain computer interface and offline applications, latency T can be compromised, but jitter ΔT should be as small as possible.
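These definitions can be made concrete with a short calculation over a set of trials. The delay values below are purely illustrative, not measurements from the described system:

```python
import statistics

# Per-trial delays (seconds) between a user action and its feedback.
trial_delays = [0.0165, 0.0172, 0.0168, 0.0170, 0.0166]

latency = statistics.mean(trial_delays)    # T: the average delay
jitter = statistics.pstdev(trial_delays)   # ΔT: trial-to-trial deviation of T

# For immersive VR/AR both quantities should be minimized; for offline
# analysis a constant T is tolerable as long as ΔT stays small.
assert jitter < latency
```

Note that a perfectly constant delay gives ΔT = 0 even when T itself is large, which is why offline applications can tolerate latency but not jitter.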
[0144] Referring to
[0145] Design-I (
[0146] In this design, the moment at which a visual cue is supplied to the user is registered directly in the computer while the EEG signal is acquired via a USB or serial connection. That is, the computer assumes that the moment at which the sample acquired from the user's brain is registered is the moment at which the cue is displayed to the user. Note that there are inherent delays and jitters in this design. First, due to the USB/serial port connectivity to the computer, the registration of the sample into the computer has a nonzero, variable latency. Second, from the moment the display command is released from the computer, it undergoes various delays due to the underlying display driver, graphical processing unit, and signal propagation, which are also not constant. Hence, these two kinds of delays add up and compromise the alignment of visually evoked potentials.
[0147] Design-II (
[0148] To avoid the above problem, it is known to use a photo-diode to measure the cue and synchronize its signal directly with an EEG amplifier. In this design, a photo-diode is usually placed on the display to sense light. A cue is presented to the user at the same time as the portion of the screen to which the photo-diode is attached is lit up. This way the moment at which the cue is presented is registered by the photo-diode and supplied to the EEG amplifier, so that the EEG and visual cue information are directly synchronized at source. This procedure is accurate for aligning visually evoked trials; however, it has a number of drawbacks: [0149] The number of visual cues it can code is limited to the number of photo-diodes. A typical virtual reality based visual stimulation would have a large number of events to be registered accurately together with physiological signals. [0150] The use of a photo-diode in a typical micro-display (e.g., 1 square inch in size, with a pixel density of 800×600) of a head-mounted display would be difficult and, even worse, reduces usability. Note also that for the photo-diode to function, ample light must be supplied to the diode, which is a further limitation. [0151] The above drawbacks are further complicated when a plurality of stimuli (such as audio, magnetic, electrical, and mechanical) must be synchronized with data from a plurality of sensors (such as EEG, EMG, ECG, video camera, inertial sensors, respiration sensor, pulse oximetry, galvanic skin potentials, etc.).
[0152] In embodiments of the present invention, the above drawbacks are addressed to provide a system that is accurate and scalable to many different sensors and many different stimuli. This is achieved by employing a centralized clock system that supplies time-stamp information, and each sensor's samples are registered in relation to this time-stamp.
[0153] In an embodiment, each stimulation device may advantageously be equipped with an embedded sensor whose signal is registered by a synchronization device. This way, a controller can accurately interpret the plurality of sensor data and stimulation data for further operation of the system.
[0154] In an embodiment, in order to reduce the amount of data to synchronize from each sensor, instead of using a real sensor, a video content code may be read from a display register.
Example 1: Operation of System (10) in Exemplary Reach an Object Task
[0155] In this particular example an object 110, such as a 3D disk, is displayed in a VR environment 112 to a user. The user is instructed to reach to the object using a virtual arm 114 of the user. In the first instance the arm 114 is animated based on data from the skeletal tracking module 16 derived from the sensors of the position/motion detection system 16. In the second instance, wherein there is negligible or no movement detected by the skeletal tracking module 16, the movement is based on data relating to intended movement from the physiological parameter processing module 52 detected by the physiological parameter sensing system 14; in particular the data may be from the EEG sensors 22 and/or EMG sensors 24.
[0156]
[0157] At stage 2, the exercise logic unit 84 initializes the task. This comprises steps of the exercise logic unit 84 interfacing with the VR environment unit 86 to retrieve the parts (such as the disk 110) associated with the selected task from a library of parts. The exercise logic unit 84 also interfaces with the body model unit 88 to retrieve, from a library of body parts, a 3D point cloud model of the body part (in this example a single arm 114) associated with the exercise. The body part data is then supplied to the avatar posture generation unit 90 so that an avatar of the body part 114 can be created. The VR content integration unit 92 receives data in relation to the avatar of the body part and parts in the VR environment and integrates them in a VR environment. This data is thereafter received by the exercise logic unit 84 and is output to the display means 34 of the headset 18 as shown in
[0158] At stage 3, the exercise logic unit 84 interrogates the skeletal tracking module 16 to determine whether any arm movement has occurred. The arm movement is derived from the sensors of the position/motion detection system 16, which are worn by the user. If a negligible amount of movement (for example, an amount less than a predetermined amount, which may be determined by the state of the user and the location of movement) or no movement has occurred, then stage 5 is executed; else stage 4 is executed.
[0159] At stage 4 the exercise logic unit 84 processes the movement data to determine whether the movement is correct. If the user has moved their hand 115 in the correct direction, for example, towards the object 110, along the target path 118, then stage 4a is executed and the color of the target path may change, for example it is colored green, as shown in
[0160] Following stages 4a and 4b, stage 4c is executed, wherein the exercise logic unit 84 determines whether the hand 115 has reached the object 110. If the hand has reached the object, as shown in
[0161] At stage 5 the exercise logic unit 84 interrogates the physiological parameter processing module 52 to determine whether any physiological activity has occurred. The physiological activity is derived from the sensors of the physiological parameter sensing system 14, which are worn by the user, for example the EEG and/or EMG sensors. EEG and EMG sensors may be combined to improve detection rates, and in the absence of a signal from one type of sensor a signal from the other type of sensor may be used. If there is such activity, then it may be processed by the exercise logic unit 84 and correlated to a movement of the hand 115. For example, a characteristic of the event-related data segment from the physiological parameter processing module 52, such as the intensity or duration of part of the signal, may be used to calculate a magnitude of the movement of the hand 115. Thereafter stage 6 is executed.
[0162] At stage 6a, if the user has successfully completed the task, then, to provide feedback 116 to the user, a reward score may be calculated, which may be based on the accuracy of the calculated trajectory of the movement of the hand 115.
[0163] Thereafter, stage 6b is executed, wherein a marker strength of the sensors of the physiological parameter sensing system 14, for example the EEG and EMG sensors, may be used to provide feedback 118.
[0164] At stage 8, if there is no data provided by either the sensors of the physiological parameter sensing system 14 or the sensors of the position/motion detection system 16 within a set period of time, then time out 122 occurs, as shown in
Example 2: Hybrid Brain Computer Interface with Virtual Reality Feedback with Head-Mounted Display, Robotic System, and Functional Electrical Stimulation
[0165] The physical embodiment illustrated in
[0166] The following paragraph describes a typical trial in performing a typical goal directed task, which could be repeated by the user several times to complete a typical training session. As shown in
[0167] An exemplary architecture of this system is illustrated in
[0168] Inputs of the System
[0169] Inertial measurement unit (IMU) sensors 29, for instance including an accelerometer, a gyroscope, and a magnetometer: their purpose is to track head movements. This data is used for rendering VR content as well as to segment EEG data where the data quality might be degraded due to movement. Camera system 30, 28: the camera system comprises a stereo camera 30 and a depth sensor 28. The data of these two sensors are combined to compute tracking data of the wearer's own upper-limb movements, and for tracking the wearer's own arm movements. These movements are then used in animating the avatar in the virtual reality on the micro-displays 32 and in detecting whether there was a goal-directed movement, which is then used for triggering feedback through the display 32, robot 41, and stimulation device FES 31. Sensors EEG 22 and EMG 24 are used for inferring whether there was an intention to make a goal-directed movement.
[0170] Outputs of the System/Feedback Systems [0171] Micro-displays 34 of headset 18: render 2D/3D virtual reality content, where a wearer experiences the first-person perspective of the virtual world as well as of his own avatar, with its arms moving in relation to his own movements. [0172] Robotic system 41: the robotic system described in this invention is used for driving movements of the arm, where the user holds a haptic knob. The system provides a range of movements as well as haptic feedback of natural movements of activities of daily living. [0173] Functional Electrical Stimulation (FES) device 31: adhesive electrodes of the FES system are placed on the user's arms to stimulate nerves which, upon activation, can restore the lost voluntary movements of the arm. Additionally, the resulting movements of the hand result in kinesthetic feedback to the brain.
[0174] Data Processing
[0175] The following paragraphs describe the data manipulations from inputs to outputs.
[0176] Acquisition Unit 53: The acquisition unit 53 ensures near perfect synchronization of the inputs/sensor data and the outputs/stimulation/feedback of the system as illustrated in the
[0177] The acquisition unit 53 aims at solving the issue of accurately synchronizing inputs and outputs. To achieve this, the outputs of the system are sensed either with dedicated sensors or indirectly recorded from a stage before stimulation, for instance as follows: [0178] Sensing the micro-display: generally, the video content that is generated in the control system is first pushed to a display register 35 (a final stage before the video content is activated on the display). Together with the video content, the controller sends a code to a part of the register (say N bits) corresponding to one or more pixels (not too many pixels, so that the user is not disturbed). The corner pixels in the micro-display are preferred as they may not be visible to the user. The codes (a total of 2^N) may be defined by the controller or the exercise logic unit describing the display content. [0179] Sensing FES: the FES data can be read from its last stage of generation, i.e., from the DAC. [0180] Sensing the robot's movements: the robot's motors are embedded with sensors providing information on angular displacement, torque, and other control parameters of the motors.
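The corner-pixel coding of [0178] may be sketched as follows. The frame layout, the one-pixel-per-bit mapping, and the brightness threshold are illustrative assumptions for this sketch only:

```python
def encode_event_code(frame, code, n_bits=8):
    """Write an n-bit event code into corner pixels of a video frame.

    frame: mutable 2D list of pixel intensities (0-255)
    code: integer event code, 0 <= code < 2**n_bits
    One corner pixel per bit; bit value 1 -> bright pixel, 0 -> dark.
    """
    for bit in range(n_bits):
        frame[0][bit] = 255 if (code >> bit) & 1 else 0
    return frame

def decode_event_code(frame, n_bits=8):
    """Recover the event code registered alongside the video content."""
    return sum((1 << bit) for bit in range(n_bits) if frame[0][bit] > 127)

# A small synthetic frame; in a real micro-display these would be the
# corner pixels that are least visible to the user.
frame = [[0] * 16 for _ in range(16)]
encode_event_code(frame, 0b10110010)
assert decode_event_code(frame) == 0b10110010
```

With N bits, 2^N distinct display events can be registered, in contrast to a photo-diode design whose event count is limited by the number of diodes.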
[0181] Now, using a clock signal with a frequency preferably much higher than that of the inputs and outputs (e.g., 1 GHz), but at least double the highest sampling frequency among the sensors and stimulation units, the acquisition module 53 reads the sensor samples and attaches a time stamp as illustrated in the
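A minimal software sketch of such clocked acquisition follows. The class name, the 1 kHz clock, and the source rates are illustrative assumptions; the text above suggests a much faster hardware clock (e.g., 1 GHz):

```python
import itertools

class AcquisitionUnit:
    """Reads samples from heterogeneous sources and attaches a shared time stamp.

    The clock frequency must be at least double the highest sampling
    frequency among the sensing and stimulation units.
    """
    def __init__(self, clock_hz, source_rates_hz):
        assert clock_hz >= 2 * max(source_rates_hz)
        self.clock_hz = clock_hz
        self.ticks = itertools.count()  # monotonically increasing clock ticks

    def stamp(self, source, sample):
        # Each read, from any source, is registered against the same clock.
        tick = next(self.ticks)
        return {"source": source, "sample": sample, "t": tick / self.clock_hz}

acq = AcquisitionUnit(clock_hz=1000, source_rates_hz=[256, 60, 100])
a = acq.stamp("EEG", 12.5)
b = acq.stamp("display_register", 0b101)
assert b["t"] > a["t"]  # later reads receive later time stamps
```

Because every source is stamped against the one clock, downstream modules can align EEG samples, display codes, and stimulation events without per-device clock drift.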
[0182] Physiological Data Analysis
[0183] The physiological data signals EEG and EMG are noisy electrical signals and preferably are pre-processed using appropriate statistical methods. Additionally, the noise can also be reduced by better synchronizing the events of stimulation and behavior with the physiological data measurements with negligible jitter.
[0184]
[0185] These EEG segments are then fed to the feature extraction unit 69, where a temporal correction is first made. One simple example of temporal correction is removal of the baseline or offset from the trial data for a selected spectral band. The quality of these trials is assessed using statistical methods such as outlier detection. Additionally, if a head movement is registered through the IMU sensor data, the trials are annotated as artefact trials. Finally, features that well describe the underlying neural processing are computed from each trial. These features are then fed to a statistical unit 67.
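The baseline removal and outlier assessment described above may be sketched as follows. The baseline window length, the z-score threshold, and the synthetic trial are assumptions for illustration:

```python
import numpy as np

def temporal_correction(trial, baseline_samples=100):
    """Remove the pre-stimulus baseline (offset) from an EEG trial."""
    return trial - trial[:baseline_samples].mean()

def is_outlier_trial(trial, threshold=5.0):
    """Flag a trial whose peak amplitude lies far outside its typical range."""
    z = np.abs(trial - trial.mean()) / trial.std()
    return bool(z.max() > threshold)

# Synthetic trial: unit-variance activity riding on a constant 40 uV offset
rng = np.random.default_rng(1)
trial = rng.standard_normal(600) + 40.0

corrected = temporal_correction(trial)
assert abs(corrected[:100].mean()) < 1e-9  # baseline is now centred on zero
```

In the described system, trials flagged as outliers, or annotated via IMU-registered head movements, would be excluded or treated as artefact trials before feature computation.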
[0186] Similarly, the EMG electrode samples are first spectrally filtered, and a spatial filter is applied. The movement information is obtained from the envelope or power of the EMG signals. Similarly to the EEG trials, the EMG spectral data is segmented and passed to the feature extraction unit 69. The EMG feature data is then sent to the statistical unit 67.
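One common way to obtain an EMG envelope is rectification followed by a moving-average smoother; the sketch below uses that approach with an illustrative window length and synthetic data, and is not the claimed filter chain:

```python
import numpy as np

def emg_envelope(emg, window=50):
    """Estimate the EMG envelope by rectification and moving-average smoothing."""
    rectified = np.abs(emg)
    kernel = np.ones(window) / window
    return np.convolve(rectified, kernel, mode="same")

# Synthetic recording: a quiet baseline followed by a high-activity burst
rng = np.random.default_rng(2)
emg = np.concatenate([0.05 * rng.standard_normal(500),
                      1.0 * rng.standard_normal(500)])

env = emg_envelope(emg)
assert env[750] > env[250]  # the envelope is larger during the active burst
```

The envelope, or equivalently the signal power, is what carries the movement information passed on to the feature extraction unit.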
[0187] The statistical unit 67 combines the various physiological signals and motion data to interpret the intention of the user in performing a goal-directed movement. This program unit mainly includes machine learning methods for detection, classification, and regression analysis in the interpretation of the features. The outputs of this module are intention probabilities and related parameters, which drive the logic of the exercise in the exercise logic unit 84. The exercise logic unit 84 generates stimulation parameters which are then sent to a feedback/stimulation generation unit of the stimulation system 17.
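As a minimal sketch of how combined features could be mapped to an intention probability, a logistic model is shown below; it is only one of the many possible machine learning methods, and the weights and feature values are invented for illustration:

```python
import numpy as np

def intention_probability(features, weights, bias):
    """Map a combined EEG/EMG feature vector to a probability that the
    user intends a goal-directed movement (logistic model)."""
    return 1.0 / (1.0 + np.exp(-(np.dot(weights, features) + bias)))

# Hypothetical weights, as if obtained from prior training data
weights = np.array([1.2, -0.4, 0.9])
bias = -0.5

rest_features = np.array([0.1, 0.2, 0.0])      # low activity: rest
movement_features = np.array([1.5, 0.1, 1.1])  # strong activity: intent

p_rest = intention_probability(rest_features, weights, bias)
p_move = intention_probability(movement_features, weights, bias)
assert p_rest < 0.5 < p_move
```

The resulting probabilities are the kind of output that would drive the exercise logic unit and, in turn, the stimulation parameters.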
[0188] Throughout these stages, minimal lag and, more importantly, minimal jitter are ensured.
[0189] Event Detection & Event Manager
[0190] Events such as the moment at which the user is stimulated or presented with an instruction in the VR display, or the moment at which the user performed an action, are necessary for the interpretation of the physiological data.
[0191] IMU data provides head movement information. This data is analyzed to get events such as user moving head to look at the virtual door knob.
[0192] The video display codes correspond to the video content (e.g., display of virtual door knob, or any visual stimulation). These codes also represent visual events. Similarly, FES stimulation events, Robot movement and haptic feedback events are detected and transferred into event manager 71. Analyzer modules 75, including a movement analyzer 75a, an IMU analyzer 75b, an FES analyzer 75c, and a robot sensor analyzer 75d process the various sensor and stimulation signals for the event manager 71.
[0193] The event manager 71 then sends these events for tagging the physiological data, motion tracking data, etc. Additionally, these events are also sent to the exercise logic unit for adapting the dynamics of the exercise or challenges for the user.
[0194] Other Aspects of Control System
[0195] The control system interprets the incoming motion data and the intention probabilities from the physiological data, activates the exercise logic unit, and generates stimulation/feedback parameters. The following blocks are the main parts of the control system. [0196] VR feedback: the motion data (skeletal tracking, object tracking, and user tracking data) is used for rendering 3D VR feedback on the head-mounted displays, in the form of avatars and virtual objects. [0197] Exercise logic unit 84: the exercise logic unit implements a sequence of visual display frames including instructions and challenges (the target task to perform, at various difficulty levels) for the user. The logic unit also reacts to the events of the event manager 71. Finally, this unit sends stimulation parameters to the stimulation unit. [0198] Robot & FES stimulation generation unit: this unit generates the inputs required to perform a targeted movement of the robotic system 41 and the associated haptic feedback. Additionally, stimulation patterns (current intensity and electrode locations) for the FES module can be made synchronous and congruent for the user.
Example 3: Brain Computer Interface and Motion Data Activated Neural Stimulation with Augmented Reality Feedback
Objective
[0199] A system that can provide precise neural stimulation in relation to the actions performed by a user in the real world, resulting in reinforcement of neural patterns for intended behaviors.
Description
[0200] Actions of the user, of a second person, and of objects in the scene are captured with a camera system for behavioral analysis. Additionally, neural data recorded with one of the modalities (EEG, ECoG, etc.) is synchronized with the IMU data. The video captured from the camera system is interleaved with virtual objects to generate 3D augmented reality feedback, which is provided to the user through the head-mounted display. Finally, appropriate neural stimulation parameters are generated in the control system and sent to the neural stimulation device.
[0201] Delay and jitter between the user's behavioral and physiological measures and the neural stimulation should be optimized for effective reinforcement of the neural patterns.
[0202] The implementation of this example is similar to Example 2, except that the head mounted display (HMD) displays Augmented Reality content instead of Virtual Reality (see
Example 4: Applications to Neural Marketing
[0203]
[0204] The user also preferably wears an HMD (head mounted display) 1506, which in this non-limiting example is for VR (virtual reality). A display controller 1508 feeds instructions and data to HMD 1506, to determine what the user views. Display controller 1508 and HMD 1506 may optionally be embodied in a single device or in a plurality of such devices.
[0205] Optionally display controller 1508 comprises a processor 1509 and a memory 1511. As used herein, a processor such as processor 1509 generally refers to a device or combination of devices having circuitry used for implementing the communication and/or logic functions of a particular system. For example, a processor may include a digital signal processor device, a microprocessor device, and various analog-to-digital converters, digital-to-analog converters, and other support circuits and/or combinations of the foregoing. Control and signal processing functions of the system are allocated between these processing devices according to their respective capabilities. The processor may further include functionality to operate one or more software programs based on computer-executable program code thereof, which may be stored in a memory, such as memory 1511 in this non-limiting example. As the phrase is used herein, the processor may be configured to perform a certain function in a variety of ways, including, for example, by having one or more general-purpose circuits perform the function by executing particular computer-executable program code embodied in computer-readable medium, and/or by having one or more application-specific circuits perform the function.
[0206] To provide synchronization between the information that the user views and the user's reaction to viewing such information, as noted above the acquired signals, such as EEG signals, are timestamped according to a timing that is synchronized with the same timestamp being applied to the flow of data to HMD 1506. A synchronization module 1510 provides such timestamp synchronization according to a clock 1512. Synchronization module 1510 communicates with signal acquisition module 1504 and display controller 1508, to provide timestamps for the data flowing through each of signal acquisition module 1504 and display controller 1508.
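The shared-clock timestamping of display events and EEG samples may be sketched as follows. The class names and the 1 ms tick are illustrative assumptions made for this sketch, not the described hardware:

```python
class Synchronizer:
    """Stamps display events and EEG samples from one shared clock, so a
    downstream analyzer can align the user's reaction with what was shown."""
    def __init__(self):
        self.t = 0.0

    def now(self):
        self.t += 0.001  # monotone clock tick (1 ms, for illustration only)
        return self.t

sync = Synchronizer()

# Both streams receive timestamps from the same synchronizer.
display_log = [(sync.now(), "advert_frame_1")]
eeg_log = [(sync.now(), [3.1, 2.9, 3.4])]  # one EEG sample vector

def latest_display_event(display_log, t):
    """Pair a timestamp with the most recent preceding display event."""
    return max((e for e in display_log if e[0] <= t), key=lambda e: e[0])

t_eeg = eeg_log[0][0]
assert latest_display_event(display_log, t_eeg)[1] == "advert_frame_1"
```

Because both logs carry timestamps from the same clock, the analyzer can attribute an EEG response to the advertisement frame that was on screen when the response arose.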
[0207] Data from signal acquisition module 1504 is optionally stored in a database A 1518 with the previously described timestamp, while data flowing through display controller 1508 is optionally stored in a database B 1518 with the previously described timestamp. A synchronized data analysis module 1516 optionally receives such synchronization information directly from synchronization module 1510 and may also receive data streams from one or both of signal acquisition module 1504 and display controller 1508.
[0208] Additionally, or alternatively, synchronized data analysis module 1516 may receive such data streams from each of databases A and B 1518. Preferably, synchronized data analysis module 1516 is in communication with an advertising module 1514, to determine which advertisements correspond to the data input to display controller 1508. An advertisement may be defined according to one or more images, one or more sounds, a story comprising a plurality of such images and sounds, and so forth. The advertisement may also be defined according to a plurality of parameters that relate to a specific product or service being sold, a category of such products and services, and so forth. The image may be a logo or other icon.
[0209] Optionally, advertising module 1514 may be used to provide a game for display, preferably for a game with advertisements and/or to test the pace of a game and/or a new game character or game level.
[0210] Optionally synchronized data analysis module 1516 is able to determine the reaction of the user to information displayed by HMD 1506 according to an analysis of the EEG signals, as described for example in US Patent Publ. 20110282231, hereby incorporated by reference as if fully set forth herein.
[0211] Optionally the EEG sensors and HMD may be implemented according to any of the above Figures.
[0212]
[0213] Next, in 1556, the information displayed in the HMD and the EEG signals are synchronized by a synchronizer with a timestamp. The synchronizer preferably operates according to a clock as previously described. HMD information and EEG signals are optionally stored with timestamps in 1558. Preferably, the reaction of the user to the information being displayed on the HMD is determined according to the EEG signals, such as for example the reaction of the user to a product (virtually displayed) or to an advertisement, in 1560.
[0214]
[0215] In addition, preferably a physical object 1620 is at least visible to the user through AR HMD 1606, as indicated by the dotted line. Optionally the user is able to handle physical object 1620. Also preferably, video data regarding how and when the user views physical object 1620 is recorded, for example by HMD 1606, or alternatively or additionally by another video camera (not shown). This information preferably also receives timestamps from synchronization module 1510 and is preferably stored with the timestamps in database B 1518. Preferably, synchronized data analysis module 1516 is able to correlate how and when the user views physical object 1620 with the EEG signals, for example to determine the user reaction to the object and/or to information being displayed by HMD 1606.
[0216]
[0217] Next, in 1656, the video data of the user at least viewing (if not actually handling) the object, the information displayed in the HMD, and the EEG signals are synchronized by a synchronizer with a timestamp. The synchronizer preferably operates according to a clock as previously described. HMD information, user viewing information and EEG signals are optionally stored with timestamps in 1658. Preferably, the reaction of the user to the object and/or the information being displayed on the HMD is determined according to the EEG signals, such as for example the reaction of the user to a product (virtually displayed) or to an advertisement, in 1660.
[0218] Any and all references to publications or other documents, including but not limited to, patents, patent applications, articles, webpages, books, etc., presented in the present application, are herein incorporated by reference in their entirety.
[0219] Example embodiments of the devices, systems and methods have been described herein. As noted elsewhere, these embodiments have been described for illustrative purposes only and are not limiting. Other embodiments are possible and are covered by the disclosure, which will be apparent from the teachings contained herein. Thus, the breadth and scope of the disclosure should not be limited by any of the above-described embodiments but should be defined only in accordance with claims supported by the present disclosure and their equivalents. Moreover, embodiments of the subject disclosure may include methods, systems and devices which may further include any and all elements from any other disclosed methods, systems, and devices, including any and all elements corresponding to systems, methods, and apparatuses/device for tracking a body or portions thereof. In other words, elements from one or another disclosed embodiment may be interchangeable with elements from other disclosed embodiments. In addition, one or more features/elements of disclosed embodiments may be removed and still result in patentable subject matter (and thus, resulting in yet more embodiments of the subject disclosure). Correspondingly, some embodiments of the present disclosure may be patentably distinct from one and/or another reference by specifically lacking one or more elements/features. In other words, claims to certain embodiments may contain negative limitation to specifically exclude one or more elements/features resulting in embodiments which are patentably distinct from the prior art which include such features/elements.