Patent classifications
A61B5/1128
METHODS AND SYSTEMS FOR FACILITATING INTERACTIVE TRAINING OF BODY-EYE COORDINATION AND REACTION TIME
A computer-implemented method for facilitating training of body-eye coordination using a computing device having access to a camera is disclosed. The method includes receiving a training video of a player from the camera; superimposing a visual cue onto the training video; extracting a body posture flow of the player from the training video by performing a computer vision algorithm on one or more frames of the training video; determining whether the player has responded to the visual cue by analyzing the body posture flow of the player; and generating feedback to the player in response to determining that the player has responded to the visual cue. Multi-player embodiments of the present invention are also disclosed.
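The cue-response determination above can be sketched minimally: compare how much the extracted posture flow moves in the frames following the cue. The function names, the frame window, and the threshold are illustrative assumptions, not details from the abstract, and the upstream pose-estimation step is assumed to have already produced keypoint coordinates.

```python
def movement_magnitude(frame_a, frame_b):
    """Sum of joint displacements between two posture frames.

    Each frame is a list of (x, y) keypoint coordinates, e.g. as
    produced by a pose-estimation model (hypothetical upstream step).
    """
    return sum(abs(ax - bx) + abs(ay - by)
               for (ax, ay), (bx, by) in zip(frame_a, frame_b))


def has_responded(posture_flow, cue_index, window=3, threshold=1.0):
    """Decide whether the player moved appreciably after the visual cue.

    posture_flow: list of posture frames over time.
    cue_index:    frame index at which the visual cue was shown.
    Returns True if total movement in the `window` frames after the
    cue exceeds `threshold` (both values are placeholders).
    """
    segment = posture_flow[cue_index:cue_index + window + 1]
    total = sum(movement_magnitude(a, b)
                for a, b in zip(segment, segment[1:]))
    return total > threshold
```

A real system would also bound the reaction-time window and discount camera jitter; this sketch only captures the threshold-on-motion idea.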
Robotic interactions for observable signs of intent
Described herein are assistant robots that anticipate the needs of one or more people (or animals). The assistant robots may draw on recognition of a current activity, knowledge of the person's routines, and contextual information. As such, the assistant robots can provide, or offer to provide, appropriate robotic assistance. The assistant robots can learn users' habits or be provided with knowledge regarding the humans in their environment. The assistant robots develop a schedule and a contextual understanding of each person's behavior and needs. The assistant robots may interact, understand, and communicate with people before, during, or after providing assistance. The robot can combine gesture, clothing, emotional aspect, time, pose recognition, action recognition, and other observational data to understand people's medical condition, current activity, and future intended activities and intents.
Affective-cognitive load based digital assistant
Embodiments of the present disclosure set forth a computer-implemented method comprising receiving, from at least one sensor, sensor data associated with an environment; computing, based on the sensor data, a cognitive load associated with a user within the environment; computing, based on the sensor data, an affective load associated with an emotional state of the user; determining, based on both the cognitive load and the affective load, an affective-cognitive load; determining, based on the affective-cognitive load, a user readiness state associated with the user; and causing one or more actions to occur based on the user readiness state.
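The two determination steps, combining the loads and mapping the result to a readiness state, might look like the following. The weighted-average combination rule and the threshold values are assumptions for illustration; the disclosure does not specify them.

```python
def affective_cognitive_load(cognitive_load, affective_load, alpha=0.5):
    """Combine the two loads into one score.

    Weighted average with weight `alpha` is an assumption; the abstract
    only says the combined load is based on both inputs.
    """
    return alpha * cognitive_load + (1 - alpha) * affective_load


def readiness_state(combined_load, low=0.3, high=0.7):
    """Map the affective-cognitive load to a coarse readiness state.

    The three-state mapping and thresholds are illustrative placeholders.
    """
    if combined_load < low:
        return "ready"
    if combined_load < high:
        return "caution"
    return "overloaded"
```

An action-selection layer would then key off the returned state, e.g. deferring notifications while the user is "overloaded".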
DISCHARGE RISK AND MANAGEMENT
A method comprising receiving an input indicating intake information associated with a patient. Based on the input, the method further includes determining an initial discharge date and receiving mobility information associated with the patient. Based in part on the mobility information, the method further includes determining an estimated discharge date and a confidence metric associated with the estimated discharge date, determining that the estimated discharge date is later than the initial discharge date by more than a threshold period of time, and determining that the confidence metric is greater than a threshold metric. Based in part on the estimated discharge date being later than the initial discharge date by more than the threshold period of time and the confidence metric being greater than the threshold metric, the method further includes generating an alert.
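The alert condition described above is a conjunction of two threshold tests, which can be sketched directly. The specific threshold values (two days, 0.8 confidence) are illustrative placeholders, not values from the disclosure.

```python
from datetime import date, timedelta


def should_alert(initial_discharge, estimated_discharge, confidence,
                 delay_threshold=timedelta(days=2),
                 confidence_threshold=0.8):
    """Generate an alert only when the estimated discharge date slips
    past the initial date by more than `delay_threshold` AND the
    estimate's confidence metric exceeds `confidence_threshold`.
    """
    delayed = (estimated_discharge - initial_discharge) > delay_threshold
    confident = confidence > confidence_threshold
    return delayed and confident
```

Requiring both conditions suppresses alerts for low-confidence estimates, which is the point of pairing the date comparison with the confidence metric.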
VISION-BASED PATIENT STIMULUS MONITORING AND RESPONSE SYSTEM AND METHOD UTILIZING VISUAL IMAGES
Vision-based stimulus monitoring and response systems and methods are presented, wherein detection, via image(s) of a patient, of an external stimulus, such as a caregiver, prompts analysis of the patient's response, via secondary patient sensors or via analysis of patient image(s), to determine an autonomic nervous system (ANS) state.
SELF-ADAPTIVE MULTI-SCALE RESPIRATORY MONITORING METHOD BASED ON CAMERA
A self-adaptive multi-scale respiratory monitoring method based on a camera, relating to the technical field of video image signal identification and processing. To overcome the defect that neither the locally optimal nor the globally optimal respiratory signal can be acquired at a single image scale, a method is provided: (1) acquiring video of a respiratory monitoring object in real time; (2) performing multi-scale regular pre-segmentation on the video image, performing local respiratory signal identification and extraction on each pre-segmented unit area at each scale, and defining each unit area with a local respiratory signal output as a target area; and (3) comparing the local respiratory signals extracted from the target areas at each scale, determining an optimal segmentation scale, and taking the local respiratory signal extracted from the target area at the optimal segmentation scale as the monitored respiratory signal output. Reliability is improved, and intelligent monitoring is realized.
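Step (3), selecting the optimal segmentation scale by comparing the local signals, might be sketched as below. The quality metric (signal variance) is a stand-in assumption; the abstract does not state the actual comparison criterion.

```python
def signal_quality(signal):
    """Stand-in quality metric for a candidate respiratory signal:
    its variance. The patent's actual comparison criterion is not
    specified in the abstract.
    """
    mean = sum(signal) / len(signal)
    return sum((x - mean) ** 2 for x in signal) / len(signal)


def select_optimal_scale(signals_by_scale):
    """Pick the segmentation scale whose target-area signal scores best.

    signals_by_scale: dict mapping a scale label to the local
    respiratory signal extracted from that scale's target area.
    Returns (best_scale, best_signal).
    """
    best_scale = max(signals_by_scale,
                     key=lambda s: signal_quality(signals_by_scale[s]))
    return best_scale, signals_by_scale[best_scale]
```

A production metric would more likely score periodicity in the respiratory frequency band; variance merely illustrates the select-by-score structure.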
DISPLAY OF MULTIPLE AUTOMATED ORTHODONTIC TREATMENT OPTIONS
Methods for generating multiple orthodontic treatment options for a digital 3D model of teeth in malocclusion. The method generates a plurality of different orthodontic treatment plans for the teeth and displays in a user interface the digital 3D model of teeth in malocclusion with a visual indication of each of the plurality of different orthodontic treatment plans. The visual indication of the treatment plans can be overlaid on the digital 3D model of teeth in malocclusion and possibly include aligners, brackets, or a combination of aligners and brackets. A doctor, technician, or other user can then select one of the treatment plans for a particular patient.
Robot and method for controlling the same
A robot according to the present disclosure comprises: a microphone; a camera disposed to face a predetermined direction; and a processor configured to: deactivate driving of the camera and activate driving of the microphone if a driving mode of the robot is set to a user monitoring mode; acquire a sound signal through the microphone; activate the driving of the camera based on an event estimated from the acquired sound signal; confirm the event from the image acquired through the camera; and control at least one component included in the robot to perform an operation based on the confirmed event.
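The control flow in this abstract is a small state machine: in user-monitoring mode only the microphone runs, a sound-estimated event wakes the camera, and the camera image must confirm the event before any operation is performed. A minimal sketch, with all class and method names invented for illustration:

```python
class MonitoringRobot:
    """Sketch of the described mode logic; not the patented controller."""

    def __init__(self):
        self.mic_active = False
        self.camera_active = False

    def set_user_monitoring_mode(self):
        # Monitoring mode: microphone on, camera off (saves power/privacy).
        self.mic_active = True
        self.camera_active = False

    def on_sound(self, estimated_event):
        # Wake the camera only when the sound suggests some event.
        if estimated_event is not None:
            self.camera_active = True
        return self.camera_active

    def confirm_with_camera(self, image_event, estimated_event):
        # Act only when the camera image confirms the sound-based estimate.
        return self.camera_active and image_event == estimated_event
```

Keeping the camera off until audio flags an event is the design point here: the cheap always-on sensor gates the expensive one.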
Reflective video display apparatus for interactive training and demonstration and methods of using same
A smart mirror can show live or recorded streaming video of an instructor performing a workout in a package that is attractive and unobtrusive enough to hang in a living room. The smart mirror includes a mirror surface with a fully reflecting section and a partially reflecting section. A display behind the partially reflecting section shows the video when the smart mirror is on and is almost invisible when the smart mirror is off. The smart mirror also has a speaker, a microphone, and a camera to enable a user to view the video content and interact with the instructor. The smart mirror may connect to the user's smart phone, a peripheral device (e.g., a Bluetooth speaker) to augment user experience, a biometric sensor to provide biometric data to assess user performance, and/or a network router to connect the smart mirror to a content provider, an instructor, and/or other users.
Systems and methods for diagnosing a stroke condition
A method for estimating a likelihood of a stroke condition of a subject, the method comprising: acquiring clinical measurement data pertaining to said subject, said clinical measurement data including at least one of image data, sound data, movement data, and tactile data; extracting from said clinical measurement data, potential stroke features according to at least one predetermined stroke assessment criterion; comparing said potential stroke features with classified sampled data acquired from a plurality of subjects, each positively diagnosed with at least one stroke condition, defining a positive stroke dataset; and determining, according to said comparing, a probability of a type of said stroke condition, and a probability of a corresponding stroke location of said stroke condition with respect to a brain location of said subject.
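The comparison step, matching extracted features against the positive stroke dataset to yield per-type probabilities, could be sketched with a k-nearest-neighbor vote. k-NN is an illustrative stand-in for the unspecified classifier, and all names here are assumptions.

```python
def euclidean(a, b):
    """Distance between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5


def stroke_type_probabilities(features, positive_dataset, k=3):
    """Estimate per-type probabilities by voting among the k closest
    positively diagnosed samples.

    positive_dataset: list of (feature_vector, stroke_type) pairs from
    subjects positively diagnosed with a stroke condition.
    Returns a dict mapping stroke type to its vote fraction.
    """
    nearest = sorted(positive_dataset,
                     key=lambda sample: euclidean(features, sample[0]))[:k]
    counts = {}
    for _, stroke_type in nearest:
        counts[stroke_type] = counts.get(stroke_type, 0) + 1
    return {t: c / k for t, c in counts.items()}
```

The same voting scheme could be extended with location labels to produce the corresponding stroke-location probability the claim describes.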