Patent classifications
A61B5/744
Systems and methods for displaying images to patient residing on movable table during medical imaging or therapeutic procedures
Systems and methods are provided for delivering images to a patient before and/or during a medical procedure in which a patient is translated on a table relative to a gantry. In various example embodiments, images are projected to the patient while preserving the projected field size during table motion, thereby potentially reducing patient anxiety by providing a more immersive patient viewing experience. In some example embodiments, the projected field size is maintained by a display system that is secured to the table such that both a projector and a projection screen are fixed relative to the table, and relative to the patient, during translation of the table. In some example embodiments, a reduction in patient anxiety may be achieved by projecting images as virtual images that are perceived by the patient as residing at a depth that lies beyond the confined spatial region in which the patient resides.
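The geometric reason the projected field size is preserved can be sketched in a few lines: for a projector with a fixed throw angle, the field width depends only on the throw distance, so mounting both projector and screen to the table keeps that distance, and hence the field size, constant during translation. A minimal illustration in Python (the simple throw-angle model is an assumption, not taken from the abstract):

```python
import math

def projected_field_width(throw_distance_m, half_angle_deg):
    """Width of the projected field for a fixed projection half-angle.

    If the projector and screen are both secured to the table, the throw
    distance (and hence this width) does not change as the table moves.
    """
    return 2.0 * throw_distance_m * math.tan(math.radians(half_angle_deg))
```

Because `throw_distance_m` is fixed by the mounting, the same call yields the same width at every table position.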
SYSTEM FOR DISPLAYING MEDICAL MONITORING DATA
A patient monitoring hub can communicate bidirectionally with external devices such as a board-in-cable or a dongle. Medical data can be communicated from the patient monitoring hub to the external devices to cause the external devices to initiate actions. For example, an external device can perform calculations based on data received from the patient monitoring hub, or take other actions (for example, creating a new patient profile, resetting baseline values for algorithms, calibrating algorithms, etc.). The external device can also communicate display characteristics associated with its data to the monitoring hub. The monitoring hub can calculate a set of options for combined layouts corresponding to different external devices or parameters. A display option may be selected for arranging the display screen real estate on the monitoring hub.
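The hub's computation of combined-layout options can be pictured as enumerating arrangements of device tiles that fit on the screen. A minimal sketch, assuming each parameter tile has a known width (the tile names, widths, and fitting rule below are hypothetical simplifications):

```python
from itertools import permutations

def layout_options(tiles, screen_width):
    """Enumerate candidate left-to-right layouts of display tiles.

    tiles: mapping of parameter/device name -> tile width (arbitrary units).
    Each candidate keeps, in order, the tiles that still fit on screen.
    """
    options = []
    for order in permutations(tiles):
        fitted, used = [], 0
        for name in order:
            if used + tiles[name] <= screen_width:
                fitted.append(name)
                used += tiles[name]
        if fitted and fitted not in options:
            options.append(fitted)
    return options
```

A selected option from this set would then drive how the hub partitions its screen among the connected devices.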
DISPLAY CONTROL METHOD, DISPLAY CONTROL DEVICE AND STORAGE MEDIUM
Provided is a display control method for a display control device including a processor and a storage. The method includes generating first display data for a display to display user marks at intervals, added one by one as time elapses from a given timing. The intervals each correspond to the user's step length, obtained based on measured data on the user's running or walking. The user marks each represent a position where a foot of the user lands in one step. The method further includes generating second display data for the display to display reference marks at intervals, likewise added one by one as time elapses. These intervals each correspond to a predetermined reference step length, and the reference marks each represent a position where a reference foot lands in one step.
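The spacing of the two mark series can be stated directly: each mark sits one step length beyond the previous one, with user marks spaced by the measured step length and reference marks by the predetermined reference length. A minimal sketch (the numeric step lengths are hypothetical):

```python
def landing_positions(step_length_m, num_steps):
    """Positions (meters from the start) where a foot lands, one per step."""
    return [step_length_m * (i + 1) for i in range(num_steps)]

# User marks from a measured step length vs. marks for a reference length;
# both values here are assumed for illustration.
user_marks = landing_positions(0.75, 4)
reference_marks = landing_positions(0.80, 4)
```

Displaying the two series side by side lets the user compare their actual stride against the reference stride step by step.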
DEVICE AND METHOD FOR PROVIDING VISUAL PERCEPTUAL TRAINING
Disclosed is a device for providing visual perceptual training, including an output module and a controller that controls the output module to output an instruction message informing a trainee of a rule related to the visual perceptual training, provide a training session for the visual perceptual training, and store a result of the training session for evaluating the trainee's cognitive ability. In the training session, the controller may control the output module to sequentially display visual objects, check a type of a response of the trainee, determine whether the checked response is correct according to the rule, which includes a first condition and a second condition based on an attribute of the visual object, the display order of the visual object, and the type of the checked response, and output feedback indicating that the checked response is correct.
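One way the two-condition correctness check could look, purely as an illustration (the abstract does not specify the conditions, so the attribute cue, ordering rule, and response names below are invented):

```python
def is_correct(visual_object, display_order, response_type):
    """Hypothetical rule: the trainee should respond 'select' when the
    object's attribute matches the cued attribute (first condition) and
    the object is not the first one displayed (second condition);
    otherwise the expected response is 'pass'."""
    expected = ("select"
                if visual_object["attribute"] == "cue" and display_order > 0
                else "pass")
    return response_type == expected
```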
System and method for assessing cognitive and mood states of a real world user as a function of virtual world activity
Cognitive and mood states of a real-world person are assessed according to activity in a virtual world environment with which the person interacts. The virtual world is configured to provide interactive experiences for assessing the person's cognitive and/or mood states. The system requires the person to configure a session avatar, reflective of the person's state, and to configure the virtual world environment during each virtual world session, providing then-current insight into the person's mood state. The system also permits the user to visit destinations, perform tasks, and play games included in the environment, providing insight into the person's cognitive and/or mood states according to the person's selections and/or performance.
Systems and methods for analyzing and treating learning disorders
Devices, systems, and methods are provided for analyzing and treating learning disorders using software as a medical device. A method may include identifying, by a device, application-based cognitive musical training (CMT) exercises associated with performance of software; receiving a first user input to generate a first sequence of the application-based CMT exercises; presenting a first application-based CMT exercise of the application-based CMT exercises based on the first sequence; receiving, during the presentation of the first application-based CMT exercise, a second user input indicative of a user interaction with the first application-based CMT exercise; generating, based on a comparison of the second user input to a performance threshold, a second sequence of the application-based CMT exercises, the first sequence being different from the second sequence; and presenting a second application-based CMT exercise of the application-based CMT exercises based on the second sequence.
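The adaptive resequencing step can be sketched minimally, assuming each exercise carries a difficulty score (the remediation policy below — easier items first when performance is under the threshold — is an assumption, not the claimed algorithm):

```python
def next_sequence(exercises, score, threshold):
    """Return a new CMT exercise order based on measured performance.

    Below the performance threshold, easier exercises come first
    (remediation); at or above it, harder exercises come first.
    """
    return sorted(exercises, key=lambda e: e["difficulty"],
                  reverse=(score >= threshold))
```

Because the ordering depends on the user's score, the second sequence generally differs from the first, matching the adaptive behavior the abstract describes.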
Apparatus and method for identification of wheezing in auscultated lung sounds
Described herein are a computer-enhanced medical method and device for generating an asthmatic condition indication. The apparatus receives a lung signal from a stethoscope, the lung signal having been converted from an analog signal to a digital signal. Furthermore, circuitry included in the apparatus performs, inter alia, the following: displays a patient recording canvas corresponding to physical locations on a body of the patient, the canvas including an anterior patient orientation and a posterior patient orientation, generates a recording process, the recording process including recording, for a predetermined period of time, the detected lung signal, and associates the recording with a marked location. Furthermore, the circuitry merges the recorded lung signal from each marked location on the patient recording canvas as merged information, and applies processing to the merged information to generate the asthmatic condition indication.
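The merge step can be pictured as collecting each marked canvas location's recorded segments into one structure handed to the downstream wheeze analysis; a sketch (the location labels are illustrative):

```python
def merge_recordings(recordings):
    """Combine recorded lung-signal segments keyed by canvas location.

    recordings: iterable of (location, samples) pairs, e.g. from marked
    points on the anterior and posterior views of the recording canvas.
    """
    merged = {}
    for location, samples in recordings:
        merged.setdefault(location, []).extend(samples)
    return merged
```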
Enhancing Exercise Through Augmented Reality
The disclosure relates to enhancing exercise through augmented reality. In particular, the disclosure describes monitoring a user's performance and generating a virtual representation of that user's performance to be displayed during a future exercise routine to motivate the user to improve performance during their next workout.
IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
An image processing device includes an input/output interface and at least one processor. The at least one processor executes: detecting feature points in a facial image of an object included in an image taken from the input/output interface; acquiring first distance information between the feature points at a first timing before a beauty treatment is performed; acquiring second distance information between the feature points at a second timing after the beauty treatment is performed; acquiring a difference value between the first distance information and the second distance information; and determining, based on the acquired difference value, whether the beauty treatment has been correctly performed.
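The before/after comparison reduces to pairwise distances between detected feature points at the two timings; a minimal sketch (the change tolerance is an assumed parameter, not specified by the abstract):

```python
import math

def pairwise_distances(points):
    """Euclidean distance between every pair of feature points."""
    n = len(points)
    return {(i, j): math.dist(points[i], points[j])
            for i in range(n) for j in range(i + 1, n)}

def treatment_changed_face(before, after, tolerance=1e-3):
    """True if any inter-feature distance changed by more than tolerance
    between the first timing (before) and second timing (after)."""
    d1, d2 = pairwise_distances(before), pairwise_distances(after)
    return any(abs(d2[k] - d1[k]) > tolerance for k in d1)
```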
Biosignal-based avatar control system and method
A biosignal-based avatar control system according to an embodiment of the present disclosure includes an avatar generating unit that generates a user's avatar in a virtual reality environment, a biosignal measuring unit that measures the user's biosignal using a sensor, a command determining unit that determines the user's command based on the measured biosignal, an avatar control unit that controls the avatar to perform the command, an output unit that outputs an image of the avatar in real time, and a protocol generating unit that generates a protocol providing predetermined tasks and determines whether the avatar has performed the predetermined tasks. According to an embodiment of the present disclosure, it is possible to provide real-time feedback by understanding the user's intention through analysis of biosignals and controlling the user's avatar in a virtual reality environment, thereby improving the user's brain function and motor function.
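The command-determining step can be sketched as mapping a measured biosignal feature to a discrete avatar command; the thresholds and command names here are hypothetical, not taken from the disclosure:

```python
def determine_command(normalized_amplitude):
    """Map a normalized biosignal amplitude (0..1), e.g. from an EMG or
    EEG feature, to an avatar command. Thresholds are illustrative."""
    if normalized_amplitude >= 0.7:
        return "grasp"
    if normalized_amplitude >= 0.3:
        return "move"
    return "rest"
```

In a full system, the avatar control unit would execute the returned command and the output unit would render the result back to the user in real time.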