Patent classifications
A61B5/0013
DISCHARGE RISK AND MANAGEMENT
A method comprising receiving an input indicating intake information associated with a patient. Based on the input, the method further includes determining an initial discharge date and receiving mobility information associated with the patient. Based in part on the mobility information, the method further includes determining an estimated discharge date and a confidence metric associated with the estimated discharge date, determining that the estimated discharge date is later than the initial discharge date by more than a threshold period of time, and determining that the confidence metric is greater than a threshold metric. Based in part on the estimated discharge date being later than the initial discharge date by more than the threshold period of time and the confidence metric being greater than the threshold metric, the method further includes generating an alert.
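The alert condition described above (estimated date later than the initial date by more than a threshold period, and confidence above a threshold metric) can be sketched as follows. This is a hypothetical illustration only; the function name, thresholds, and types are assumptions, not taken from the patent.

```python
from datetime import date, timedelta

# Hypothetical sketch of the claimed alert logic. The default thresholds
# (2 days, 0.8 confidence) are illustrative assumptions.
def should_alert(initial_discharge: date,
                 estimated_discharge: date,
                 confidence: float,
                 threshold_days: int = 2,
                 threshold_confidence: float = 0.8) -> bool:
    delay = estimated_discharge - initial_discharge
    late_enough = delay > timedelta(days=threshold_days)      # later by more than the threshold period
    confident_enough = confidence > threshold_confidence      # metric exceeds the threshold metric
    return late_enough and confident_enough                   # both conditions gate the alert

# A 4-day delay with 0.9 confidence would trigger an alert;
# a 1-day delay or 0.5 confidence would not.
alert = should_alert(date(2024, 3, 1), date(2024, 3, 5), 0.9)
```

Note that both conditions must hold: a confident but small delay, or a large but uncertain delay, does not generate an alert.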
VISION-BASED PATIENT STIMULUS MONITORING AND RESPONSE SYSTEM AND METHOD UTILIZING VISUAL IMAGES
Vision-based stimulus monitoring and response systems and methods are presented, wherein detection, via image(s) of a patient, of an external stimulus, such as a caregiver, prompts analysis of the patient's response, via secondary patient sensors or via analysis of patient image(s), to determine an autonomic nervous system (ANS) state.
System and Method for Correcting for Distortions of a Diagnostic Image
A method for correcting a geometric distortion in a diagnostic image, said method comprising the steps of: receiving a segmented volumetric image and a surface scan image mesh, corresponding to a maxillofacial anatomy of a patient; aligning the surface scan image mesh to the volumetric image; and applying a transformation to the surface scan image mesh in which the geometry of an HV-LD is altered, yielding a surface scan image mesh with tooth crowns closely corresponding to the teeth of the volumetric image, thereby correcting the geometric distortion.
INTELLIGENT SYSTEM FOR SEARCH AND RESCUE IN SPECIAL ENVIRONMENT SUCH AS DISASTER
Provided is an intelligent system for search and rescue in a special environment such as a disaster, including a body surface feature extracting apparatus, a vital sign extracting apparatus, a speech feature extracting apparatus and a network transmission apparatus that are successively in communication connection with one another. The body surface feature extracting apparatus uses a gated recurrent unit (GRU) network model for transmission and storage. The speech feature extracting apparatus includes a sound collecting module, a sound feature extracting module and a sound analyzing and processing module that are successively in communication connection with one another, with the sound analyzing and processing module being provided with a noise database comprising a plurality of ambient sounds. The network transmission apparatus includes a Zigbee network communication module, a network transmission module, a drone network relay module and a network receiving base station that are successively in communication connection with one another.
Mesh network personal emergency response appliance
A monitoring system including a user activity sensor to determine patterns of activity based upon the user activity occurring over time.
IMAGE CAPTURE SYSTEMS AND METHODS FOR IDENTIFYING ABNORMALITIES USING MULTISPECTRAL IMAGING
A method for identifying a skin abnormality including using an imaging device, the imaging device including a lighting member for directing light toward a target surface, the lighting member including a plurality of lighting elements, and a filter member positioned between the lighting member and the target surface, the filter member including a plurality of filter elements, capturing a plurality of images of the target surface, each of the plurality of images captured when illuminating a different one or more of the plurality of lighting elements, compiling the plurality of images into a data package, transmitting the data package to a server for processing the data package, and determining, at the server, a presence of an abnormality.
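The capture sequence claimed above (one image per illuminated lighting element, compiled into a data package for transmission) can be sketched as below. All names, the element list, and the dictionary layout are illustrative assumptions; the actual device interfaces are not specified in the abstract.

```python
# Hypothetical sketch: illuminate one lighting element at a time,
# capture an image of the target surface, and compile the results
# into a single data package for transmission to a server.
def capture_multispectral_package(lighting_elements, capture_image):
    images = []
    for element in lighting_elements:
        # Each image is captured while a different lighting element is lit.
        images.append({"element": element, "image": capture_image(element)})
    # The compiled package would then be transmitted for server-side analysis.
    return {"images": images, "count": len(images)}

# Usage with a stand-in capture function (a real device would return pixel data):
package = capture_multispectral_package(
    ["uv", "blue", "green", "red", "ir"],
    capture_image=lambda element: f"frame_{element}",
)
```

The per-element loop mirrors the claim language: "each of the plurality of images captured when illuminating a different one or more of the plurality of lighting elements."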
SYSTEM AND METHODS OF CAPTURING MEDICAL IMAGING DATA USING A MOBILE DEVICE
Methods for capturing medical images associated with a patient using a mobile image-capturing device are disclosed. One example method includes accessing patient identifying information. A user may launch a scan application installed in the mobile image-capturing device by visually scanning a machine-readable optical label, which may also authenticate the user. Upon accessing the scan application, the scan application may be provided with the patient identifying information scanned from the machine-readable optical label, and one or more details of the patient based on the patient identifying information scanned from the machine-readable optical label may be displayed at an interface of the scan application. The scan application may capture medical images using an imaging equipment of the mobile image-capturing device, associate the captured images with the patient identifying information, and transmit the captured images with the patient identifying information to a storage location.
CONNECTED BODY SURFACE CARE MODULE
A wearable treatment and analysis module is provided. The module is positioned on or near a body surface region of interest. The module provides remote access to sensor data, treatment administration, and/or other health care regimens via a network connection with a user device and/or management system.
MINIATURIZED MOBILE, LOW COST OPTICAL COHERENCE TOMOGRAPHY SYSTEM FOR HOME BASED OPHTHALMIC APPLICATIONS
Improved optical coherence tomography systems and methods to measure thickness of the retina are presented. The systems may be compact, handheld, provide in-home monitoring, allow the patient to measure himself or herself, and be robust enough to be dropped while still measuring the retina reliably.
Systems and methods for evaluating human eye tracking
Systems and methods are disclosed for evaluating human eye tracking. One method includes receiving data representing the location of and/or information tracked by an individual's eye or eyes before, during, or after the individual performs a task; identifying a temporal phase or a biomechanical phase of the task performed by the individual; identifying a visual cue in the identified temporal phase or biomechanical phase; and scoring the tracking of the individual's eye or eyes by comparing the data to the visual cue.
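The final step of the method above, scoring eye tracking by comparing gaze data to a visual cue, could take many forms; one minimal sketch is the fraction of gaze samples landing near the cue's location during the identified phase. The function, coordinate format, and distance threshold are assumptions for illustration, not the patent's scoring method.

```python
# Hypothetical scoring sketch: compare recorded gaze samples (x, y)
# to a known visual-cue location within one phase of the task.
# The 50-pixel tolerance is an illustrative assumption.
def score_tracking(gaze_samples, cue_xy, max_dist=50.0):
    """Return the fraction of gaze samples within max_dist of the cue."""
    def near(point):
        dx, dy = point[0] - cue_xy[0], point[1] - cue_xy[1]
        return (dx * dx + dy * dy) ** 0.5 <= max_dist
    hits = sum(1 for point in gaze_samples if near(point))
    return hits / len(gaze_samples) if gaze_samples else 0.0

# Two of three samples fall near the cue at (100, 100).
score = score_tracking([(100, 100), (110, 105), (400, 300)], cue_xy=(100, 100))
```

In practice the comparison would be restricted to the temporal or biomechanical phase in which the cue was identified, as the method specifies.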