Cooperative health intelligent emergency response system for cooperative intelligent transport systems
12125117 · 2024-10-22
Assignee
Inventors
- Moayad ALOQAILY (Abu Dhabi, AE)
- Haya ELAYAN (Abu Dhabi, AE)
- Mohsen GUIZANI (Abu Dhabi, AE)
- Fakhri KARRAY (Abu Dhabi, AE)
CPC classification
A61B5/747
HUMAN NECESSITIES
G16H50/20
PHYSICS
A61B5/26
HUMAN NECESSITIES
International classification
A61B5/00
HUMAN NECESSITIES
A61B5/26
HUMAN NECESSITIES
Abstract
A system, method and computer readable medium for emergency health response, including sensors for measuring health conditions of a user, a local machine learning device to predict abnormalities in health status of the user based on the measurements, a communications device for transmitting an emergency alert message to emergency response providers that are within range of the communications device, and for receiving response messages from emergency response providers that are available to provide emergency treatment. A health condition controller selects a provider. When the provider is a hospital, the subject vehicle will set its destination to the hospital and will transmit health status information of the user to the provider. When the provider is an emergency response vehicle, the subject vehicle will communicate coordinates as a meeting destination for meeting the provider response vehicle and will transmit health status information of the user to the provider response vehicle.
Claims
1. A system for emergency health response, comprising: a plurality of health monitoring devices including sensors for measuring ECG, heart rate, skin color, motion, respiratory rate, and oxygen level of a user that is inside a subject vehicle; a local machine learning device to predict abnormalities in a health status of the user that are indications of an impending emergency health condition based on measurements by the plurality of health monitoring devices, wherein the abnormalities in the user's health status and associated sensor data are stored in a distributed blockchain; a long-range communications device connected to the local machine learning device configured to transmit an emergency alert message to one or more of an emergency response vehicle and a stationary emergency health care facility that are within a communication range of the communications device, and for receiving response messages from an emergency response vehicle and a stationary emergency health care facility that are available to provide emergency treatment for an emergency health condition indicated in the emergency alert message; a health condition controller to select an emergency response vehicle or a stationary emergency health care facility, as a provider, from among the emergency response vehicles or the stationary emergency health care facilities that sent the response messages; a global machine learning device configured to: coordinate with the local machine learning device and perform a federated learning process in which: the local machine learning device is configured to update local parameters, and send, using the communications device, the updated local parameters to the global machine learning device, wherein the global machine learning device is further configured to: aggregate updates of local parameters from different local machine learning devices, including the local machine learning device, and use the aggregated updated parameters to train the global machine learning device, and share global parameters of the global machine learning device with the different local machine learning devices; wherein when the provider is a stationary emergency health care facility, the subject vehicle sets a destination to a provider health care facility and transmits, via the communications device, health status information of the user to the provider health care facility, and wherein when the provider is an emergency response vehicle, the subject vehicle communicates coordinates as a meeting destination for meeting a provider response vehicle, and transmits, via the communications device, the health status information of the user to the provider response vehicle.
2. The system of claim 1, wherein the local machine learning device is a long short-term memory deep learning network.
3. The system of claim 1, further comprising: an in-vehicle sub-system that includes the local machine learning device, wherein when the local machine learning device predicts an abnormality in the health status that indicates an impending emergency health issue, a health alert is issued that triggers an out-vehicle sub-system that uses the long-range communications device to broadcast the emergency alert message.
4. The system of claim 3, wherein the local machine learning device of the in-vehicle sub-system is a component of a mobile device, and wherein the mobile device includes a short-range wireless communications device for receiving the measurements by the sensors.
5. The system of claim 1, wherein the global machine learning device and the local machine learning device are configured to perform the federated learning process with data captured by Internet of Things (IoT) devices to predict the health abnormalities including fatigue, shock, losing consciousness, impending heart attack, impending heart failure, shortness of breath, or brain damage.
6. The system of claim 1, wherein the sensors include a plurality of ECG sensors mounted in a steering wheel of the subject vehicle, wherein the plurality of ECG sensors include one ECG sensor for placement of one finger of the user and a second ECG sensor for placement of a second finger of the user, in order to obtain two ECG signals.
7. The system of claim 6, wherein the one ECG sensor and the second ECG sensor each have a width to accommodate one finger.
8. The system of claim 6, wherein the plurality of ECG sensors are mounted equally spaced around a perimeter of the steering wheel, such that a continuous ECG signal is obtained as the steering wheel is rotated.
9. The system of claim 1, wherein the sensors include a wearable sensor device having a short-range wireless communications device for transmitting ECG signals to the local machine learning device.
10. The system of claim 1, wherein the subject vehicle is configured to self-drive, wherein when the provider is the stationary emergency health care facility, the subject vehicle sets a destination to the provider health care facility and automatically drives to the provider health care facility, and wherein when the provider is the emergency response vehicle, the subject vehicle communicates coordinates as a meeting destination for meeting the provider response vehicle, and automatically drives to location coordinates of the provider response vehicle.
11. A non-transitory computer readable storage medium storing computer program instructions, which when executed by processing circuitry, performs a method comprising: measuring, via sensors, ECG, heart rate, skin color, motion, respiratory rate, and oxygen level of a user that is inside a subject vehicle; predicting, via a local machine learning device, abnormalities in health status of the user that are indications of an impending emergency health condition based on the measurements by the sensors and storing the abnormalities in the user's health status and associated sensor data in a distributed blockchain; transmitting, via a long-range communications device, an emergency alert message to one or more of an emergency response vehicle and a stationary emergency health care facility that are within a communication range of the communications device, and receiving response messages from an emergency response vehicle and a stationary emergency health care facility that are available to provide emergency treatment for an emergency health condition indicated in the emergency alert message; training the local machine learning device using a federated learning process in coordination with a global machine learning device, the federated learning process comprising: conducting a local training process to update local parameters of the local machine learning device; sending, using the communications device, the updated local parameters to the global machine learning device, which aggregates updates of local parameters from different local machine learning devices and uses the aggregated updated parameters to train the global machine learning device; and receiving aggregated parameters of the global machine learning device; selecting, by a health condition controller, an emergency response vehicle or a stationary emergency health care facility, as a provider, from among the emergency response vehicles or the stationary emergency health care facilities that sent the 
response messages; wherein when the provider is a stationary emergency health care facility, the subject vehicle sets a destination to the provider health care facility and transmits, via the communications device, health status information of the user to the provider health care facility, and wherein when the provider is an emergency response vehicle, the subject vehicle communicates coordinates as a meeting destination for meeting the provider response vehicle, and transmits, via the communications device, the health status information of the user to the provider response vehicle.
12. The computer readable storage medium of claim 11, further comprising: when the local machine learning device predicts an abnormality in the user's health status, a health alert is issued that triggers an out-vehicle sub-system that includes broadcasting the emergency alert message.
13. The computer readable storage medium of claim 12, wherein the local machine learning device of an in-vehicle sub-system is a component of a mobile device, the method further comprising: wirelessly receiving the measurements by the sensors via a short-range wireless communications device of the mobile device.
14. The computer readable storage medium of claim 11, further comprising: performing the federated learning process with data captured by Internet-of-Things (IoT) devices to predict health abnormalities including fatigue, shock, losing consciousness, heart attacks, heart failure, shortness of breath, or brain damage.
15. The computer readable storage medium of claim 11, further comprising: measuring the ECG by a plurality of ECG sensors mounted equally spaced around a perimeter of a steering wheel of the subject vehicle, such that a continuous ECG signal is obtained as the steering wheel is rotated.
16. The computer readable storage medium of claim 11, further comprising: transmitting, via a communications device, an emergency alert message when the predicted abnormality in the user's health status is above a predetermined risk level.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) A more complete appreciation of this disclosure and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, wherein:
DETAILED DESCRIPTION
(36) In the drawings, like reference numerals designate identical or corresponding parts throughout the several views. Further, as used herein, the words a, an and the like generally carry a meaning of one or more, unless stated otherwise. The drawings are generally drawn to scale unless specified otherwise or illustrating schematic structures or flowcharts.
(37) Furthermore, the terms approximately, approximate, about, and similar terms generally refer to ranges that include the identified value within a margin of 20%, 10%, or preferably 5%, and any values therebetween.
(38) This disclosure relates to a Cooperative Health Intelligent Emergency Response System for autonomous vehicles (AVs) that allows the vehicle to act collaboratively with different Cooperative Intelligent Transport Systems (C-ITS) to respond to emergency alerts coming from an in-vehicle intelligent health monitoring system. The disclosure provides an advanced in-vehicle intelligent health monitoring system that employs federated learning to protect user privacy with wider accessibility. The disclosure provides a Cooperative Health Intelligent Emergency Response system that reduces the time for an individual suffering a health abnormality to receive emergency treatment. The disclosure provides a set of algorithms that implement a framework by applying deep federated learning and vehicle-to-everything (V2X) connectivity. The disclosure provides a machine learning model that predicts an impending health issue. The disclosure provides a system and method that is optimized for the problem of selecting the best decision that minimizes the delay for a passenger suffering a health abnormality to receive treatment. The disclosure provides comprehensive experiments for both deep federated learning and vehicular networks to test and evaluate the system.
(39) With the expected increase in the percentage of the population living in urban areas to about 60% of the global population by 2030, there has been an increasing interest in creating smart cities to improve the quality of life. According to forecasts, investments in smart cities are expected to reach $158 billion by 2022. A smart city employs information and communication technologies over connected objects in an urban area to improve its operations. This connected-objects arrangement uses wireless technologies to transmit data collected from citizens, buildings, devices, and vehicles through the IoT.
(40) IoT has been integrated into today's lifestyle through connecting everything to almost everything, including smartphones, smart buildings, smart homes, as well as healthcare wearable devices. Moreover, IoT sensors and devices have contributed toward the improvement of healthcare systems by facilitating the health workflow, speeding up access to medical records, increasing the accuracy of collected data from different sources, sharing capabilities, as well as fighting pandemics. According to reports published by the U.S. Institute of Medicine, medical errors claim the lives of 400,000 people each year due to issues related to data inefficiency. More specifically, it is the inability to access a patient's medical history, missed and delayed diagnoses, or corrupted health data that are often a proximal cause for such deaths. IoT technological advancements have significantly influenced the healthcare system in connecting it to the user's personal device, which can capture, store, and notify health providers of the relevant health data in real time and thereby increase effective health support and reduce the mortality rate.
(41) Autonomous Vehicles (AVs) are another key technology that plays a game-changing role in smart cities and intelligent transportation systems. An AV is a vehicle that can drive itself without human intervention or control by sensing its environment. AVs are equipped with IoT devices, sensors, and machine learning capabilities to operate safely on the roads with functions such as auto cruise control, blind-spot detection, lane departure detection, lane keep assist, parking assist, automatic braking, etc.
(42) Using IoT devices, AVs can be connected to anything via vehicle-to-everything (V2X) communications, including vehicle-to-network (V2N), vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), vehicle-to-pedestrian (V2P), vehicle-to-cloud (V2C), and vehicle-to-device (V2D). These different forms of communications enable all elements to coordinate their actions by interacting together through the sharing of information, and this is the domain of Cooperative Intelligent Transport Systems (C-ITS). See Cooperative, Connected and Automated Mobility (CCAM). Accessed: Apr. 20, 2021. [Online]. Available: ec.europa.eu/transport/themes/its/c-its_en, incorporated herein by reference in its entirety.
(43) C-ITS aims to improve road safety and traffic efficiency, provide driving comfort, reduce fuel and energy consumption, reduce harmful emissions, reduce travel time, adapt to different situations, and facilitate transportation access for people with disabilities and the elderly, etc.
(44) While the AV in C-ITS can monitor itself and the surrounding environment and communicate with other elements of the intelligent transportation systems at the same time, in some embodiments, the AV can also monitor and communicate with passengers inside the vehicle through IoT devices integrated into the AV system. The monitoring and communication aim to enable the vehicle to assist the passengers and meet their needs.
(45) A passenger health monitoring system is an important application for ITS, as the vehicle will be aware of the passenger's health condition and can react according to it to save the driver's life. In Germany, a study found that around 24% of highway accidents are caused by fatigue. To migrate to C-ITS from ITS, the vehicle communicates with other elements of the ITS, which include other vehicles and roadside units (RSUs), in V2V and V2I connections to better handle the situation and increase the chances of saving the passenger's life by performing a Cooperative Health Emergency Response. Roadside units (RSUs) are non-mobile units, while vehicles are mobile. In AVs, data collected from health monitoring IoT devices embedded in the system, such as ECG, heart rate, skin color, motion, respiratory rate, oxygen levels, and other data, can be used in an Artificial Intelligence (AI) model. Provided such data, the AI model can be trained to predict combinations of abnormalities in the passenger's health status that indicate an impending health emergency condition. Conditions such as fatigue, shock, losing consciousness, heart attacks, heart failure, shortness of breath, or even brain damage are predicted using the AI model, which subsequently sends alerts so that the vehicle can respond to save the passenger's life.
(46) There is a need for a system and method for cooperative vehicle behavior in health emergencies. The present disclosure is directed to a system and method for cooperative response for AVs when drivers and/or passengers have abnormal health conditions that indicate an impending emergency health issue. This approach not only deals with monitoring the passenger's health status, but also deals with the vehicle's interaction with the surrounding environment through V2V and V2I connections to respond to the driver's and/or passenger's impending health issue.
(47) An object is to reduce the total time to receive the first emergency treatment by an emergency treatment provider. Embodiments use a federated intelligent health monitoring system to protect user privacy and provide information security with wider accessibility, by adopting 5G cellular networks and local area networks. Embodiments include two phases: the first phase is the in-vehicle phase, and the second is the out-vehicle phase. Although the in-vehicle phase is described as occurring before the out-vehicle phase, embodiments include concurrent operation of the in-vehicle phase and the out-vehicle phase, allowing many of the functions of the out-vehicle phase to be performed in advance as needed when an alert is broadcast.
In-Vehicle Phase
(49) In some embodiments, some operations of the in-vehicle phase of the autonomous vehicle system 102 may be included in a mobile application that is performed in a portable mobile device, such as a smartphone, tablet computer, or wearable computing device, to name a few. In particular, operations including a data normalization 114, report generator 116, and a machine learning model 124 can be included in a mobile application (App). The computer program for the App may be stored in a computer readable storage medium, such as a flash drive or other non-volatile memory device. The portable mobile device may be configured with a wireless gateway 122, as well as include short-range wireless communication for communication with an in-vehicle network 118 and Internet of Things (IoT) sensor devices 112.
(50) In some embodiments, the machine learning model 124 is a local model that is part of the Federated Learning process. A Federated Learning process is a process of training many substantially equivalent machine learning models, where local machine learning models train with local training data and each local machine learning model periodically shares its local parameters with a global machine learning model. The global machine learning model updates its parameters using the local parameters, then broadcasts the updated parameters back to the local machine learning models. The term federated comes from the use of a global machine learning model as the guiding parameters for the local machine learning models. The training data itself is kept secure with the local machine learning model. Any global training data is kept secure with the global machine learning model. In order to update a local machine learning model 124 continuously and make use of the data captured by the devices 112, the machine learning model 124 preferably goes through the following steps: 1) Conduct a local training process using the captured data. 2) The model updates are sent to a cloud service 130 that has a global model 132 through a cellular network 140 that includes a wireless gateway 122 inside the vehicle 102 connected to a wireless base station 136. In some embodiments, the cellular network may be any wireless communications network, including, but not limited to, 3G, 4G, 5G, 6G. 3) The cloud service 130 receives such updates from different vehicles registered with this service. 4) The updates are aggregated and used to train the global model 132. 5) The global model 132 is shared again with the vehicles so that the vehicles can use a more up-to-date model, and the process continues.
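As an illustration, the five-step update cycle described in paragraph (50) can be sketched in a few lines of Python. The two-parameter linear model, the per-vehicle data, and the plain-mean aggregation are simplifying assumptions of this sketch, not the disclosed deep model:

```python
def local_training(weights, data, lr=0.01):
    """Step 1: local gradient-style updates on data that never leaves the vehicle.
    A squared-error linear model stands in for the real health model."""
    for x, y in data:
        err = weights[0] * x + weights[1] - y
        weights = [weights[0] - lr * err * x, weights[1] - lr * err]
    return weights

def aggregate(updates):
    """Steps 3-4: the cloud service averages parameter updates from all vehicles."""
    n = len(updates)
    return [sum(w[i] for w in updates) / n for i in range(len(updates[0]))]

# Steps 2 and 5: vehicles send only weights; the global model is broadcast back.
global_weights = [0.0, 0.0]
vehicle_data = [
    [(1.0, 2.1), (2.0, 4.0)],   # vehicle A's private sensor data (never shared)
    [(1.5, 3.2), (3.0, 5.9)],   # vehicle B's private sensor data
]
for _ in range(300):  # repeated federated rounds
    updates = [local_training(list(global_weights), d) for d in vehicle_data]
    global_weights = aggregate(updates)
# global_weights now approximates a fit over all vehicles' data combined.
```

Only the two parameter values cross the network in each round; the raw (x, y) measurements stay in each vehicle's local storage, which is the privacy property the federated process provides.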
(51) Moreover, the reports generated by the Vehicle System's Report Generator 116 will be sent to other vehicles 142 and roadside units (RSUs) 144 in case of an emergency through V2V and V2I communications, and will also be stored in a secure, distributed blockchain 134 in a cloud service 130 through the cellular network 140 to allow the passenger's healthcare professionals to access these reports at any time and be aware of the passenger's case and health history. The purpose of using a blockchain database 134 is to provide information security for the user and protect the sensitive data from cyber-attacks and manipulations.
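As a rough sketch of why a blockchain 134 protects the stored reports from manipulation, the following hash chain (a toy stand-in for a full distributed blockchain; the field names are illustrative) shows how altering any stored health report invalidates every later block:

```python
import hashlib
import json

def make_block(report, prev_hash):
    """Append-only record: each block commits to the previous block's hash,
    so altering any stored health report breaks the chain of hashes."""
    block = {"report": report, "prev_hash": prev_hash, "ts": 0}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

# Build a small chain of (hypothetical) health status reports.
chain = [make_block({"genesis": True}, "0" * 64)]
for report in ({"hr": 110, "spo2": 91}, {"hr": 132, "spo2": 88}):
    chain.append(make_block(report, chain[-1]["hash"]))

def verify(chain):
    """Recompute every hash; any manipulated report makes verification fail."""
    for prev, blk in zip(chain, chain[1:]):
        body = {k: v for k, v in blk.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if blk["prev_hash"] != prev["hash"] or blk["hash"] != recomputed:
            return False
    return True
```

A real deployment adds distribution and consensus on top of this hashing structure; the sketch shows only the tamper-evidence property the disclosure relies on.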
Out-Vehicle Phase
Framework
(53) To lay out the foundation of the analysis of formulating the cooperative intelligent transportation system, the following terms are defined: the decision variables; the set of available hospitals, ℳ={1, 2, 3, . . . , M}, (2); and the set of available vehicles, 𝒩={1, 2, 3, . . . , N}, (3).
(55) An objective is to select the best decision that minimizes the delay to get the treatment. In the terms defined in paragraph (57), this objective can be expressed as:

(56) min over hospitals i and vehicles j of (d_j^i/S_j^i + D_j^i)
(57) Note that D_j^i, S_j^i, and d_j^i depend on hospital i and vehicle j, and represent, respectively: 1) D_j^i, the other delays dependent on the hospital and car (response time, waiting time, and others); 2) S_j^i, the estimated speed of car j to hospital i; and 3) d_j^i, the travel distance of car j to hospital i.
(58) The formulation is a Bilevel Linear Programming (BLP) problem that can be written in a standard matrix form.
(60) The system optimization is a BLP, which is NP-hard in general. However, the special structure of the problem reveals that it constitutes a unimodular optimization, in which the optimal solution lies on a vertex of the optimization space; hence, linear programming (LP) methods can be used to find the optimal solution in polynomial time.
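Because the optimum lies at a vertex, even a brute-force enumeration over (hospital, vehicle) pairs illustrates the selection for small instances. The delay model below follows the terms defined in paragraph (57); the distances, speeds, and delay values are hypothetical:

```python
def best_assignment(distances, speeds, delays):
    """Pick the (hospital i, vehicle j) pair minimizing total delay
    T[i][j] = d[i][j] / S[i][j] + D[i][j] (travel time plus other delays)."""
    best = None
    for i, row in enumerate(distances):
        for j, d_ij in enumerate(row):
            t = d_ij / speeds[i][j] + delays[i][j]
            if best is None or t < best[0]:
                best = (t, i, j)
    return best

# Hypothetical scenario: 2 hospitals x 2 candidate vehicles.
d = [[10.0, 4.0], [6.0, 8.0]]      # travel distance, km
S = [[60.0, 40.0], [30.0, 50.0]]   # estimated speed, km/h
D = [[0.10, 0.05], [0.20, 0.02]]   # response/waiting delay, hours
t, i, j = best_assignment(d, S, D)
```

For realistic instance sizes, the same selection would be delegated to an LP solver, as the unimodular structure noted above guarantees an integral vertex optimum.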
C-Healthier Algorithm Implementation
(61) The Cooperative Health Intelligent Emergency Response system (also referred to as C-HEALTHIER) consists of two parts: an in-vehicle part and an out-vehicle part.
(63) Algorithm 1: Federated Intelligent Health Monitoring System
Input: D = data captured. Output: Alert, mw, report, process success S.
    ND = normalized data; mw = model weights; lr = learning rate
    function NormalizationProcess():
        for record in D:
            ND = normalizer(record)
    function AlertPredictionProcess():
        for record in ND:
            label = local_model(record)
            if label != 0:
                return Alert
    function FederatedLocalTrainingProcess():
        for record in ND:
            mw = mw − lr · ∇loss(mw, record)
        return mw
    function ReportGeneratingProcess():
        for record in ND:
            report = report_generator(record)
        return report
    function BroadcastingProcess():
        . . .
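A minimal executable sketch of the monitoring loop in Algorithm 1 follows. The range-based normalizer and the fixed-threshold stand-in for local_model are assumptions of this sketch, used in place of the trained deep network:

```python
def normalize(record, lo=0.0, hi=200.0):
    """Normalization process: scale each raw sensor reading into [0, 1]."""
    return [(x - lo) / (hi - lo) for x in record]

def local_model(record, threshold=0.75):
    """Stand-in alert predictor: label 0 = normal, nonzero = abnormal.
    The disclosed model is a trained deep network; a threshold suffices here."""
    return 1 if max(record) > threshold else 0

def monitor(captured):
    """Alert prediction process from Algorithm 1: normalize each record,
    run the local model, and raise an alert on the first nonzero label."""
    for record in captured:
        nd = normalize(record)
        if local_model(nd) != 0:
            return "Alert"
    return None

# Hypothetical readings, e.g. heart rate, SpO2, systolic blood pressure.
readings = [[72, 98, 120], [175, 91, 130]]
```

Here monitor(readings) returns an alert because the second record exceeds the stand-in threshold; in the disclosed system, that alert is what triggers the out-vehicle broadcasting process.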
(64) Moreover, the normalized data 154 are used by the report generator 116 to generate a health status report 158 that is sent to the emergency treatment provider and broadcast to the blockchain database 134 to be stored there and become accessible for the respective health professionals.
(66) The computer-based control system 102 may be based on a microcontroller as the in-vehicle controller. A microcontroller may contain one or more processor cores (CPUs) along with memory (volatile and non-volatile) and programmable input/output peripherals. Program memory in the form of flash, ROM, EPROM, or EEPROM is often included on chip, as well as a secondary RAM for data storage. In one embodiment, the computer-based system 102 is an integrated circuit board 102 with a microcontroller 410. The board includes digital I/O pins 415, analog inputs 417, hardware serial ports 413, a USB connection 411, a power jack 419, and a reset button 421. Although a microcontroller-based board 102 is shown in
(67) The microcontroller 410 is a RISC-based microcontroller having flash memory 403, SRAM 407, EEPROM 405, general purpose I/O lines, general purpose registers, a real-time counter, flexible timer/counters, an A/D converter 409, and a JTAG interface for on-chip debugging. The microcontroller is a single system on a chip (SoC). The recommended input voltage is between 7-12 V.
(68) In some embodiments, the microcontroller 410 includes a machine learning engine 431 for performing training and inference processing for a machine learning model.
(70) In some embodiments, the computer system 500 may include a server CPU and a graphics card, in which the GPUs have multiple cores. In some embodiments, the computer system 500 may include a machine learning engine 512.
(72) Vehicles 102 can build their own local machine learning model (622, 624, 626) to help automatically predict an emergency medical condition. However, a vehicle 102 by itself will typically not have a sufficient number of training examples to train a deep learning network. Also, the training data involving medical information about a passenger may include private information. The federated learning system enables use of a larger number of training examples while keeping individual training examples in a secure storage. Only network parameters are shared. In the federated machine learning system, a centralized server maintains a global deep neural network 132, and each participating hospital 144 or other vehicle 142 would be given a copy of network parameters in order to train on their own dataset.
(73) In particular, once the local machine learning model has been trained locally for a small number of iterations, for example five iterations, the participants send their updated version of the network parameters back to the centralized server 130 and keep their dataset within their own secure infrastructure. Each vehicle includes its own privacy monitoring 612, 614, 616, and others. Each vehicle includes its own secure private data storage 632, 634, 636.
(74) The central server 130 then aggregates the contributions from all of the participants 124. In order to maintain security, only the updated network parameters are then shared with the participating vehicles, so that they will continue local training.
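The server-side aggregation described above can be sketched as a weighted parameter average in the style of federated averaging. Weighting each participant by its (private) sample count is an assumption of this sketch, not a requirement stated in the disclosure:

```python
def fedavg(param_sets, sample_counts):
    """Aggregate local parameter vectors into global parameters, weighting
    each participant by its number of training samples. Only parameters and
    counts leave each participant's secure infrastructure."""
    total = sum(sample_counts)
    dim = len(param_sets[0])
    return [
        sum(p[k] * n for p, n in zip(param_sets, sample_counts)) / total
        for k in range(dim)
    ]

# Three participants report updated parameters after local training.
params = [[0.2, 1.0], [0.4, 0.8], [0.3, 0.9]]
counts = [100, 300, 100]
global_params = fedavg(params, counts)
```

The weighted mean pulls the global parameters toward the participant with the most data (the second one here), while every dataset stays within its owner's storage.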
Local Training Process
(75) Personal health monitoring devices in the form of mobile applications or built-in sensors can actively monitor a user's vital health parameters, such as the electrocardiogram (ECG), blood pressure, heart rate, and blood sugar level, which reduces the potential errors of data recording. These devices can capture and transfer data anonymously to the cloud and compare it with historical data for symptoms of any illness or notify the appropriate health personnel (doctor, nurse, or health agent). Fewer errors mean better performance, cost, efficiency, and improvements in healthcare services, where an error can literally be the difference between life and death. This intelligent context-aware IoT health era is made possible by the convergence of technology and healthcare. See X. Zhou, W. Liang, K. I. K. Wang, H. Wang, L. T. Yang, and Q. Jin, Deep-learning-enhanced human activity recognition for Internet of healthcare things, IEEE Internet Things J., vol. 7, no. 7, pp. 6429-6438, July 2020, incorporated herein by reference in its entirety. In turn, this convergence in technology can improve the quality of life and solve many challenges such as information sharing, diagnosis inefficiency, monitoring cost reduction, operations optimization, medication errors, etc.
(76) Digital twin (DT) technology refers to a digital replica of the physical object. DT combines artificial intelligence (AI), data analytics, IoT, and virtual and augmented reality paired with digital and physical objects. See I. Al Ridhawi, S. Otoum, M. Aloqaily, and A. Boukerche, Generalizing AI: Challenges and opportunities for plug and play AI solutions, IEEE Netw., early access, Sep. 30, 2020, doi: 10.1109/MNET.011.2000371, incorporated herein by reference in its entirety. This integration allows real-time data analysis, status monitoring, risk management, cost reduction, and future opportunities prediction.
(77) For healthcare systems, having a virtual replica of a patient could be an optimal solution for promoting health, increasing control over health, and improving healthcare operations. DT can be integrated with healthcare to improve its processes and those of corresponding intelligent IoT healthcare systems that use the DT framework. The framework employs a real-time data set. This framework combines IoT, data analytics, and machine learning to make the patient's virtual replica a reality, give healthcare professionals more capabilities to control and enhance a patient's health, and bring the cooperation of patients with similar cases into the process to utilize real-life scenarios. The framework comprises three phases: 1) a processing and prediction phase; 2) a monitoring and correction phase; and 3) a comparison phase.
(78) Each phase is responsible for improving an aspect of healthcare operations, the patient's aspect, the healthcare professional's aspect, or other patients with similar cases.
(79) Toward this objective, an ECG classifier diagnoses heart disease and detects heart problems. The machine learning classifier is trained using real-time ECG rhythm data collected from different patients through sensing electrodes.
(80) A. Framework Phases
(82) Processing and Prediction Phase: The processing and prediction phase 706 is patient-centered and begins with capturing the patient data using IoT wearable sensors 722. These sensors 722 transfer real-time data of human body metrics that are important for monitoring health status and help in detection of anomalies. The transferred data is stored temporarily in a cloud database 724 responsible for raw-data storage. This data is used by a machine learning system in the training and prediction process. Through data analytics and machine learning capabilities, the DT framework builds classifiers and predictive models that detect health anomalies that are indications of an impending emergency health condition, using the raw data after cleansing, preprocessing, and representation. The machine learning model results are stored in another scalable, secure, and immutable cloud database called the result database 702. The result database 702 is accessible by the patient and the other framework phases' components for continuous feedback, correction, and model optimization.
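The cleansing, preprocessing, prediction, and result-storage flow of this phase can be sketched as follows. The thresholds, field names, and in-memory result list are hypothetical stand-ins for the trained models and the cloud databases 724 and 702:

```python
def cleanse(samples):
    """Drop readings that are physically impossible (sensor glitches)."""
    return [s for s in samples if 0 < s["hr"] < 250 and 0 < s["spo2"] <= 100]

def preprocess(samples):
    """Represent each reading as a feature vector for the predictive model."""
    return [[s["hr"] / 250.0, s["spo2"] / 100.0] for s in samples]

def predict(features):
    """Stand-in anomaly model: flag low oxygen or very high heart rate.
    The disclosed framework would use a trained classifier here."""
    return [1 if hr > 0.56 or spo2 < 0.92 else 0 for hr, spo2 in features]

result_database = []  # stands in for the secure, immutable result database 702

# Hypothetical raw wearable-sensor samples, including one sensor glitch.
raw = [{"hr": 70, "spo2": 98}, {"hr": 150, "spo2": 89}, {"hr": -3, "spo2": 98}]
labels = predict(preprocess(cleanse(raw)))
result_database.extend(labels)
```

The glitched sample is removed during cleansing, and only model outputs (not raw readings) land in the result store, mirroring the phase's separation of raw-data and result databases.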
(83) Monitoring and Correction Phase: The monitoring and correction phase 708 requires intervention from healthcare professionals in the patient domain. Healthcare professionals, who provide treatments and advice based on formal training and experience, use the results of the predictive models from the result database, along with clinical diagnosis and monitoring of the patient's health status, to improve healthcare. By continuously feeding the predictive models with real-time data that help in detecting body metric anomalies, proactive monitoring and identification of impending emergency health issues is accomplished before they occur, thus permitting the identification of the right treatments and helping healthcare professionals to design a better lifestyle for the patient. Beyond read permissions on the result database, the professionals can correct and verify the results and give informative feedback, thus optimizing the model.
(84) Comparison Phase: In the comparison phase 710, the cooperation of patients with similar cases takes place to utilize real-life scenarios, enhance patients' Digital Twins (DTs), and thereby enhance the whole framework. By obtaining data and results of DTs for similar cases and/or patients, the model is able to compare the current patient's results with those of other patients. This process expands the predictive models' domain by adding real-life scenarios with more reliable results to improve the models' accuracy. It also gives healthcare professionals the ability to make more advanced and accurate decisions: decisions not only based on real-time data monitoring but also ones that rely on past, present, and predicted future events for other patients, so that they can simulate, modify, or avoid other patients' experiences.
System Implementation
(85) The DT includes an ECG classifier that diagnoses heart disease and detects heart problems. The ECG classifier can be implemented as a deep learning model or using other machine learning models. In a preferred embodiment, the architecture and training method for the ECG classifier was chosen based on a performance evaluation comparison of five implemented models with different deep learning and traditional machine learning algorithms. Training of the ECG classifier may be performed offline using a real time training dataset, while inference operation may be performed in the cooperative health intelligent emergency response system during operation.
(86) The DT may be incorporated as part of the cooperative health intelligent emergency response system (also referred to herein as C-HealthIER system), which is implemented as a cooperative intelligent transport system (also referred to as C-ITS) (See
(87) The emergency vehicle 102 may transport a driver as well as one or more passengers. The emergency vehicle 102 may be one of a range of autonomous vehicle configurations. Six levels of driving automation have been defined, including: Level 0, no driving automation; Level 1, driver assistance; Level 2, partial driving automation; Level 3, conditional driving automation; Level 4, high driving automation; Level 5, full driving automation. Level 5 vehicles are able to go anywhere and do anything that an experienced human driver can do. In Level 4 vehicles, humans have the option to manually override. Thus, in most autonomous vehicles the driver is in control of driving or at least must remain alert and ready to take control if the system is unable to execute the task. The emergency vehicle 102 may be equipped with an embedded controller 410 for controlling the vehicle according to emergency conditions, may be augmented with emergency control functions performed in a portable mobile device, such as a smartphone or tablet computer, or may be supplemented with an independent portable mobile device that provides emergency alert services as a device that is not connected to an emergency vehicle.
(88) Sensors that provide sensor information to an emergency controller 410 may be sensors that are built into the emergency vehicle 102, may be health sensors worn by a driver or passenger, or may be a combination of vehicle built-in sensors and user wearable sensors. In this disclosure, the sensors are collectively referred to as Internet of Things (IoT) sensors 112.
(89) In some embodiments, IoT wearable sensors 722 include smart watches, sensors that can be worn on the upper arm, sensors that can be worn around the waist, sensors in gloves, as well as sensors that are built into a vehicle, such as sensors that are built into the steering wheel of a vehicle.
(90) In one embodiment, smartphone technologies may be used to obtain ECG data from a driver or passenger. A detector may be held by, or attached to, the user by at least two fingers, or strapped across the user's chest. The detector sends the data to the user's smartphone, where a mobile application records the data as an ECG.
(91) In a similar manner, a smartwatch or other wearable device may detect and monitor a wearer's heart rate. Heart rate data may be stored in the smartwatch or wearable device for later transfer, or may be continuously transferred to a health condition controller 410. Examples of wearable devices include gloves or special vests equipped with sensors.
(92) A continuous glucose monitor (CGM) can be worn by a user and automatically checks blood sugar levels at timed intervals, for example, every five minutes. The CGM provides real-time data to a mobile application or health condition controller 410.
(93) In one embodiment, as in
(94) In one embodiment, the ECG sensors include one ECG sensor 801a for placement of one finger of the user and a second ECG sensor 801b for placement of a second finger of the user, in order to obtain two ECG signals. The one ECG sensor 801a and the second ECG sensor 801b each have a width to accommodate one finger.
(95) In some embodiments, an audio and/or visual display indication will be provided in order to inform a user that the ECG sensors 801a, 801b are measuring continuous ECG signals, or the sensors are not picking up heart rate or ECG signals. The audio and/or visual display indication may be provided by the vehicle display, such as an infotainment system, or may be generated by a mobile APP for output by the mobile device or transmitted for output by a vehicle display and audio output device.
(96) In one embodiment, as in
(97) In one embodiment, as in
(98) In some embodiments, when heart rate, ECG signals, or other body metrics are being measured by one sensor, such as the steering wheel mounted sensors 801a, 801b or a wearable device, but the measurement is cut off, the one or more cameras 911, 925 may be used to verify the health condition of the user. For example, the user may momentarily take their hand off the steering wheel while heart rate is being measured. The one or more cameras 911, 925 will verify that the reason for removal of a hand from the steering wheel was not an emergency medical condition of the user, such as fainting, heart attack, falling asleep, etc. In such a case, the user may be presented with an audio or visual display indication to notify the user that heart rate, ECG signals, or other sensor measurements are not being made.
(99) In some embodiments, if a user has a wearable health sensor and the vehicle 102 has heart rate and ECG sensors, or other built-in health sensors, each of the sensors may make measurements as a primary and backup measurement, or one sensor may be switched over to another comparable sensor when a measurement has been cut off for the one sensor but the other comparable sensor can perform measurements.
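The primary/backup switchover described above can be illustrated with a short sketch; the sensor callables and the heart-rate reading are hypothetical stand-ins for the vehicle built-in and wearable sensors.

```python
def read_metric(primary, backup):
    """Return a reading from the primary sensor, switching over to a
    comparable backup sensor when the primary measurement is cut off
    (modeled here as the sensor returning None)."""
    value = primary()
    if value is None:            # measurement cut off, e.g., hand off the wheel
        value = backup()
    return value

# Hypothetical sensors: steering-wheel ECG cut off, wearable still measuring.
wheel_sensor = lambda: None
wearable_sensor = lambda: 72.0   # assumed heart-rate reading in bpm

print(read_metric(wheel_sensor, wearable_sensor))  # → 72.0
```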
(100) In order to apply the DT framework in a real-life scenario, a use case is a patient DT that monitors real-time health status and detects body metric anomalies. The implemented use case boils down to building a machine learning model that diagnoses heart diseases and detects heart problems by predicting normal and abnormal heart rhythms. The ECG signals are captured through wearable sensors or smartwatches and then converted into a digital format. The collected data is ready to train a machine learning algorithm after the preprocessing and filtering phase. After training, a predictive model for normal and abnormal rhythms that describe a particular heart condition is ready and in place.
(101) Accordingly, five models were built and trained using real-time ECG rhythms through various machine learning algorithms to test performance on the data set and obtain the best accuracy. The applied algorithms are convolutional neural network (CNN), multilayer perceptron (MLP), logistic regression (LR), long short-term memory network (LSTM), and support vector classification (SVC).
(102) A. Data Set
(103) The data set used is based on the MIT-BIH Arrhythmia Database. See G. B. Moody and R. G. Mark, The impact of the MIT-BIH arrhythmia database, IEEE Eng. Med. Biol. Mag., vol. 20, no. 3, pp. 45-50, May/June 2001, incorporated herein by reference in its entirety. It contains 48 half-hour excerpts of two-channel ambulatory ECG recordings, obtained from 47 subjects studied by the BIH Arrhythmia Laboratory between the years of 1975 and 1979.
(104) Upper and lower signals were obtained by placing the electrodes on the chest, then the analog outputs of the playback unit were digitized at 360 Hz per signal relative to real time using the analog-to-digital converter (ADC) hardware constructed at the MIT Biomedical Engineering Center and at the BIH Biomedical Engineering Laboratory. The data set contains five classes as listed as follows. 1) N: Normal beat. 2) S: Supraventricular premature beat. 3) V: Premature ventricular contraction. 4) F: Fusion of ventricular and normal beat. 5) Q: Unclassifiable beat.
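For illustration, the five annotation symbols above can be mapped to the integer class labels a classifier trains on; the particular encoding order is an assumption, not something specified by the database.

```python
# The five MIT-BIH beat classes used as classifier labels. A minimal sketch,
# assuming beats arrive already segmented with an annotation symbol.
BEAT_CLASSES = {
    "N": "Normal beat",
    "S": "Supraventricular premature beat",
    "V": "Premature ventricular contraction",
    "F": "Fusion of ventricular and normal beat",
    "Q": "Unclassifiable beat",
}
# Assumed label order: N=0, S=1, V=2, F=3, Q=4.
LABELS = {symbol: index for index, symbol in enumerate(BEAT_CLASSES)}

def encode(symbols):
    """Map annotation symbols to the integer labels used in training."""
    return [LABELS[s] for s in symbols]

print(encode(["N", "V", "Q"]))  # → [0, 2, 4]
```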
B. System Workflow
(105)
(106) Five different models were built to obtain the best accuracy for the data set. The parameters, performance, collected results, and evaluation for each applied model are described. An experiment was carried out using Python, the Sklearn library, TensorFlow, and Keras. Also, other libraries were used to help in data preprocessing and results evaluation, such as Pandas, NumPy, and Matplotlib.
(107) Below, the main parameters of each used algorithm and the model structure are described in detail.
(108) First, in order to explore the potential of neural-network-based algorithms, the LSTM sequential model was constructed using LSTM and trained with a 0.01 learning rate over 10 epochs. The optimal model was saved at epoch 5 with a minimum achieved validation loss of 0.1430, a 0.9709 validation accuracy, a 0.0329 training loss, and a 0.9896 training accuracy.
(109) Another neural-network-based model, the CNN model, was applied. This model was constructed using the CNN and trained with a 0.01 learning rate over 10 epochs. The optimal model was saved at epoch 4 with a minimum achieved validation loss of 0.1391, a 0.9667 validation accuracy, a 0.0331 training loss, and a 0.9896 training accuracy.
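The two neural-network-based models can be sketched in Keras as follows. The layer sizes, beat length, and optimizer choice are assumptions for illustration; the text fixes only the algorithm families, the 0.01 learning rate, and the 10-epoch budget.

```python
import tensorflow as tf

N_TIMESTEPS, N_CLASSES = 187, 5   # assumed beat length / the 5 MIT-BIH classes

def build_lstm():
    # Hypothetical LSTM sequential architecture.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(N_TIMESTEPS, 1)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
    ])

def build_cnn():
    # Hypothetical 1-D CNN architecture.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(N_TIMESTEPS, 1)),
        tf.keras.layers.Conv1D(32, 5, activation="relu"),
        tf.keras.layers.GlobalMaxPooling1D(),
        tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
    ])

for build in (build_lstm, build_cnn):
    model = build()
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    # model.fit(..., epochs=10) with a ModelCheckpoint callback would then
    # keep the epoch with the lowest validation loss, as the text describes.
    print(model.output_shape)
```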
(110)
(111) The MLP model was constructed using the MLP algorithm and achieved 0.956 testing accuracy over 800 iterations. Also, two experiments were tested on traditional machine learning algorithms. The SVC model was constructed using the SVC algorithm and achieved 0.756 testing accuracy on the linear kernel. The last applied model was the LR. This model was constructed using the LR algorithm and achieved 0.676 testing accuracy over 900 iterations with saga solver.
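The three remaining models can be sketched with sklearn; the hyperparameters stated in the text (800 MLP iterations, a linear SVC kernel, and 900 LR iterations with the saga solver) are used, while everything else, including the synthetic stand-in data, is an assumption.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic 5-class stand-in for the preprocessed ECG features.
X, y = make_classification(n_samples=300, n_classes=5, n_informative=6,
                           random_state=0)

models = {
    "MLP": MLPClassifier(max_iter=800, random_state=0),
    "SVC": SVC(kernel="linear", random_state=0),
    "LR": LogisticRegression(max_iter=900, solver="saga", random_state=0),
}
scores = {name: m.fit(X, y).score(X, y) for name, m in models.items()}
print(scores)
```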
(112) A. Evaluation
(113) Next, the evaluation metrics used to compare the applied models' performance and choose the optimal one are described. 1) Accuracy: Accuracy, as shown in (1), gives the percentage of correctly predicted samples, which are the true positives (TPs) and true negatives (TNs), out of all data samples (TPs, TNs, false positives (FPs), and false negatives (FNs)). It measures how often the algorithm correctly classifies a data sample.
(114) Accuracy=(TP+TN)/(TP+TN+FP+FN)  (1)
(115)
(116) TABLE II: LSTM MODEL CONFUSION MATRIX
Classes       N      S      V      F      Q
N        17,682    291     71     59     15
S            73    467      8      7      1
V            37     10  1,375     24      2
F            11      1      7    143      0
Q            14      1      5      0  1,588
(117) TABLE III: CNN MODEL CONFUSION MATRIX
Classes       N      S      V      F      Q
N        17,584    327     97     69     41
S            64    476      9      5      2
V            29     10  1,370     31      8
F            10      2      8    142      0
Q            11      3      3      0  1,591
(118) TABLE IV: MLP MODEL CONFUSION MATRIX
Classes       N      S      V      F      Q
N        17,430    393    165     62     68
S            93    443     14      3      3
V            50     13  1,351     26      8
F            16      4      7    134      1
Q            17      7      8      0  1,576
(119) TABLE V: SVC MODEL CONFUSION MATRIX
Classes       N      S      V      F      Q
N        13,456    869  2,241  1,207    345
S           142    360     28     24      2
V           150     32  1,147     99     20
F            11      0      8    143      0
Q            69      6     57     14  1,462
(120) TABLE VI: LR MODEL CONFUSION MATRIX
Classes       N      S      V      F      Q
N        11,790  2,085  2,369  1,415    459
S           125    370     31     20     10
V           158     53  1,038    153     46
F            11      0      9    142      0
Q            49      5     73     13  1,468

a) True positives: A TP value is considered when the model correctly predicts the positive class.
(121)
(122) TABLE VII: LSTM MODEL CLASSIFICATION REPORT
Classes        precision   recall   f1-score
N                   0.99     0.98       0.98
S                   0.61     0.84       0.70
V                   0.94     0.95       0.94
F                   0.61     0.88       0.72
Q                   0.99     0.99       0.99
macro avg           0.83     0.93       0.87
weighted avg        0.98     0.97       0.97
(123) TABLE VIII: CNN MODEL CLASSIFICATION REPORT
Classes        precision   recall   f1-score
N                   0.99     0.97       0.98
S                   0.58     0.86       0.69
V                   0.92     0.95       0.93
F                   0.57     0.88       0.69
Q                   0.97     0.99       0.98
macro avg           0.81     0.93       0.86
weighted avg        0.97     0.97       0.97
(124) TABLE IX: MLP MODEL CLASSIFICATION REPORT
Classes        precision   recall   f1-score
N                   0.99     0.96       0.98
S                   0.52     0.80       0.63
V                   0.87     0.93       0.90
F                   0.60     0.83       0.69
Q                   0.95     0.98       0.97
macro avg           0.79     0.90       0.83
weighted avg        0.96     0.96       0.96
(125) Precision, as shown in (2), is the accuracy of positive predictions. The SVC and LR models show low precision scores for the S, V, and F classes, which indicates a large number of FPs.
(126) Precision=TP/(TP+FP)  (2)
(127) TABLE X: SVC MODEL CLASSIFICATION REPORT
Classes        precision   recall   f1-score
N                   0.97     0.74       0.84
S                   0.28     0.65       0.39
V                   0.33     0.79       0.47
F                   0.10     0.88       0.17
Q                   0.80     0.91       0.85
macro avg           0.50     0.79       0.55
weighted avg        0.89     0.76       0.80
(128) TABLE XI: LR MODEL CLASSIFICATION REPORT
Classes        precision   recall   f1-score
N                   0.97     0.65       0.78
S                   0.15     0.67       0.24
V                   0.29     0.72       0.42
F                   0.08     0.88       0.15
Q                   0.74     0.91       0.82
macro avg           0.45     0.76       0.48
weighted avg        0.88     0.68       0.74
(129) Recall, as shown in (3), is the fraction of positive samples that were correctly identified out of the actual positives. The SVC and LR models show lower recall scores, indicating many FN values, compared to the neural-network-based models, but all scores are considered good.
(130) Recall=TP/(TP+FN)  (3)
(131) F1-Score, as shown in (4), is the harmonic mean of Precision and Recall. The SVC and LR models show a low F1-score for the S, V, and F classes. The neural-network-based models also show a lower F1-score for S and F compared to the other classes, which means the models do not perform very well in predicting these classes.
(132) F1-Score=2×(Precision×Recall)/(Precision+Recall)  (4)
(133) Macro Avg.=(Σm)/C  (5) where m=the metric score for each class and C=the number of classes.
Weighted Avg.=Σ(s·m)  (6) where s=the percentage of samples for each class from the total samples and m=the metric score for each class.
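The per-class and averaged evaluation metrics described above (accuracy, precision, recall, F1-score, and the macro and weighted averages) can be computed with sklearn; the small set of true and predicted beat labels here is purely hypothetical.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Hypothetical true/predicted beat labels over the five MIT-BIH classes.
y_true = ["N", "N", "N", "S", "V", "V", "F", "Q"]
y_pred = ["N", "N", "S", "S", "V", "N", "F", "Q"]

accuracy = accuracy_score(y_true, y_pred)                     # eq. (1)
macro = precision_recall_fscore_support(y_true, y_pred,       # eq. (5)
                                        average="macro", zero_division=0)
weighted = precision_recall_fscore_support(y_true, y_pred,    # eq. (6)
                                           average="weighted", zero_division=0)

print(round(accuracy, 3), [round(v, 2) for v in macro[:3]])
```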
(134)
(135)
(136) The experiments show that the neural-network-based models perform better than the traditional machine learning algorithms in terms of the evaluation metrics. For accuracy, the LSTM sequential model achieved the highest accuracy score with 0.97. Also, the deep neural network models achieved higher scores than the SVC and logistic regression models.
(137) The confusion matrices showed that the traditional algorithms (SVC and logistic regression) misclassified some classes, with higher FP and FN values than the neural-network-based algorithms.
(138) Precision, Recall, and F1-score metrics showed that the neural-network-based algorithms achieved higher scores than the other models, taking into consideration the macro and weighted average results for each metric. Furthermore, they showed that the LSTM sequential model achieved 0.83, 0.93, and 0.87 macro averages for Precision, Recall, and F1-score, respectively, and 0.98, 0.97, and 0.97 weighted averages for Precision, Recall, and F1-score, respectively, which were the highest scores across all models. Finally, the receiver operating characteristic (ROC) curves showed that all models had a high area under the curve (AUC) score above 80% for all classes, which means all models can distinguish between classes.
(139) Federated learning is used to train the DT for health issue prediction in order to address challenges of trust, security and privacy, standardization, and diversity and multisourcing.
(140) Trust
(141) The concept of the DT itself, having a virtual replica of a physical asset, may have a gap since it relies on devices to transfer the data, and these devices may crash or become disconnected for any reason. Also, DTs typically require a contribution from field professionals. These professionals must be qualified and ethical in order to give accurate feedback and to edit and preserve data.
(142) Security and Privacy
(143) Protecting DT systems from unauthorized access, abuse, modification, or disclosure will be a challenge, as in any other information system. Because DT systems process large volumes of sensitive and personal data, they are a target for threat actors and cyberattacks. In addition, the use of IoT devices and sensors may add more complexity in terms of implementing proper security, as traditional security controls mostly do not fit them. Processing personal user data could also raise regulatory risks. Complying with privacy regulations such as the GDPR in Europe or with relevant national data protection laws can be mandatory and adds more challenges when designing DT systems.
(144) Standardization
(145) Lack of standards is another critical challenge. This factor affects security, privacy, interactions, roles, contribution protocols, data transmission, and synchronization between the virtual and physical world. Setting global standards would help to spread the trend of DTs more rapidly and make it a reality faster.
(146) Diversity and Multisourcing
(147) Another problem facing DTs is data diversity and its multiple sources. This occurs due to the different sources through which data is captured, as well as the diversity of data types, which causes problems in processing and building machine learning models because the data is heterogeneous.
(148) A combination of federated learning and distributed blockchain are used to build trust and increase security and privacy. Federated learning reduces the amount of data exchange, particularly the exchange of health-related data, between devices. Individual's private health-related data is securely maintained locally, while health emergency reports are maintained securely in a distributed blockchain.
(149) In embodiments, actions performed by the out-vehicle phase 200 are triggered by an emergency alert generated by the in-vehicle phase. After predicting an abnormal health status that indicates a high risk of a health condition that would require emergency medical assistance, the in-vehicle system will begin its process. Algorithms 2 and 3 will work simultaneously, as the vehicle will broadcast an emergency alert 162, 164 to all surrounding vehicles 142 and RSUs 144 requesting an emergency treatment provider. Upon receiving the emergency message, vehicles 142 and RSUs 144 that can provide emergency treatment will broadcast their position coordinates.
(150) TABLE-US-00013 Algorithm 2 Vehicle Emergency Response System Input: Alert Output: min_distance, closest_Coords, S_I D, S_type, emergency_message( ), health_report_message( ) 1 function emergency_message( ) 2|if (Alert) then 3||
(151) TABLE-US-00014 Algorithm 3 Surrounding AVs and RSUs Input: emergency_message( ), set_parking_message( ) Output: Process Completed: P 1 function emergency response( ) 2|if (AV || RSU receiveemergency_message( )) then 3||if (RSU is hospital) then 4|||
(152)
(153) The vehicle emergency response system, as in
(154) In some embodiments, before step S2114, where the vehicle 102 broadcasts an emergency message, the driver or passenger may be provided with a means to override the automatic broadcast of an alert message. For example, the vehicle 102 or mobile app of a smartphone may be provided with a message asking for authorization to send an alert message. In certain circumstances, the alert message may be automatically broadcast when an emergency health condition has a high likelihood of being life-threatening. A life-threatening condition may be one in which the driver is unable to drive the vehicle 102, or one in which the driver or passenger has a high risk of death.
(155) In some embodiments, the driver or passenger may be provided with an interactive interface, such as in a touch display in the vehicle 102, or in a user interface of a mobile app, that the driver or passenger can pre-register. The driver or passenger may submit pre-registration information that identifies the person, as well as provide pre-existing health conditions, medications being taken, and physical ailments that may impair the driver or passenger while in a vehicle, or proximate to the vehicle. The sensor data obtained from IoT devices 112 may be configured to focus on monitoring of the pre-existing health conditions. In some embodiments, the vehicle emergency response may set the broadcast of an emergency message based on a risk level of a driver, such as setting a predetermined threshold, which when exceeded will automatically broadcast the emergency message in S2114.
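The risk-threshold trigger described above can be sketched as a simple rule; the function name, the threshold values, and the secondary "ask the user" band are all hypothetical illustrations of the predetermined-threshold behavior.

```python
def should_broadcast(risk_level, threshold=0.8, override_authorized=True):
    """Decide whether the emergency message of step S2114 should be sent.

    risk_level: predicted risk for the (pre-registered) driver, in [0, 1].
    threshold: assumed predetermined threshold above which broadcast is
               automatic, bypassing the user override.
    override_authorized: whether the driver/passenger authorized sending
               for lower-risk (non-life-threatening) conditions.
    """
    if risk_level > threshold:
        return True        # likely life-threatening: broadcast automatically
    # Otherwise only broadcast with user authorization and a moderate risk.
    return override_authorized and risk_level > 0.5

print(should_broadcast(0.9, override_authorized=False))  # → True
print(should_broadcast(0.6, override_authorized=False))  # → False
```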
(156) When the vehicle 102 with the impending emergency case receives the position messages from other vehicles, it will calculate the distance to each emergency treatment provider and select the closest. Also, it will save the provider type, whether it is RSU 144 or vehicle 142, and its ID. In some embodiments, the vehicle 102 will perform a monitoring function to periodically ping nearby ambulances or health care facilities to obtain availability and position data. In such case, the vehicle 102 may maintain information of position and distance of potential providers in advance of a health emergency.
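The distance calculation and closest-provider selection can be sketched as follows; the response-message format (id, type, x, y) and the use of straight-line rather than driving distance are simplifying assumptions.

```python
import math

def closest_provider(own_pos, responses):
    """Return (min_distance, closest_coords, provider_id, provider_type)
    for the nearest emergency treatment provider that responded."""
    best = None
    for provider_id, provider_type, x, y in responses:
        d = math.dist(own_pos, (x, y))   # straight-line stand-in for distance
        if best is None or d < best[0]:
            best = (d, (x, y), provider_id, provider_type)
    return best

# Hypothetical position messages from a hospital RSU and a provider vehicle.
responses = [("rsu.node[0]", "RSU", 280, 230),
             ("car.node[2]", "vehicle", 60, 80)]
print(closest_provider((0, 0), responses))
```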
(157) Depending on the type of provider, the vehicle 102 will perform the next action.
(158)
(159) In S2310, the last action will be carried out by the other vehicle, as, in S2312, it will set parking at the coordinates (x,y) sent by the vehicle with the emergency case to provide the treatment there.
(160) To evaluate the performance of C-HealthIER, it was applied to existing scenarios for rescuing a vehicle's driver who has an abnormal health condition that makes him unable to continue driving until he receives emergency treatment. The evaluation was performed by simulating two scenarios: the C-HealthIER system and the Autopilot mode for the self-driving car.
(161) Simulation Setup
(162) The simulation was performed using a vehicle network simulation framework: Veins (Vehicle in Network Simulation). Veins is a framework used to create vehicle networks by simulating road traffic with SUMO and wireless communication with OMNet++. OMNeT++ is an event-based network simulator, and SUMO is a road traffic simulator. Simulation environment specifications and parameters are mentioned in Table II and Table III.
(163) TABLE II: SIMULATION ENVIRONMENT SPECIFICATIONS
Software          Hardware
Veins 5.1         RAM: 16 GB DDR4
SUMO 1.8.0        Processor: Intel i7-10750H CPU @ 2.60 GHz (6 cores)
Omnet++ 5.6.1
(164) TABLE III: SIMULATION PARAMETERS
Parameter                                  Value
Communication Standard                     IEEE 802.11p
Vehicle Length                             2.0
Vehicle Speed                              1.0
Vehicle Acceleration                       1.0
Simulation Time Limit                      3000 s
RSU Position (X, Y, Z)                     280, 230, 3
Parking Area Position (X, Y, Z)            300, 26, 3
Vehicles that provide health emergency     car.node[2], car.node[7]
Bitrate                                    6 Mbps
Header Length                              80 bits
Deep Federated Learning
(165) A deep federated learning model was built using the MIT-BIH ECG dataset on a pre-trained model via the TensorFlow Federated framework. The testing data was re-sampled and used as training and testing data for the federated learning process.
(166) The federated learning process was run for 6 federated averaging rounds. Each round trained on 12,000 data samples across 120 participants.
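The aggregation step of one federated averaging round can be sketched with plain NumPy; the bare weight-vector "model" and the per-client sample counts are illustrative stand-ins for the deep model trained via TensorFlow Federated.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Weighted average of local parameter updates (the FedAvg step the
    global machine learning device performs each round)."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two hypothetical participants' locally updated parameters; in the described
# setup, 120 participants would contribute 12,000 samples per round.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
sizes = [100, 100]                     # assumed per-client sample counts
global_weights = federated_average(clients, sizes)
print(global_weights)  # → [2. 3.]
```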
(167) This trained deep federated learning model is used to predict the health status abnormality by the vehicle system.
(168) The Simulation of C-HealthIER Scenarios
(169) In this scenario, the V2V and V2I wireless network and communication were implemented through wireless access in a vehicle environment (WAVE) IEEE 802.11p standard.
(170) At second 27 of the simulation time, car.node[0] will receive a health emergency alert sent by the intelligent health monitoring system, which predicts an abnormality before it occurs. Based on the alert, the vehicle will handle a self-message to broadcast a WAVE short message (WSM) to all car nodes and RSU nodes. In response, car.node[2], car.node[7], and rsu.node[0] will broadcast a WSM that contains their coordinates.
(171) When car.node[0] receives the WSM messages containing the coordinates, it calculates the driving distance to each node. Depending on which type of node has the shortest distance, car.node[0] will take the next action.
(172) Various cases were performed in this simulation to test car.node[0]'s actions. In one case (the car-node case), car.node[2] was the nearest to car.node[0], so car.node[0] broadcast parking instructions to it and, based on that, car.node[2] set parking at the same position. In another case (the RSU-node case), rsu.node[0] was the nearest to car.node[0], so car.node[0] changed its destination to the rsu.node[0] position.
(173) The Simulation of Autopilot Mode for Self-Driving Car
(174) This simulation implements the Autopilot mode response for self-driving cars if the driver has an abnormal health condition that prevents him from continuing to drive, loses consciousness, and thus loses control of the vehicle.
(175) According to Tesla, if the driver doesn't touch the steering wheel for three alerts, the driver will be banned from using Autopilot during the trip and then the car will automatically stop after 60 seconds, so the car will need 60 seconds to stop. See Autopilot and Full Self-Driving Capability. Accessed: May 20, 2021. [Online]. Available: www.tesla.com/, incorporated herein by reference in its entirety.
(176) In this scenario, the driver of car.node[0] loses consciousness at second 20 and takes his hands off the steering wheel. car.node[0] will stop after 60 seconds. Then the car.node[8] ambulance will depart from the rsu.node[0] hospital and stop at the same position as car.node[0] to take the driver to the hospital. In the simulation, the average emergency response time in the USA of 15 minutes (900 seconds) was used for car.node[8] to respond to the emergency case.
(177) The average waiting time in a hospital's emergency department depends on the emergency case. In the UK, the average wait time for a patient to be seen and treated in the emergency department was 77 minutes in 2021. Therefore, the waiting time to receive emergency treatment must be added to the total simulation time.
(178) In an alternative Autopilot mode, a fully autonomous vehicle can drive without driver intervention. In such case, when the provider is an emergency hospital, the car will set its destination to the provider hospital and will automatically drive to the provider hospital. When the provider is an emergency response vehicle, the car will communicate coordinates as a meeting destination for meeting the provider response vehicle, and will automatically drive to location coordinates of the provider response vehicle.
(179) The Simulation of Traffic Density
(180) Multiple scenarios were implemented at different traffic density levels to verify the effect of traffic on C-HealthIER's approach and AutoPilot mode. Table IV shows the simulation environment parameters: the time when the health emergency occurs, total cars, and the corresponding traffic density level in each scenario. Low, Medium, and High density levels were considered based on the road capacity.
(181) TABLE IV: TRAFFIC DENSITY SIMULATION ENVIRONMENT
Scenario                       Emergency Occurrence Time   Total Cars   Traffic Density Level
AutoPilot mode                 20                          50           Low
C-HealthIER (Car-node case)    84                          64           Low
C-HealthIER (RSU-node case)    84                          64           Low
AutoPilot mode                 20                          135          Medium
C-HealthIER (Car-node case)    117                         135          Medium
C-HealthIER (RSU-node case)    117                         135          Medium
AutoPilot mode                 20                          340          High
C-HealthIER (Car-node case)    244                         382          High
C-HealthIER (RSU-node case)    244                         382          High
The Simulation of Two Emergency Cases in One Scenario
(182) Two emergency cases in one scenario were also tested in the simulation under two different scenarios. In scenario 1, both car.node[0] and car.node[1] have passenger health emergencies at close times, and the two vehicles car.node[2] and car.node[7] will provide emergency treatment. When the emergency case occurs at car.node[0], both of them will respond to the help message, while when the emergency case occurs at car.node[1], only one vehicle (car.node[7]) will respond to the help message, as the other vehicle (car.node[2]) is occupied helping the car.node[0] passenger.
(183) In scenario 2, only rsu.node[0] will be available to provide the healthcare emergency treatment. Both car.node[0] and car.node[1] will send help messages to rsu.node[0], and rsu.node[0] will respond to the messages. Then both vehicles will change direction to rsu.node[0] to receive the treatment there.
(184) As stated before, one aspect of the framework is to minimize the total time to receive the first emergency treatment for maximizing rescue chances for the AV passenger.
(185) Deep Federated Learning Model:
(186) First Emergency Treatment Time (FETT): The total scenario time is the time that the simulation takes until the last action is executed. As shown in
(187) Total Emergency Treatment Time: Since the goal was to maximize the passenger's rescue chances, the full time to receive the first emergency treatment should be considered, including waiting time if any. Accordingly, the waiting time in the emergency department should be added to the FETT. Table VI shows the waiting time that should be considered to obtain the Total Emergency Treatment Time.
(188) TABLE V: DFR CLASSIFICATION REPORT
Round   Average        Precision   Recall   F1-score
DFR1    macro avg      0.93        0.93     0.93
        weighted avg   0.93        0.93     0.93
DFR2    macro avg      0.94        0.93     0.93
        weighted avg   0.94        0.93     0.93
DFR3    macro avg      0.94        0.93     0.93
        weighted avg   0.94        0.93     0.93
DFR4    macro avg      0.94        0.94     0.94
        weighted avg   0.94        0.94     0.94
DFR5    macro avg      0.94        0.94     0.94
        weighted avg   0.94        0.94     0.94
DFR6    macro avg      0.94        0.94     0.94
        weighted avg   0.94        0.94     0.94
(189) TABLE VI: TOTAL EMERGENCY TREATMENT TIME
Scenario                      FETT           Waiting time    Total First Emergency Treatment Time
AutoPilot mode                1782 seconds   4620 seconds    6402 seconds
C-HealthIER (RSU-node case)   477 seconds    0 seconds       477 seconds
C-HealthIER (Car-node case)   300 seconds    0 seconds       300 seconds
(190) Based on the UK statistics of the average waiting time to receive treatment in the emergency department, the Autopilot mode scenario took 6402 seconds to the first emergency treatment, as 4620 seconds of waiting time were added to the 1782-second first emergency treatment time. The C-HealthIER scenarios did not require a wait time, as a passenger's health status report was sent prior to arrival, allowing the healthcare provider to prepare ahead of time for how to handle the case. Moreover, the intelligent health monitoring system predicts the abnormal condition before it occurs, so the passenger's condition is not yet severe, requiring less effort to deal with the case.
(191) Travel Time to First Emergency Response: The total travel time to the first emergency response is measured from when a health emergency occurs up to the time the vehicle arrives for the first emergency treatment. As shown in
(192) Travel Distance to First Emergency Treatment: This is the distance the vehicle travels from the moment the health emergency occurs until it arrives at the first emergency treatment place.
(193) End-To-End Delay: This is the time taken for a packet to be transmitted from its source to its destination.
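The end-to-end delay metric defined above can be illustrated as a per-packet timestamp difference; the timestamps below are hypothetical, not values from the simulation:

```python
# Minimal sketch: end-to-end delay of a packet is the difference between its
# reception time and its transmission time (timestamps here are assumed).

def end_to_end_delay(sent_at: float, received_at: float) -> float:
    """End-to-end delay in seconds for a single packet."""
    return received_at - sent_at

# Hypothetical V2V message: sent at t = 10.000 s, received at t = 10.012 s.
delay = end_to_end_delay(10.000, 10.012)
print(f"{delay * 1000:.1f} ms")
```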
(196) Traffic Density: The First Emergency Treatment Time (FETT) and Travel Time were also tested at different traffic density levels. The results are illustrated in the corresponding figures.
(197) Two Emergency Cases in One Scenario: The FETT and the Travel Time were calculated for these simulation scenarios as shown in Table VII. The results were influenced by the distance between vehicles and the different parking areas in Scenario 1. Overall, both scenarios achieved a lower FETT and travel time than those achieved by the normal AutoPilot mode scenario for a single vehicle.
(198) TABLE VII: TWO EMERGENCY CASES IN ONE SCENARIO RESULTS

Scenario     Node          Emergency Treatment   Emergency Occurrence   FETT     Travel Time
                           Provider              Time in Sec            in Sec   in Sec
Scenario 1   car.node[0]   car.node[2]           25                     300      276
             car.node[1]   car.node[7]           27                     350      360
Scenario 2   car.node[0]   rsu.node[0]           25                     457      432
             car.node[1]   rsu.node[0]           27                     457      430
(199) Towards reducing the time to receive emergency treatment for AVs' passengers with abnormal health conditions within a C-ITS environment, the present disclosure describes a cooperative health intelligent emergency response system for AVs that uses deep federated learning and V2V and V2I connections to detect an abnormal health status of a passenger and react accordingly. Furthermore, the present disclosure implements a set of algorithms in various simulated scenarios to validate the system performance.
(200) In summary of the simulation results, C-HealthIER reduces the TMTT by 92.5% and 95.3% compared to the AutoPilot mode total time. Also, C-HealthIER reduces FETT by 73.2% and 83.1% compared to the AutoPilot mode FETT. For the two scenarios, C-HealthIER reduces the travel distance by 40.9% and 63.2%, and thus the travel time is reduced by 43.8% and 65.9% compared to the AutoPilot mode. Across additional results collected at different traffic density levels, C-HealthIER still achieves the lowest FETT and travel time compared to the AutoPilot mode, even at the high density level.
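The reported reductions can be recomputed from the times in Table VI; this is a minimal sketch, and small rounding differences relative to the reported percentages are possible:

```python
# Minimal sketch: percentage reductions of C-HealthIER relative to the
# AutoPilot mode, recomputed from the times reported in Table VI.

def reduction_pct(baseline: float, value: float) -> float:
    """Percentage reduction of `value` relative to `baseline`."""
    return (baseline - value) / baseline * 100

autopilot_total = 6402  # seconds, Total Emergency Treatment Time (Table VI)
autopilot_fett = 1782   # seconds, FETT (Table VI)

for label, total, fett in [("RSU-node case", 477, 477),
                           ("Car-node case", 300, 300)]:
    print(f"{label}: TMTT -{reduction_pct(autopilot_total, total):.1f}%, "
          f"FETT -{reduction_pct(autopilot_fett, fett):.1f}%")
```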
Alternative Intelligent Cooperative Health Emergency Response Algorithm
(201) In an alternative embodiment, to implement the framework, the intelligent healthcare monitoring system must first operate independently to monitor the health status using the IoT devices and predict abnormalities in health status using the AI model, and then interact with the emergency response system.
(205) TABLE-US-00021

Algorithm 1: Cooperative Emergency Response
Input: Alert, RSUs, AVs, AV_0
Output: Response Success: S
 1  function emergency_response( )
 2  |  nearest_Coord = (x, y), min_distance = 99999
 3  |  S_ID = xxx, S_type = xxx
 4  |  if (Alert) then
 5  |  |
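Algorithm 1 is truncated in the text above, but its first lines suggest a nearest-provider selection: on an alert, the subject vehicle AV_0 scans responding RSU and AV nodes and keeps the one at minimum distance. The following is a hedged sketch of that evident intent; the `Node` structure, the Euclidean distance metric, and the return value are assumptions, not the patent's exact algorithm:

```python
# Hedged sketch of the (truncated) Algorithm 1: select the nearest responding
# RSU/AV node as the emergency treatment provider. Names and data structures
# here are illustrative assumptions.
import math
from dataclasses import dataclass

@dataclass
class Node:
    node_id: str     # e.g. "rsu.node[0]" or "car.node[2]"
    node_type: str   # "RSU" or "AV"
    coord: tuple     # (x, y) position

def emergency_response(alert: bool, candidates: list, av0_coord: tuple):
    """Return (S_ID, S_type, nearest_Coord) of the nearest provider, or None."""
    if not alert or not candidates:
        return None
    min_distance = math.inf
    selected = None
    for node in candidates:
        d = math.dist(av0_coord, node.coord)  # Euclidean distance to AV_0
        if d < min_distance:
            min_distance, selected = d, node
    return selected.node_id, selected.node_type, selected.coord

# Hypothetical usage with two responding nodes:
providers = [Node("rsu.node[0]", "RSU", (500, 200)),
             Node("car.node[2]", "AV", (120, 80))]
print(emergency_response(True, providers, (100, 100)))
```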
(207) Numerous modifications and variations of the present invention are possible in light of the above teachings. It is therefore to be understood that within the scope of the appended claims, the invention may be practiced otherwise than as specifically described herein.