MULTI-AREA ARTIFICIAL FOG PIPE NETWORK INTELLIGENT CONTROL METHOD AND SYSTEM BASED ON YOLOV5 ALGORITHM
20230083027 · 2023-03-16
Inventors
- Bin Yang (Tianjin, CN)
- Ruiqi Guo (Tianjin, CN)
- Xingrui Du (Tianjin, CN)
- Yuyao Guo (Tianjin, CN)
- Dacheng Jin (Tianjin, CN)
- Bingan Pan (Tianjin, CN)
CPC classification
- F24F2110/22
- F24F11/30
- F24F11/64
- F24F2120/12
- F24F5/0035
(Section F: Mechanical Engineering; Lighting; Heating; Weapons; Blasting)
International classification
- F24F11/30
- F24F11/64
Abstract
A multi-area artificial fog pipe network control method and system based on a YOLOv5 algorithm are provided. The method includes: obtaining thermal sensation data of each target person based on facial skin temperature; calculating group thermal sensation data of each subarea and total group thermal sensation data of an artificial fog pipe network area; determining a total flow of fog-making water introduced into the artificial fog pipe network according to a total number of target people and the total group thermal sensation data; and controlling opening gears of atomizing nozzles on the artificial fog pipe networks in the subareas according to the number of the target person in each subarea, the group thermal sensation data, and the micro-action type of each target person. The method saves energy and reduces emissions, accurately controls the flow of the fog-making water, and achieves the people-oriented aim of maximizing outdoor group thermal comfort.
Claims
1. A multi-area artificial fog pipe network control method based on a YOLOv5 (abbreviation for you only look once version 5) algorithm, comprising: S1, obtaining an air dry bulb temperature and an air relative humidity in an artificial fog pipe network area; S2, obtaining video information of the artificial fog pipe network area, and dividing the artificial fog pipe network area to obtain subareas; obtaining, in the video information, a total number of at least one target person in the artificial fog pipe network area, location information of each of the at least one target person and a number of the target person in each of the subareas by using the YOLOv5 algorithm; obtaining, in the video information, a facial skin temperature of each of the at least one target person in the artificial fog pipe network area by using a Eulerian video magnification algorithm; and obtaining, in the video information, a micro-action type of each of the at least one target person in the artificial fog pipe network area by using a skeleton node algorithm; S3, obtaining thermal sensation data of each of the at least one target person based on the air dry bulb temperature, the air relative humidity, and the facial skin temperature of each of the at least one target person in the artificial fog pipe network area; and calculating group thermal sensation data of each of the subareas and total group thermal sensation data of the artificial fog pipe network area based on the thermal sensation data of each of the at least one target person; and S4, determining a total flow of fog-making water based on the total number of the at least one target person and the total group thermal sensation data; and controlling an opening gear of an atomizing nozzle on an artificial fog pipe network in each of the subareas based on the number of the target person in each of the subareas, the group thermal sensation data of each of the subareas, and the micro-action type of the target person in each of the 
subareas, and spraying the artificial fog pipe network area.
2. The multi-area artificial fog pipe network control method based on the YOLOv5 algorithm according to claim 1, wherein the step S1 further comprises: comparing the air dry bulb temperature with a temperature threshold; determining whether there is the target person in the artificial fog pipe network area, in response to the air dry bulb temperature being equal to or greater than the temperature threshold; and executing the step S2 in response to there being the target person.
3. The multi-area artificial fog pipe network control method based on the YOLOv5 algorithm according to claim 1, wherein the obtaining, in the video information, a total number of at least one target person in the artificial fog pipe network area, location information of each of the at least one target person and a number of the target person in each of the subareas by using the YOLOv5 algorithm, comprises: obtaining a total number of best prediction boxes in the video information by using the YOLOv5 algorithm as the total number of the at least one target person; obtaining, in the video information, a location of a human face of each of the at least one target person in an original image to be detected as the location information of each of the at least one target person by using the YOLOv5 algorithm; and determining each of the subareas to which each of the at least one target person belongs based on a location boundary of each of the subareas and the location information of each of the at least one target person, thereby obtaining the number of the target person in each of the subareas.
4. The multi-area artificial fog pipe network control method based on the YOLOv5 algorithm according to claim 1, wherein the micro-action type comprises one selected from an overheating action and an overcooling action, the overheating action comprises one of wiping sweat, fanning with hands, shaking clothes and rolling up sleeves, and the overcooling action comprises one of rubbing hands, exhaling to warm hands, and holding hands.
5. The multi-area artificial fog pipe network control method based on the YOLOv5 algorithm according to claim 1, wherein a calculation formula of the thermal sensation data is that: TSV.sub.i=a+T.sub.air×K1+RH.sub.air×K2+t.sub.i×K3; where TSV.sub.i represents the thermal sensation data of the ith target person, K1, K2, K3 respectively represent linear parameters of a linear regression model, a represents an intercept, T.sub.air represents the air dry bulb temperature, RH.sub.air represents the air relative humidity, t.sub.i represents the facial skin temperature of the ith target person, and i is a positive integer.
6. The multi-area artificial fog pipe network control method based on the YOLOv5 algorithm according to claim 1, wherein the calculating group thermal sensation data of each of the subareas and total group thermal sensation data of the artificial fog pipe network area based on the thermal sensation data of each of the at least one target person, comprises: calculating the group thermal sensation data of each of the subareas based on the thermal sensation data of the target person in each of the subareas and the number of the target person in each of the subareas; and calculating the total group thermal sensation data of the artificial fog pipe network area based on the thermal sensation data of each of the at least one target person in the artificial fog pipe network area and the total number of the at least one target person in the artificial fog pipe network area; wherein a calculation formula of the group thermal sensation data is that:
TSV.sub.q=TSV.sub.1×a1+TSV.sub.2×a2+TSV.sub.3×a3+ . . . +TSV.sub.j×aj, where TSV.sub.q represents the group thermal sensation data, TSV.sub.1 represents the thermal sensation data of the target person numbered 1, a1 represents a thermal sensation weight of the target person numbered 1, TSV.sub.j represents the thermal sensation data of the target person numbered j, aj represents a thermal sensation weight of the target person numbered j, and j is a positive integer; and wherein a1+a2+ . . . +aj=1, and if there is no special case, a1=a2= . . . =aj.
7. The multi-area artificial fog pipe network control method based on the YOLOv5 algorithm according to claim 1, wherein a calculation formula of the total flow of the fog-making water is that: Q.sub.total=X.sub.total×TSV.sub.qtotal×b+e, where Q.sub.total represents the total flow of the fog-making water, X.sub.total represents the total number of the at least one target person, TSV.sub.qtotal represents the total group thermal sensation data, b represents a linear regression fitting coefficient, and e represents an intercept.
8. The multi-area artificial fog pipe network control method based on the YOLOv5 algorithm according to claim 1, wherein the controlling an opening gear of an atomizing nozzle on an artificial fog pipe network in each of the subareas based on the number of the target person in each of the subareas, the group thermal sensation data of each of the subareas, and the micro-action type of the target person in each of the subareas, comprises: determining the opening gear of the atomizing nozzle on the artificial fog pipe network in each of the subareas based on the number of the target person in each of the subareas; and adjusting the opening gear of the atomizing nozzle on the artificial fog pipe network in each of the subareas based on the group thermal sensation data of each of the subareas and the micro-action type of the target person in each of the subareas; wherein the opening gear comprises first to fourth gears, and the determining the opening gear of the atomizing nozzle on the artificial fog pipe network in each of the subareas based on the number of the target person in each of the subareas, comprises: determining the opening gear of the atomizing nozzle on the artificial fog pipe network in the subarea to be the first gear in response to the number of the target person in the subarea being equal to or less than 3; determining the opening gear of the atomizing nozzle on the artificial fog pipe network in the subarea to be the second gear in response to the number of the target person in the subarea being in a range of 3 to 5; determining the opening gear of the atomizing nozzle on the artificial fog pipe network in the subarea to be the third gear in response to the number of the target person in the subarea being in a range of 5 to 10; and determining the opening gear of the atomizing nozzle on the artificial fog pipe network in the subarea to be the fourth gear in response to the number of the target person in the subarea being equal to or greater than 10; wherein the micro-action type comprises one selected from a supercooling action and a superheating action, and the adjusting the opening gear of the atomizing nozzle on the artificial fog pipe network in each of the subareas based on the group thermal sensation data of each of the subareas and the micro-action type of the target person in each of the subareas, comprises: obtaining a target opening gear of the atomizing nozzle on the artificial fog pipe network in each of the subareas by: in response to the group thermal sensation data of the subarea being less than or equal to −2, adjusting the opening gear of the atomizing nozzle on the artificial fog pipe network in the subarea to be decreased by 1 gear; in response to the group thermal sensation data of the subarea being greater than or equal to 2, adjusting the opening gear of the atomizing nozzle on the artificial fog pipe network in the subarea to be increased by 1 gear; in response to the supercooling action being detected in the subarea, adjusting the opening gear of the atomizing nozzle in the subarea to be decreased by 1 gear; and in response to the superheating action being detected in the subarea, adjusting the opening gear of the atomizing nozzle in the subarea to be increased by 1 gear; closing the atomizing nozzle on the artificial fog pipe network in response to the corresponding target opening gear being lower than or equal to 0; and determining the target opening gear to be the fourth gear in response to the target opening gear being greater than 4.
9. A multi-area artificial fog pipe network control system based on a YOLOv5 algorithm, implementing the method according to claim 1, wherein the system comprises: an electronic thermostat system, a human monitoring system, a video monitoring and vision system, a central control system, a data storage system, an anthropomorphic learning system and a thermal environment feedback system; wherein the thermal environment feedback system is configured to collect thermal environment subjective questionnaires of the at least one target person in the artificial fog pipe network area through two-dimensional codes, and transmit the thermal sensation data to the data storage system and the anthropomorphic learning system; wherein the anthropomorphic learning system is configured to obtain the thermal sensation data, and train a thermal sensation predictor in the central control system in response to a quantity of the thermal sensation data reaching 100; wherein the data storage system is configured to store data obtained by the electronic thermostat system, the human monitoring system, the video monitoring and vision system, the central control system, the anthropomorphic learning system and the thermal environment feedback system, and selectively transmit the data to the other systems.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE EMBODIMENTS
[0055] The disclosure will be further described below with reference to the accompanying drawings and specific embodiments.
[0056] As shown in the drawings, the multi-area artificial fog pipe network control method based on the YOLOv5 algorithm includes the following steps S1 to S8.
[0057] At the step S1, collecting an air dry bulb temperature T.sub.air and an air relative humidity RH.sub.air in an artificial fog pipe network area; if the air dry bulb temperature T.sub.air is equal to or greater than a temperature threshold T.sub.0, detecting whether there is a target person in the entire artificial fog pipe network area; if the air dry bulb temperature T.sub.air is lower than the temperature threshold T.sub.0, the detection is not performed.
[0058] At the step S2, if there is the target person in the entire artificial fog pipe network area, recording video in the entire artificial fog pipe network area to obtain video information.
[0059] At the step S3, dividing the artificial fog pipe network area based on a coverage of each of atomizing nozzles in the artificial fog pipe network area in the video information to obtain N number of subareas, and obtaining a location boundary of each of the subareas in the video information.
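The assignment of a detected person to a subarea from the subarea location boundaries can be sketched as follows. This is a minimal illustration, assuming each subarea boundary is an axis-aligned rectangle in image coordinates; the function name and rectangle representation are hypothetical, not from the disclosure.

```python
def subarea_index(x, y, boundaries):
    """Map a detected face location (x, y) to the index of the subarea whose
    boundary rectangle (x1, y1, x2, y2) contains it; None if outside all."""
    for idx, (x1, y1, x2, y2) in enumerate(boundaries):
        if x1 <= x < x2 and y1 <= y < y2:
            return idx
    return None
```

Counting the returned indices over all detected persons yields the per-subarea head count X.sub.i.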
[0060] At the step S4, calculating a total number X.sub.total of the target person in the entire artificial fog pipe network area, location information of each target person and a number X.sub.i of the target person in each of the subareas by using the YOLOv5 algorithm;
[0061] calculating a facial skin temperature t.sub.i of each target person in the entire artificial fog pipe network area by using a Eulerian video magnification algorithm; and
[0062] calculating a micro-action type Actt of each target person in the entire artificial fog pipe network area by using a skeleton node algorithm.
[0063] Specifically, skin color saturation is obtained by Fourier transform through the Eulerian video magnification algorithm. Because there is a linear relationship between the skin color saturation and the skin temperature, the facial skin temperature t.sub.i of the target person can be obtained.
[0064] Specifically, the skeleton node algorithm is an OpenPose algorithm.
[0065] In an illustrated embodiment, the micro-action type Actt of each target person includes one selected from an overheating action and an overcooling action, and a spray flow can be automatically adjusted through the micro-action type Actt according to the skeleton node algorithm.
[0066] Specifically, the overheating action includes wiping sweat, fanning with hands, shaking clothes, rolling up sleeves, etc. The overcooling action includes rubbing hands, exhaling to warm hands, holding hands, etc.
[0067] At the step S5, converting the air dry bulb temperature T.sub.air, the air relative humidity RH.sub.air and the facial skin temperature t.sub.i of each target person obtained in the video into thermal sensation data TSV.sub.i of each target person through a linear regression machine learning model, and a calculation formula of the thermal sensation data TSV.sub.i is that: TSV.sub.i=a+T.sub.air×K1+RH.sub.air×K2+t.sub.i×K3.
[0068] Where TSV.sub.i represents the thermal sensation data of the ith target person; K1, K2, K3 respectively represent linear parameters of the linear regression machine learning model (i.e., linear regression model); a represents an intercept, and i is a positive integer.
[0069] Specifically, a standard of the thermal sensation data TSV is determined as a 7-point system according to the specification ASHRAE Standard 55-2020 formulated by ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning Engineers), and the specific divisions are shown in Table 1.
TABLE 1
Thermal sensation scale
Thermal sensation:            cold   cool   slightly cool   neutral   slightly warm   warm   hot
Thermal sensation data TSV:    −3     −2         −1            0            1           2      3
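The linear regression conversion above can be sketched as a short function. The coefficient values used as defaults here are purely hypothetical placeholders (the disclosure fits a, K1, K2, K3 from data); the clamp to [−3, 3] reflects the 7-point ASHRAE scale of Table 1.

```python
def thermal_sensation(t_air, rh_air, t_face, a=-40.0, k1=0.3, k2=0.02, k3=1.0):
    """TSV_i = a + T_air*K1 + RH_air*K2 + t_i*K3, clamped to the
    7-point ASHRAE scale [-3, 3]. Coefficients here are illustrative only."""
    tsv = a + t_air * k1 + rh_air * k2 + t_face * k3
    return max(-3.0, min(3.0, tsv))
```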
[0070] At the step S6, converting the thermal sensation data TSV.sub.i of each target person into group thermal sensation data TSV.sub.qi of each subarea and total group thermal sensation data TSV.sub.qtotal of the entire artificial fog pipe network area. A calculation formula of the group thermal sensation data is that: TSV.sub.q=TSV.sub.1×a1+TSV.sub.2×a2+TSV.sub.3×a3+ . . . +TSV.sub.j×aj.
[0071] Where TSV.sub.q represents the group thermal sensation data;
[0072] TSV.sub.1 represents the thermal sensation data of the target person numbered 1;
[0073] a1 represents a thermal sensation weight of the target person numbered 1;
[0074] TSV.sub.j represents the thermal sensation data of the target person numbered j;
[0075] aj represents a thermal sensation weight of the target person numbered j.
[0076] Note: a1+a2+ . . . +aj=1, j is a positive integer, and if there is no special case, a1=a2= . . . =aj.
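The weighted aggregation TSV.sub.q = Σ TSV.sub.j × a.sub.j can be sketched directly; per the note above, the weights default to equal shares and must sum to 1. The function name is illustrative.

```python
def group_tsv(tsv_values, weights=None):
    """Group thermal sensation TSV_q = sum(TSV_j * a_j).
    Weights default to equal (the no-special-case rule) and must sum to 1."""
    if weights is None:
        weights = [1.0 / len(tsv_values)] * len(tsv_values)
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("thermal sensation weights must sum to 1")
    return sum(t * w for t, w in zip(tsv_values, weights))
```

With equal weights this reduces to the arithmetic mean of the individual TSV values.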
[0077] In an illustrated embodiment, the group thermal sensation data TSV.sub.qi of each subarea is transformed from the thermal sensation data TSV.sub.i of the target person in each subarea; the total group thermal sensation data TSV.sub.qtotal of the entire artificial fog pipe network area is transformed from the thermal sensation data TSV.sub.i of each target person in the entire artificial fog pipe network area.
[0078] At the step S7, converting the total number X.sub.total of the target person in the artificial fog pipe network area and the total group thermal sensation data TSV.sub.qtotal in the entire artificial fog pipe network area into the total flow Q.sub.total of fog-making water passing into the entire artificial fog pipe network, and a calculation formula of the total flow Q.sub.total of fog-making water is as follows:
Q.sub.total=X.sub.total×TSV.sub.qtotal×b+e.
[0079] Where b represents a linear regression fitting coefficient; and e represents an intercept.
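The flow formula Q.sub.total = X.sub.total × TSV.sub.qtotal × b + e can be sketched as below. The default b and e are hypothetical stand-ins for the fitted regression values, and clamping negative results to zero is an added assumption (a flow cannot be negative), not stated in the disclosure.

```python
def total_flow(x_total, tsv_qtotal, b=0.5, e=2.0):
    """Total fog-making water flow Q_total = X_total * TSV_qtotal * b + e.
    b and e are regression-fitted in practice; defaults here are illustrative.
    Negative results are clamped to 0 (assumption: flow cannot be negative)."""
    return max(0.0, x_total * tsv_qtotal * b + e)
```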
[0080] At the step S8, converting the number X.sub.i of the target person in each subarea into the opening gear of the atomizing nozzle on an artificial fog pipe network in each subarea through a gear calculation algorithm, and adjusting the opening gear of the atomizing nozzle based on the group thermal sensation data TSV.sub.qi of each subarea and the micro-action type Actt of the target person in each subarea, and spraying the artificial fog pipe network area. Specifically, the gear calculation algorithm is as follows:
[0081] when there are 3 target persons or fewer in the subarea, the opening gear of the atomizing nozzle on the artificial fog pipe network in this subarea is a first gear;
[0082] when there are 3 to 5 target persons in the subarea, the opening gear of the atomizing nozzle on the artificial fog pipe network in this subarea is a second gear;
[0083] when there are 5 to 10 target persons in the subarea, the opening gear of the atomizing nozzle on the artificial fog pipe network in this subarea is a third gear; and
[0084] when there are 10 target persons or more in the subarea, the opening gear of the atomizing nozzle on the artificial fog pipe network in this subarea is a fourth gear.
[0085] Note: 1. the opening gear includes the first to the fourth gears; the higher the gear, the greater the opening of the valve;
[0086] 2. when the group thermal sensation data TSV.sub.qi of a subarea is less than or equal to −2, the opening gear of the atomizing nozzle on the artificial fog pipe network in this subarea is decreased by 1 gear; when the group thermal sensation data TSV.sub.qi of a subarea is greater than or equal to 2, the opening gear of the atomizing nozzle on the artificial fog pipe network in this subarea is increased by 1 gear;
[0087] 3. when the supercooling action is detected in the subarea, the opening gear of the atomizing nozzle in this subarea is decreased by 1 gear; when the superheating action is detected in the subarea, the opening gear of the atomizing nozzle in this subarea is increased by 1 gear;
[0088] 4. the opening gear of the valve is calculated by the gear calculation algorithm, and then the opening gear of the valve is adjusted by the group thermal sensation data of each subarea and the micro-action type of the target person. The final opening gear (i.e., target opening gear) is calculated as the direct accumulation of the above three adjustment methods of the opening gear.
[0089] 5. if the final opening gear is lower than or equal to 0, the atomizing nozzle on the artificial fog pipe network in this subarea is closed; and if the final opening gear is greater than 4, the opening gear of the atomizing nozzle on the artificial fog pipe network in this subarea is the fourth gear.
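The gear calculation and the accumulated adjustments described in notes 1–5 can be sketched as one function. The head-count boundaries in the source overlap ("3 to 5", "5 to 10"), so the cut-offs below (≤3, 4–5, 6–9, ≥10) are one reading of them; the string labels for the micro-action types are also illustrative.

```python
def nozzle_gear(n_people, group_tsv, micro_action=None):
    """Base gear from head count, then the TSV and micro-action adjustments
    are accumulated directly; the result is clamped to 0 (closed) .. 4."""
    if n_people <= 3:            # first gear
        gear = 1
    elif n_people <= 5:          # second gear
        gear = 2
    elif n_people < 10:          # third gear
        gear = 3
    else:                        # fourth gear
        gear = 4
    if group_tsv <= -2:          # subarea group feels cold
        gear -= 1
    elif group_tsv >= 2:         # subarea group feels hot
        gear += 1
    if micro_action == "overcooling":
        gear -= 1
    elif micro_action == "overheating":
        gear += 1
    return min(4, max(0, gear))  # <=0 means the nozzle is closed
```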
[0090] As shown in the drawings, the multi-area artificial fog pipe network control system based on the YOLOv5 algorithm includes an electronic thermostat system, a human monitoring system, a video monitoring and vision system, a central control system, a data storage system, an anthropomorphic learning system and a thermal environment feedback system.
[0091] The electronic thermostat system, the human monitoring system, and the video monitoring and vision system can continuously collect data in real time, and the obtained data is relatively stable.
[0092] 1. The electronic thermostat system includes a temperature and humidity sensor unit and a thermostat. The thermostat is configured to obtain an air dry bulb temperature T.sub.air of the outside environment, and compare the air dry bulb temperature T.sub.air with a temperature threshold T.sub.0 to control whether to turn on the human monitoring system.
[0093] The temperature and humidity sensor unit may include a humidity sensor and a temperature sensor such as a T-type thermocouple thermometer, whose temperature measurement range is −20° C. to +60° C. The operation of the electronic thermostat system mainly includes the following steps: the temperature and humidity sensor unit measures that the air dry bulb temperature T.sub.air is t1, and the ambient temperature threshold T.sub.0 is set to 30° C.; the thermostat compares the air dry bulb temperature t1 with the set ambient temperature threshold of 30° C.; when t1 is greater than or equal to 30° C., the thermostat transmits electrical signals to the human monitoring system, and the human monitoring system starts to work; when t1 is less than 30° C., the thermostat does not transmit the electrical signals.
[0094] 2. The human monitoring system
[0095] The human monitoring system is configured to receive the electrical signals transmitted by the electronic thermostat system, and detect whether there is the target person in an area of an artificial fog pipe network. If there is the target person in the area, the human monitoring system transmits the electrical signals to the video monitoring and vision system; if there is no target person, no electrical signals are transmitted.
[0096] 3. The video monitoring and vision system includes a network camera, an area divider, and a data processor.
[0097] The network camera is configured to receive the electrical signals transmitted by the human monitoring system, and record video of the area to obtain video information.
[0098] In an illustrated embodiment, the network camera includes a transmission unit, a memory, a processor, and a computer program written in Python, stored in the memory and executable by the processor; the processor is configured, when executing the computer program, to perform any one of the target detection technology, the Eulerian video magnification technology and the skeleton node technology of the system described in any one of the embodiments by using a PyTorch framework.
[0099] The area divider is configured to divide the artificial fog pipe network area in the video information into N number of subareas according to the coverage of each of the atomizing nozzles of the artificial fog pipe network, and obtain a location boundary of each of the subareas in the video information.
[0100] The data processor includes a target detector, a skeleton node recognition unit and a Eulerian video magnification unit.
[0101] The target detector is configured to obtain the total number X.sub.total of the target person in the entire artificial fog pipe network area, location information of each target person and the number X.sub.i of the target person in each subarea through the YOLOv5 algorithm model.
[0102] The skeleton node recognition unit is configured to obtain the posture and action of a human body through the OpenPose algorithm, make a preliminary judgment on the thermal comfort of the human body, and obtain the micro-action type Actt of each target person in the entire artificial fog pipe network area.
[0103] The Eulerian video magnification unit is configured to perform Fourier transformation through the Eulerian video magnification algorithm to obtain the skin color saturation. Since the skin color saturation has a linear relationship with the skin temperature, the facial skin temperature t.sub.i of each target person in the entire artificial fog pipe network area can be obtained.
[0104] The video monitoring and vision system is configured to upload the obtained video information, the total number X.sub.total of the target person, the facial skin temperature t.sub.i of each target person, and the micro-action type Actt of each target person to the central control system in real time.
[0105] In an illustrated embodiment:
[0106] 3.1 The skeleton node recognition unit
[0107] The skeleton node recognition unit is configured to adopt the skeleton node recognition technology to detect the human skeleton nodes in the video information, obtain the human skeleton node information, and recognize the micro-action type Actt of each target person in the video information;
[0108] 3.2 The Eulerian video magnification unit
[0109] The Eulerian video magnification unit is configured to use Eulerian video magnification technology to capture subtle changes in the face of each target person, and record the change amplitude and frequency. The Eulerian video magnification unit can capture the contraction changes of facial capillaries and nose breathing. It uses the linear Eulerian video magnification algorithm to calculate the face temperature, and uses Fourier series transformation to obtain skin color saturation. According to the linear relationship between the skin color saturation and the skin temperature, the facial skin temperature t.sub.i of each target person in the whole artificial fog pipe network area can be obtained.
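The temporal band-pass step of the Eulerian approach described above can be sketched in one dimension. This is a simplified illustration, not the full spatially-decomposed algorithm: it assumes a per-frame mean facial saturation trace as input, and the linear saturation-to-temperature coefficients c0 and c1 are hypothetical placeholders for a fitted calibration.

```python
import numpy as np

def face_temperature_trace(sat_trace, fps, low=0.8, high=3.0,
                           alpha=10.0, c0=20.0, c1=0.5):
    """Band-pass a facial skin-color saturation time series in the frequency
    domain (Fourier transform), amplify the passband by alpha, and map the
    magnified mean saturation to temperature via an assumed linear fit
    t = c0 + c1 * s. Coefficients are illustrative only."""
    sat = np.asarray(sat_trace, dtype=float)
    spectrum = np.fft.rfft(sat - sat.mean())
    freqs = np.fft.rfftfreq(len(sat), d=1.0 / fps)
    band = (freqs >= low) & (freqs <= high)          # keep pulse-range frequencies
    magnified = sat.mean() + alpha * np.fft.irfft(spectrum * band, n=len(sat))
    return c0 + c1 * magnified.mean()
```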
[0110] 3.3 The target detector
[0111] The target detector adopts a YOLOv5 model. Model sizes of different versions of YOLOv5 (You Only Look Once version 5) are: YOLOv5x with a size of 367 MB, YOLOv5l with a size of 192 MB, YOLOv5m with a size of 84 MB, and YOLOv5s with a size of 27 MB. Although both YOLOv5m and YOLOv5s are relatively small in model size, their accuracy is relatively low. Therefore, the YOLOv5l version is selected: it reduces the deployment cost without sacrificing too much recognition accuracy, which is conducive to rapid deployment of the model while the accuracy is guaranteed.
[0112] 3.3.1 Predicting the total number of the target person and the location information of each target person based on a YOLOv5 target detection model.
[0113] The YOLOv5 target detection model includes GoogleNet+4 convolutions and 2 fully connected layers.
[0114] 1) Taking a frame of the video information every 5 seconds as image sample data, preprocessing the obtained image sample data of each frame, resetting the image resolution according to input requirements of the YOLOv5 target detection model, and normalizing image pixel values to obtain image feature data.
[0115] 2) Inputting the image feature data into the preprocessed YOLOv5 target detection model to obtain best prediction boxes.
[0116] Specifically, the model resizes the image feature data into a 448×448 format, and outputs a 7×7×30 grid after passing the image feature data through the convolutional network; each grid cell predicts 5 bounding boxes; the improved non-maximum suppression (NMS) algorithm is used for screening, with intersection over union (IoU) as the judgment criterion, to obtain the best prediction boxes; and the total number of the best prediction boxes is calculated as the total number X.sub.total of the target person in the entire artificial fog pipe network area.
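The IoU-based NMS screening and the counting of surviving "best prediction boxes" can be sketched as follows; a minimal greedy implementation under the assumption of (x1, y1, x2, y2) box coordinates, with hypothetical function names.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms_count(boxes, scores, iou_thr=0.5):
    """Greedy NMS: keep the highest-scoring box, drop boxes overlapping it
    beyond iou_thr, repeat. The survivors are the best prediction boxes,
    and their count corresponds to X_total."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(i)
    return len(keep), [boxes[i] for i in keep]
```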
[0117] 3) Obtaining the best prediction boxes of the image feature data, performing inverse processing on the image feature data with a specific structure, restoring a structure of an original image to be detected, obtaining a location of the human face in the original image to be detected as the location information of the target person, and using the location information of the target person to mark the image feature data obtained in the step 1) to generate a training sample set.
[0118] In an illustrated embodiment, marking is performed on the video information by using an image annotation software for labeling.
[0119] In an illustrated embodiment, the best prediction boxes of the human face in the image to be detected in the current frame are selected from the plurality of prediction boxes according to a preset rule, and the location of the human face in the image to be detected in the current frame is obtained according to the best prediction boxes.
[0120] 3.3.2 The YOLOv5 target detection model is updated based on the training sample set.
[0121] Inputting each training image feature data in the training sample set obtained in the step 3) into the YOLOv5 target detection model for training, and updating parameters of the YOLOv5 target detection model after training.
[0122] 3.3.3 Uploading calculation results to the central control system.
[0123] The number of the best prediction boxes output by the YOLOv5 model is taken as the target number X.sub.total and output to the central control system. Each subarea to which each target person belongs is judged according to the location information of the target person and the location boundary of each subarea in the camera captured image. The number X.sub.i of the target person in each subarea is calculated and output to the central control system.
[0124] 4. The central control system includes a thermal sensation predictor, a thermal sensation grouping unit, a flow calculator, and a gear calculator.
[0125] The central control system is configured to control the opening gear of the atomizing nozzle in the artificial fog system and adjust the flow value of the spray in the artificial fog system during this period. The number of the target person in the artificial fog system area is used to determine the valve opening data of the solenoid valve of the atomizing nozzle in each subarea, so that the central control system controls the opening gear of the atomizing nozzle in different subareas, and thereby the artificial fog flow in different subareas, according to the number of the target person in each subarea.
[0126] The thermal sensation predictor is configured to convert the air dry bulb temperature T.sub.air and air relative humidity RH.sub.air obtained by the electronic thermostat system, as well as the facial skin temperature t.sub.i of each target person detected by the video monitoring and vision system in the whole artificial fog pipe network area into the thermal sensation data TSV.sub.i of each target person.
[0127] The thermal sensation grouping unit is configured to convert the thermal sensation data TSV.sub.i of each target person into the group thermal sensation data TSV.sub.qi of each subarea and the total group thermal sensation data TSV.sub.qtotal of the entire artificial fog pipe network area.
[0128] The flow calculator is configured to convert the total number X.sub.total of the target person and the group thermal sensation data TSV.sub.qtotal of the entire artificial fog pipe network area into the total flow Q.sub.total of fog-making water flowing into the entire artificial fog pipe network.
[0129] The gear calculator is configured to convert the number X.sub.i of the target person in each subarea and the group thermal sensation data TSV.sub.qi in each subarea into the opening data of the solenoid valve of the atomizing nozzle in different subareas of the artificial fog pipe network through the gear calculation algorithm, and thereby to adjust the opening gear of the atomizing nozzle.
[0130] 5. The thermal environment feedback system
[0131] The thermal environment feedback system is configured to collect a thermal environment subjective questionnaire from each target person in the entire artificial fog area by scanning two-dimensional codes through the questionnaire feedback unit. When 100 thermal environment subjective questionnaires have been collected, the thermal environment data is transmitted to the anthropomorphic learning system.
[0132] 6. The anthropomorphic learning system
[0133] The anthropomorphic learning system includes a data receiver, a data set unit, a machine learning trainer, and a comparator.
[0134] The data receiver is configured to receive the thermal environment data from the thermal environment feedback system. The data set unit is configured to randomly divide the thermal environment data into a 70% training set and a 30% test set. The machine learning trainer is configured to extract the data in the data set unit for training the linear regression integrated learning model in the thermal sensation predictor. The comparator is configured to compare the precision and recall rate of the machine learning model after training with those of the machine learning model before training. If the precision of the machine learning model after training is higher, the comparator updates the linear regression integrated learning model in the thermal sensation predictor.
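The split-train-compare-update loop above can be sketched generically. This is an illustration under stated assumptions: `train_model` and `score_model` are hypothetical callables standing in for the trainer and the comparator's precision metric, and the fixed seed is added only to make the sketch reproducible.

```python
import random

def split_and_update(records, train_model, score_model, current_model, seed=0):
    """Randomly split feedback records 70%/30%, train a candidate model on the
    training set, and replace the current model only if the candidate scores
    higher on the held-out test set (the comparator's acceptance rule)."""
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * 0.7)
    train, test = shuffled[:cut], shuffled[cut:]
    candidate = train_model(train)
    if score_model(candidate, test) > score_model(current_model, test):
        return candidate      # comparator accepts the retrained model
    return current_model      # otherwise the old predictor is kept
```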
[0135] 7. The data storage system
[0136] The data storage system may include a data receiver, a memory, a data uploader and a data transmitter.
[0137] The data receiver is configured to receive the data transmitted from the other systems, and then transmit the data to the memory. The memory is configured to store the data. The data uploader is configured to upload the data in the memory to the cloud network disk once every hour. The data transmitter is configured to selectively transmit data to the other systems.