In-situ thermodynamic model training
12572808 · 2026-03-10
Assignee
Inventors
CPC classification
F24F11/65
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
G06F30/27
PHYSICS
G06F30/18
PHYSICS
G06F17/16
PHYSICS
F24F2120/10
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
F24F11/64
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
F24F2140/50
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
F24F2120/20
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
International classification
B60H1/00
PERFORMING OPERATIONS; TRANSPORTING
F24F11/64
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
F24F11/65
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
G06F17/16
PHYSICS
G06F30/18
PHYSICS
G06F30/27
PHYSICS
Abstract
Using processes and methods described herein, a digital twin of a physical space can train itself using sensors and other information available from the building. In some embodiments, a system to be controlled comprises a controller that is connected to sensors. This controller also has a thermodynamic model of the system to be controlled within memory associated with the controller. The thermodynamic model has neurons that represent distinct pieces of a controlled space, such as a piece of equipment or a thermodynamically coherent section of a building, such as a window. The neurons represent these distinct pieces of the controlled space using parameter values and equations that model physical behavior of state with reference to the distinct piece of the controlled space. A machine learning process refines the thermodynamic model by modifying the parameter values of the neurons, using sensor data gathered from the system to be controlled as ground truth to be matched by behavior of the thermodynamic model. The thermodynamic model may be warmed up by running the model using state data as input.
Claims
1. A device for in-situ control model training, comprising: a memory storing a neural network model of a system to be controlled, the neural network model comprising neurons which model thermodynamically coherent sections of the system, at least some of the neurons having multiple parameter values; a processor in communication with the memory configured to: retrieve from the memory the neural network model, the neural network model being comprised of neurons, the neurons having activation functions; retrieve sensor data captured by a controller; determine a threshold value; and run the neural network model producing computed sensor values by iteratively: computing a cost function value using a cost function, the computed sensor values, and the sensor data, using the cost function value to modify at least one parameter value in at least one neuron, and running the neural network model producing iterated computed sensor values, until the cost function value is at or below the threshold value; wherein at least one of the activation functions of a neuron within the neurons of the neural network model comprises at least two equations that are calculated within the neuron to produce one or more outputs of the neuron.
2. The device of claim 1, further comprising backpropagation to take a gradient of the cost function backward through the neural network model.
3. The device of claim 2, wherein backpropagation is performed using automatic differentiation.
4. The device of claim 1, wherein the neurons comprise input neurons and other neurons, and wherein the activation functions of the other neurons use equations to model physical aspects of individual portions of the system to be controlled.
5. The device of claim 1, wherein the system to be controlled comprises an automated building, a process control system, an HVAC system, an energy system, or an irrigation system.
6. The device of claim 1, wherein the neural network model is warmed up by being run for a period of time which changes neuron parameter values.
7. The device of claim 6, further comprising using optimization to update the at least one parameter value.
8. The device of claim 1, further comprising state data affecting the system to be controlled being used as input into the neural network model.
9. The device of claim 1, wherein the controller is physically within the system to be controlled.
10. The device of claim 1, wherein at least some of the neurons have multiple internal parameters.
11. The device of claim 1, wherein the at least one parameter value in the neuron models a state in a physical location associated with the neural network model.
12. The device of claim 1, wherein at least a first neuron models a first physical object.
13. The device of claim 12, further comprising a second neuron that models a second physical object, wherein the first physical object outputs to the second physical object, and wherein the first neuron outputs to the second neuron.
14. The device of claim 13, wherein the first physical object is a wall.
15. A method of in-situ neural network training implemented by one or more computers, comprising: retrieving from a memory a neural network model, the neural network model comprising neurons which model thermodynamically coherent sections of a system, at least some of the neurons having multiple parameter values, the neurons having activation functions; determining a threshold value; retrieving sensor data captured by a controller; and running the neural network model producing computed sensor values by iteratively: computing a cost function value using a cost function, the computed sensor values, and the sensor data, using the cost function value to modify at least one parameter, and running the neural network model producing computed sensor values, until the cost function value is at or below the threshold value; wherein at least one activation function of a neuron within the neurons comprises at least two equations that are calculated within the neuron to produce one or more outputs of the neuron.
16. The method of claim 15, wherein an activation function has multiple parameters whose values are passed between neurons.
17. The method of claim 15, wherein the sensor data is a time-state curve.
18. The method of claim 15, further comprising running the neural network model for a first period, checking a parameter value, and when the parameter value is not within a range of a threshold value, running the neural network model for a second period.
19. The method of claim 15, wherein the computed sensor values are a time-state curve.
20. A non-transitory computer-readable storage medium configured with executable instructions to perform a method for training a model in-situ, the method comprising: instructions for retrieving from a memory a neural network model, the neural network model comprising neurons which model thermodynamically coherent sections of a system, at least some of the neurons having multiple parameter values, the neurons having activation functions; instructions for retrieving sensor data captured by a controller; and instructions for running the neural network model producing computed sensor values by iteratively: determining a threshold value, computing a cost function value using a cost function, the computed sensor values, and the sensor data, using the cost function value to modify at least one parameter value, and running the neural network model producing computed sensor values, until the cost function value is at or below the threshold value; wherein at least one activation function of a neuron comprises at least two equations that are calculated within the neuron to produce one or more outputs of the neuron.
Description
BRIEF DESCRIPTION OF THE FIGURES
(1) Non-limiting and non-exhaustive embodiments of the present embodiments are described with reference to the following FIGURES, wherein like reference numerals refer to like parts throughout the various views unless otherwise specified.
(14) Corresponding reference characters indicate corresponding components throughout the several views of the drawings. Skilled artisans will appreciate that elements in the FIGURES are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of various embodiments. Also, common but well-understood elements that are useful or necessary in a commercially feasible embodiment are often not depicted in order to facilitate a less obstructed view of these various embodiments.
DETAILED DESCRIPTION
(15) Disclosed below are representative embodiments of methods, computer-readable media, and systems having particular applicability to systems and methods for training a thermodynamic model that describes a building in-situ. Described embodiments implement one or more of the described technologies.
(16) In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present embodiments. It will be apparent, however, to one having ordinary skill in the art that these specific details need not be employed to practice the present embodiments. In other instances, well-known materials or methods have not been described in detail in order to avoid obscuring the present embodiments.
(17) Reference throughout this specification to one embodiment, an embodiment, one example or an example means that a particular feature, structure or characteristic described in connection with the embodiment or example is included in at least one embodiment of the present embodiments. Thus, appearances of the phrases in one embodiment, in an embodiment, one example or an example in various places throughout this specification are not necessarily all referring to the same embodiment or example. Furthermore, the particular features, structures or characteristics may be combined in any suitable combinations and/or sub-combinations in one or more embodiments or examples. In addition, it is appreciated that the figures provided herewith are for explanation purposes to persons ordinarily skilled in the art and that the drawings are not necessarily drawn to scale.
(18) Embodiments in accordance with the present embodiments may be implemented as an apparatus, method, or computer program product. Accordingly, the present embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.), or an embodiment combining software and hardware aspects that may all generally be referred to herein as a module or system. Furthermore, the present embodiments may take the form of a computer program product embodied in any tangible medium of expression having computer-usable program code embodied in the medium.
(19) Any combination of one or more computer-usable or computer-readable media may be utilized. For example, a computer-readable medium may include one or more of a portable computer diskette, a hard disk, a random access memory (RAM) device, a read-only memory (ROM) device, an erasable programmable read-only memory (EPROM or Flash memory) device, a portable compact disc read-only memory (CDROM), an optical storage device, and a magnetic storage device. Computer program code for carrying out operations of the present embodiments may be written in any combination of one or more programming languages.
(20) Embodiments may also be implemented in cloud computing environments. In this description and the following claims, cloud computing may be defined as a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned via virtualization and released with minimal management effort or service provider interaction, and then scaled accordingly. A cloud model can be composed of various characteristics (e.g., on-demand self-service, broad network access, resource pooling, rapid elasticity, measured service, etc.), service models (e.g., Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS), etc.), and deployment models (e.g., private cloud, community cloud, public cloud, hybrid cloud, etc.).
(21) The flowchart and block diagrams in the flow diagrams illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present embodiments. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, may be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. These computer program instructions may also be stored in a computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
(22) As used herein, the terms "comprises," "comprising," "includes," "including," "has," "having," or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a process, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, article, or apparatus.
(23) Further, unless expressly stated to the contrary, "or" refers to an inclusive or and not to an exclusive or. For example, a condition "A or B" is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present).
(24) "Optimize" means to improve, not necessarily to perfect. For example, it may be possible to make further improvements in a value or an algorithm which has been optimized.
(25) "Determine" means to get a good idea of, not necessarily to achieve the exact value. For example, it may be possible to make further improvements in a value or algorithm which has already been determined.
(26) Additionally, any examples or illustrations given herein are not to be regarded in any way as restrictions on, limits to, or express definitions of any term or terms with which they are utilized. Instead, these examples or illustrations are to be regarded as being described with respect to one particular embodiment and as being illustrative only. Those of ordinary skill in the art will appreciate that any term or terms with which these examples or illustrations are utilized will encompass other embodiments which may or may not be given therewith or elsewhere in the specification and all such embodiments are intended to be included within the scope of that term or terms.
I. Overview
(27) Using processes and methods described herein, a building can commission itself. This commissioning may entail running the model, checking the state values within the model against historical state values within the physical building represented by the thermodynamic model, and then automatically modifying parameters in the thermodynamic model to more closely represent actual building behavior. A digital twin of a physical space can train itself using sensors and other information available from the building. In some embodiments, a system to be controlled comprises a controller that is connected to sensors. This controller also has a thermodynamic model of the system to be controlled within memory associated with the controller. The thermodynamic model has neurons that represent distinct pieces of a controlled space, such as a piece of equipment or a thermodynamically coherent section of a building, such as a window. The neurons represent these distinct pieces of the controlled space using parameter values and equations that model physical behavior of state with reference to the distinct piece of the controlled space. A machine learning process refines the thermodynamic model by modifying the parameter values of the neurons, using sensor data gathered from the system to be controlled as ground truth to be matched by behavior of the thermodynamic model. The thermodynamic model may be warmed up by running the model using state data, which may be gathered by sensors, as input.
(28) The model that underlies the disclosed system starts with a first-principles, physics-based approach. The sub-models that comprise the multi-agent building representation may fall into four distinct categories: external environment, occupants and activity, building envelope and zones, and subsystems. Environment models may use an array of external sensors and online data sources (e.g., meteorological feeds such as the NDFD) to accurately gauge current conditions and predict near-future loads on the building system. Occupant, asset, and activity models may utilize real-time data from sensors inside the building, usage profiles, locality, human comfort models, asset comfort, and dynamic occupant models developed heuristically from sensors and indicators to determine occupancy behavior. The envelope and zone models may work together with the environmental and occupant models to assess internal heating, cooling, and ventilation demands. Finally, building subsystem and process control models may consist of a diverse array of energy and motive systems including HVAC components, operable envelope systems, daylighting, renewable energy systems, conveyors, etc. This organization may allow deep data extraction which is not possible in a conventional analytics system. For example, a conventional analytics system can only track whether a pump is signaled on versus off. The disclosed system may be able to extract rotor speed, flow rates, pressure, fluid type, and errors, as well as the corresponding quality-of-data measures. This deep data extraction is made possible by the inter-validation of physical properties in the computer models that mimic the actual physical structure. These models may be referred to as Digital Twin models. This enables users to create complex systems of interconnected building zones by ad hoc means, use simple graphical user interfaces to define a system, or enable a digital system model to evolve its control optimization and commissioning over time, in situ.
(29) With reference to
(30) A computing environment may have additional features. For example, the computing environment 100 includes storage 140, one or more input devices 150, one or more output devices 155, one or more network connections (e.g., wired, wireless, etc.) 160 as well as other communication connections 170. An interconnection mechanism (not shown) such as a bus, controller, or network interconnects the components of the computing environment 100. Typically, operating system software (not shown) provides an operating environment for other software executing in the computing environment 100, and coordinates activities of the components of the computing environment 100. The computing system may also be distributed; running portions of the software 185 on different CPUs.
(31) The storage 140 may be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CD-ROMs, CD-RWs, DVDs, flash drives, or any other medium which can be used to store information and which can be accessed within the computing environment 100. The storage 140 stores instructions for the software, such as the in-situ training software 185.
(32) The input device(s) 150 may be a device that allows a user or another device to communicate with the computing environment 100, such as a keyboard, video camera, microphone, mouse, pen, trackball, scanning device, touchscreen, or another device that provides input to the computing environment 100. For audio, the input device(s) 150 may be a sound card or similar device that accepts audio input in analog or digital form, or a CD-ROM reader that provides audio samples to the computing environment. The output device(s) 155 may be a display, printer, speaker, CD-writer, or another device that provides output from the computing environment 100.
(33) The communication connection(s) 170 enable communication over a communication medium to another computing entity. The communication medium conveys information such as computer-executable instructions, compressed graphics information, or other data in a modulated data signal. Communication connections 170 may comprise input devices 150, output devices 155, and input/output devices that allow a client device to communicate with another device over network 160. A communication device may include one or more wireless transceivers for performing wireless communication and/or one or more communication ports for performing wired communication. These connections may include network connections, which may be a wired or wireless network such as the Internet, an intranet, a LAN, a WAN, a cellular network or another type of network. It will be understood that network 160 may be a combination of multiple different kinds of wired or wireless networks. The network 160 may be a distributed network, with multiple computers, which might be building controllers, acting in tandem. A communication connection 170 may be a portable communications device such as a wireless handheld device, a cell phone device, and so on.
(34) Computer-readable media are any available non-transient tangible media that can be accessed within a computing environment. By way of example, and not limitation, with the computing environment 100, computer-readable media include memory 120, storage 140, communication media, and combinations of any of the above. Computer readable storage media 165 which may be used to store computer readable media comprises instructions 175 and data 180. Data Sources may be computing devices, such as general hardware platform servers configured to receive and transmit information over the communications connections 170. The computing environment 100 may be an electrical controller that is directly connected to various resources, such as HVAC resources, and which has CPU 110, a GPU 115, Memory 120, input devices 150, communication connections 170, and/or other features shown in the computing environment 100. The computing environment 100 may be a series of distributed computers. These distributed computers may comprise a series of connected electrical controllers.
(35) Although the operations of some of the disclosed methods are described in a particular sequential order for convenient presentation, it should be understood that this manner of description encompasses rearrangement, unless a particular ordering is required by specific language set forth below. For example, operations described sequentially can be rearranged or performed concurrently. Moreover, for the sake of simplicity, the attached figures may not show the various ways in which the disclosed methods, apparatus, and systems can be used in conjunction with other methods, apparatus, and systems. Additionally, the description sometimes uses terms like determine, build, and identify to describe the disclosed technology. These terms are high-level abstractions of the actual operations that are performed. The actual operations that correspond to these terms will vary depending on the particular implementation and are readily discernible by one of ordinary skill in the art.
(36) Further, data produced from any of the disclosed methods can be created, updated, or stored on tangible computer-readable media (e.g., tangible computer-readable media, such as one or more CDs, volatile memory components (such as DRAM or SRAM), or nonvolatile memory components (such as hard drives) using a variety of different data structures or formats. Such data can be created or updated at a local computer or over a network (e.g., by a server computer), or stored and accessed in a cloud computing environment.
II. Exemplary System for In-situ Control Model Training
(38) Using processes and methods described herein, a space can commission itself. This commissioning may entail running the model, checking the state values within the model against historical state values within the physical building represented by the thermodynamic model, and then automatically modifying parameters in the thermodynamic model to more closely represent actual building behavior.
(40) In some embodiments, the sensor data captured 310 is recorded as a chronological (time-based) state curve; e.g., when the state is temperature, this will be a heat curve. The system may have many zones, i.e., areas whose data is being measured. A separate state curve may be used for each zone that is modeled. This curve (or these curves, for a multi-zone model) will be used as ground truth to refine the building simulation. These curves may be called state load curves.
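As an illustrative sketch (not taken from the disclosure itself), a per-zone state load curve can be represented as an ordered series of timestamped state samples. The `StateCurve` class, the zone name, and the sample values below are assumptions for illustration only.

```python
from dataclasses import dataclass

# Hypothetical representation of a per-zone "state load curve": a chronological
# series of (timestamp, state value) samples used as ground truth. Zone name,
# timestamps, and values are illustrative.

@dataclass
class StateCurve:
    zone: str
    samples: list  # (timestamp_seconds, value) pairs in time order

    def value_at(self, t):
        """Return the most recent sample value at or before time t (step hold)."""
        latest = None
        for ts, v in self.samples:
            if ts <= t:
                latest = v
            else:
                break
        return latest

# A heat curve for one zone: temperature samples taken every ten minutes.
heat_curve = StateCurve("zone_1", [(0, 20.0), (600, 20.5), (1200, 21.1)])
```

A multi-zone model would simply keep one such curve per modeled zone.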
(41) The system to be controlled 305 may have state around it that affects it. For example, a building is affected by the temperature outside. It is also influenced by wind, time of day, time of year, the angle the building is at, current humidity, etc. This state data affecting the system to be controlled 312 may be used as input into a thermodynamic model 320.
(42) A controller 315 stores the thermodynamic model 320 of the system to be controlled 305. The controller may incorporate all or part of the computing environment 100. When a thermodynamic model is being built, in an exemplary structure embodiment, the component portions of the system to be controlled 305 that have different thermodynamic qualities are generally defined. These may (for an embodiment) be broken down, in order of decreasing complexity, into building, floor, zone, surface, layer, and materials. Layers are composed of materials, surfaces are composed of layers, and so on. In some embodiments, rather than using the entire structure, the structure space is disaggregated, and then the state space is reduced by using only relevant parts of the system. A neuron 325 may be considered a component portion that has thermodynamic qualities. In an exemplary embodiment, an entire building may be considered a neuron 325. In another embodiment, a specific portion of a wall, such as drywall, may be considered a neuron 325.
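The building/floor/zone/surface/layer/material decomposition above can be sketched as nested data, with each level composed of the level below it. Every name and dimension here is an invented example, not a value from the disclosure.

```python
# Illustrative nesting of the hierarchy described above: layers are composed of
# materials, surfaces of layers, and so on up to the building.

building = {
    "floors": [
        {
            "zones": [
                {
                    "surfaces": [
                        {
                            "layers": [
                                {"material": "drywall", "thickness_m": 0.0127},
                                {"material": "insulation", "thickness_m": 0.089},
                            ]
                        }
                    ]
                }
            ]
        }
    ],
}
```

Disaggregating the structure then amounts to selecting only the relevant branches of such a tree.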
(43) A neuron 325 has a parameter value 330. This parameter value may represent a physical constant of an object. For example, this value may be a resistance value or a capacitance value. The value may also be a lower-level value that allows a value such as a resistance or capacitance value to be determined, such as a heat transfer rate, thermal conductivity, etc. Some embodiments may have multiple parameter values 330 for each neuron 325.
(44) A machine learner 335 may be used to run the thermodynamic model 320. In some embodiments, such as when the thermodynamic model 320 is being optimized to more accurately mimic actual historical data, the machine learner 335 may be used in updating parameter values 330. This may be done using probes into the simulation. The probes are, in some embodiments, calls into a data structure that holds the simulation values. The probe calls ask for and receive parameter values. They may also change parameter values. In some embodiments, the parameter values are changed by the processes of the machine learning algorithm. A machine learning process used by the machine learner 335 may be one of a variety of computer algorithms that improve automatically through experience. Common machine learning processes are Linear Regression, Logistic Regression, Decision Tree, Support Vector Machine (SVM), Naive Bayes, K-Nearest Neighbors (kNN), K-Means Clustering, Random Forest, Backpropagation with optimization, etc.
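The probe mechanism described above, calls that read or modify parameter values held in a simulation's data structure, might look like the following minimal sketch. The `Simulation` class and the parameter names are hypothetical.

```python
# Hypothetical "probe" interface: a probe call reads or writes a named
# parameter inside a running simulation's value store.

class Simulation:
    def __init__(self, params):
        self._params = dict(params)

    def probe_get(self, name):
        # Ask for and receive a parameter value.
        return self._params[name]

    def probe_set(self, name, value):
        # Change a parameter value, e.g. as directed by the machine learner.
        self._params[name] = value

sim = Simulation({"wall.resistance": 0.5, "wall.capacitance": 1.0e5})
sim.probe_set("wall.resistance", 0.45)
```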
(45) In some embodiments, the machine learner 335 feeds values into a thermodynamic model 320. This thermodynamic model may be a structure simulation model, a resource simulation model, a comfort simulation model, etc. A structure simulation model may be a neural network or other machine learning model of a physical area that incorporates thermodynamic information about the system to be controlled 305. A resource simulation model may be a neural network or other machine learning model of resources in a physical area that incorporates thermodynamic information about the resources within the system to be controlled 305. A comfort model may be a neural network or other machine learning model that incorporates various comfort functions that an area may desire, such as a specific amount of comfort for humans, or inanimate objects. For example, a musical instrument may require temperature between certain values, and humidity between certain values. These temperature and humidity values may be tied to each other, in that a temperature within a first temperature range may require humidity within a first humidity range, while a temperature within a second temperature range may require humidity within a second humidity range.
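The coupled temperature and humidity ranges in the instrument example could be expressed as a simple comfort check in which the acceptable humidity range depends on which temperature range the reading falls in. The specific ranges below are invented for illustration and are not from the disclosure.

```python
# Illustrative comfort function: the acceptable humidity range depends on
# which temperature range the current temperature falls in. All ranges here
# are assumed example values.

def instrument_comfort_ok(temp_c, humidity_pct):
    if 18.0 <= temp_c < 22.0:
        return 40.0 <= humidity_pct <= 55.0  # first temperature range
    if 22.0 <= temp_c < 26.0:
        return 35.0 <= humidity_pct <= 50.0  # second temperature range
    return False  # outside all acceptable temperature ranges
```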
(46) The thermodynamic model may be heterogeneous. A heterogeneous model may be a neural network model that has heterogeneous neurons. These heterogeneous neurons may have different activation functions. These different activation functions may use equations to model physical aspects of individual portions of a system. One example is a neuron that represents a pump and has an activation function that comprises equations that model physical pump behavior. This neuron may also comprise parameter values 330 and inputs that comprise pump-specific aspects, such as shaft speed, flow rates, etc. Another example is a structure simulation model that comprises a neuron 325 with an activation function that comprises equations that model state behavior of a physical portion of the building, such as a wall. Such an activation function may comprise parameter values (which may be input variables) that comprise specifics of the wall such as layer mass, thermal capacitance, and other wall-specific features.
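A wall neuron of this kind, whose activation function evaluates more than one physical equation using trainable parameter values such as thermal resistance and capacitance, might be sketched as follows. This is an assumed toy model for illustration, not the patented implementation, and the parameter values are invented.

```python
# Illustrative heterogeneous "wall" neuron: its activation function evaluates
# two physical equations -- heat flow into and out of the wall, and the
# temperature update of the wall's thermal mass.

class WallNeuron:
    def __init__(self, resistance, capacitance, temp):
        self.resistance = resistance    # thermal resistance (K/W), trainable
        self.capacitance = capacitance  # thermal capacitance (J/K), trainable
        self.temp = temp                # current wall temperature (deg C)

    def activate(self, t_inside, t_outside, dt):
        # Equation 1: heat flows into and out of the wall mass (W).
        q_in = (t_inside - self.temp) / self.resistance
        q_out = (self.temp - t_outside) / self.resistance
        # Equation 2: temperature change of the mass over timestep dt (s).
        self.temp += (q_in - q_out) * dt / self.capacitance
        return self.temp

wall = WallNeuron(resistance=0.5, capacitance=1.0e5, temp=18.0)
```

Warm inside air and cold outside air pull the wall temperature in opposite directions; the net effect over a timestep depends on the resistance and capacitance parameters that training would refine.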
(47) An updater 345 determines how the parameter values affect the cost function and then adjusts the parameter values 330, which may be within neurons 325, to minimize the cost function.
(48) An iterator 350 runs the thermodynamic model with the state data affecting the system to be controlled 312 to produce simulated output data, runs the cost function determiner to determine how close the sensor data is to the simulated output data, and runs the updater to incrementally optimize and update the parameter values within the thermodynamic model, until a cost produced by the cost function determiner reaches a goal state.
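The iterate-until-goal loop above can be sketched with a toy one-parameter model: run the model, compute a cost against measured sensor data, update the parameter, and repeat until the cost falls to the threshold. The model, cost function, and finite-difference update below are all illustrative stand-ins for the disclosure's thermodynamic model and machine-learning updater.

```python
# Toy end-to-end sketch of the iterator loop described above.

def run_model(param, inputs):
    # Stand-in "thermodynamic model": computed sensor values scale the inputs.
    return [param * x for x in inputs]

def cost(computed, measured):
    # Mean squared error between computed and measured sensor values.
    return sum((c - m) ** 2 for c, m in zip(computed, measured)) / len(measured)

def train(param, inputs, measured, threshold, lr=0.01, max_iters=10000):
    c = cost(run_model(param, inputs), measured)
    for _ in range(max_iters):
        if c <= threshold:
            break  # goal state reached
        # Finite-difference estimate of d(cost)/d(param).
        eps = 1e-6
        grad = (cost(run_model(param + eps, inputs), measured) - c) / eps
        param -= lr * grad
        c = cost(run_model(param, inputs), measured)
    return param, c

# Measured data generated by a "true" parameter of 2.0; training recovers it.
param, final_cost = train(0.5, [1.0, 2.0, 3.0], [2.0, 4.0, 6.0], threshold=1e-6)
```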
(49) The machine learner 335 may also be used to optimize the model so it closely matches the behavior of the actual system to be controlled 305, equipment in the system to be controlled 305, etc., of which the model is a digital twin.
(50)
(51)
(52)
(53) If the activation functions are differentiable, then a backpropagator 615 may be used to determine the gradients. Backpropagation finds the derivative of the error (given by the cost function) for the parameters in the thermodynamic model; that is, backpropagation computes the gradient of the cost function with respect to the parameters within the network. Backpropagation calculates the derivative between the cost function and parameters by using the chain rule, from the last neurons calculated during the feedforward propagation (a backward pass), through the internal neurons, to the first neurons calculated. In some embodiments, an automatic differentiator 620 may use autodifferentiation to find the gradients. According to Wikipedia, automatic differentiation is accomplished by augmenting the algebra of real numbers and obtaining a new arithmetic. An additional component is added to every number to represent the derivative of a function at the number, and all arithmetic operators are extended for the augmented algebra. Other methods may be used to determine the parameter gradients. These include Particle Swarm, SOMA (Self-Organizing Migrating Algorithm), etc. The backpropagation may determine a negative gradient of the cost function, as the negative gradient points in the direction of smaller values.
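The augmented arithmetic described above can be sketched with dual numbers. This is a generic forward-mode automatic differentiation illustration, not code from any particular embodiment; the class and function names are hypothetical:

```python
class Dual:
    """Dual number a + b*eps with eps**2 == 0; the second component
    carries the derivative through every arithmetic operation."""
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # The product rule falls out of the augmented arithmetic.
        return Dual(self.value * other.value,
                    self.value * other.deriv + self.deriv * other.value)

def derivative(f, x):
    # Seed the derivative component with 1.0, then read it off the output.
    return f(Dual(x, 1.0)).deriv

# d/dx (x*x + 3*x) at x = 2 is 2*2 + 3 = 7.
print(derivative(lambda x: x * x + x * 3, 2.0))  # 7.0
```

A production automatic differentiator would also extend subtraction, division, and the transcendental functions, but the mechanism is the same: every operator propagates both the value and its derivative.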
(54) After the gradients are determined, a parameter optimizer optimizes the parameter value(s) 330 to lower the value of the cost function with respect to the parameter value(s). Many different optimizers may be used, which can be roughly grouped into 1) gradient descent optimizers 635 and 2) non-gradient descent optimizers 640. Among the gradient descent methods 635 are standard gradient descent, stochastic gradient descent, and mini-batch gradient descent. Among the non-gradient descent methods 640 are Momentum, Adagrad, AdaDelta, ADAM (adaptive moment estimation), and so on.
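As an illustration of the optimizer distinction, the following sketch contrasts a plain gradient descent step with a momentum step on a toy quadratic cost. The function names, learning rate, and decay constant are hypothetical choices for illustration only:

```python
def gradient_descent_step(params, grads, lr=0.1):
    """Standard gradient descent: step each parameter against its gradient."""
    return [p - lr * g for p, g in zip(params, grads)]

def momentum_step(params, grads, velocity, lr=0.1, beta=0.9):
    """Momentum: accumulate an exponentially decaying running sum of
    gradients and step against that, smoothing the descent direction."""
    velocity = [beta * v + g for v, g in zip(velocity, grads)]
    return [p - lr * v for p, v in zip(params, velocity)], velocity

# Minimize the toy cost f(p) = (p - 3)**2, whose gradient is 2*(p - 3).
params, vel = [0.0], [0.0]
for _ in range(200):
    grads = [2.0 * (params[0] - 3.0)]
    params, vel = momentum_step(params, grads, vel)
```

Either stepping rule drives the toy parameter toward the minimizing value of 3; the adaptive methods named above additionally rescale the step per parameter.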
(55)
III. Exemplary Method Embodiment
(56)
(57) In some embodiments, method 800 may be implemented in one or more processing devices (e.g., a digital or analog processor, or a combination of both; a series of computer controllers, each with at least one processor, networked together; and/or other mechanisms for electronically processing information; etc.). The one or more processing devices may include one or more devices executing some or all of the operations of method 800 in response to instructions stored electronically on an electronic storage medium. The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 800.
(58) At operation 805, a thermodynamic model is received. The thermodynamic model may have been stored in memory, and so may be received from the processing device that the model is being run on. In some implementations, the thermodynamic model may be stored within a distributed system, and received from more than one processor within the distributed system, etc. A controlled device is a device that has controls, such as on-off switches, motors, variable controls, etc. such that a computer can modify its behavior. These controls may be wired, wireless, etc.
(59) In some embodiments described herein, in a thermodynamic model, the fundamentals of physics are utilized to model component parts of a structure to be controlled as neurons in a thermodynamic neural network. Some neurons use physics equations as activation functions. Different types of neurons may have different equations for their activation functions, such that a thermodynamic model may have multiple activation functions within its neurons. When multiple components are linked to each other in a schematic diagram, a thermodynamic model is created that models the components as neurons. The values between the objects flow between the neurons as weights of connected edges. These neural networks may model not only the real complexities of systems but also their emergent behavior and the system semantics. Therefore, they may bypass two major steps of the conventional AI modeling approaches: determining the shape of the neural net, and training the neural net from scratch.
(60) As the neurons are arranged in order of an actual system (or set of equations) and because the neurons themselves comprise an equation or a series of equations that describe the function of their associated object, and certain relationships between them are determined by their location in the neural net, a huge portion of training is no longer necessary, as the neural net itself comprises location information, behavior information, and interaction information between the different objects represented by the neurons. Further, the values held by neurons in the neural net at given times represent real-world behavior of the objects so represented. The neural net is no longer a black box but itself contains important information. This neural network structure also provides much deeper information about the systems and objects being described. Since the neural network is physics- and location-based, unlike the conventional AI structures, it is not limited to a specific model, but can run multiple models for the system that the neural network represents without requiring separate creation or training.
(61) In some embodiments, the neural network that is described herein chooses the location of the neurons to tell you something about the physical nature of the system. The neurons are arranged in a way that references the locations of actual objects in the real world. The neural network also may build actual equations that can be used to determine object behavior into the activation functions of the neurons. The weights that move between neurons may be equation variables that are used within the activation functions. Different neurons may have unrelated activation functions, depending on the nature of the model being represented. In an exemplary embodiment, each activation function in a neural network may be different.
(62) As an exemplary embodiment, a pump could be represented in a neural network as a network neuron with multiple variables (weights on edges), some variables that represent efficiency, energy consumption, pressure, etc. The neurons will be placed such that one set of weights (variables) feeds into the next neuron (e.g., with equation(s) as its activation function) that uses those variables. Unlike other types of neural networks, two required steps in earlier neural network versions, shaping the neural net and training the model, may already be performed. Using embodiments discussed herein, the neural net model need not be trained on some subset of information that is already known. In some embodiments, the individual neurons represent physical representations. Individual neurons may hold parameter values that help define the physical representation. As such, when the neural net is run, the parameters helping define the physical representation can be tweaked to more accurately represent the given physical representation.
(63) This has the effect of pre-training the model with a qualitative set of guarantees, as the physics equations that describe objects being modeled are true, which saves having to find training sets and using huge amounts of computational time to run the training sets through the models to train them. A model does not need to be trained with information about the world that is already known. With objects connected in the neural net similar to how they are connected in the real world, emergent behavior arises in the model that, in certain cases, maps to the real world. This model behavior that is uncovered is often otherwise too computationally complex to determine. Further, the neurons represent actual objects, not just black boxes. The behavior of the neurons themselves can be examined to determine behavior of the object, and can also be used to refine the understanding of the object behavior. One example of heterogenous models is described in U.S. patent application Ser. No. 17/143,796, filed on Jan. 7, 2021, which is incorporated herein in its entirety by reference.
(64) At operation 810, an input is received. This input may be state data that affects the system to be controlled 312. That is, this may be weather data that has affected a building during the time sensor data 310 has been gathered.
(65) At operation 815, the desired output curve(s) are received. These are the curves that describe the state that a system to be controlled 305 has registered over a defined period of time. This may be actual sensor data gathered over the same time as the input, or simulated sensor data, for systems to be controlled that have yet to be built.
(66) At operation 820, a thermodynamic model is run. Running the model may entail feedforward: running the input through the model to the outputs over time T(0)-T(n), capturing state output values (within neurons that represent resources that modify state, within neurons that define structure thermodynamic values, etc.) over the same time T(0)-T(n). At operation 825, simulated output curve(s) are output by the thermodynamic model. In some embodiments, the output curve is output 825 successively in timesteps during the model run; in some embodiments, other methods are used.
(67) At operation 830, a cost function is computed using the desired output curve(s) and the model output. The cost function measures the difference between the time series of desired output curve(s) 815 and the simulated output curve(s) output 825 from the thermodynamic model. Details of the cost function are described elsewhere.
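One plausible cost function for operation 830 is mean squared error between the two time series. The text does not mandate a particular formula, so the following is only an illustrative choice with hypothetical variable names:

```python
def cost(desired_curve, simulated_curve):
    """Mean squared error between the desired output curve (sensor data)
    and the simulated output curve over the same timesteps."""
    assert len(desired_curve) == len(simulated_curve)
    diffs = (d - s for d, s in zip(desired_curve, simulated_curve))
    return sum(e * e for e in diffs) / len(desired_curve)

sensor_curve = [20.0, 20.5, 21.0, 21.2]   # desired output curve (sensor data)
model_curve = [20.0, 20.4, 21.2, 21.1]    # simulated output curve
mse = cost(sensor_curve, model_curve)      # mean squared error over 4 timesteps
```

Any differentiable distance between the curves would serve the same role; mean squared error is simply a common default.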
(68) At operation 835, a goal state is checked to determine if a stopping state has been reached. The goal state may be that the cost from the cost function is within a certain value, that the program has run for a given time, that the model has run for a given number of iterations, that a threshold value has been reached (such as the cost function being equal to or lower than the threshold value), or a different criterion may be used. If the goal state has not been reached, then a new set of inputs needs to be determined that is incrementally closer to an eventual answer: a lowest (or highest, or otherwise determined) value for the cost function, as described elsewhere.
(69) At operation 840, if the goal state 835 has determined that a stopping state 850 has been reached, then the model has been substantially trained; that is, the output simulated curve is similar enough to the desired output curve within some range. This method can save as much as 30% of energy costs over adjusting the state when the need arises. If the goal state has not been reached, then the determine new parameter values step 840, modify parameter values in model step 845, the run thermodynamic model step 820, the output simulation curve step 825, and compute cost function step 830 are iteratively performed, which incrementally optimizes the thermodynamic model as represented by the output simulated curve until the goal state 835 is reached.
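The iterative loop of operations 820-845 can be sketched in a few lines. In this illustration, finite differences stand in for backpropagation, the "thermodynamic model" is a one-parameter toy, and all names are hypothetical:

```python
def train(model_fn, params, inputs, desired, lr=0.05, threshold=1e-4, max_iters=1000):
    """Sketch of the operation 820-845 loop: run model, compute cost,
    check goal state, update parameter values, repeat."""
    def cost(p):
        sim = model_fn(p, inputs)        # run model (820) -> simulated curve (825)
        return sum((d - s) ** 2 for d, s in zip(desired, sim)) / len(desired)

    eps = 1e-6
    for _ in range(max_iters):
        c = cost(params)                 # compute cost function (830)
        if c <= threshold:               # goal state reached (835)
            break
        grads = []
        for i in range(len(params)):     # finite-difference gradient estimate
            bumped = list(params)
            bumped[i] += eps
            grads.append((cost(bumped) - c) / eps)
        params = [p - lr * g for p, g in zip(params, grads)]   # modify values (845)
    return params

# Toy "thermodynamic model": one gain parameter applied to each input.
model = lambda p, xs: [p[0] * x for x in xs]
trained = train(model, [0.5], inputs=[1.0, 2.0, 3.0], desired=[2.0, 4.0, 6.0])
```

The loop terminates when the cost falls at or below the threshold, matching the goal-state check described above; the recovered gain approaches the true value of 2.0.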
(70) At operation 845, parameter values within the thermodynamic model are modified. These modifications may be determined by using machine learning. Machine learning techniques may comprise determining gradients of the various variables within the thermodynamic model with respect to the cost function. Once the gradients are determined, gradient methods may be used to incrementally optimize the control sequences. The gradient at a location shows which way to move to minimize the cost function with respect to the inputs. In some embodiments, gradients of the internal variables with respect to the cost function are determined. In some embodiments, internal parameters of the neurons have their partial derivatives calculated. Different neurons may have different parameters. For example, a neuron modeling a pump may have parameters such as density, shaft speed, volume flow ratio, hydraulic power, etc. If the activation functions are differentiable, then backpropagation can be used to determine the partial derivatives, which give the gradient.
(71) After the gradients are determined, the parameter values are optimized to lower the value of the cost function with respect to the specific parameters. This process is repeated incrementally, as discussed elsewhere.
(72) At operation 845, the parameter values within the thermodynamic model that have been optimized are modified within the thermodynamic model. As these parameter values are within neurons, there is not a single input layer that is modified; rather, the individual parameter values that reside within neurons are modified. These parameter values may be set up within the thermodynamic model as inputs to the individual neurons, with the inputs then changed to the new parameter values, or another method may be used, such as individually changing the parameter values through changing database values, etc.
(73) After the parameter values within the thermodynamic model are modified, then the thermodynamic model is rerun with the new parameter values but the same input 810. The thermodynamic model is rerun with new parameter values and the same input until the goal state is reached.
(74)
(75)
(76) In light of the above, in some embodiments, state data that will be used as input into the thermodynamic model 1005 may be gathered for a time prior to the sensor data being collected. The gathered state data 1005 is run through the thermodynamic model 1010 for a while; then, at a given time, the simulated output time-state curve values begin to be collected. In the example shown, time-state state data 1005 is run from t0 to t50 without simulated output curves being collected from the thermodynamic model 1010. As depicted in
(77)
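The warm-up procedure above can be sketched as follows, with a toy relaxation model standing in for the thermodynamic model 1010 and hypothetical names throughout; simulated outputs before the warm-up cutoff are simply discarded:

```python
def run_with_warmup(step_fn, state, state_data, warmup_steps=50):
    """Run the model over the full time-state data, discarding simulated
    outputs for the first warmup_steps timesteps (t0 to t50 in the
    example) so collection starts after the model state has settled."""
    outputs = []
    for t, datum in enumerate(state_data):
        state = step_fn(state, datum)
        if t >= warmup_steps:             # collect only after warm-up
            outputs.append(state)
    return outputs

# Toy stand-in model: state relaxes toward the driving input each step.
step = lambda s, x: s + 0.1 * (x - s)
state_data = [10.0] * 100                 # constant driving condition
curve = run_with_warmup(step, 0.0, state_data, warmup_steps=50)
```

By the time collection begins, the toy state has nearly converged to the driving condition, which is the purpose of warming the model up before comparing its outputs against sensor data.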
(78) For example, let us assume that Neuron 1 is a pump, Neuron 2 is an electric boiler, and Neuron 3 is a heating coil. Neuron 4 1440, Neuron 5 1455, and Neuron 6 1460 are neurons from other portions of the neural network. For example, Neurons 4, 5, and 6 may send signals to turn on their downstream devices, etc. In this example, water 1420 flows from the pump to the boiler, and then to the heating coil. This water 1420 may have, as inputs, parameters with values such as temperature, mass flow rate, and pressure, for the three inputs shown 1420. These inputs describe state or other types of values flowing through the system modeled by the neural network 1400.
(79) Neurons may have other inputs, such as parameter values that represent physical constants of the objects being modeled. These inputs may be permanent inputs that describe the composition of the matter being modeled, physical characteristics of an object being modeled, etc. Changing these parameter values (e.g., 330) may change the way the modeled physical object behaves. For example, a pump may have inputs that describe its actual function; in this illustrative embodiment, these are used exclusively by the neuron they are attached to. Their parameter value 330 is passed along an edge, e.g., 1425, to their connected neuron. The value is used by the activation function of the neuron but, in some embodiments, is not otherwise passed on.
(80) The three inputs 1420 are modified in Neuron 1 by its activation function which models pump behavior, and then, in this case, exit 1450 with different parameter values. The activation function may use Parameter A 1430. These input parameters 1420 with their new values are then used as inputs in the next neuron downstream, Neuron 2 1410, which then passes to Neuron 3 1415. Neuron 3 1415 then outputs, e.g., as heated air 1445, etc. The activation function for Neuron 2 may use the parameter B value 1440; the activation function in Neuron 3 may use Parameter C value 1465, and so on.
(81) Some machine learning methods use forward and back propagation to run the thermodynamic model 820. During forward propagation, in some embodiments, data is fed from the inputs, through the neurons in the direction of the arrows, to the outputs. Values of Parameters A 1430, B 1440, and C 1465 will not be changed during feedforward, as there are no inputs into these. The activation function may be calculated using all parameters present in the neuron.
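The feedforward pass through the Neuron 1, Neuron 2, Neuron 3 chain might look like the following sketch. The equations and parameter effects here are invented for illustration and are far simpler than real pump, boiler, or coil physics; Parameters A, B, and C are fixed during the pass, exactly as described above:

```python
# Each activation function transforms the (temperature, mass flow,
# pressure) tuple using its own fixed parameter, which is not changed
# during feedforward.  All equations are hypothetical simplifications.

def pump(inputs, param_a):
    t, flow, pressure = inputs
    return (t, flow, pressure + param_a)          # pump raises pressure

def boiler(inputs, param_b):
    t, flow, pressure = inputs
    return (t + param_b, flow, pressure)          # boiler heats the water

def heating_coil(inputs, param_c):
    t, flow, pressure = inputs
    heat_delivered = param_c * (t - 20.0)         # heat given up to room air
    return (t - heat_delivered / flow, flow, pressure)

water = (40.0, 2.0, 100.0)    # temperature (C), mass flow (kg/s), pressure (kPa)
water = pump(water, param_a=50.0)                 # Neuron 1
water = boiler(water, param_b=15.0)               # Neuron 2
water = heating_coil(water, param_c=0.4)          # Neuron 3
```

Each neuron's output tuple becomes the next neuron's input, which is the edge-weight flow the text describes for the water 1420 moving through the chain.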
(82)
(83) After the partial derivatives are taken, a portion of the input data is optimized to lower the cost function. Optimizers, as discussed earlier, are algorithms that are used to change the parameters within the neural network to reduce the cost function. In some cases, gradient descent is performed only for the parameter values that represent physical constants (e.g., 330). For example, inputs of type 1 only may be determined by an optimizer, or inputs of type 2 only may be determined by an optimizer. For example, Parameters A 1430, B 1440, and C 1465 have to do with the physical nature of the system, in this case, a pump-boiler-heating coil system. Optimizing them improves the ability of the corresponding neural network to more closely model the system behavior.
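The selective optimization described here, adjusting only the physical-constant parameter values (e.g., 330) while the input signals stay fixed, can be sketched as follows. Gradients are estimated by finite differences for illustration, and the pump cost and efficiency parameter are hypothetical:

```python
def selective_descent(cost_fn, physical_params, fixed_inputs, lr=0.01, iters=200):
    """Gradient descent applied only to the physical-constant parameter
    values; the input signals in fixed_inputs are never modified."""
    params = list(physical_params)
    eps = 1e-6
    for _ in range(iters):
        base = cost_fn(params, fixed_inputs)
        for i in range(len(params)):
            bumped = list(params)
            bumped[i] += eps
            grad = (cost_fn(bumped, fixed_inputs) - base) / eps
            params[i] -= lr * grad        # update physical constants only
    return params

# Toy cost: mismatch between measured pump output and a simulated pump
# whose (hypothetical) efficiency parameter starts out wrong.
measured = [0.7 * x for x in (1.0, 2.0, 3.0)]     # true efficiency is 0.7
def pump_cost(params, xs):
    efficiency = params[0]
    return sum((efficiency * x - m) ** 2 for x, m in zip(xs, measured))

learned = selective_descent(pump_cost, [0.2], (1.0, 2.0, 3.0))
```

The optimizer recovers the true efficiency from the measurements while leaving the operating inputs untouched, which is the sense in which optimizing the physical parameters makes the network model the system more closely.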
(84) The networks described herein may be heterogenous neural networks as described with reference to
(85) In view of the many possible embodiments to which the principles of the disclosed invention may be applied, it should be recognized that the illustrated embodiments are only preferred examples of the invention and should not be taken as limiting the scope of the invention. Rather, the scope of the invention is defined by the following claims. We therefore claim as our invention all that comes within the scope and spirit of these claims.