METHOD OF PROTECTING A COMMUNICATION NETWORK
20180013783 · 2018-01-11
Abstract
A method of determining a quantitative measure of the danger (a Trust/Risk) that a select network entity poses to the security and integrity of a communications network, the method includes setting a plurality of parameters. The parameters define the degree to which various behaviors within the communications network are considered usual or anomalous. Actual behavior of the select network entity is observed by watching network traffic using network packet-collection, recording packet properties, and using the packet properties to associate a select packet with the select network entity. Self-report messages broadcast by the select network entity are also observed. The Trust/Risk of the select network entity is then determined based on a comparison of the actual behavior to the self-report message and a comparison of the actual behavior to the plurality of parameters.
Claims
1. A method of protecting a communications network by determining a quantitative measure of the danger (a Trust/Risk) that a select network entity poses to the security and integrity of the communications network, the method comprising: observing an actual behavior of the select network entity by: watching network traffic using network packet-collection; recording packet properties; and using the packet properties to associate a select packet with the select network entity; observing a self-report message broadcast by the select network entity; setting a plurality of parameters defining the degree to which various behaviors within the communications network are considered usual or anomalous; and determining the Trust/Risk of the select network entity based on: a comparison of the actual behavior to the self-report message; and a comparison of the actual behavior to the plurality of parameters.
2. The method of claim 1, further comprising: observing a plurality of actual behaviors of a plurality of network entities; and broadcasting, by the plurality of network entities, reports of the actual behaviors of others of the plurality of network entities, wherein determining the Trust/Risk of the select network entity is further based on the reports of the actual behaviors of the plurality of network entities.
3. The method of claim 2 further comprising: identifying, with respect to each actual behavior observed, an observance time representing the time at which the actual behavior was observed, wherein determining the Trust/Risk of the select network entity further includes applying a Dynamic Forgetting Algorithm discounting actual behaviors based on respective observance times such that the actual behaviors are given less weight when their respective observance times are further in the past.
4. The method of claim 3 wherein the Dynamic Forgetting Algorithm is designed to avoid exploitation attempts by accounting for anomalous actual behaviors which consistently repeat.
5. The method of claim 4, further comprising: assigning an importance level to each of the plurality of network entities, the importance level quantitatively characterizing the importance of that respective network entity to the security and integrity of the communications network; and determining the Trust/Risk of each of the plurality of network entities based on: a comparison of the actual behavior to the self-report message of that respective entity; a comparison of the actual behavior of the respective network entity to the plurality of parameters; and the importance level of the respective network entity.
6. The method of claim 5, further comprising: determining a Trust/Risk value for a target entity based on: a comparison of actual behavior of the target entity to a self-report message of the target entity; and a comparison of the actual behaviors of the target entity to the plurality of parameters; and constructing a Threat Graph, the Threat Graph being machine readable and providing a Threat Value, the Threat Value being a quantitative representation of the danger posed by the select entity to the target entity, wherein determining the Trust/Risk values for the select entity and the target entity is further based on the Threat Value.
7. The method of claim 6 further comprising: applying condition-action logic, by a processing module, to determine an action to take based on the Trust/Risk of the select network entity to avoid harm to the communications network.
8. The method of claim 5 further comprising: associating the select entity with at least one role identifier, wherein determining the Trust/Risk of the select network entity is further based on whether the actual behavior of the select entity is an expected behavior based on the at least one role identifier.
9. The method of claim 8 wherein determining the Trust/Risk further comprises evaluating effectiveness of previously determined levels of Trust/Risk and past methods of determining the previously determined levels of Trust/Risk.
10. A method of protecting a communications network by determining a quantitative measure of the danger (a Trust/Risk) that a select network entity poses to the security and integrity of the communications network, the method comprising: determining at least one role identifier associated with the select network entity; setting a plurality of parameters defining the degree to which various behaviors within the communications network are considered usual or anomalous; observing an actual behavior of the select network entity by: watching network traffic using network packet-collection; recording packet properties; and using the packet properties to associate a select packet with the select network entity; identifying a plurality of similar network entities, the plurality of similar network entities sharing at least one role identifier with the select network entity; observing a neighbor behavior associated with each of the plurality of similar network entities by: watching network traffic using network packet-collection; recording packet properties; and using the packet properties to associate a neighbor packet with the similar network entity; and determining the Trust/Risk of the select network entity based on: a comparison of the actual behavior of the select entity and the neighbor behavior of at least one of the plurality of similar network entities; and the plurality of parameters.
11. The method of claim 10 wherein: the select network entity is a first sensor, measuring an observable feature; and at least one of the plurality of similar network entities is a second sensor that measures the observable feature measured by the first sensor.
12. A system for safely running a network comprising: a processor coupled to a network interface and memory containing computer-readable code, such that when the computer-readable code is executed by the processor, the processor performs the following operations: observing a plurality of behaviors, each behavior associated with a network entity, wherein observing the behaviors includes: receiving a plurality of packets from the network interface; assigning each of the plurality of packets to one of the network entities based on identifying information in the packet; and recording information about the packet in a data structure indexed for each network entity; identifying a plurality of self-reports corresponding to each network entity from the plurality of network packets; determining a Trust/Risk value for each of the network entities based on a divergence between the behavior associated with the respective network entity and the self-report from the respective network entity; generating a results report based on a degree of anomaly of at least one of the network entities, the degree of anomaly calculated by comparing the Trust/Risk of the respective network entities to predefined statistical parameters to evaluate the degree to which said entities' behavior is usual or anomalous; using the degree of anomaly to determine whether a warning should be issued; and recording the Trust/Risk value of each network entity in persistent storage.
13. The system of claim 12, further comprising: observing, by the network entities, behaviors of neighbor network entities; and reporting, by the plurality of network entities, neighbor reports related to the behavior of the neighbor network entities, wherein determining the Trust/Risk of each network entity is further based on a comparison between the behavior of the network entity, the self-reports of the network entity, and the neighbor reports.
14. The system of claim 13 wherein the step of observing the plurality of behaviors further includes applying a Dynamic Forgetting Algorithm discounting the behaviors based on respective observance times, the behaviors given less weight when their respective observance times are further in the past.
15. The system of claim 14 wherein the Dynamic Forgetting Algorithm is designed to avoid exploitation attempts by accounting for anomalous behaviors which are consistently repeated by one of the network entities.
16. The system of claim 12 wherein: at least one of the network entities is a first sensor, the first sensor measuring an observable feature; at least one of the network entities is a second sensor that measures the observable feature measured by the first sensor; and determining a Trust/Risk for the first sensor further includes comparing the behavior of the first sensor with behavior of the second sensor.
17. The system of claim 15, wherein: an importance level is assigned to each of the plurality of network entities, the importance level quantitatively characterizing the importance of that respective network entity to the security and integrity of the network; and determining the Trust/Risk of each network entity is further based on the importance level assigned to that respective network entity.
18. The system of claim 17, further comprising: constructing a Threat Graph, the Threat Graph being machine readable and providing a Threat Value, the Threat Value being a quantitative representation of the danger posed by an attacking entity to a target entity, the attacking entity and target entity being part of the plurality of network entities, wherein determining the Trust/Risk for each of the plurality of network entities is further based on the Threat Value of that respective network entity.
19. The system of claim 17 further comprising: associating each of the network entities with at least one role identifier, wherein determining the Trust/Risk of each of the network entities is further based on whether the behavior of the respective network entity is an expected behavior based on the at least one role identifier associated with the respective network entity.
20. The system of claim 17 further comprising: applying condition-action logic, by a processing module, to determine an action to take based on the Trust/Risk of each of the network entities to avoid harm to the network.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] So that those having ordinary skill in the art to which the disclosed system pertains will more readily understand how to make and use the same, reference may be had to the following drawings.
DETAILED DESCRIPTION
[0023] The subject technology overcomes many of the prior art problems associated with operating a communications network. In brief summary, the subject technology provides a system and method where anomalous or dangerous activity is identified within a communications network and appropriate remedial action can be taken. The advantages, and other features of the systems and methods disclosed herein, will become more readily apparent to those having ordinary skill in the art from the following detailed description of certain preferred embodiments taken in conjunction with the drawings which set forth representative embodiments of the subject technology. Like reference numerals are used herein to denote like parts.
[0024] As used herein, certain terms and phrases of art are defined as follows:
[0025] “Trust” is a dynamic (time-varying) quantitative measure (a number or an ordered set of numbers) of how reliably we expect a network entity (such as a user, workstation, server, device, or service) to behave based on prior behaviors.
[0026] “Threat” represents information inherent in the network that measures the damage that could be done if a particular network entity (such as a node or set of nodes) is attacked. It is a predicted property.
[0027] “Risk” is a measure that is computed based on the predicted value of threat and the dynamic value of trust.
[0028] “Behavioral Trust Measure” (or “BTM”) is a measure of the assurance that an entity in the network will “play fair”, i.e. will follow the rules and cooperate rather than taking advantage of network resources for its own self-interest. Risk and Resilience Metric (“RRM”) is a value that varies inversely with Trust; it is a measure of the likelihood that an entity has become a bad network citizen, likely to do some sort of harm to the functioning of the network. The combination of the two measures is called BTM/RRM, or more descriptively, “Trust/Risk.”
[0029] “Trust/Risk” is a quantitative measure of the danger that a particular network entity poses to the integrity and security of the network under observation. The Trust/Risk model that we have developed has three essential components: mathematical properties, multi-dimensional Trust/Risk metrics, and behavior detection.
[0030] Mathematical Properties of Trust/Risk. Trust/Risk is established based on observed behaviors of an entity. In order to quantitatively evaluate Trust/Risk we identify mathematical properties of Trust/Risk values. Direct Trust/Risk is established when the behavior of entity A is directly observed. For instance, if a router self-reports that it has forwarded ten thousand packets to a particular subnet in the past thirty minutes, but a packet sniffer on that subnet only sees five thousand packets, then trust in the router is reduced and its risk score is raised. Similarly, but in a different network environment, if sensor A self-reports to a local programmable logic controller (PLC) that it has detected a power fluctuation at a particular time, but other sensors reporting to the same PLC contradict this report, then the true alarm Trust/Risk of sensor A will be reduced.
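This direct-observation comparison can be sketched as follows; the function name, the tolerance parameter, and the normalization are illustrative assumptions rather than part of the disclosure:

```python
def direct_trust(self_reported, observed, tolerance=0.1):
    """Return a direct-trust value in [0, 1] from how closely an
    entity's self-report matches an independent observation."""
    if self_reported == 0 and observed == 0:
        return 1.0
    # Relative divergence between the self-report and the observation.
    divergence = abs(self_reported - observed) / max(self_reported, observed)
    return 1.0 if divergence <= tolerance else max(0.0, 1.0 - divergence)

# Router self-reports 10,000 forwarded packets; the sniffer sees only
# 5,000, so direct trust drops and the risk score rises accordingly.
trust = direct_trust(10_000, 5_000)
```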
[0031] Trust/Risk is a dynamic characteristic. A good entity may be compromised and turned into a malicious one, while an incompetent entity may become competent due to environmental changes. In order to track these dynamics, an observation made a long time ago should not carry the same weight as one made recently. We have developed a dynamic forgetting scheme (or “Dynamic Forgetting Algorithm”) that allows an entity's Trust/Risk value to be redeemed with time and with subsequent good behaviors. The dynamic forgetting scheme allows for a single bad behavior to be forgotten more quickly than multiple bad behaviors. We have further developed a value called predictability Trust/Risk, which allows us to take into account a smart attacker who might try to take advantage of the dynamic forgetting scheme by behaving well and badly in an alternating pattern. We apply all of these mathematical properties to our Trust/Risk computation.
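A minimal sketch of a dynamic forgetting scheme of this kind, assuming exponential decay whose half-life stretches with the number of times a behavior repeats (the decay form, half-life, and function names are assumptions, not the disclosed algorithm):

```python
import math

def forgetting_weight(age_hours, repeat_count=1, half_life=24.0):
    """Weight an observation by its age: recent observations count
    more, and behaviors that repeat decay more slowly, so an attacker
    alternating good and bad behavior cannot erase a persistent
    pattern simply by waiting."""
    effective_half_life = half_life * repeat_count
    return math.exp(-math.log(2) * age_hours / effective_half_life)

def weighted_trust(observations):
    """observations: list of (trust_score, age_hours, repeat_count)."""
    weights = [forgetting_weight(age, rep) for _, age, rep in observations]
    total = sum(weights)
    return sum(w * s for w, (s, _, _) in zip(weights, observations)) / total
```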
[0032] Multi-dimensional Trust/Risk Metrics. Trust/Risk can be multi-dimensional. That is, it can be determined by more than one behavior that the entity performs, and therefore, it may require aggregation of several types of Trust/Risk value. For example, in a Smart Grid electrical system, a sensor node, like a phasor measurement unit (“PMU”), is designed to detect and report on fluctuations in the power that is being transmitted. Thus, we could compute the following types of Trust/Risk based on the behaviors of these sensor nodes: detection Trust/Risk represents how much we trust a sensor to detect and report a power fluctuation; false alarm Trust/Risk represents how much we trust a sensor to report only when a power fluctuation is detected (no false alarms); availability Trust/Risk represents how much we trust a sensor to respond to a request for a report; and overall Trust/Risk is computed as an aggregate of all of the above Trust/Risk values. Overall Trust/Risk can be used to make decisions about how to self-heal a network, and it can be used by a human operator or by a software decision function to choose remedial actions.
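The aggregation of the per-behavior values for a sensor node into an overall Trust/Risk might be sketched as a weighted average; the particular weights here are illustrative assumptions:

```python
def overall_trust(detection, false_alarm, availability,
                  weights=(0.4, 0.4, 0.2)):
    """Aggregate detection, false-alarm, and availability Trust/Risk
    values into a single overall score (weights are illustrative)."""
    return sum(w * v for w, v in
               zip(weights, (detection, false_alarm, availability)))
```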
[0033] Behavior Detection. In order for Trust/Risk to be utilized, the behaviors of the entities in the system must be monitored and bad behaviors must be detected. It is important to note here that in most detection scenarios, it is difficult to distinguish between a malicious behavior and an error in the system when examining each behavior in isolation. The subject technology will consider patterns of behavior as well as behavior of groups of network entities to make ultimate determinations of maliciousness. We define three levels of behavior detection. These three levels are used in conjunction with each other to provide a basis for computation of Trust/Risk for an entity.
[0034] The first level of behavior detection is known as range-based detection. This assumes that a range of expected values is known for a given entity. If the entity self-reports values that are well outside the expected range, this may be considered an indication of a bad behavior. For instance, a public-facing Web Server that self-reports receiving only a few dozen requests per hour is likely to be either compromised or malfunctioning, and a small business's email server should not be sending many thousands of emails per day. Similarly, in a Smart Grid system, if a sensor near a transmission substation is expected to report voltage values within a certain range, but at some point self-reports a very high voltage, outside of the expected range, this is considered a bad behavior.
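Range-based detection reduces to a bounds check on each self-reported value; the request-rate figures below are illustrative, not taken from the disclosure:

```python
def range_based_anomaly(value, expected_min, expected_max):
    """Flag a self-reported value that falls outside its expected range."""
    return not (expected_min <= value <= expected_max)

# A public-facing web server expected to see 500-50,000 requests per
# hour that self-reports only 30 is flagged as a possible bad behavior.
suspicious = range_based_anomaly(30, 500, 50_000)
```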
[0035] The second level of detection is known as consistency-based detection, and it relies on the fact that network observations tend to provide redundant information. A simple example is found in sensor networks, using the values of multiple sensors (e.g. multiple temperature readings) to determine if there is some inconsistency in the self-reports. When the self-reports of sensors that are in close proximity, or sensors that are reporting on the same events, are inconsistent with each other, there is a good chance that some of those sensors are malicious. From this information alone, it may be difficult to determine which of the sensors is correct, but this type of detection can be used along with the other types to help identify the malicious behavior.
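A minimal consistency check of this kind, using the group median of redundant readings as the reference and a fixed tolerance (both assumptions for illustration):

```python
import statistics

def group_consistency(readings, tolerance=2.0):
    """Mark each sensor's self-report consistent (True) if it lies
    within `tolerance` of the median of the redundant readings."""
    med = statistics.median(readings.values())
    return {sensor: abs(value - med) <= tolerance
            for sensor, value in readings.items()}
```

For example, three co-located temperature sensors reporting 20.1, 19.8, and 35.0 would flag the third as inconsistent with the group, though this alone does not prove which sensor is malicious.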
[0036] The third type of behavior detection is known as environmental observation detection. Here an appropriate example can be found in a heterogeneous network containing Smart Grid sensors and intelligent network devices; if a sensor indicates that power has been cut from a section of the grid, but there are devices in that section that continue to report normal operation, this is an indication that the sensor is reporting incorrectly.
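The environmental-observation example above can be sketched as a simple rule; the status strings are hypothetical:

```python
def environmental_check(sensor_reports_outage, device_statuses):
    """If a grid sensor reports a power cut but devices in the same
    grid section still report normal operation, the sensor's report
    is incompatible with the observed environment."""
    if sensor_reports_outage and "normal" in device_statuses:
        return "incompatible"
    return "consistent"
```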
[0037] A “communication network” is an assemblage of computers and their peripherals, electronic switching and routing devices, physical data channels, sensors and actuators, mobile devices with computing capability, and users, as well as one or more sets of protocols and procedures, in which information is communicated between and among devices and users in the form of packets transmitted on physical data channels.
[0038] An “information resource” is an object such as a sensor, file, directory, database, or processing service that can provide useful information to another user, device, or service using the network.
[0039] A “network entity” (or “network element”) is any component of a communication network that has a persistent identity; examples include, but are not limited to, users, workstations, servers, routers, firewalls, files, databases and data stores, software such as Web, file, and database servers, applications and services, sensors and actuators, data channels, and network packets.
[0040] A “network asset” (or “asset”) is a network entity that has a semantic description associated with it, such as a user's name and/or job title, a router's manufacturer, model number, and location, or a server's type (e.g. Web, database, or file) and a description of its content. Assets may be represented in many ways, including but not limited to key-value data structures, relational database records, and labeled property graphs.
[0041] A “physical data channel” is a combination of a physical medium (including free space as a medium for electromagnetic radiation) and a method for transmitting and receiving an information signal on that medium; example physical data channels include but are not limited to Ethernet, 802.11.x, Bluetooth, microwave, satellite link, and other wireless methods, optical fiber, broadband cable, telephone circuits, and data buses that are typically used for high-speed short-range communication within a computer or between modules in a rack enclosure; less commonly, acoustic waves through water can serve as a physical data channel, and a USB thumb drive that is physically carried from one computer to another can be considered a physical data channel.
[0042] A “neighbor entity” is a network entity whose behavior can be observed by a data collector; this typically means that the data collector and its neighbor entities are connected to the same physical data channel.
[0043] A “communication protocol” (or “protocol”) is a set of standard message formats and procedures that are used to exchange information between network entities across a physical data channel. Protocols may define, among other things, the physical representation of an item of information (for instance as an electronic waveform), a method for detecting and correcting errors in transmission, a method for ensuring reliable delivery of information, a method for assigning names or addresses to network entities, or a method for encrypting and decrypting information; this list is non-exhaustive.
[0044] A “domain” (or “enclave”) is a network or a region of a network that shares a common set of communication protocols. In some embodiments of the subject technology the network owners or managers may choose to define additional features that distinguish or set boundaries on domains, e.g. they may define the domain of network entities in a particular department or entities having a particular set of network addresses, but every such restricted domain will satisfy the definition of “domain”, i.e. it will share a common set of protocols. Where it serves the interests of clarity, such restricted domains will be called “subdomains.”
[0045] A “role identifier” refers to characteristics that define the role of network entities within the communications system. For example, a certain network entity may be associated within the network of an organization with a role identifier that defines it as an employee or a customer. Likewise, role identifiers can be used to identify a network entity as an information block, a server, an application, a service, or an infrastructure device, as examples.
[0046] The subject disclosure relates generally to determining quantitative measures of danger that a select network entity poses to the integrity and security of the network under observation (Trust/Risk).
[0047] In the algorithms that implement the subject technology, Trust values are normalized to take values between 0 and 1 inclusive, with 1 representing absolute Trust and 0 representing absolute Mistrust, i.e. certainty that the entity in question will behave badly or incorrectly. Threat is a non-negative number whose range is arbitrarily bounded above. The chosen upper bound serves as a scaling factor to make the resulting Trust/Risk values useful for reporting and visualization.
[0048] In some embodiments of the subject technology, Threat values are assigned to each network entity to represent how important the entity is to the integrity and security of the network. In other embodiments, Threat is assigned to pairs of network entities to account for the possibility that attacks on an entity T from an entity S may be more damaging than attacks on T from a different entity R—for instance, the attacker S may have more access privilege than R, or it may be less trusted than R based on other observations. If Threat values are assigned to pairs of entities rather than to each entity separately, the structure that matches entity pairs to Threat values is called a threat graph. A threat graph may be represented in numerous ways, including but not limited to (1) a connectivity matrix in which the rows represent potential attackers S and columns represent potential targets T, and the matrix entries are numbers representing the Threat that S poses to T; (2) a directed property graph in which the edge between S and T is labeled with the value of the Threat that S poses to T.
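Representation (1), the connectivity matrix, might be sketched as a sparse dict-of-dicts; the entity names and Threat values below are hypothetical:

```python
# threat_graph[S][T] is the Threat that attacker S poses to target T,
# a sparse analogue of the connectivity-matrix representation.
threat_graph = {
    "S": {"T": 8.0, "R": 1.0},
    "R": {"T": 2.0},
}

def threat_posed(graph, attacker, target):
    """Look up the Threat an attacker poses to a target; absent
    entries mean no modeled threat."""
    return graph.get(attacker, {}).get(target, 0.0)
```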
[0049] The equation to calculate Trust/Risk in a network is provided here. For a single entity attack on entity S:
Trust/Risk(S)=(1−Trust(S))*Threat(S) (1)
[0050] For multiple entity attacks, Trust/Risk is computed in two steps. First we compute Trust/Risk of individual entities using Equation (1). Then we adjust the Trust/Risk values according to links in the threat graph. If entities S and T are connected on the threat graph, let W(S,T) denote the weight of the link between S and T. We adjust the Trust/Risk calculation for the entities using Equation (2).
Trust/Risk(S)=Trust/Risk(S)+Trust/Risk(T)*W(S,T)*α
Trust/Risk(T)=Trust/Risk(T)+Trust/Risk(S)*W(S,T)*α (2)
[0051] where α is an Attack Severity Parameter determined by the threat model.
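Equations (1) and (2) can be sketched directly; note that both adjustments in Equation (2) use the pre-update Trust/Risk values, as in a simultaneous update:

```python
def single_entity_risk(trust, threat):
    # Equation (1): Trust/Risk(S) = (1 - Trust(S)) * Threat(S)
    return (1.0 - trust) * threat

def adjust_linked_risk(risk_s, risk_t, weight, alpha):
    # Equation (2): propagate risk across a threat-graph link of
    # weight W(S,T), scaled by the attack severity parameter alpha.
    return (risk_s + risk_t * weight * alpha,
            risk_t + risk_s * weight * alpha)
```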
[0052] Referring to
[0053] The data collector 100 may be embodied in one or more dedicated stand-alone devices, or as a software component of a larger system, or as a combination of stand-alone devices and components. The data acquired by the collector includes, but is not limited to, full or partial network packets (headers and/or payloads) and flows from the NUO 201, full or partial content of application and system log files from elements in the NUO 201, full or partial directory information from domain controllers and authentication services in the NUO 201, and expert-provided metadata about the configuration and organization of the NUO 201. In some cases in which the NUO 201 is heterogeneous, containing network enclaves that employ diverse technologies and protocol families, the data collector 100 is specialized for the purpose of ingesting network traffic and user and device behavior within a network enclave (a set of network elements sharing a common suite of channel technologies and protocols), converting some or all of the ingested data to a standardized format, and forwarding it to one or more concentrator systems 120.
[0054] The atomic Trust/Risk analytics module 110 may be embodied in one or more dedicated stand-alone devices, as a software component colocated with a data collector 100, as a software component of a larger system, or as a combination of stand-alone devices and components.
[0055] In some embodiments, the atomic analytics module 110 will receive streamed data from one or more data collectors 100 and will perform range-based behavior detection on each stream separately. Examples of data streams can be one of a number of observable features, a non-exhaustive list of such data streams including: sensor data readings such as temperature; pressure; electric voltage or current; frequency; velocity; acceleration; pH; light or sound intensity; properties of network packets and flows including but not limited to packet arrival times; flow times and durations; packet and flow sizes; source and destination addresses and ports; system events including but not limited to logon attempts, authentication successes and failures, CPU, memory, and disk utilization, and user account, file, and directory creation, modification, and deletion; and application-specific performance and behavior indicators such as transaction rates, response times, resource consumption, and partial or complete content of log files. The atomic analytics module 110 will perform range-based detection in one of two ways; first by comparing the value of each data item with predefined or learned threshold values for that item type, and second, by using a statistical or machine learning algorithm to detect values that remain within the thresholds, but whose statistical distribution has changed significantly. These “drift alerts” can indicate a gradually failing component or a stealthy under-the-radar attack. In embodiments of the atomic analytics module that are hosted on systems with sufficient storage and processing capability, Machine Learning methods may be used to learn the appropriate threshold values. The output of the atomic analytics module 110 is a numeric Range-based Detection Trust/Risk value representing the estimated likelihood that the network element being observed is trustworthy.
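A “drift alert” of the kind described might be sketched as a z-test of the recent mean against a learned baseline; the three-standard-error threshold is an assumption, not part of the disclosure:

```python
import math
import statistics

def drift_alert(baseline, recent, z_threshold=3.0):
    """Flag values that remain within thresholds but whose mean has
    shifted significantly from the baseline distribution, as with a
    gradually failing component or a stealthy attack."""
    mu = statistics.mean(baseline)
    sd = statistics.pstdev(baseline) or 1.0
    # z-score of the recent mean relative to the baseline mean.
    z = abs(statistics.mean(recent) - mu) / (sd / math.sqrt(len(recent)))
    return z > z_threshold
```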
[0056] The group analytics module 115 may be embodied in one or more stand-alone devices as a software component colocated with one or more data collectors 100, as a software component of a larger system, or as a combination of stand-alone devices and components. In some embodiments, the domain group analytics module receives streamed data from one or more data collectors 100 such that the streamed data represents observations of the same feature from multiple sources, such as temperature readings from multiple thermal sensors. The group analytics module 115 will perform consistency-based detection as described previously, aggregating observations until a specified count or time limit is reached, then determining the statistical distribution of the set of observations and assigning a numeric Consistency-based Detection Trust/Risk score to each observed network entity based on the element's statistical closeness to the group's distribution.
[0057] The data and report concentrator system 120 may be embodied in one or more stand-alone devices, or as a software component of a larger system that may contain one or more data collectors 100, one or more atomic analytics modules 110, and/or one or more group analytics modules 115. In some embodiments the concentrator system receives streamed data from one or more data collectors 100 such that the streamed data represents observations of different features, parses the received streamed data into formatted records, and writes the records to persistent storage in such form that the data can be retrieved for subsequent processing. The persistent storage mechanism may include a large-scale distributed file system such as Hadoop HDFS, a NoSQL database such as HBase, Accumulo, or MongoDB, a graph database such as Neo4j, or a combination of these methods. The concentrator system 120 will perform environmental observation detection as described herein, comparing information about different observed network features to detect incompatible and possibly deceptive reporting. In some embodiments the system will apply predefined condition-action rules to identify incompatible feature reports. In other embodiments, machine learning methods will be used to discover natural dependencies between different features, and observations that violate those learned dependencies will be flagged as anomalies. In both cases the detected anomalies will be used to reduce the Trust score of the affected network entities.
[0058] The cross-domain analytics (“CDA”) module 125 may be embodied in one or more stand-alone devices, or as a software component of a larger system containing a combination of the previously described components. Recall that a domain is a network or a region of a network that shares a common set of communication protocols. For example, the network in a corporation's headquarters is a domain that shares the TCP/IP protocol suite and probably some form of Ethernet and wireless media-access protocols. If that corporation is a manufacturer, its headquarters network is likely to have some connections to an Industrial Control System (ICS) network that uses proprietary protocols, and that ICS network is a separate domain. In addition, the corporation's facilities management department is very likely to use an Internet-of-Things (IOT) network to monitor and control building operations, using intelligent thermostats, room activity sensors and lighting controls, card-access and RFID readers, and so forth; this IOT network is another separate domain. These multi-domain networks-of-networks pose unique and difficult security challenges, because legacy security tools are designed to protect single domains and very few security analysts are familiar with the protocols, operations, and potential vulnerabilities in more than one domain. The purpose of the CDA module 125 is to alleviate this problem by combining observations from the different domains that make up the network as a whole, and using the Trust/Risk abstraction as a common language for identifying misbehaving network entities in each domain. By viewing all the connected domains in the network together, the CDA module can discover patterns of attack that cross domain boundaries, and that would not be visible to methods that only look at a single domain.
[0059] In some embodiments the CDA module 125 is a processing element of a Big Data storage and processing framework employing parallel distributed processing, in-memory distributed processing, or a combination of those methods. In some embodiments, the CDA module 125 receives data representing a variety of different observable network features from one or more concentrator systems 220 located in separate network domains. Observable network features can include an output generated by the underlying network entity; for example, a temperature sensor can generate observable features in the form of temperature readings. In other embodiments, the CDA module 125 receives historical data from the historian system 130, described in more detail below.
[0060] The historian system 130 may be embodied in one or more stand-alone devices, or as a software component of a larger system containing a combination of the previously described components. In some embodiments of the subject technology, the historian system 130 will ingest some combination of observed data from one or more data collectors 100 and results of the collective analytics systems 121 (including atomic analytics modules 110, group analytics modules 115, concentrator systems 120, and/or cross-domain analytics modules 125). The historian system 130 will index the ingested data by time and Trust/Risk values for efficient retrieval. The purpose of the historian system 130 is to support consistency-based detection and environmental detection over extended periods of time, ranging from weeks to years.
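The historian system's indexing of ingested data by time can be illustrated with a toy in-memory sketch. A production embodiment would use persistent storage as described above; the record fields and class shape here are illustrative assumptions:

```python
import bisect
from collections import namedtuple

Record = namedtuple("Record", ["timestamp", "entity", "trust", "data"])

class Historian:
    """Toy historian: keeps ingested records sorted by timestamp so
    that time-range queries over extended periods are efficient, and
    keeps a secondary per-entity index for Trust/Risk retrieval."""

    def __init__(self):
        self._by_time = []    # records sorted by timestamp
        self._times = []      # parallel list of timestamps for bisect
        self._by_entity = {}  # entity -> list of records in ingest order

    def ingest(self, record):
        pos = bisect.bisect_right(self._times, record.timestamp)
        self._times.insert(pos, record.timestamp)
        self._by_time.insert(pos, record)
        self._by_entity.setdefault(record.entity, []).append(record)

    def time_range(self, start, end):
        """Return all records with start <= timestamp <= end."""
        lo = bisect.bisect_left(self._times, start)
        hi = bisect.bisect_right(self._times, end)
        return self._by_time[lo:hi]

    def trust_history(self, entity):
        """Return the sequence of Trust values recorded for an entity."""
        return [r.trust for r in self._by_entity.get(entity, [])]
```

Consistency-based and environmental detection over weeks or years then reduce to time-range and per-entity queries against this store.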
[0061] The response/remediation system 140 may be embodied in one or more stand-alone devices, or as a software component of a larger system containing a combination of the previously described components. In some embodiments of the subject technology, the response/remediation system 140 will ingest alarm notification messages from one or more of the analytics systems 121, and will apply condition-action rules to select and perform automated remedial actions such as modifying firewall rules to block certain network traffic. In other embodiments, the response/remediation system 140 will ingest alarm data as above and will employ standard network protocols to direct network traffic away from malicious or misbehaving network entities.
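The condition-action selection of remedial actions can be sketched as follows. The alarm fields, thresholds, and action names are illustrative assumptions; a real embodiment would invoke firewall management or routing-protocol interfaces rather than return action tuples:

```python
def remediation_actions(alarm, trust_threshold=0.3):
    """Sketch of condition-action remediation: map an alarm
    notification from the analytics systems to remedial actions.
    Thresholds and action names are hypothetical."""
    actions = []
    if alarm["trust"] < trust_threshold:
        # Severe loss of trust: block the entity's traffic outright,
        # e.g. by modifying firewall rules.
        actions.append(("firewall_block", alarm["entity_ip"]))
    elif alarm.get("misbehavior") == "routing":
        # Misbehaving but not blocked: use standard network protocols
        # to direct traffic away from the entity.
        actions.append(("reroute_around", alarm["entity_ip"]))
    else:
        # Otherwise only notify a human operator.
        actions.append(("notify_operator", alarm["entity_ip"]))
    return actions
```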
[0062] A Human Subject Matter Expert (“SME”) Interface module 150 (or “SME module”) may be embodied as a software component of a system that also contains one or more atomic analytics modules 110, group analytics modules 115, concentrator systems 120, cross-domain analytics modules 125, and/or response/remediation systems 140. The SME module 150 defines and implements an Application Programming Interface (“API”), which is a standard set of message formats and message-exchange protocols. The API enables a human operator to query and set parameters used by the listed analytics modules, such as numeric limits and thresholds for range-based detection, aggregation counts or timers for consistency-based detection, and feedback to machine learning algorithms for environmental detection. Output generated by the SME module 150 generally comes from one or more of the data collector 100, the analytics systems 121, or the response/remediation system 140.
[0063] Referring now to
[0064] After a tracking record for the network entity has been created or retrieved, one or more range-based detection algorithms are applied to the new observed data at step 160. As described earlier, the algorithms may compare the value of each data item with predefined or learned threshold values for that item type, or they may use a statistical algorithm based on Chebyshev's Inequality to detect values that remain within the thresholds, but whose statistical distribution has changed significantly. In some embodiments of the atomic analytics modules 110 that are hosted on systems with sufficient storage and processing capability, Machine Learning methods may be used to learn the appropriate threshold values. The output is a numeric Range-based Detection Trust/Risk value representing the estimated likelihood that the network entity being observed is trustworthy. At step 162, action is taken if the Trust/Risk value indicates that the entity has become a significant danger to the network's security or integrity. For example, tangible output, such as a warning or alarm, can be generated at step 162 to warn a user. In some circumstances, the action taken is to notify an operator by various means, such as sending email or a text message or updating a visual display element. In other cases the action taken may be an automated response action, such as quarantining the offending entity using firewall rules.
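The statistical branch of range-based detection can be sketched using Chebyshev's Inequality, which states that for any distribution with finite mean and variance, P(|X − μ| ≥ kσ) ≤ 1/k². Treating that bound as the likelihood that a deviation is benign, and mapping it directly to a Trust value, is an illustrative assumption rather than the specification's exact formula:

```python
def chebyshev_trust(value, mean, std, k_min=1.0):
    """Range-based detection sketch: score an observation against the
    learned mean and standard deviation for its item type using
    Chebyshev's Inequality, P(|X - mean| >= k*std) <= 1/k**2.
    Returns a Trust value in [0, 1]; mapping the Chebyshev bound
    directly to Trust is a hypothetical choice."""
    if std == 0:
        # Degenerate distribution: any deviation at all is anomalous.
        return 1.0 if value == mean else 0.0
    k = abs(value - mean) / std
    if k <= k_min:
        return 1.0              # within normal variation
    return min(1.0, 1.0 / (k * k))
```

An observation four standard deviations from the mean, for example, yields a Trust contribution of at most 1/16, which downstream logic at step 162 could compare against an alarm threshold.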
[0065] After each processing cycle, the atomic analytics module 110 performs a check to determine whether it is time to send one or more report packets to neighbor entities. The report packet contains the atomic analytics module's 110 self-report of its own behavior since the previous report, including data such as size and number of packets sent and received. It also contains a summary of the observations of neighbor entities that the atomic analytics module 110 has received and processed, and the atomic analytics module's 110 Trust/Risk scores for those neighbor entities. Report packets will be transmitted either periodically on expiration of a timer, or when available space for queueing/buffering of report data is exhausted. Therefore at step 164, the atomic analytics module 110 checks whether the timer has expired, or whether available space has been exhausted, as the case may be. If the timer has not expired and/or space has not been exhausted, the atomic analytics module loops back to step 154 to receive a new report packet. If the timer has expired, or available space has been exhausted, then at step 166 the atomic analytics module transmits the report packet and resets the timer before looping back to step 154 to receive or wait to receive the next report packet.
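The transmit-on-timer-or-buffer-exhaustion check at steps 164 and 166 can be sketched as follows. The capacity and interval values, and the injectable clock, are illustrative assumptions:

```python
import time

class ReportBuffer:
    """Sketch of the report-transmission check: a report packet is
    sent either when a periodic timer expires or when buffer space
    for queued observation data is exhausted. Capacity and interval
    are hypothetical values; `now` is injectable for testing."""

    def __init__(self, capacity=100, interval=30.0, now=time.monotonic):
        self._now = now
        self.capacity = capacity
        self.interval = interval
        self.queued = []
        self.deadline = self._now() + interval

    def add_observation(self, obs):
        self.queued.append(obs)

    def should_transmit(self):
        # Step 164: timer expired, or available space exhausted.
        return len(self.queued) >= self.capacity or self._now() >= self.deadline

    def transmit(self):
        # Step 166: send the queued report data and reset the timer.
        packet, self.queued = self.queued, []
        self.deadline = self._now() + self.interval
        return packet
```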
[0066] Referring now to
[0067] Immediately after step 270, the steps performed by the group analytics module 115 (i.e. steps 262-266) are similar to those of the atomic analytics module 110 of
[0068] Referring now to
[0069] The concentrator system 120 differs in that no consistency test step 268 is performed during the training loop. Instead, the training period loop of steps 354, 358, and 370 is completed before an environmental test and update is performed at step 374 (described in more detail in
[0070] Referring now to
[0071] Referring now to
[0072] Next, at step 418, parameters are defined to determine which behaviors by the network entity will be considered usual and which behaviors will be considered anomalous. In different examples, these parameters can be set by a user, or automatically generated by the analytics systems 121 based on past experience and results. At step 422, the various analytics systems 121 generate a Trust/Risk value for the network entity by applying analytics to the collected data as discussed herein. For example, the analytics systems 121 can compare the actual behavior of the network entity to the plurality of parameters such that the network entity's behavior can be classified as usual or anomalous. Further, the analytics systems 121 can look for differences between the actual behavior of the network entity and the self-report messages of the network entity to help determine the Trust/Risk score for the network entity. In different embodiments, the analytics systems 121 can also apply other methods to further refine the Trust/Risk score, such as comparing the network entity's behavior with the behavior of its neighbors, applying a Dynamic Forgetting Algorithm to discount behaviors that occurred further in the past, and comparing the behavior of the network entity to behaviors of similar network entities based on shared role identifiers. Notably, these are only examples of criteria that can be used in determining Trust/Risk, and ultimately, one or more of the criteria discussed herein can be applied to determine Trust/Risk for a given network entity.
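One way to realize a Dynamic Forgetting Algorithm is an exponentially decayed weighted average over past behavior scores. The exponential-decay weighting and half-life parameter below are illustrative assumptions, not the specification's algorithm:

```python
def dynamic_forgetting_trust(observations, now, half_life=3600.0):
    """Sketch of a Dynamic Forgetting Algorithm: each observation is
    a (timestamp, score) pair with score in [0, 1], and older scores
    are exponentially discounted so that recent behavior dominates
    the resulting Trust value. Half-life in seconds is hypothetical."""
    weighted = total = 0.0
    for timestamp, score in observations:
        age = now - timestamp
        weight = 0.5 ** (age / half_life)  # weight halves every half_life
        weighted += weight * score
        total += weight
    # With no observations, default to full trust (an assumption).
    return weighted / total if total else 1.0
```

Under this weighting, a bad behavior score observed one half-life ago counts only half as much as a score observed now, so an entity's Trust recovers gradually as good behavior accumulates.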
[0073] After Trust/Risk has been determined at step 422, action can then be taken based on the Trust/Risk value of the network entity at step 424. For example, if the network entity is determined to pose a high risk to the network, a network operator can be alerted by a report, e-mail, alarm, or the like. Additionally, or alternatively, if a network entity is determined to pose a great danger to the network, then action can automatically be taken, for example, by prohibiting access of the network entity to the network.
[0074] Finally, the method loop can be repeated for other network entities. At step 426, as the method loops back to be repeated, the Trust/Risk score that was calculated can be stored or used to update parameters and/or algorithms within the analytics systems 121. Particularly, when the actual danger of a network entity is realized, this can be compared with past calculated Trust/Risk scores to determine the effectiveness of the Trust/Risk methods in place. For example, the analytics systems 121, the set parameters, and/or other criteria for determining Trust/Risk can be updated based on the effectiveness of past methods of determining Trust/Risk.
[0075] It will be appreciated by those of ordinary skill in the pertinent art that the functions of several elements may, in alternative embodiments, be carried out by fewer elements or a single element. Similarly, in some embodiments, any functional element may perform fewer, or different, operations than those described with respect to the illustrated embodiment. Also, functional elements (e.g., electronics, modules, networks, systems, alarms, sensors, and the like) shown as distinct for purposes of illustration may be incorporated within other functional elements in a particular implementation.
[0076] While the subject technology has been described with respect to preferred embodiments, those skilled in the art will readily appreciate that various changes and/or modifications can be made to the subject technology without departing from the spirit or scope of the subject technology. For example, each claim may depend from any or all claims in a multiple dependent manner even though such has not been originally claimed.