Patent classifications
H04L67/1048
FAILOVER PORT FORWARDING BETWEEN PEER STORAGE NODES
Systems and methods for failover port forwarding between peer storage nodes are described. Storage nodes may include separate data ports for host network communication and peer network communication. In the event of a host port failure, host nodes may be configured to send failover storage requests to a different storage node, and that storage node may forward the failover storage request through the peer ports to reach the target storage node.
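The forwarding path described above can be sketched as follows. This is a minimal illustrative model, not the patented implementation; all class and function names (`StorageNode`, `send_storage_request`, etc.) are assumptions.

```python
class StorageNode:
    def __init__(self, name, peers=None):
        self.name = name
        self.host_port_up = True      # status of the host-facing data port
        self.peers = peers or {}      # peer name -> StorageNode (peer ports)

    def handle_request(self, target_name, payload):
        """Serve a storage request addressed to `target_name`; if this node
        is not the target, forward over the peer network instead."""
        if target_name == self.name:
            return f"{self.name} served {payload}"
        peer = self.peers[target_name]
        return peer.serve_via_peer_port(payload)

    def serve_via_peer_port(self, payload):
        return f"{self.name} served {payload} (via peer port)"


def send_storage_request(nodes, target_name, payload):
    """Host-side logic: try the target's host port first, then fail over
    to any peer node whose host port is still up."""
    target = nodes[target_name]
    if target.host_port_up:
        return target.handle_request(target_name, payload)
    for node in nodes.values():
        if node.host_port_up and target_name in node.peers:
            return node.handle_request(target_name, payload)
    raise ConnectionError("no reachable node")
```

The key point of the abstract survives even in this toy form: the host never needs direct reachability to the target's host port, only to some peer of the target.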
Service mesh management
A system, method, and computer readable medium for managing a service mesh for container instances. The method includes generating a service mesh that includes a plurality of computing resources. The method further includes obtaining, from an instantiated computing resource, a request to associate the computing resource with another computing resource in the service mesh, where the request comprises a set of constraints that allow the other computing resource to be identified. Based on the set of constraints, the computing resources in the service mesh are connected such that they communicate with each other through a dedicated proxy.
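The constraint-based association can be illustrated with a small sketch. The `ServiceMesh` and `Proxy` classes and the attribute-matching rule are assumptions for illustration only, not any real mesh API.

```python
class Proxy:
    """Dedicated proxy that relays traffic for exactly one association."""
    def __init__(self, src, dst):
        self.src, self.dst = src, dst

    def relay(self, message):
        return f"{self.src}->{self.dst}: {message}"


class ServiceMesh:
    def __init__(self):
        self.resources = {}   # resource name -> attribute dict
        self.links = []       # established Proxy objects

    def register(self, name, **attrs):
        self.resources[name] = attrs

    def associate(self, requester, constraints):
        """Find a resource satisfying every constraint and connect it to
        the requester through a dedicated proxy."""
        for name, attrs in self.resources.items():
            if name != requester and all(
                attrs.get(k) == v for k, v in constraints.items()
            ):
                proxy = Proxy(requester, name)
                self.links.append(proxy)
                return proxy
        return None
```

Note that the requester never names its peer directly; it only describes it via constraints, which is what lets the mesh resolve the association.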
Decentralized random number generator
The current disclosure is directed towards efficiently generating random sequences on a large-scale peer-to-peer network. In one example, the disclosure provides for selecting a first node based on a block generation order, where the first node is selected to generate a current block, adding a first signature share of the first node to the current block, adding at least a second signature share from a previously selected node to the current block, generating a random sequence based on the first signature share and the second signature share, adding the random sequence to the current block, and publishing the current block to a blockchain maintained by a node pool. In this way, a random sequence may be generated on-chain, with linear messaging complexity, without relying on a single trusted party/apparatus, which may thereby decrease a probability of any single party controlling the random sequence produced.
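A toy version of combining two signature shares into an on-chain random sequence is sketched below. Real deployments would use threshold signatures (e.g. BLS shares); here the shares are stand-in byte strings and the combining function is a plain SHA-256 hash, both illustrative assumptions.

```python
import hashlib

def make_block(selector_share, previous_share):
    """Assemble a block carrying the selected node's signature share and a
    share from a previously selected node, then derive the random sequence
    from both so no single party controls the output."""
    seed = hashlib.sha256(selector_share + previous_share).digest()
    return {
        "shares": (selector_share, previous_share),
        "random_sequence": seed.hex(),
    }

def publish(chain, block):
    """Append the block to the chain maintained by the node pool."""
    chain.append(block)
    return block["random_sequence"]
```

Because the sequence depends on shares from two independently selected nodes, changing either input changes the output, which is the property the abstract relies on to keep any single party from steering the result.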
DECOMMISSIONING, RE-COMMISSIONING, AND COMMISSIONING NEW METADATA NODES IN A WORKING DISTRIBUTED DATA STORAGE SYSTEM
In a running distributed data storage system that actively processes I/Os, metadata nodes are commissioned and decommissioned without taking down the storage system and without introducing interruptions to metadata or payload data I/O. The inflow of reads and writes continues without interruption even while new metadata nodes are in the process of being added and/or removed and the strong consistency of the system is guaranteed. Commissioning and decommissioning nodes within the running system enables streamlined replacement of permanently failed nodes and advantageously enables the system to adapt elastically to workload changes. An illustrative distributed barrier logic (the “view change barrier”) controls a multi-state process that controls a coordinated step-wise progression of the metadata nodes from an old view to a new normal. Rules for I/O handling govern each state until the state machine loop has been traversed and the system reaches its new normal.
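The "view change barrier" can be sketched as a lockstep state machine. The state names and per-state I/O rules below are illustrative assumptions; the abstract only specifies that rules govern I/O in each state and that all nodes progress together from the old view to the new normal.

```python
STATES = ["OLD_VIEW", "DRAINING", "SYNCING", "NEW_VIEW"]

# Illustrative I/O-handling rule for each barrier state.
IO_RULES = {
    "OLD_VIEW": "route to old owners",
    "DRAINING": "route to old owners, queue metadata writes",
    "SYNCING":  "dual-write to old and new owners",
    "NEW_VIEW": "route to new owners",
}

class ViewChangeBarrier:
    def __init__(self, nodes):
        self.nodes = set(nodes)
        self.state_index = 0
        self.arrived = set()

    @property
    def state(self):
        return STATES[self.state_index]

    def arrive(self, node):
        """A node reports it finished the current state's work; the barrier
        advances only once every node has arrived, which is what preserves
        strong consistency while I/O keeps flowing."""
        self.arrived.add(node)
        if self.arrived == self.nodes and self.state_index < len(STATES) - 1:
            self.state_index += 1
            self.arrived = set()
        return IO_RULES[self.state]
```

The barrier never blocks reads and writes; it only changes which rule governs them, so the inflow of I/O continues through every transition.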
COMMISSIONING AND DECOMMISSIONING METADATA NODES IN A RUNNING DISTRIBUTED DATA STORAGE SYSTEM
In a running distributed data storage system that actively processes I/Os, metadata nodes are commissioned and decommissioned without taking down the storage system and without introducing interruptions to metadata or payload data I/O. The inflow of reads and writes continues without interruption even while new metadata nodes are in the process of being added and/or removed and the strong consistency of the system is guaranteed. Commissioning and decommissioning nodes within the running system enables streamlined replacement of permanently failed nodes and advantageously enables the system to adapt elastically to workload changes. An illustrative distributed barrier logic (the “view change barrier”) controls a multi-state process that controls a coordinated step-wise progression of the metadata nodes from an old view to a new normal. Rules for I/O handling govern each state until the state machine loop has been traversed and the system reaches its new normal.
CONSENSUS NODE CHANGING METHOD AND RELATED APPARATUS BASED ON HONEY BADGER BYZANTINE FAULT TOLERANCE CONSENSUS MECHANISM
Embodiments of this specification provide a consensus node changing method and apparatus based on a Honey Badger Byzantine Fault Tolerance (BFT) consensus mechanism. The method includes: when receiving a transaction for changing a blockchain's consensus node, executing, by a consensus node of the blockchain, the transaction to trigger a smart contract to update a consensus node configuration list of the blockchain, where the consensus node configuration list includes serial numbers allocated to consensus nodes based on a serial number allocation rule specified by the smart contract; and associating, by the consensus node and based on the serial numbers of the consensus nodes in the updated consensus node configuration list, another consensus node of the blockchain with at least two state machines configured in the consensus node.
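The node-change flow can be sketched in a few functions. The allocation rule (sort node IDs and number from zero) and the `(serial, replica)` keying of state machines are assumptions made for illustration; the patent leaves the rule to the smart contract.

```python
def allocate_serials(node_ids):
    """Serial-number allocation rule (assumed: sort IDs, number from 0)."""
    return {nid: i for i, nid in enumerate(sorted(node_ids))}

def change_consensus_nodes(config, add=(), remove=()):
    """Execute the change transaction: update the consensus node
    configuration list and re-derive serial numbers per the rule."""
    nodes = (set(config) | set(add)) - set(remove)
    return allocate_serials(nodes)

def state_machines_for(config, node_id, per_node=2):
    """Associate a node with at least two state machine instances,
    identified here by (serial, replica) pairs."""
    serial = config[node_id]
    return [(serial, r) for r in range(per_node)]
```

Keying state machines off serial numbers rather than node identities is what lets every node recompute the same association deterministically after a change.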
Wireless organization of electrical devices by sensor manipulation
A system can include a first electrical device having a first sensor device, where the first sensor device is configured to measure a first parameter used in operating the first electrical device, where the first sensor device is further configured to detect a first condition that is unrelated to operating the first electrical device, where the first condition is created by a trigger device controlled by a user, where the first sensor device, upon detecting the first condition, broadcasts a first communication that includes a first identification of the first sensor device. The system can also include a gateway communicably coupled to the first electrical device, where the gateway receives the first communication from the first electrical device, where the gateway assigns the first electrical device to a first group based on the first identification of the first sensor device.
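A minimal sketch of the grouping-by-trigger flow follows. The light-level threshold standing in for the "condition unrelated to operating the device" (e.g. the user shining a flashlight on the sensor) and all class names are illustrative assumptions.

```python
class ElectricalDevice:
    def __init__(self, sensor_id, gateway):
        self.sensor_id = sensor_id
        self.gateway = gateway

    def sensor_reading(self, lux):
        """Normal operation uses the measured light level; an abnormal
        spike is treated as the user-created trigger condition and causes
        the sensor's identification to be broadcast."""
        if lux > 10_000:                  # trigger, e.g. a flashlight
            self.gateway.receive(self.sensor_id)


class Gateway:
    def __init__(self):
        self.groups = {}                  # group name -> list of sensor ids
        self.active_group = None

    def open_group(self, name):
        """Start collecting triggered devices into a named group."""
        self.active_group = name
        self.groups[name] = []

    def receive(self, sensor_id):
        if self.active_group is not None:
            self.groups[self.active_group].append(sensor_id)
```

The design choice worth noting: no extra pairing hardware is needed, because the sensor the device already has doubles as the commissioning input.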
DISTRIBUTED DYNAMIC ARCHITECTURE FOR ERROR CORRECTION
Various systems and methods may be used to implement a software defined industrial system. For example, an orchestrated system of distributed nodes may run an application, including modules implemented on the distributed nodes. The orchestrated system may include an orchestration server, a first node executing a first module, and a second node executing a second module. In response to the second node failing, the second module may be redeployed to a replacement node (e.g., the first node or a different node). The replacement node may be determined by the first node or another node, for example based on connections to or from the second node.
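One way to make the connection-based replacement choice concrete is sketched below. Picking the surviving node that hosts the most modules connected to the failed module is an assumed heuristic consistent with, but not stated by, the abstract.

```python
class Orchestration:
    def __init__(self):
        self.deployments = {}   # module -> node it runs on
        self.connections = {}   # module -> set of connected modules

    def deploy(self, module, node):
        self.deployments[module] = node

    def connect(self, a, b):
        self.connections.setdefault(a, set()).add(b)
        self.connections.setdefault(b, set()).add(a)

    def node_failed(self, failed_node):
        """Redeploy each module of the failed node to the surviving node
        hosting the most modules connected to it."""
        for module, node in list(self.deployments.items()):
            if node != failed_node:
                continue
            candidates = {}
            for peer in self.connections.get(module, ()):
                peer_node = self.deployments.get(peer)
                if peer_node and peer_node != failed_node:
                    candidates[peer_node] = candidates.get(peer_node, 0) + 1
            if candidates:
                self.deployments[module] = max(candidates, key=candidates.get)
```

Choosing the replacement by connectivity keeps the redeployed module close to the modules it talks to, which matters on an industrial network with constrained links.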
COMMUNICATION TERMINAL DEVICE, INFORMATION COMMUNICATION SYSTEM, STORAGE MEDIUM, AND INFORMATION COMMUNICATION METHOD
A communication terminal device has circuitry configured to: judge, based on received information, whether or not the source of the received information is an intra-group communication destination; monitor, based on the judgment, whether or not the intra-group communication destination is a master unit; acquire judging criterion information of the intra-group communication destination detected as the master unit; and switch, while the device itself is operating in a master unit operation mode and another master unit is detected by the monitoring, the master unit operation mode of the device to a slave unit operation mode based on the judging criterion information of the device itself and the judging criterion information of the other master unit.
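The master-conflict resolution can be modeled in a few lines. Representing the "judging criterion information" as a single numeric priority, with the lower-priority master demoting itself, is an illustrative assumption.

```python
class Terminal:
    def __init__(self, device_id, priority, group):
        self.device_id = device_id
        self.priority = priority      # stands in for judging criterion info
        self.group = group
        self.mode = "master"

    def on_message(self, sender):
        """Judge whether the sender is an intra-group communication
        destination, then check for a competing master and switch to
        slave mode if the other master outranks this device."""
        if sender.group != self.group:
            return                     # not an intra-group destination
        if self.mode == "master" and sender.mode == "master":
            if sender.priority > self.priority:
                self.mode = "slave"
```

Both sides run the same comparison on the same criterion information, so exactly one of the two conflicting masters demotes itself without any coordinator.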
METHOD AND APPARATUS FOR MONITORING GLOBAL FAILURE IN VIRTUAL GATEWAY CLUSTER
Embodiments of the present disclosure relate to a method and apparatus for monitoring a global failure in a virtual gateway cluster. The method may include: setting, in response to receiving a network probe setting instruction, a network probe in a target monitoring scenario, the network probe including a sending end probe and a receiving end probe; collecting a response data packet sent by the sending end probe to the receiving end probe via a target virtual gateway cluster; and analyzing the response data packet based on a preset monitoring index, to determine whether a global failure occurs in the target virtual gateway cluster.
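The probe-and-analyze loop can be sketched as below. Modeling the gateway cluster as a callable and using a 50% loss-rate threshold as the "preset monitoring index" are illustrative assumptions.

```python
def run_probe(gateway_cluster, packet_count):
    """Sending-end probe: push `packet_count` packets through the target
    virtual gateway cluster and collect the response packets that reach
    the receiving-end probe."""
    received = []
    for i in range(packet_count):
        response = gateway_cluster(f"probe-{i}")
        if response is not None:
            received.append(response)
    return received

def is_global_failure(sent, received, loss_threshold=0.5):
    """Analyze collected responses against a preset monitoring index
    (here: packet loss rate) to decide whether a global failure occurred."""
    loss_rate = 1 - len(received) / sent
    return loss_rate >= loss_threshold
```

Separating collection from analysis mirrors the abstract's structure: the probes only generate and record traffic, while the monitoring index alone decides what counts as a global failure.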