Patent classifications
G06F11/2041
SYSTEMS AND METHODS TO FLUSH DATA IN PERSISTENT MEMORY REGION TO NON-VOLATILE MEMORY USING AUXILIARY PROCESSOR
A computing system that enables data stored in a persistent memory region to be preserved when a processor fails can include volatile memory comprising the persistent memory region, non-volatile memory, and a system on a chip (SoC). The SoC can include a main processor that is communicatively coupled to both the volatile memory and the non-volatile memory. The SoC can also include an auxiliary processor that is communicatively coupled to both the volatile memory and the non-volatile memory. The SoC can also include instructions that are executable by the auxiliary processor to cause the data in the persistent memory region of the volatile memory to be transferred to the non-volatile memory in response to a failure of the main processor.
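The flush-on-failure mechanism can be sketched as a small simulation. All class and method names below (NonVolatileMemory, AuxiliaryProcessor, on_main_processor_failure) are illustrative assumptions, not identifiers from the patent; memory is modeled as address-to-bytes dictionaries.

```python
class NonVolatileMemory:
    """Toy stand-in for non-volatile memory; contents survive power loss."""
    def __init__(self):
        self.blocks = {}  # address -> bytes

class AuxiliaryProcessor:
    """Watches the main processor; on failure, copies the persistent
    memory region of volatile memory into non-volatile memory."""
    def __init__(self, volatile, persistent_range, nvm):
        self.volatile = volatile                  # address -> bytes
        self.persistent_range = persistent_range  # (start, end) addresses
        self.nvm = nvm

    def on_main_processor_failure(self):
        # Transfer only the persistent region, leaving other volatile
        # addresses (which carry no durability guarantee) untouched.
        start, end = self.persistent_range
        for addr in range(start, end):
            if addr in self.volatile:
                self.nvm.blocks[addr] = self.volatile[addr]

volatile = {0: b"a", 1: b"b", 5: b"outside-region"}
nvm = NonVolatileMemory()
aux = AuxiliaryProcessor(volatile, (0, 4), nvm)
aux.on_main_processor_failure()
print(sorted(nvm.blocks))  # only addresses inside the persistent region
```

The point of the design is that the flush path depends only on the auxiliary processor, so it remains available exactly when the main processor has failed.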
PROCESSING COMMUNICATION SESSIONS
A virtualized computing environment of a telecommunications network comprises a cluster of virtual machines with a one-to-one ratio of active and backup virtual machines. One or more additional clusters of virtual machines have an N-to-K ratio of active and backup virtual machines, where N>K. The backup virtual machines are configured to provide failover capacity for processing communication sessions in the event of a failure of one of the active virtual machines. A cluster redundancy capability indicates the ratio of active and backup virtual machines for that cluster. A predetermined type associated with a requested communication session is determined. A cluster having a cluster redundancy capability corresponding to the predetermined type is selected. Data for the requested communication session is sent to an active virtual machine in the selected cluster.
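The selection step can be sketched as follows. The session type names ("emergency", "ordinary") and the mapping from type to required redundancy are illustrative assumptions; the abstract only specifies that the type determines which redundancy capability is required.

```python
clusters = [
    {"name": "c1", "active": 4, "backup": 4},  # one-to-one redundancy
    {"name": "c2", "active": 6, "backup": 2},  # N-to-K with N > K
]

def redundancy_capability(cluster):
    """Derive a cluster's redundancy capability from its VM counts."""
    return "one_to_one" if cluster["active"] == cluster["backup"] else "n_to_k"

def select_cluster(session_type, clusters):
    # Assumed policy: sessions of a high-priority type require full
    # one-to-one failover capacity; others tolerate shared N-to-K backups.
    required = "one_to_one" if session_type == "emergency" else "n_to_k"
    for c in clusters:
        if redundancy_capability(c) == required:
            return c["name"]
    return None

print(select_cluster("emergency", clusters))  # c1
print(select_cluster("ordinary", clusters))   # c2
```

Session data would then be routed to an active virtual machine inside the returned cluster.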
NETWORK VIRTUALIZATION POLICY MANAGEMENT SYSTEM
Concepts and technologies are disclosed herein for providing a network virtualization policy management system. An event relating to a service can be detected. A first policy that defines allocation of hardware resources to host virtual network functions can be obtained, as can a second policy that defines deployment of the virtual network functions to the hardware resources. The hardware resources can be allocated based upon the first policy, and the virtual network functions can be deployed to the hardware resources based upon the second policy.
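The two-policy flow can be illustrated with a minimal sketch: on a service event, an allocation policy first picks hardware, then a deployment policy places virtual network functions onto it. The policy signatures and the event fields are assumptions made for illustration only.

```python
def handle_event(event, allocation_policy, deployment_policy, hardware_pool):
    hosts = allocation_policy(event, hardware_pool)      # first policy
    placement = deployment_policy(event["vnfs"], hosts)  # second policy
    return placement

# Example policies: take the first N hosts, then place VNFs round-robin.
allocation_policy = lambda event, pool: pool[: event["hosts_needed"]]
deployment_policy = lambda vnfs, hosts: {
    vnf: hosts[i % len(hosts)] for i, vnf in enumerate(vnfs)
}

event = {"service": "voice", "vnfs": ["fw", "lb", "nat"], "hosts_needed": 2}
print(handle_event(event, allocation_policy, deployment_policy,
                   ["h1", "h2", "h3"]))
# {'fw': 'h1', 'lb': 'h2', 'nat': 'h1'}
```

Keeping allocation and deployment as two separate policies is what lets each be changed independently when a new event type arrives.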
Dynamic feedback technique for improving disaster recovery replication performance
A dynamic feedback technique improves data replication performance by balancing the rates of data retrieval and data transmission for a fragmented virtual disk replicated between nodes of clusters on the local and remote sites of a disaster recovery environment. Each node is embodied as a physical computer with hardware resources, such as processor, memory, network and storage resources, which are virtualized to support one or more user virtual machines executing on the node. The storage resources include storage devices of an extent store, and the network resources include a wide area network connecting the local and remote sites. The dynamic feedback technique employs a virtual memory buffer configured to balance the storage retrieval and network transmission rates at the source of replication, based on the bandwidth demands of the extent store and on network throughput as manifested by the available free space (i.e., emptiness) of the virtual buffer.
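The feedback idea can be sketched as a bounded buffer sitting between storage retrieval and network transmission, where the buffer's free space throttles how much is retrieved next. The capacity, budgets, and names below are illustrative assumptions, not values from the patent.

```python
from collections import deque

class ReplicationBuffer:
    """Virtual buffer whose emptiness feeds back into the retrieval rate."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()

    def free_fraction(self):
        return 1.0 - len(self.queue) / self.capacity

    def retrieve(self, extents):
        # Retrieve from the extent store only in proportion to emptiness,
        # so a slow WAN backs pressure up to the storage side.
        budget = int(self.free_fraction() * self.capacity)
        taken = extents[:budget]
        self.queue.extend(taken)
        return len(taken)

    def transmit(self, network_budget):
        # Drain as much as the network allows this round.
        sent = 0
        while self.queue and sent < network_budget:
            self.queue.popleft()
            sent += 1
        return sent

buf = ReplicationBuffer(capacity=8)
print(buf.retrieve(list(range(10))))  # empty buffer: retrieves 8
print(buf.transmit(3))                # slow network drains only 3
print(buf.retrieve(list(range(10))))  # buffer 5/8 full: retrieves only 3
```

A nearly full buffer signals that the network is the bottleneck and retrieval should slow; a nearly empty one signals spare network capacity and retrieval should speed up.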
ENCRYPTION FOR A DISTRIBUTED FILESYSTEM
A computing device comprising a frontend and a backend is operably coupled to a plurality of storage devices. The backend comprises a plurality of buckets. Each bucket is operable to build a failure-protected stripe that spans two or more of the plurality of storage devices. The frontend is operable to encrypt data as it enters the plurality of storage devices and decrypt data as it leaves the plurality of storage devices.
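The frontend/backend split can be sketched as: encrypt first, then stripe the ciphertext with parity across devices. The single-byte XOR "cipher" and the two-data-shards-plus-XOR-parity layout are toy stand-ins for real encryption and real erasure coding; all names are illustrative.

```python
def encrypt(data: bytes, key: int) -> bytes:
    """Toy XOR cipher standing in for the frontend's real encryption."""
    return bytes(b ^ key for b in data)

def build_stripe(ciphertext: bytes):
    """Backend bucket: split ciphertext into two data shards plus an
    XOR parity shard, each destined for a different storage device."""
    half = (len(ciphertext) + 1) // 2
    d0 = ciphertext[:half]
    d1 = ciphertext[half:].ljust(half, b"\x00")    # pad the short shard
    parity = bytes(a ^ b for a, b in zip(d0, d1))  # XOR parity
    return d0, d1, parity

def recover_shard(surviving, parity):
    """Rebuild a lost data shard from the surviving shard and parity."""
    return bytes(a ^ b for a, b in zip(surviving, parity))

ct = encrypt(b"secret!!", 0x5A)
d0, d1, parity = build_stripe(ct)
print(recover_shard(d0, parity) == d1)           # lost shard rebuilt
print(encrypt(d0 + d1, 0x5A) == b"secret!!")     # frontend decrypts
```

Because encryption happens at the frontend, the storage devices (and the parity math) only ever see ciphertext.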
LOCKSTEP PROCESSOR RECOVERY FOR VEHICLE APPLICATIONS
A fault tolerant processing environment wherein multiple processors are configured as worker nodes and redundant nodes, with a failed worker node replaced programmatically by a manager node. Each of the processing nodes may include a processor and memory associated with the processor and communicate with other processing nodes using a network. A manager node creates a message passing interface (MPI) communication group having worker nodes and redundant nodes, instructs the worker nodes to perform lockstep processing of tasks for an application, and monitors execution of the tasks. If a node fails, the manager node creates a replacement worker node from one of the redundant processing nodes and creates a new communications group. It then instructs those nodes in the new communications group to resume processing based on the application state and checkpoint backup data.
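The manager's failover step can be sketched as a simulation. No real MPI is used here: mpi4py's communicator handling is replaced by plain lists, and all names (ManagerNode, handle_failure) are illustrative assumptions.

```python
class ManagerNode:
    """Tracks worker and redundant nodes; swaps in a replacement on failure."""
    def __init__(self, workers, redundant, checkpoint):
        self.group = {"workers": list(workers), "redundant": list(redundant)}
        self.checkpoint = checkpoint  # last application-state backup

    def handle_failure(self, failed):
        self.group["workers"].remove(failed)
        if not self.group["redundant"]:
            raise RuntimeError("no redundant nodes left")
        replacement = self.group["redundant"].pop(0)
        self.group["workers"].append(replacement)
        # In MPI terms this step would create a new communicator group and
        # broadcast the checkpoint so all workers resume in lockstep.
        return replacement, self.checkpoint

mgr = ManagerNode(["w0", "w1", "w2"], ["r0", "r1"], checkpoint={"step": 41})
replacement, state = mgr.handle_failure("w1")
print(replacement, sorted(mgr.group["workers"]), state["step"])
```

The checkpoint data is what lets the reconstituted group resume from the last consistent application state rather than restarting from scratch.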
Workgroup hierarchical core structures for building real-time workgroup systems
A workgroup-computing-entity-based, fail-safe, evolvable hardware core structure is disclosed: a three-hierarchical-level, six-workgroup Basic Building Block (6-wBBB) created to supplant the node-computing-entity-based, non-fail-safe, limited-evolvability von Neumann core structure of the three-hierarchical-level, three-node BBB (base-level I/O devices, mid-level main memory, top-level CPU). From it, all of the first fail-safe workgroup systems can be generated in the second period along the workgroup-computing evolutionary timeline. Furthermore, based on the first evolvable 6-wBBB architecture, the workgroup evolutionary process can run through seven generations, creating all of the necessary workgroup-computing-entity-based hardware core structures so that all real-time intelligent workgroup-computing systems can be generated in the third period along that timeline.
Maintainable distributed fail-safe real-time computer system
A distributed, maintainable real-time computer system is provided. The real-time computer system includes at least two central computers and one or more peripheral computers. The central computers have access to a sparse global time and have identical hardware and identical software but different startup data. Each functioning central computer periodically sends time-triggered multicast life-sign messages to the other central computers according to a time plan defined a priori in its startup data. The peripheral computers (151, 152, 153, 154) can exchange messages (135) with the central computers (110, 120). At all times, one central computer is in the active state and the other central computers are in the non-active state. After the apparent absence of a life-sign message of the active central computer expected at a planned reception time, the non-active functioning central computer with the shortest start-up timeout takes over the function of the active central computer. Each central computer (110, 120; 200) consists of three independent subsystems: an application computer (210), a storage medium holding the startup data (230) characteristic of the central computer (200), and an internal monitor (220). The internal monitor (220) periodically checks the correct functioning of the application computer (210); upon detection of an error, the monitor (220) initiates a hardware reset and a restart of the application computer (210). Preferably, the active central computer initiates a maintenance action after the apparent absence of life-sign messages expected at planned reception times from a non-active central computer, which action can lead to the repair or replacement of a permanently failed central computer.
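The takeover rule can be sketched as a small election function: when the active computer's life-sign message is missing at its planned reception time, the functioning non-active central computer with the shortest start-up timeout becomes active. The dictionary field names are assumptions for illustration.

```python
def elect_successor(computers, active_life_sign_missing):
    """Return the name of the computer that takes over, or None."""
    if not active_life_sign_missing:
        return None
    candidates = [c for c in computers
                  if c["state"] == "non-active" and c["functioning"]]
    # Shortest start-up timeout wins, so exactly one computer takes over.
    return min(candidates, key=lambda c: c["startup_timeout"])["name"]

computers = [
    {"name": "cc1", "state": "active",     "functioning": False, "startup_timeout": 0},
    {"name": "cc2", "state": "non-active", "functioning": True,  "startup_timeout": 30},
    {"name": "cc3", "state": "non-active", "functioning": True,  "startup_timeout": 10},
]
print(elect_successor(computers, active_life_sign_missing=True))  # cc3
```

Giving each computer a distinct timeout in its startup data is what prevents two non-active computers from taking over simultaneously.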
EVENT-DRIVEN SYSTEM FAILOVER AND FAILBACK
A system determines that a primary event processor, included in a primary data center, is associated with a failure. The primary event processor is configured to process first events stored in a main event store of the primary data center. The system identifies a secondary event processor, in a secondary data center, that is to process one or more first events based on the failure. The primary event processor and the secondary event processor are configured to process a same type of event. The system causes, based on a configuration associated with the primary or secondary event processor, the one or more first events to be retrieved from one of the main event store or a replica event store. The replica event store is included in the secondary data center and mirrors the main event store of the primary data center.
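The configuration-driven retrieval step can be sketched as follows. The configuration key and the list-based event stores are illustrative assumptions; the point is only that configuration, not hard-coding, decides whether the secondary processor reads from the main store or its replica.

```python
def retrieve_events(config, main_store, replica_store):
    """Pick the event source per configuration, then fetch pending events."""
    source = replica_store if config["read_from"] == "replica" else main_store
    return list(source)

main_store = ["e1", "e2", "e3"]
replica_store = list(main_store)  # the replica mirrors the main store

# During failover, config typically points the secondary at the replica,
# avoiding a cross-data-center read of the (possibly unreachable) main store.
print(retrieve_events({"read_from": "replica"}, main_store, replica_store))
print(retrieve_events({"read_from": "main"}, main_store, replica_store))
```

Because the replica mirrors the main store, either choice yields the same first events; the configuration trades off locality against reliance on the failed data center.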
VNFM handling of faults in virtual network function components
An example operation may include a system comprising one or more of: receiving a status failure notification for a VNFCI; retrieving a peer VNFCI admin state and a peer VNFCI operational state; taking no action when one or more of the following holds: the peer VNFCI admin state is not online, the peer VNFCI is not reachable, or the peer VNFCI operational state is active; retrieving current issues reported on resources associated with the peer VNFCI when one or more of the following holds: the peer VNFCI admin state is online, the peer VNFCI is reachable, or the peer VNFCI operational state is not active; sending a state change request message with an active state to the peer VNFCI when the current issues do not exist; and starting a retry timer for the peer VNFCI.
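The decision flow above can be written out as a single function. For simplicity the sketch reads the two condition groups as complementary (any disqualifying condition means no action; otherwise check the peer's resources), and the state values, field names, and return strings are illustrative assumptions.

```python
def handle_vnfci_failure(peer, current_issues_for):
    """Decide how the VNFM reacts to a status failure notification,
    given the peer VNFCI's admin/operational state."""
    # Take no action if the peer cannot, or need not, take over.
    if (peer["admin_state"] != "online"
            or not peer["reachable"]
            or peer["operational_state"] == "active"):
        return "no_action"
    # Peer is online, reachable, and not active: check its resources.
    issues = current_issues_for(peer)
    if not issues:
        # Request activation and arm a retry timer in case it fails.
        return "sent_activate_and_started_retry_timer"
    return "waiting_on_issues"

peer = {"admin_state": "online", "reachable": True,
        "operational_state": "standby"}
print(handle_vnfci_failure(peer, lambda p: []))
# sent_activate_and_started_retry_timer
```

The retry timer covers the case where the peer never confirms the state change, allowing the VNFM to re-evaluate rather than leave the service without an active instance.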