Patent classification: G06F11/3055
CONTROL AND MONITORING OF A MACHINE ARRANGEMENT
A method for controlling and/or monitoring a machine arrangement having at least one machine, in particular at least one robot, with the aid of a processor arrangement having a plurality of processors each with at least one core. The method includes selecting, in particular temporarily selecting, a first available and at least one further available core on the proviso that these cores are implemented, in particular arranged, on different processors of the processor arrangement, in particular during operation of the machine arrangement and/or on the basis of an updated directory and/or on the basis, in particular as a result, of an ascertained need for redundant processing of process signals; processing process signals redundantly with the aid of these selected cores; and controlling and/or monitoring the machine arrangement on the basis of this processing.
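The selection-and-redundancy flow described above can be sketched in Python. This is a minimal illustration, not the patented implementation; the directory layout, function names, and disagreement handling are all assumptions.

```python
# Sketch: pick two available cores guaranteed to sit on different
# processors, run the same computation on both, and only accept the
# result if the redundant runs agree.

def select_redundant_cores(directory):
    """directory maps processor id -> list of available core ids."""
    selected = []
    for proc_id, cores in sorted(directory.items()):
        if cores:
            selected.append((proc_id, cores[0]))  # one core per processor
        if len(selected) == 2:
            return selected
    raise RuntimeError("need available cores on at least two processors")

def process_redundantly(signal, cores, compute):
    results = [compute(signal) for _ in cores]    # one run per selected core
    if len(set(results)) != 1:
        raise RuntimeError("redundant results disagree; halt the machine")
    return results[0]

directory = {0: [0, 1], 1: [3], 2: []}            # processor -> free cores
cores = select_redundant_cores(directory)
value = process_redundantly(7, cores, lambda s: s * 2)
```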
IDENTIFYING AN ACTIVE ADMINISTRATION FUNCTION (ADMF) IN A LAWFUL INTERCEPTION DEPLOYMENT THAT UTILIZES A PLURALITY OF ADMFS
A method for identifying an active administration function (ADMF) in a lawful interception deployment that utilizes an ADMF set comprising a plurality of ADMFs can be implemented by a network element. The method can include exchanging lawful interception signaling with a first ADMF when the first ADMF is the active ADMF. The method can also include receiving an auditing request message from one of the plurality of ADMFs in the ADMF set and sending a ping request message to each ADMF in the ADMF set. The method can also include receiving a ping response message from a second ADMF among the plurality of ADMFs in the ADMF set and identifying the second ADMF as the active ADMF in response to receiving the ping response message. The method can also include exchanging second lawful interception signaling with the second ADMF when the second ADMF is the active ADMF.
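The audit flow above can be sketched as follows; the message exchange is reduced to a ping predicate, and all identifiers are illustrative assumptions rather than protocol names from the patent.

```python
# Sketch: on receiving an auditing request, ping every ADMF in the set
# and treat the one that returns a ping response as the active ADMF.

def identify_active_admf(admf_set, ping):
    """ping(admf) -> True if that ADMF answers with a ping response."""
    for admf in admf_set:
        if ping(admf):
            return admf          # this ADMF becomes the active one
    return None                  # no ADMF responded

admf_set = ["admf-1", "admf-2", "admf-3"]
responding = {"admf-2"}          # only the ADMF that took over answers
active = identify_active_admf(admf_set, lambda a: a in responding)
```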
Selective endpoint isolation for self-healing in a cache and memory coherent system
A cache and memory coherent system includes multiple processing chips each hosting a different subset of a shared memory space and one or more routing tables defining access routes between logical addresses of the shared memory space and endpoints that each correspond to a select one of the multiple processing chips. The system further includes a coherent mesh fabric that physically couples together each pair of the multiple processing chips, the coherent mesh fabric being configured to execute routing logic for updating the one or more routing tables responsive to identification of a first processing chip of the multiple processing chips hosting a defective hardware component, the update to the routing tables being effective to remove all access routes having endpoints corresponding to the first processing chip.
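The routing-table update at the heart of the isolation step can be sketched as below; the table layout (logical address to endpoint chip) is an assumption for illustration.

```python
# Sketch: when a chip is identified as hosting a defective component,
# drop every access route whose endpoint is that chip so no logical
# address resolves to the failed hardware.

def isolate_endpoint(routing_table, defective_chip):
    """routing_table maps logical address -> endpoint chip id."""
    return {addr: chip for addr, chip in routing_table.items()
            if chip != defective_chip}

table = {0x0000: 0, 0x1000: 1, 0x2000: 2, 0x3000: 1}
table = isolate_endpoint(table, defective_chip=1)
```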
SYSTEM AND METHOD FOR AN ESTIMATION OF APPLICATION UPGRADES USING A DEVICE EMULATION SYSTEM OF A CUSTOMER ENVIRONMENT
A method for managing a client environment includes obtaining, by a device emulation orchestration engine in an emulation system, an upgrade estimation time request associated with an application upgrade, in response to the upgrade estimation time request: performing a device emulation container analysis to determine a client device that requires the application upgrade, wherein the client device executes in the client environment, initiating an upgrade emulation using a device emulation container corresponding to the client device, obtaining, from a device emulation agent executing in the device emulation container, an upgrade estimation, and providing the upgrade estimation to an application upgrade monitoring agent.
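The orchestration flow can be sketched as a small Python example. Everything here is a hypothetical simplification: container lookup and the agent's measurement are stubbed out.

```python
# Sketch: find the emulation container mirroring the client device that
# needs the upgrade, run the upgrade inside it, and report the measured
# estimate back to the caller.

def estimate_upgrade_time(request, containers, run_upgrade):
    device = request["device"]
    container = containers[device]     # container mirroring the device
    return run_upgrade(container)      # emulation agent reports an estimate

containers = {"laptop-a": {"os": "v1"}}
estimate = estimate_upgrade_time({"device": "laptop-a"}, containers,
                                 run_upgrade=lambda c: 42.0)
```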
PROVIDING SYSTEM UPDATES IN AUTOMOTIVE CONTEXTS
A system includes a memory, a processor in communication with the memory, and an automotive operating system (OS) with a software update manager for an automobile. The system is configured to determine that a new software update is available, monitor operating metrics of the automotive OS, and determine an installation time-window when each of the operating metrics collectively fall within respective predetermined thresholds. Responsive to determining that each of the operating metrics fall within respective predetermined thresholds, the system is configured to signal to the software update manager to start the installation once the automobile meets installation criteria. The installation criteria include at least (i) a first criterion that the automobile is stationary and (ii) a second criterion that the automotive OS is in an available state.
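The gating logic can be sketched as below; the metric names and threshold values are illustrative assumptions.

```python
# Sketch: allow the install to start only when every operating metric
# is inside its threshold AND the automobile is stationary AND the
# automotive OS is in an available state.

def may_install(metrics, thresholds, stationary, os_available):
    in_window = all(metrics[name] <= thresholds[name] for name in thresholds)
    return in_window and stationary and os_available

metrics = {"cpu_load": 0.2, "io_wait": 0.05}
thresholds = {"cpu_load": 0.5, "io_wait": 0.1}
ok = may_install(metrics, thresholds, stationary=True, os_available=True)
```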
OUT-OF-BAND CUSTOM BASEBOARD MANAGEMENT CONTROLLER (BMC) FIRMWARE STACK MONITORING SYSTEM AND METHOD
An Information Handling System (IHS) includes a plurality of hardware devices, and a Baseboard Management Controller (BMC) in communication with the plurality of hardware devices. The BMC includes executable instructions for monitoring a parameter of one or more of the hardware devices of the IHS when a custom BMC firmware stack is executed on the BMC. The instructions that monitor the parameter are separate and distinct from the instructions of the custom BMC firmware stack. When the parameter exceeds a specified threshold, the instructions are further executed to control the BMC to perform one or more operations to remediate the excessive parameter.
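The monitoring behavior can be sketched with a simple threshold check; the parameter, threshold, and remediation action are all hypothetical placeholders.

```python
# Sketch: the out-of-band monitor reads a hardware parameter and invokes
# a remediation callback when the parameter exceeds its threshold;
# otherwise it reports normal operation.

def monitor(parameter_value, threshold, remediate):
    if parameter_value > threshold:
        return remediate()
    return "ok"

actions = []
status = monitor(parameter_value=92, threshold=85,
                 remediate=lambda: actions.append("throttle") or "remediated")
```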
NONVOLATILE MEMORY WITH LATCH SCRAMBLE
An apparatus includes one or more control circuits configured to connect to a plurality of non-volatile memory cells arranged along word lines. The one or more control circuits are configured to receive a plurality of encoded portions of data to be programmed in non-volatile memory cells of a target word line, each encoded portion of data encoded according to an Error Correction Code (ECC) encoding scheme, and arrange the plurality of encoded portions of data in a plurality of rows of data latches corresponding to a plurality of logical pages such that each encoded portion of data is distributed across two or more rows of data latches. The one or more control circuits are also configured to program the distributed encoded portions of data from the plurality of rows of data latches into non-volatile memory cells along the target word line.
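The scramble idea can be sketched as a round-robin interleave; the symbol granularity and row count here are assumptions chosen for illustration.

```python
# Sketch: instead of placing each ECC codeword in a single row of data
# latches (one logical page), interleave its symbols across several rows
# so any one codeword is distributed over two or more rows.

def scramble(codewords, n_rows):
    rows = [[] for _ in range(n_rows)]
    for cw in codewords:
        for i, symbol in enumerate(cw):
            rows[i % n_rows].append(symbol)   # spread the codeword
    return rows

rows = scramble(codewords=[[1, 2, 3, 4], [5, 6, 7, 8]], n_rows=2)
```

Each of the two codewords now spans both latch rows, so a defect confined to one row touches only half the symbols of any codeword.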
Server Classification Using Machine Learning Techniques
Methods, apparatus, and processor-readable storage media for server classification using machine learning techniques are provided herein. An example computer-implemented method includes obtaining, from at least one data source, data pertaining to server activity attributed to one or more servers; processing at least a portion of the obtained data using one or more rule-based analyses; selecting at least a particular machine learning classification algorithm from a set of multiple machine learning classification algorithms, based at least in part on results from the processing and one or more portions of the obtained data; classifying an activity level of at least a portion of the one or more servers by processing at least a portion of the obtained data using the selected machine learning classification algorithm; and performing at least one automated action based at least in part on results of the classifying.
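The algorithm-selection step can be sketched as follows; the rule (label coverage) and the candidate algorithm names are illustrative assumptions, not from the patent.

```python
# Sketch: run a simple rule-based analysis over the server-activity data,
# then pick a classification approach suited to what the rules found.

def select_classifier(samples):
    labeled = sum(1 for s in samples if "label" in s)
    if labeled / len(samples) < 0.2:
        return "clustering"        # too few labels for supervised learning
    return "random_forest"

samples = [{"cpu": 0.9, "label": "active"}, {"cpu": 0.1}, {"cpu": 0.0}]
choice = select_classifier(samples)
```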
METHOD AND SYSTEM FOR DETERMINING SYSTEM UPGRADE RECOMMENDATIONS
In general, embodiments of the invention relate to a method for generating upgrade recommendations. The method includes obtaining telemetry data for a target entity, determining, using the telemetry data, at least one of a predicted upgrade time and an upgrade readiness factor for the target entity, generating a recommendation based on the at least one of the predicted upgrade time and the upgrade readiness factor for the target entity, and initiating a display of the recommendation on a graphical user interface of a client.
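The recommendation step can be sketched as below; the telemetry fields, the readiness formula, and the cutoff are all hypothetical.

```python
# Sketch: derive an upgrade readiness factor from telemetry and
# recommend the upgrade only when the factor clears a cutoff.

def recommend(telemetry, cutoff=0.7):
    readiness = (telemetry["free_disk"] + (1 - telemetry["load"])) / 2
    return {"recommend": readiness >= cutoff, "readiness": readiness}

rec = recommend({"free_disk": 0.8, "load": 0.2})
```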
DISTRIBUTED MULTI-LEVEL PROTECTION IN A HYPER-CONVERGED INFRASTRUCTURE
A storage system includes a plurality of storage nodes. Each storage node of the plurality of storage nodes includes a plurality of non-volatile memory modules. The storage system also includes a processor operatively coupled to the plurality of storage nodes, to perform a method. The method includes receiving incoming data. The method further includes storing the incoming data in a redundant array of independent drives (RAID) stripe in the storage system. The RAID stripe includes groups of data shards. Each group of data shards and a respective group parity shard are stored across the plurality of storage nodes of the storage system. A set of stripe parity shards are stored in a first storage node of the plurality of storage nodes.
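The two-level parity layout can be sketched with XOR parity; the shard sizes and the use of XOR (rather than an erasure code) are simplifying assumptions.

```python
# Sketch: each group of data shards gets its own group-parity shard
# (XOR of the group), and a stripe-level parity shard covers the
# group parities.

from functools import reduce

def xor_parity(shards):
    """XOR corresponding bytes of the given shards into one parity shard."""
    return [reduce(lambda a, b: a ^ b, col) for col in zip(*shards)]

groups = [[[1, 2], [3, 4]],          # group 0: two data shards
          [[5, 6], [7, 8]]]          # group 1: two data shards
group_parity = [xor_parity(g) for g in groups]
stripe_parity = xor_parity(group_parity)
```

A lost data shard can be rebuilt from its group parity, while the stripe parity adds a second level of protection across groups.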