H04L67/566

Coding and incentive-based mechanism for distributed training of machine learning in IoT

A coding and incentive-based distributed computing management system includes: a parameter server that publishes a gradient descent computation task to update parameters of distributed computing, sends the published task to end devices, groups the end devices into clusters by receiving related information from the plurality of end devices, determines the number of stragglers in each cluster, sends the number of stragglers to the end devices along with an encoding scheme for gradient descent computation, and distributes incentives to the end devices based on encoded results; and end devices that receive the published task from the parameter server, send an intention to participate in the published task and related information to the parameter server, determine CPU-cycle frequencies by receiving information on the grouping of the end devices and related information from the parameter server, perform encoding for gradient descent computation, and send a computed gradient to the parameter server.
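
The straggler-tolerant scheme described above can be illustrated with a minimal sketch of fractional-repetition gradient coding: each data partition is replicated on s+1 devices so the server can recover the full gradient even if s devices per group straggle. All names and the specific replication scheme are illustrative, not taken from the patent.

```python
# Minimal fractional-repetition gradient coding sketch (illustrative only).

def assign_partitions(n_devices, n_stragglers):
    """Group devices so each data partition is replicated on s+1 devices."""
    group_size = n_stragglers + 1
    assert n_devices % group_size == 0
    # device -> partition (group) index it computes a partial gradient for
    return [d // group_size for d in range(n_devices)]

def recover_gradient(partials, assignment, n_groups):
    """Sum one partial gradient per partition from whichever replica responded.

    `partials` maps device id -> partial gradient, or None for a straggler.
    """
    per_group = {}
    for device, grad in partials.items():
        if grad is None:
            continue  # straggler: ignore
        per_group.setdefault(assignment[device], grad)  # first replica wins
    if len(per_group) < n_groups:
        raise RuntimeError("too many stragglers: a partition is missing")
    return sum(per_group.values())

# Example: 6 devices tolerating 1 straggler -> 3 partitions, 2 replicas each.
assignment = assign_partitions(6, 1)
partials = {0: 1.0, 1: None, 2: 2.0, 3: None, 4: None, 5: 3.0}
full = recover_gradient(partials, assignment, 3)
```

Even with three non-responding devices, one replica per partition survives, so the full gradient sum is recoverable.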

Intelligent edge computing platform with machine learning capability

An edge computing platform with machine learning capability is provided between a local network with a plurality of sensors and a remote network. A machine learning model is created and trained in the remote network using aggregated sensor data and deployed to the edge platform. Before being deployed, the model is edge-converted (“edge-ified”) to run optimally with the constrained resources of the edge device and with the same or better level of accuracy. The “edge-ified” model is adapted to operate on continuous streams of sensor data in real-time and produce inferences. The inferences can be used to determine actions to take in the local network without communication to the remote network. A closed-loop arrangement between the edge platform and remote network provides for periodically evaluating and iteratively updating the edge-based model.

DELEGATED AUTHORIZATION VIA SINGLE ACCESS TOKEN

An information handling system may include a processor; a memory; and a management controller. The information handling system may be configured to: receive, at the management controller and from a client information handling system, a request for management associated with the management controller; determine an audience claim of a token associated with the request, wherein the audience claim comprises a group identifier, and wherein the group identifier is associated with a plurality of management controllers; and in response to a determination that the management controller is one of the plurality of management controllers with which the group identifier is associated, cause the management controller to service the request.
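
The audience-claim check described above can be sketched as follows, assuming the token carries its group identifier in an `aud` claim and each management controller knows the set of group identifiers it belongs to (all names are illustrative, not from the patent).

```python
# Illustrative audience-claim check for a group-scoped access token.

def should_service(token_claims, controller_group_ids):
    """Service the request only if the token's audience group identifier
    is one of the groups this management controller belongs to."""
    return token_claims.get("aud") in controller_group_ids

claims = {"aud": "rack-42-controllers", "sub": "client-a"}
ok = should_service(claims, {"rack-42-controllers", "lab-controllers"})
```

A single token scoped to a group thus authorizes the same request against any controller in that group, without issuing one token per controller.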

Systems and Methods for Optimizing Distributed Computing Systems Including Server Architectures and Client Drivers

Systems and methods for optimizing distributed computing systems are disclosed, such as for processing raw data from data sources (e.g., structured, semi-structured, key-value paired, etc.) in applications of big data. A process for utilizing multiple processing cores for data processing can include receiving raw input data and a first portion of digested input data from a data source client through an input/output bus at a first processor core, receiving, from the first processor core, the raw input data and first portion of digested input data by a second processor core, digesting the received raw input data by the second processor core to create a second portion of digested input data, receiving the second portion of digested input data by the first processor core, and writing, by the first processor core, the first portion of digested input data and the second portion of digested input data to a storage medium.
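
The two-core flow above can be modeled as a minimal sketch, with plain functions standing in for the processor cores and a trivial `digest` step standing in for whatever parsing the client driver performs (all names are illustrative).

```python
# Illustrative two-stage digestion pipeline: core 1 hands raw data to core 2,
# collects the second digested portion, and writes both portions to storage.

def digest(raw_lines):
    """Stand-in digestion step: parse raw key=value lines into tuples."""
    return [tuple(line.split("=", 1)) for line in raw_lines]

def core2_digest(raw_input):
    # Second core digests the raw portion received from the first core.
    return digest(raw_input)

def core1_process(raw_input, digested_part1, storage):
    # First core forwards the raw data to the second core, receives the
    # second digested portion, and writes both portions to the medium.
    digested_part2 = core2_digest(raw_input)
    storage.extend(digested_part1 + digested_part2)
    return storage

storage = []
core1_process(["c=3", "d=4"], [("a", "1"), ("b", "2")], storage)
```

The point of the split is that the first core never digests raw data itself; it only relays, collects, and writes, leaving the CPU-heavy digestion to the second core.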

SYSTEM FOR MANAGING AN INFRASTRUCTURE WITH SECURITY
20230030988 · 2023-02-02

A system for managing an infrastructure includes an extraction engine in communication with a managed infrastructure that includes physical hardware. A signalizer engine includes one or more of an NMF (non-negative matrix factorization) engine, a k-means clustering engine (a method of vector quantization), and a topology proximity engine. The signalizer engine determines one or more common characteristics of events and produces clusters of events relating to failures or errors in the infrastructure. The signalizer engine uses graph coordinates, and optionally a subset of attributes assigned to each event, to generate one or more clusters that bring together events whose characteristics are similar. One or more interactive displays provide a collaborative interface (UI), coupled to the extraction engine and the signalizer engine, for decomposing events from the infrastructure. The events are converted into words and subsets to group the events into clusters that relate to security of the managed infrastructure. In response to grouping the events, physical changes are made to at least a portion of the physical hardware. In response to production of the clusters, security of the managed infrastructure is maintained.
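
The event-grouping step can be illustrated with a tiny k-means loop over 2-D graph coordinates, standing in for the patent's NMF / k-means / topology-proximity engines (the coordinates, events, and deterministic initialization are all illustrative).

```python
# Illustrative k-means clustering of events placed at 2-D graph coordinates.

def kmeans(points, k, iters=10):
    # Deterministic init for reproducibility: first k points as centers.
    centers = points[:k]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each event to its nearest center (squared distance).
            i = min(range(k), key=lambda c: (p[0] - centers[c][0]) ** 2
                                          + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # Recompute each center as the mean of its cluster.
        centers = [
            (sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
            if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return clusters

# Two obvious groups of events: e.g. login failures near (0, 0) and
# disk errors near (10, 10).
events = [(0.1, 0.2), (0.0, 0.1), (9.9, 10.1), (10.2, 9.8)]
clusters = kmeans(events, 2)
```

Events whose coordinates are similar end up in the same cluster, which is the property the signalizer engine relies on to group related failures.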

Unified counting platform

The disclosed embodiments provide a system for managing a counting use case. During operation, the system matches, to a first counting use case, a first parameter of a first unified request over an application programming interface (API) provided by a unified counting platform. Next, the system identifies, based on metadata for configuring the first counting use case in the unified counting platform, a first counting solution assigned to the first counting use case. The system then formats a first set of parameters in the first unified request into a first adapted request that is transmitted to the first counting solution. The system also formats a first response to the first adapted request from the first counting solution into a first unified response to the first unified request. Finally, the system transmits the first unified response to a first source of the first unified request.
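
The routing flow above can be sketched as a small dispatcher: match the unified request to a use case, look up the assigned counting solution in metadata, adapt the request, and adapt the response back into a unified shape (the use cases, solutions, and field names below are all illustrative).

```python
# Illustrative unified-counting dispatcher with per-use-case adapters.

USE_CASE_METADATA = {
    "ad-impressions": {"solution": "approx_counter"},
    "invoice-totals": {"solution": "exact_counter"},
}

SOLUTIONS = {
    # Each "solution" takes an adapted request and returns a raw response.
    "approx_counter": lambda req: {"count": len(set(req["items"]))},
    "exact_counter": lambda req: {"count": len(req["items"])},
}

def handle_unified_request(unified_request):
    use_case = unified_request["use_case"]                   # match use case
    solution = USE_CASE_METADATA[use_case]["solution"]       # metadata lookup
    adapted = {"items": unified_request["params"]["items"]}  # adapted request
    raw = SOLUTIONS[solution](adapted)
    # Format the solution's response into a unified response.
    return {"use_case": use_case, "result": raw["count"]}

resp = handle_unified_request(
    {"use_case": "ad-impressions", "params": {"items": ["a", "b", "a"]}}
)
```

Callers see one API and one response shape regardless of which counting solution serves the use case behind it.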

INTELLIGENT VIBRATION DIGITAL TWIN SYSTEMS AND METHODS FOR INDUSTRIAL ENVIRONMENTS

A platform for updating one or more properties of one or more digital twins including receiving a request for one or more digital twins; retrieving the one or more digital twins required to fulfill the request from a digital twin datastore; retrieving one or more dynamic models corresponding to one or more properties that are depicted in the one or more digital twins indicated by the request; selecting data sources from a set of available data sources based on the one or more inputs of the one or more dynamic models; obtaining data from selected data sources; determining one or more outputs using the retrieved data as one or more inputs to the one or more dynamic models; and updating the one or more properties of the one or more digital twins based on the one or more outputs of the one or more dynamic models.
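
The update loop above can be sketched minimally: retrieve the requested twin, pull data from a selected source, run the dynamic model tied to the property, and write the model's output back into the twin (the twin, model, and data source here are all illustrative stand-ins).

```python
# Illustrative digital-twin property update driven by a dynamic model.

twin_datastore = {
    "pump-7": {"properties": {"bearing_temp_c": 40.0}},
}

data_sources = {
    "temp-sensor-7": [41.0, 43.0, 42.0],
}

def bearing_temp_model(samples):
    # Dynamic model stand-in: map raw sensor samples to a property value.
    return sum(samples) / len(samples)

def update_twin(twin_id, prop, model, source_id):
    twin = twin_datastore[twin_id]            # retrieve twin from datastore
    samples = data_sources[source_id]         # obtain data from chosen source
    twin["properties"][prop] = model(samples) # update property from model output
    return twin["properties"][prop]

new_temp = update_twin("pump-7", "bearing_temp_c",
                       bearing_temp_model, "temp-sensor-7")
```

Each property can carry its own model and data-source selection, so a request touching several twins fans out into independent model evaluations.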

Dynamic determination of media consumption
11488042 · 2022-11-01

Disclosed are various embodiments for dynamically determining media consumption of a user. A user may perform at least one of a plurality of consumption indication events for a media item. The consumption indication events may include submitting a rating of the media item, submitting a review of the media item, indicating a present consumption of the media item, indicating a past consumption of the media item, etc. It may be determined that the user has consumed the media item in response to determining that the user has performed at least one of the consumption indication events for the media item.
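
The inference above reduces to a simple predicate: a user is deemed to have consumed a media item if any consumption-indication event exists for it (the event names below are illustrative).

```python
# Illustrative consumption check over a user's event history.

CONSUMPTION_EVENTS = {"rated", "reviewed", "consuming_now", "consumed_before"}

def has_consumed(user_events, media_id):
    """user_events: list of (media_id, event_type) pairs."""
    return any(
        mid == media_id and event in CONSUMPTION_EVENTS
        for mid, event in user_events
    )

events = [("movie-1", "rated"), ("movie-2", "added_to_wishlist")]
```

Here a rating counts as a consumption indication, while merely wishlisting an item does not.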

PRECISION TIME PROTOCOL WITH MULTI-CHASSIS LINK AGGREGATION GROUPS

The precision time protocol (PTP) runs on the peer switches in an MLAG domain. PTP messages received by one peer switch on an MLAG interface are selectively peer-forwarded to the other peer switch on the same MLAG interface in order to coordinate a synchronization session with a PTP node. The peer-forwarded messages designate one peer switch as the active peer and the other as the inactive peer, so that timestamped messages during the synchronization session are exchanged only between the PTP node and the active peer, and hence take the same data path.

SYSTEMS AND METHODS FOR BLOCKCHAIN NETWORK CONGESTION-ADAPTIVE DIGITAL ASSET EVENT HANDLING
20230088674 · 2023-03-23

A computer-implemented method and system for blockchain network congestion-adaptive handling of events relating to digital assets, including creation and transfer operations. A congestion metric is measured to determine the current congestion of the blockchain network. If the metric is below a first threshold level, digital asset requests are implemented using blockchain transactions at layer 1 as they are received. If the metric is above the first threshold level, the received requests are queued until a queue trigger is detected, whereupon the queued requests are processed at layer 2 and a batch blockchain transaction is used to implement the two or more requests by recording the updated state on chain. When the metric falls below a second threshold, the process reverts to using layer 1 blockchain transactions instead of queueing requests for layer 2 batch processing.
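
The two-threshold switch above can be sketched as a small state machine with hysteresis: start queueing when congestion rises above the first threshold, flush a layer-2 batch when the queue trigger fires, and revert to immediate layer-1 transactions only once congestion drops below the second threshold (thresholds, trigger, and request names are all illustrative).

```python
# Illustrative congestion-adaptive handler for digital asset requests.

class AssetEventHandler:
    def __init__(self, t1, t2, batch_size):
        assert t2 <= t1  # hysteresis: second threshold sits below the first
        self.t1, self.t2, self.batch_size = t1, t2, batch_size
        self.queueing = False
        self.queue = []
        self.log = []  # records ("layer1", req) or ("layer2_batch", [reqs])

    def handle(self, request, congestion):
        # Start queueing above t1; stop only once congestion falls below t2.
        if not self.queueing and congestion > self.t1:
            self.queueing = True
        elif self.queueing and congestion < self.t2:
            self._flush()
            self.queueing = False
        if self.queueing:
            self.queue.append(request)
            if len(self.queue) >= self.batch_size:  # queue trigger
                self._flush()
        else:
            self.log.append(("layer1", request))  # immediate layer-1 tx

    def _flush(self):
        if self.queue:
            # One batch blockchain tx records the updated layer-2 state on chain.
            self.log.append(("layer2_batch", self.queue))
            self.queue = []

h = AssetEventHandler(t1=0.8, t2=0.3, batch_size=2)
h.handle("mint-1", congestion=0.2)  # low congestion: immediate layer-1 tx
h.handle("xfer-2", congestion=0.9)  # high congestion: queued for layer 2
h.handle("xfer-3", congestion=0.9)  # queue trigger: batch of 2 at layer 2
```

Using a second, lower threshold prevents the handler from oscillating between modes when the congestion metric hovers near a single cut-off.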