Patent classifications
G06F9/5044
SYSTEM AND METHOD FOR TRANSFERRING DATA FROM NON-VOLATILE MEMORY TO A PROCESS ACCELERATOR
Methods and apparatuses for transferring data from non-volatile memory to process accelerator memory are disclosed. In one embodiment, a process accelerator issues a transfer request for a resource at a host file system. Responsive to the transfer request, the process accelerator receives data corresponding to the resource directly from the host file system, bypassing the staging memory of the host. The process accelerator manipulates the data to obtain the resource. Thus, the process accelerator may obtain the resource directly from the host file system, minimizing the number of transfers of the data.
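As an illustrative sketch only (not the patented implementation), the benefit of bypassing host staging memory can be modeled by counting data copies on each path; all names below are hypothetical:

```python
# Illustrative model of staged vs. direct transfer paths.
# Each hop that lands the data in a memory region counts as one copy.

def staged_transfer(data: bytes) -> tuple[bytes, int]:
    """Conventional path: file system -> host staging memory -> accelerator."""
    staging_buffer = bytes(data)             # copy 1: into host staging memory
    accelerator_mem = bytes(staging_buffer)  # copy 2: into accelerator memory
    return accelerator_mem, 2

def direct_transfer(data: bytes) -> tuple[bytes, int]:
    """Direct path: file system -> accelerator memory, bypassing staging."""
    accelerator_mem = bytes(data)            # single copy into accelerator memory
    return accelerator_mem, 1

resource = b"model-weights"
staged, staged_copies = staged_transfer(resource)
direct, direct_copies = direct_transfer(resource)
assert staged == direct == resource
print(f"staged copies: {staged_copies}, direct copies: {direct_copies}")
```

Both paths deliver identical bytes; the direct path simply touches one fewer memory region.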
PLATFORM FRAMEWORK ORCHESTRATION AND DISCOVERY
Embodiments of systems and methods for platform framework orchestration and discovery are described. In some embodiments, an Information Handling System (IHS) may include a processor and a memory coupled to the processor, the memory having program instructions stored thereon that, upon execution, cause the IHS to: receive, by a service within a platform framework through an Application Programming Interface (API), a discovery request; in response to the discovery request, convey an inquiry for capability information from the service to a participant registered with the platform framework through the API; receive, by the service from the participant through the API, the capability information; and fulfill, by the service through the API, the discovery request using at least a portion of the capability information.
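A hypothetical sketch of the discovery flow described above: a framework service fulfills a discovery request by conveying capability inquiries to registered participants through a single API surface. The class and capability names are invented for illustration:

```python
# Hypothetical platform-framework discovery flow: a service queries
# registered participants for capability information through one API.

class PlatformFramework:
    def __init__(self):
        self.participants = {}  # participant name -> set of capabilities

    def register(self, name, capabilities):
        """Participant registers with the framework, declaring capabilities."""
        self.participants[name] = set(capabilities)

    def discover(self, wanted):
        """Fulfill a discovery request using participants' capability info."""
        matches = {}
        for name, capabilities in self.participants.items():
            found = sorted(capabilities & set(wanted))
            if found:
                matches[name] = found
        return matches

framework = PlatformFramework()
framework.register("audio_driver", {"noise_cancellation", "echo_reduction"})
framework.register("camera_service", {"background_blur"})
result = framework.discover({"noise_cancellation", "background_blur"})
print(result)
```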
PLATFORM FRAMEWORK CONFIGURATION STATE MANAGEMENT
Embodiments of systems and methods for platform framework configuration state management are described. A platform framework of an IHS (Information Handling System) generates a resource dependency graph based on registrations of a plurality of platform framework participants, wherein the registrations of the participants specify use of resources accessed via the platform framework. A change in context of operation of the IHS is determined. Based on the context change, a change is determined in the availability of resources accessed via the platform framework. Based on the resource dependency graph, registered participants are identified that are affected by the change in platform framework resource availability. The affected participants are notified of the change in platform framework resource availability. In some embodiments, the registrations of the participants may specify a communication handle for notifying the participant of changes in the resource dependency graph.
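A minimal sketch of the dependency tracking described above, assuming a simple participant-to-resources mapping and callback-style communication handles (both hypothetical details):

```python
# Hypothetical resource-dependency tracking: registrations map
# participants to the platform resources they use; when availability
# of a resource changes, affected participants are found and notified.

class ResourceDependencyGraph:
    def __init__(self):
        self.uses = {}     # participant -> set of resources it depends on
        self.handles = {}  # participant -> notification callback (handle)

    def register(self, participant, resources, handle):
        self.uses[participant] = set(resources)
        self.handles[participant] = handle

    def on_resource_change(self, resource, available):
        """Identify and notify participants affected by the change."""
        affected = [p for p, res in self.uses.items() if resource in res]
        for participant in affected:
            self.handles[participant](resource, available)
        return affected

notifications = []
graph = ResourceDependencyGraph()
graph.register("video_app", {"gpu", "camera"},
               lambda r, a: notifications.append(("video_app", r, a)))
graph.register("backup_agent", {"disk"},
               lambda r, a: notifications.append(("backup_agent", r, a)))

# A context change (e.g. switching to battery power) disables the GPU.
affected = graph.on_resource_change("gpu", available=False)
print(affected, notifications)
```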
INTELLIGENT RESOURCE MANAGEMENT
A system and method for distributing resources in a computing system is disclosed. The resources include hardware components in a hardware pool, a management infrastructure, and an application. A telemetry system is coupled to the resources to collect operational data from the operation of the resources. A data analytics system is coupled to the telemetry system to predict a future operational data value based on the collected operational data. A policy engine is coupled to the data analytics system to determine a configuration to allocate the resources based on the future operational data value.
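The telemetry-analytics-policy pipeline above can be sketched end to end. This is an illustrative toy, not the patented method: the prediction here is a naive linear extrapolation and the policy thresholds are invented:

```python
# Hypothetical telemetry -> analytics -> policy pipeline: collected
# utilization samples feed a trend prediction, and a policy engine
# maps the predicted value to a resource allocation.

def predict_next(samples):
    """Naive linear extrapolation from the last two telemetry samples."""
    if len(samples) < 2:
        return samples[-1]
    return samples[-1] + (samples[-1] - samples[-2])

def policy(predicted_utilization):
    """Map predicted utilization (0..1) to a number of allocated nodes."""
    if predicted_utilization > 0.8:
        return 4   # scale out ahead of the predicted peak
    if predicted_utilization > 0.5:
        return 2
    return 1

cpu_samples = [0.40, 0.55, 0.70]      # telemetry collected from resources
forecast = predict_next(cpu_samples)  # 0.70 + (0.70 - 0.55) = 0.85
allocation = policy(forecast)
print(f"forecast={forecast:.2f}, nodes={allocation}")
```

A real data analytics system would use a trained model rather than extrapolation; the point is the shape of the coupling between the three components.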
POSITIONING OF EDGE COMPUTING DEVICES
A processor may receive user data associated with one or more locations of a user in an environment. The processor may receive edge computing data associated with utilization of edge computing resources by the user. The processor may analyze the edge computing data to associate a context with an edge computing resource need. The processor may analyze the user data to associate a context with a location of the user within the environment. The processor may determine a first location of the user in the environment at a first time. The processor may predict a first edge computing need of the user in the first location. The processor may determine an arrangement of one or more edge computing devices configured to meet the first edge computing need of the user at the first time.
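The chain of steps above can be sketched as table lookups, assuming the context associations have already been learned from the user and edge-utilization data (the locations, times, and capability names are hypothetical):

```python
# Hypothetical sketch: learned associations between a user's location
# context and their edge-computing need drive device placement.

# Associations mined from historical user data and edge-utilization data.
need_by_context = {"office": "gpu_inference", "lab": "video_analytics"}
location_by_time = {"09:00": "office", "14:00": "lab"}

def plan_placement(time):
    location = location_by_time[time]  # predicted user location at `time`
    need = need_by_context[location]   # predicted edge-computing need there
    # Arrange a device able to meet that need at that location and time.
    return {"location": location, "device_capability": need, "time": time}

plan = plan_placement("09:00")
print(plan)
```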
SYSTEM AND METHOD FOR ACCELERATOR-CENTRIC WORKLOAD PLACEMENT
An infrastructure manager for placing workloads for performance across available infrastructure including on-demand infrastructure and dedicated infrastructure includes a storage device for storing an available infrastructure repository and a processor. The processor obtains a workload placement request for a workload of the workloads; makes a determination that the workload has a special purpose hardware requirement; in response to the determination: identifies, using the available infrastructure repository, potential placement locations in the available infrastructure for the workload that each meet the special purpose hardware requirement; and places the workload at one of the potential placement locations.
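A minimal sketch of that placement decision, assuming the available-infrastructure repository is a list of location records with hardware inventories (all names hypothetical):

```python
# Hypothetical accelerator-centric placement: filter an available-
# infrastructure repository (on-demand and dedicated) to locations
# meeting a workload's special-purpose hardware requirement.

available_infrastructure = [
    {"location": "on_demand_a", "hardware": {"cpu"}},
    {"location": "dedicated_b", "hardware": {"cpu", "fpga"}},
    {"location": "on_demand_c", "hardware": {"cpu", "gpu", "fpga"}},
]

def place(workload):
    """Return (chosen placement, all candidate locations) for a workload."""
    requirement = workload.get("special_hardware")
    candidates = [
        entry["location"]
        for entry in available_infrastructure
        if requirement is None or requirement in entry["hardware"]
    ]
    # Place at one of the potential placement locations (first match here).
    return (candidates[0] if candidates else None), candidates

placement, candidates = place({"name": "codec", "special_hardware": "fpga"})
print(placement, candidates)
```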
Content-based distribution and execution of analytics applications on distributed datasets
Methods are provided. A method includes announcing to a network meta information describing each of a plurality of distributed data sources. The method further includes propagating the meta information amongst routing elements in the network. The method also includes inserting into the network a description of distributed datasets that match a set of requirements of an analytics task. The method additionally includes delivering, by the routing elements, a copy of the analytics task to locations of respective ones of the plurality of distributed data sources that include the distributed datasets that match the set of requirements of the analytics task.
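The matching-and-delivery step can be sketched as follows. This is a simplified illustration (the announced metadata, requirements, and location names are hypothetical, and real propagation happens hop by hop across routing elements):

```python
# Hypothetical content-based task distribution: data sources announce
# meta information; an analytics task carries dataset requirements;
# a copy of the task is delivered to every matching source location.

announced_sources = {
    "edge_paris":  {"dataset": "traffic", "format": "csv"},
    "edge_tokyo":  {"dataset": "traffic", "format": "parquet"},
    "edge_berlin": {"dataset": "weather", "format": "csv"},
}

def route_task(task_requirements):
    """Return the source locations that receive a copy of the task."""
    return [
        location
        for location, meta in announced_sources.items()
        if all(meta.get(k) == v for k, v in task_requirements.items())
    ]

delivered_to = route_task({"dataset": "traffic"})
print(delivered_to)
```

The design moves the analytics task to the data rather than the data to the task, which is the point of announcing dataset metadata into the network.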
Processing system, update server and method for updating a processing system
According to various examples, a processing system is described comprising a plurality of hardware circuit components, each hardware circuit component configured to provide a processing functionality, a data path leading through the plurality of hardware circuit components, at least one programmable circuit and a controller configured to select one of the hardware circuit components to be replaced by the at least one programmable circuit, program the programmable circuit to provide the processing functionality provided by the selected hardware circuit component and configure the data path to lead through the programmable circuit instead of the selected hardware circuit component.
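The update flow can be sketched as a list of pipeline stages with one spare programmable slot. This is a behavioral illustration only; the stage names and the idea of modeling the data path as a list are assumptions for clarity:

```python
# Hypothetical update flow: a data path runs through fixed hardware
# stages; to update one stage, the controller programs the spare
# programmable circuit with the replacement functionality and
# reroutes the data path through it.

data_path = ["decrypt", "decode", "scale"]  # fixed hardware circuit stages
programmable_circuit = {"programmed_as": None}

def update_stage(stage, new_functionality):
    """Replace `stage` in the data path with the programmable circuit."""
    programmable_circuit["programmed_as"] = new_functionality
    index = data_path.index(stage)
    data_path[index] = f"programmable:{new_functionality}"
    return data_path

# A flaw is found in the hardware decoder; route around it.
updated = update_stage("decode", "decode_v2")
print(updated)
```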
Methods, systems, articles of manufacture and apparatus to improve resource utilization for binary tree structures
Methods, apparatus, systems and articles of manufacture are disclosed to improve resource utilization for binary tree structures. An example apparatus to improve resource utilization for field programmable gate array (FPGA) resources includes a computation determiner to identify a computation capability value associated with the FPGA resources, a k-ary tree builder to build a first k-ary tree having a number of k-ary nodes equal to the computation capability value, and an FPGA memory controller to initiate collision computation by transferring the first k-ary tree to a first memory of the FPGA resources.
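The k-ary tree builder can be sketched as a breadth-first construction that stops once the node count equals the computation capability value. This is an illustrative sketch, not the patented builder; the mapping representation is an assumption:

```python
# Hypothetical sizing of a k-ary tree to an FPGA's computation
# capability: build exactly `capability` nodes, filling each level
# with up to k children per node (breadth-first).

from collections import deque

def build_k_ary_tree(capability, k):
    """Return a node -> children mapping with `capability` nodes total."""
    if capability <= 0:
        return {}
    tree = {0: []}          # node id 0 is the root
    frontier = deque([0])   # nodes that may still accept children
    next_id = 1
    while next_id < capability:
        parent = frontier[0]
        if len(tree[parent]) == k:
            frontier.popleft()   # parent is full; move to the next one
            continue
        tree[parent].append(next_id)
        tree[next_id] = []
        frontier.append(next_id)
        next_id += 1
    return tree

tree = build_k_ary_tree(capability=7, k=2)   # 7 nodes, binary (k = 2)
print(tree)
```

The resulting tree would then be transferred to FPGA memory to seed the collision computation; that transfer is outside the scope of this sketch.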
Simultaneous cross-device application platform
In non-limiting examples of the present disclosure, systems, methods and devices for providing a unified cross-platform experience are provided. A connection between a first device and a second device may be established, wherein the first device operates on a first platform and the second device operates on a second platform. A plurality of executable actions that are specific to the second device may be identified by the first device. Execution of at least one of the plurality of executable actions by the second device may be requested by an application executing on the first device. Information obtained via execution of the at least one executable action may be received by the first device, and the first device may present and/or display that information.
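The cross-device flow above can be sketched as follows. Everything here (device classes, action names, return values) is invented for illustration; a real implementation would involve a network transport between the two platforms:

```python
# Hypothetical cross-device flow: the first device connects to the
# second, identifies the second device's executable actions, requests
# one, and receives the resulting information for display.

class SecondDevice:
    platform = "mobile"
    def executable_actions(self):
        """Actions specific to this device, exposed to connected peers."""
        return {"take_photo": lambda: "photo.jpg",
                "read_battery": lambda: "87%"}

class FirstDevice:
    platform = "desktop"
    def __init__(self):
        self.connection = None
    def connect(self, device):
        self.connection = device
    def request_action(self, name):
        actions = self.connection.executable_actions()  # identify actions
        result = actions[name]()                        # remote execution
        return f"[{self.platform}] displaying: {result}"

first = FirstDevice()
first.connect(SecondDevice())
shown = first.request_action("read_battery")
print(shown)
```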