Patent classifications
G06F11/3457
SYSTEM AND METHOD FOR USING MACHINE LEARNING MODELS WITH SENSORS TO INTERPRET AND STIMULATE NEURAL PHYSIOLOGY
In one aspect, a method includes loading, at a local computing device and at a remote computing device, a machine learning model comprising one or more layers; measuring, at the local computing device, parameters for each of the one or more layers; determining, based on the parameters, a first set of the one or more layers of the machine learning model to execute at the local computing device and a second set of the one or more layers of the machine learning model to execute at the remote computing device; receiving, from a sensor, a first output, and subsequently inputting the first output into the first set of the one or more layers of the machine learning model executed by the local computing device; and receiving, from the first set of the one or more layers of the machine learning model, a second output, and subsequently transmitting the second output to the remote computing device.
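The partitioning step above can be sketched as follows. This is an illustrative stand-in, not the patent's method: the `Layer` type, the per-layer FLOP measurement, and the greedy budget rule are all invented assumptions.

```python
# Sketch: partition a model's layers between a local and a remote device
# based on measured per-layer parameters. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    flops: float  # measured per-layer compute cost (the "parameters")

def split_layers(layers, local_budget):
    """Greedily assign early layers to the local device until its compute
    budget is exhausted; all remaining layers run on the remote device."""
    local, remote, used = [], [], 0.0
    for layer in layers:
        if not remote and used + layer.flops <= local_budget:
            local.append(layer)
            used += layer.flops
        else:
            remote.append(layer)
    return local, remote

model = [Layer("conv1", 2.0), Layer("conv2", 3.0), Layer("fc", 4.0)]
local_set, remote_set = split_layers(model, local_budget=5.0)
# sensor output would feed local_set; its output is then sent to the remote device
```

Splitting at a contiguous prefix matches the data flow in the claim: the sensor output enters the first (local) set, and only that set's intermediate output crosses the network.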
SEARCH AND RECOMMENDATION ENGINE ALLOWING RECOMMENDATION-AWARE PLACEMENT OF DATA ASSETS TO MINIMIZE LATENCY
A search engine responds to a user query to find relevant data assets in a federation business data lake (FBDL) system based on the interactions of known users with data assets in the FBDL system. Data assets are optimally placed for minimal latency or minimal maximal load. Data asset recommendations and past data asset access information are input as features to a time-series model for predicting future data access patterns. An expected latency and a load risk are then determined and scored by a weighted mean of these values, and placement optimization is simulated using an optimization method (e.g., a genetic algorithm). Using the scoring and simulation, a data asset placement engine then moves the data assets to minimize latency and/or to minimize maximal load.
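The weighted-mean scoring and placement search can be sketched as below. The demand table, weights, and latency/load proxies are invented for illustration, and an exhaustive search stands in for the genetic-algorithm simulation the abstract mentions.

```python
# Sketch: score candidate placements by a weighted mean of expected latency
# and load risk, then pick the best placement. Data and weights are invented.
from itertools import product

NODES = ["node-a", "node-b"]
# predicted access counts per (asset, node), e.g. from the time-series model
DEMAND = {"asset1": {"node-a": 90, "node-b": 10},
          "asset2": {"node-a": 20, "node-b": 80}}
W_LATENCY, W_LOAD = 0.7, 0.3  # weighted-mean coefficients (assumed)

def score(placement):
    # latency proxy: accesses that must be served from a non-local node
    latency = sum(count for asset, node in placement.items()
                  for peer, count in DEMAND[asset].items() if peer != node)
    # load proxy: maximum number of assets placed on any single node
    load = max(sum(1 for n in placement.values() if n == node) for node in NODES)
    return W_LATENCY * latency + W_LOAD * load

# exhaustive search over placements (stand-in for the genetic algorithm)
best = min((dict(zip(DEMAND, combo)) for combo in product(NODES, repeat=len(DEMAND))),
           key=score)
```

Here the search colocates each asset with the node predicted to access it most, which is exactly the behavior the weighted latency term rewards.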
Adapting cache processing using phase libraries and real time simulators
A method, a computing device, and a non-transitory machine-readable medium for modifying cache settings in an array cache are provided. Cache settings are set in an array cache, such that the array cache caches data in an input/output (I/O) stream based on the cache settings. Multiple cache simulators simulate caching the data from the I/O stream using different cache settings, in parallel with the array cache. The cache settings in the array cache are replaced with the cache settings from one of the cache simulators based on a determination that those settings increase the effectiveness of caching data in the array cache.
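A minimal sketch of the shadow-simulator idea: replay the same I/O stream through simulators with different settings and adopt the most effective one. The LRU model, the capacity-only "setting", and the hit-rate metric are assumptions; the patent does not specify the cache policy or the effectiveness measure.

```python
# Sketch: evaluate candidate cache settings in parallel simulators and
# adopt the setting of the best-performing one. Policy/metric are assumed.
from collections import OrderedDict

def simulate_lru(stream, capacity):
    """Return the hit rate of an LRU cache of the given capacity over the stream."""
    cache, hits = OrderedDict(), 0
    for block in stream:
        if block in cache:
            hits += 1
            cache.move_to_end(block)          # mark as most recently used
        else:
            cache[block] = True
            if len(cache) > capacity:
                cache.popitem(last=False)     # evict least recently used
    return hits / len(stream)

io_stream = [1, 2, 3, 1, 2, 3, 1, 2, 3]
candidate_settings = [1, 2, 3]                # capacities tried "in parallel"
best_capacity = max(candidate_settings, key=lambda c: simulate_lru(io_stream, c))
# the array cache would now be reconfigured with best_capacity
```

The cyclic stream illustrates why simulation matters: capacities 1 and 2 thrash to a zero hit rate, while capacity 3 captures the whole working set.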
Supervised graph-based model for program failure cause prediction using program log files
Described are a computer-implementable method, system, and computer-readable storage medium for a supervised graph-based model for prediction of program failure using program log files. Using a log file from a running program application, a log file graph is created. Node-level labels are added to the log file graph, where the labels include an indication of first failure. The node-level labeled log file graph is processed by a graph neural network (GNN), and predictions of the program's cause of failure, or of a first failure indication in other log file graphs, are provided based on the GNN-processed node-level labeled log file graph.
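The graph-construction and labeling steps can be sketched as below; the GNN itself is omitted. The log-line format, the consecutive-line edge rule, and the "first ERROR line" labeling heuristic are all assumptions for illustration.

```python
# Sketch: build a log-file graph with node-level failure labels, i.e. the
# supervised input the abstract feeds to a GNN (the GNN is not shown).
log_lines = [
    "INFO  service started",
    "INFO  request handled",
    "ERROR connection refused",
    "ERROR retry failed",
]

# nodes: one per log line; edges: consecutive lines, preserving temporal order
nodes = [{"id": i, "text": line, "label": None} for i, line in enumerate(log_lines)]
edges = [(i, i + 1) for i in range(len(log_lines) - 1)]

# node-level labeling: mark the first ERROR line as the "first failure"
first_failure = next(i for i, line in enumerate(log_lines) if line.startswith("ERROR"))
for node in nodes:
    node["label"] = "first_failure" if node["id"] == first_failure else "normal"
```

A GNN trained on such labeled graphs could then predict the first-failure node, and thereby a likely cause of failure, in graphs built from new, unlabeled log files.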
SYSTEM FOR REPRESENTING ATTRIBUTES IN A TRANSPORTATION SYSTEM DIGITAL TWIN
A system for representing attributes in a transportation system digital twin includes a digital twin datastore and one or more processors. The digital twin datastore stores a transportation-system digital twin including real-world-element digital twins embedded therein. The transportation system digital twin corresponds to a transportation system. Each real-world-element digital twin provides a digital twin of a respective real-world element that is disposed within the transportation system. The real-world-element digital twins include mobile-element digital twins. Each mobile-element digital twin provides a digital twin of a respective mobile element within the real-world elements. The one or more processors are configured to, for each mobile element, determine, in response to an occurrence of a triggering condition, a position of the mobile element, and update, in response to determining the position of the mobile element, the mobile-element digital twin corresponding to the mobile element to reflect the position of the mobile element.
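The trigger-driven update loop can be sketched as below. The class names, the datastore shape, and the position-reading callback are invented stand-ins; the abstract only specifies that a triggering condition causes the position to be determined and the twin to be updated.

```python
# Sketch: a digital twin datastore that updates a mobile element's twin
# when a triggering condition fires. All names here are illustrative.
class MobileElementTwin:
    def __init__(self, element_id, position):
        self.element_id = element_id
        self.position = position

class DigitalTwinDatastore:
    def __init__(self):
        self.twins = {}  # mobile-element twins embedded in the system twin

    def register(self, twin):
        self.twins[twin.element_id] = twin

    def on_trigger(self, element_id, read_position):
        """On a triggering condition, determine the element's position
        and update its twin to reflect that position."""
        self.twins[element_id].position = read_position(element_id)

store = DigitalTwinDatastore()
store.register(MobileElementTwin("forklift-7", (0.0, 0.0)))
# triggering condition occurs; position is read from some sensor source
store.on_trigger("forklift-7", read_position=lambda _id: (12.5, 4.0))
```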
ADAPTIVE FEEDBACK TIMING SYSTEM
An adaptive feedback timing system and method includes receiving, by a performance observation system, monitoring data associated with electronically monitoring a lesson by a variable feedback teaching device. Adaptive feedback timing also includes receiving, by the performance observation system, error detection data associated with the variable feedback teaching device automatically detecting an error made by a student during the lesson. After receiving the error detection data, a feedback pattern is automatically selected based on a performance history criterion. Feedback data is then communicated to the variable feedback teaching device for presentation to the student according to the automatically selected feedback pattern.
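The pattern-selection step can be sketched as below. The error-rate window, thresholds, and pattern names are invented; the abstract says only that a feedback pattern is selected automatically based on a performance history criterion.

```python
# Sketch: select a feedback pattern from a student's recent error history.
# Window size, thresholds, and pattern names are assumptions.
def select_feedback_pattern(error_history, window=5):
    """Return a feedback pattern based on the recent error rate."""
    recent = error_history[-window:]
    error_rate = sum(recent) / len(recent)
    if error_rate > 0.6:
        return "immediate"      # struggling: correct errors right away
    if error_rate > 0.2:
        return "end_of_task"    # moderate: defer feedback to a natural break
    return "end_of_lesson"      # strong: minimize interruptions

# 1 = error detected on an attempt, 0 = correct attempt
history = [0, 1, 1, 1, 0, 1]
pattern = select_feedback_pattern(history)
# the selected pattern would govern when feedback data is presented to the student
```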
SYSTEM AND METHOD FOR PROVIDING EMULATION AS A SERVICE FRAMEWORK FOR COMMUNICATION NETWORKS
Emulation has become a critical method for the initial phase of verification and validation processes. However, achieving interoperability between emulation systems and ensuring credibility of results currently require significant efforts. This disclosure relates to a system and method for providing an emulation as a service (EaaS) framework for communication networks. The EaaS framework provides discoverable services that are readily available on-demand and deliver a choice of applications in a flexible and adaptive manner. The EaaS framework is used for discovery, composition, execution, and management of emulation services. The EaaS framework defines user-facing capabilities (front-end) and underlying core functional infrastructure (back-end). The front end provides access to a large variety of emulation capabilities from which the user is able to select the services and track the experiences.
HIGH VOLUME DATA LOGGING FROM HARDWARE
A hardware system for simulating a network physical layer for communication channels. The hardware system includes a plurality of hardware processors configurable to model a network physical layer and communication channels. The hardware system further includes a multi-point data switch configured to be coupled to various data log points associated with the plurality of hardware processors. The hardware system further includes a RAM coupled to the multi-point data switch, where the RAM is configured to store log data provided by the multi-point data switch as software-defined data structures.
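One way to picture "software-defined data structures" in RAM is a packed record layout shared between the logging hardware and the analysis software. The record fields below (tap-point id, timestamp, sample value) and their widths are purely illustrative assumptions.

```python
# Sketch: pack log samples from multiple tap points into a software-defined
# record layout, as they might be laid out in the log RAM. Fields are assumed.
import struct

# record: tap-point id (u16), timestamp (u32), sampled value (u32), little-endian
RECORD_FMT = "<HII"
RECORD_SIZE = struct.calcsize(RECORD_FMT)

def pack_record(tap_id, timestamp, value):
    return struct.pack(RECORD_FMT, tap_id, timestamp, value)

def unpack_records(buffer):
    """Recover the sequence of log records from a raw RAM image."""
    return [struct.unpack_from(RECORD_FMT, buffer, off)
            for off in range(0, len(buffer), RECORD_SIZE)]

ram = b"".join(pack_record(*r) for r in [(1, 100, 0xAB), (2, 101, 0xCD)])
```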
Automatic control loop decision variation
A method includes defining a plurality of variables to modify in a control loop; collecting first data using a first variable of the plurality of variables while executing the control loop; generating a first result based on the collecting first data step; substituting a second variable of the plurality of variables for the first variable; collecting second data using the second variable while executing the control loop; generating a second result based on the collecting second data step; comparing the first result and the second result; and taking an action based on the comparing step.
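The steps above can be sketched as follows. The proportional control loop, the candidate gains, and the "keep the better result" action are invented for illustration; the abstract leaves the loop and the action unspecified.

```python
# Sketch: execute a control loop with one variable, substitute a second,
# and act on the comparison of the two results. Loop and metric are assumed.
def run_control_loop(gain, setpoint=10.0, steps=20):
    """Simple proportional controller; returns the final tracking error."""
    value = 0.0
    for _ in range(steps):
        value += gain * (setpoint - value)   # collect data while executing
    return abs(setpoint - value)             # generate a result

variables = [0.1, 0.5]                       # plurality of variables (gains)
first_result = run_control_loop(variables[0])
second_result = run_control_loop(variables[1])  # after substitution
# take an action based on the comparing step: keep the better-performing gain
chosen_gain = variables[1] if second_result < first_result else variables[0]
```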
Pattern-recognition enabled autonomous configuration optimization for data centers
A model-based approach to determining an optimal configuration for a data center may use an environmental chamber to characterize the performance of various data center configurations at different combinations of temperature and altitude. Telemetry data may be recorded from different configurations as they execute a stress workload at each temperature/altitude combination, and the telemetry data may be used to train a corresponding library of models. When a new data center is being configured, the temperature/altitude of the new data center may be used to select a pre-trained model from a similar temperature/altitude. Performance of the current configuration can be compared to the performance of the model, and if the model performs better, a new configuration based on the model may be used as an optimal configuration for the data center.
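The model-selection step can be sketched as a nearest-neighbor lookup over the characterized temperature/altitude points. The library contents and the distance scaling are invented; the abstract says only that a pre-trained model is selected from a similar temperature/altitude.

```python
# Sketch: pick the pre-trained model characterized at the (temperature,
# altitude) point nearest the new data center. Library entries are invented.
MODEL_LIBRARY = {
    (20.0, 0.0):    "model-sea-level-temperate",
    (35.0, 0.0):    "model-sea-level-hot",
    (20.0, 1500.0): "model-high-altitude",
}

def select_model(temperature, altitude):
    """Return the model trained at the closest temperature/altitude point."""
    def distance(key):
        t, a = key
        # scale so that 100 m of altitude counts roughly like 1 degree C (assumed)
        return (t - temperature) ** 2 + ((a - altitude) / 100.0) ** 2
    return MODEL_LIBRARY[min(MODEL_LIBRARY, key=distance)]

chosen = select_model(temperature=22.0, altitude=1400.0)
# the current configuration would then be compared against this model's performance
```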