Patent classifications
G06F9/544
Aggregating data to form generalized profiles based on archived event data and compatible distributed data files with which to integrate data across multiple data streams
Various embodiments relate generally to data science and data analysis, computer software and systems, to provide a platform to facilitate updating compatible distributed data files, among other things, and, more specifically, to a computing and data platform that implements logic to facilitate correlation of event data via analysis of electronic messages, including executable instructions and content, via a cross-stream data processor application configured to, for example, update or modify one or more compatible distributed data files automatically. In some examples, a method may include activating APIs to receive different data streams via a message throughput data pipe, extracting features from the data using the APIs, identifying event-related data across data sources, correlating the event-related data to form data representing an event, classifying the event-related data into a state classification, determining compatible data at data sources, identifying compatible data, and transmitting integration data to integrate with a data source.
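The correlation and classification steps in the abstract above can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation; all names (`extract_features`, `correlate`, `classify_state`, the `event_id` key) are hypothetical.

```python
from collections import defaultdict

def extract_features(record):
    # Feature extraction: pull only the fields used for correlation.
    return {"event_id": record.get("event_id"),
            "source": record["source"],
            "payload": record.get("payload")}

def correlate(streams):
    """Group feature records from several streams by a shared event_id."""
    events = defaultdict(list)
    for stream in streams:
        for record in stream:
            features = extract_features(record)
            if features["event_id"] is not None:
                events[features["event_id"]].append(features)
    return events

def classify_state(event_records):
    # A trivial state classification: 'corroborated' if the same event
    # appears in more than one stream, otherwise 'unverified'.
    sources = {r["source"] for r in event_records}
    return "corroborated" if len(sources) > 1 else "unverified"

stream_a = [{"source": "A", "event_id": 1, "payload": "x"}]
stream_b = [{"source": "B", "event_id": 1, "payload": "y"}]
events = correlate([stream_a, stream_b])
states = {eid: classify_state(recs) for eid, recs in events.items()}
# states == {1: "corroborated"}
```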
PROGRAMMABLE DEVICE, HIERARCHICAL PARALLEL MACHINES, AND METHODS FOR PROVIDING STATE INFORMATION
Programmable devices, hierarchical parallel machines and methods for providing state information are described. In one such programmable device, programmable elements are provided. The programmable elements are configured to implement one or more finite state machines. The programmable elements are configured to receive an N-digit input and provide an M-digit output as a function of the N-digit input. The M-digit output includes state information from less than all of the programmable elements. Other programmable devices, hierarchical parallel machines and methods are also disclosed.
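The key idea, an output exposing state from only a subset of the elements, can be illustrated with a software sketch. This is not the disclosed hardware; the two toy machines and the `reported` subset are assumptions for illustration.

```python
class FSM:
    """A small finite state machine driven one input digit at a time."""
    def __init__(self, transitions, start=0):
        self.transitions = transitions  # (state, digit) -> next state
        self.state = start
    def step(self, digit):
        self.state = self.transitions.get((self.state, digit), self.state)

# Two toy FSMs over binary digits, run in parallel on the same input.
parity = FSM({(0, 1): 1, (1, 1): 0})              # tracks parity of 1s
saw_one = FSM({(0, 1): 1, (1, 0): 1, (1, 1): 1})  # latches after first 1

elements = [parity, saw_one]
reported = [0]  # only this element's state contributes to the output

for digit in [1, 0, 1, 1]:  # the N-digit input, fed digit by digit
    for fsm in elements:
        fsm.step(digit)

# M-digit output drawn from fewer than all programmable elements.
output = [elements[i].state for i in reported]
# output == [1]  (three 1s seen, so parity is odd)
```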
SYSTEMS AND METHODS FOR CONTENT SHARING THROUGH EXTERNAL SYSTEMS
Disclosed are mechanisms for sharing content through content consumption systems. A sharing module publishes content in a share and metadata associated therewith to a content consumption system external to a managed repository. The share represents a folder or directory in the managed repository. The publication can be made through application programming interface (API) calls handled by a first sharing module API, a repository API, a second sharing module API, and a content consumption system API. These APIs together provide a one-to-one mapping of communications protocols used by the managed repository and the external system. The share in the managed repository and the share published to the content consumption system are synced and any conflict between the two is detected and resolved. The shared content can be repatriated back to the managed repository and the shared version deleted from the content consumption system.
EFFECTIVE AND SCALABLE BUILDING AND PROBING OF HASH TABLES USING MULTIPLE GPUS
Described approaches provide for effectively and scalably using multiple GPUs to build and probe hash tables and materialize results of probes. Random memory accesses by the GPUs to build and/or probe a hash table may be distributed across GPUs and executed concurrently using global location identifiers. A global location identifier may be computed from data of an entry and identify a global location for an insertion and/or probe using the entry. The global location identifier may be used by a GPU to determine whether to perform an insertion or probe using an entry and/or where the insertion or probe is to be performed. To coordinate GPUs in materializing results of probing a hash table, a global offset into a global output buffer may be maintained in memory accessible to each of the GPUs, or the GPUs may compute global offsets using an exclusive sum of the local output buffer sizes.
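The two coordination ideas can be sketched on the CPU. This is a hedged illustration, not the patented method: the partitioning scheme in `global_location` and all names are assumptions.

```python
NUM_GPUS = 4
TABLE_SIZE_PER_GPU = 8

def global_location(key):
    """Global location identifier computed from an entry's data: which
    GPU (partition) owns the hash-table slot, and which slot it is."""
    h = hash(key)
    gpu = h % NUM_GPUS
    slot = (h // NUM_GPUS) % TABLE_SIZE_PER_GPU
    return gpu, slot

def exclusive_sum(sizes):
    """Exclusive prefix sum of local output-buffer sizes, giving each
    GPU its starting offset in the global output buffer."""
    offsets, total = [], 0
    for s in sizes:
        offsets.append(total)
        total += s
    return offsets

local_result_counts = [3, 0, 5, 2]   # probe results materialized per GPU
offsets = exclusive_sum(local_result_counts)
# offsets == [0, 3, 3, 8]: each GPU writes its results starting at its
# own offset, so writes to the shared output buffer never overlap.
```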
Card device having applets and transfer of APDUs to applets
The invention provides a card device having functional applets and an AID applet, as well as a relaying table that forwards commands addressed to the AID applet to the functional applets.
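The relaying idea can be sketched as a dispatch table. This is a hypothetical illustration, not the card implementation: the class names, the use of the APDU instruction byte (`apdu[1]`, following the conventional CLA/INS/P1/P2 layout) as the routing key, and the `0xA4` example are assumptions.

```python
class FunctionalApplet:
    def __init__(self, name):
        self.name = name
    def process(self, apdu):
        return f"{self.name} handled {apdu.hex()}"

class AIDApplet:
    """Receives APDUs and relays them to registered functional applets."""
    def __init__(self):
        self.relaying_table = {}  # instruction byte -> functional applet
    def register(self, ins_byte, applet):
        self.relaying_table[ins_byte] = applet
    def process(self, apdu):
        # Forward based on the APDU's instruction byte.
        applet = self.relaying_table[apdu[1]]
        return applet.process(apdu)

aid = AIDApplet()
aid.register(0xA4, FunctionalApplet("select-handler"))
result = aid.process(bytes([0x00, 0xA4, 0x04, 0x00]))
# The command addressed to the AID applet was handled by "select-handler".
```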
Memory network processor
A multi-processor system with processing elements, interspersed memory, and primary and secondary interconnection networks optimized for high performance and low power dissipation is disclosed. In the secondary network, multiple message routing nodes are arranged in an interspersed fashion with multiple processors. A given message routing node may receive messages from other message routing nodes, and relay the received messages to destination message routing nodes using relative offsets included in the messages. The relative offset may specify a number of message nodes from the message node that originated a message to a destination message node.
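Relative-offset routing can be sketched in one dimension. This is an illustrative model, not the disclosed hardware: nodes sit at integer positions, a message carries an offset relative to its origin, and each hop decrements the remaining offset until it reaches zero.

```python
def route(origin, relative_offset):
    """Yield the node positions a message visits, origin excluded."""
    position, remaining = origin, relative_offset
    step = 1 if remaining > 0 else -1
    while remaining != 0:
        position += step
        remaining -= step
        yield position

# A message originating at node 2 with relative offset +3 traverses
# nodes 3 and 4 and arrives at node 5.
hops = list(route(2, 3))
# hops == [3, 4, 5]
```

Encoding the destination as an offset from the originating node, rather than an absolute address, keeps each message's routing field small and independent of the network's global size.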
AUTOMATED GENERATION OF AGENT CONFIGURATIONS FOR REINFORCEMENT LEARNING
This document relates to reinforcement learning. One example includes a system having a processor and a storage medium. The storage medium can store instructions which, when executed by the processor, cause the system to identify a selected agent configuration having a corresponding selected reward function based at least on predicted performance of a plurality of alternative agent configurations for an evaluation metric. The instructions can also cause the processor to operate the agent in the selected agent configuration. The selected agent configuration can cause the agent to adapt internal parameters of the agent according to the selected reward function.
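The selection step can be sketched simply. This is a minimal sketch under assumptions: the configuration names, predicted-metric values, and the use of a plain argmax over predictions are all illustrative, not taken from the patent.

```python
# Each alternative agent configuration pairs a reward function with a
# predicted value of the evaluation metric.
alternatives = [
    {"name": "dense-reward",  "predicted_metric": 0.71},
    {"name": "sparse-reward", "predicted_metric": 0.64},
    {"name": "shaped-reward", "predicted_metric": 0.83},
]

# Select the configuration with the best predicted performance; the
# agent would then adapt its internal parameters according to that
# configuration's reward function.
selected = max(alternatives, key=lambda cfg: cfg["predicted_metric"])
# selected["name"] == "shaped-reward"
```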
CONTROL OVER APPLICATION PLUGINS
A system and method for implementing a plugin control mechanism. A disclosed method includes: launching an application; injecting additional functionality into the application; and utilizing the additional functionality to: detect a file processing call; evaluate the file processing call against to a set of rules to determine whether the file processing call involves execution of an extension file; and call an operating system (OS) application control function in response to determining the file processing call involves execution of the extension file, wherein the OS application control function is configured to conditionally prevent execution of the extension file.
Method and tensor traversal engine for strided memory access during execution of neural networks
A tensor traversal engine in a processor system comprising a source memory component and a destination memory component is disclosed. The tensor traversal engine comprises: a control signal register storing a control signal for a strided data transfer operation from the source memory component to the destination memory component, the control signal comprising an initial source address, an initial destination address, a first source stride length in a first dimension, and a first source stride count in the first dimension; a source address register communicatively coupled to the control signal register; a destination address register communicatively coupled to the control signal register; a first source stride counter communicatively coupled to the control signal register; and control logic communicatively coupled to the control signal register, the source address register, and the first source stride counter.
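The control fields above describe a one-dimensional strided transfer: starting at the initial source address, copy one element every stride-length positions, stride-count times, to consecutive destination addresses. A software sketch of that behavior (names are illustrative, not the engine's interface):

```python
def strided_copy(src, dst, src_addr, dst_addr, stride, count):
    """Copy `count` elements from src, stepping by `stride`, into
    consecutive locations of dst starting at dst_addr."""
    for i in range(count):
        dst[dst_addr + i] = src[src_addr + i * stride]

source = list(range(16))  # source memory component
dest = [0] * 4            # destination memory component

# Gather every 4th element starting at address 1.
strided_copy(source, dest, src_addr=1, dst_addr=0, stride=4, count=4)
# dest == [1, 5, 9, 13]
```

This access pattern is what lets a traversal engine walk one dimension of a tensor (e.g. a column of a row-major matrix) without the processor issuing per-element addresses.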
Hashing bucket identifiers to identify search nodes for efficient query execution
Systems and methods are disclosed for processing and executing queries in a data intake and query system. The data intake and query system receives a query identifying a set of data to be processed and a manner of processing the set of data. The data intake and query system identifies buckets that are to be searched. The data intake and query system performs a hash on bucket identifiers of the identified buckets to identify search nodes to search the buckets.
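The bucket-to-node assignment can be sketched as follows. This is an illustrative sketch, not the system's actual hashing scheme: the use of SHA-256, the modulo mapping, and all names are assumptions.

```python
import hashlib

def node_for_bucket(bucket_id, search_nodes):
    """Pick a search node by hashing the bucket identifier."""
    digest = hashlib.sha256(bucket_id.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(search_nodes)
    return search_nodes[index]

nodes = ["node-a", "node-b", "node-c"]
buckets = ["bucket-2021-01", "bucket-2021-02", "bucket-2021-03"]
assignment = {b: node_for_bucket(b, nodes) for b in buckets}
# The same bucket identifier always hashes to the same node, so the
# work of searching the identified buckets is spread deterministically
# across the search nodes, and repeated queries over the same buckets
# land on the same nodes.
```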