Patent classifications
G06F9/544
Multithreaded lossy queue protocol
Methods and systems for managing a circular queue, or ring buffer, are disclosed. One method includes storing data from a producer into the ring buffer, and receiving a data read request from a consumer among a plurality of consumers subscribed to read data from the ring buffer. After data is obtained from a location in the ring buffer in response to the data read request, it is determined whether the location has been overrun by the producer. If the location has been overrun by the producer, the data is discarded by the consumer; otherwise, the data is consumed. Depending on the outcome, a miss counter or a read counter may be incremented.
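As a rough illustration of this protocol, the sketch below models a single-producer lossy ring buffer in which each slot carries a sequence number, so a consumer can detect after a read that the producer lapped the slot and must discard the data. The class names, resynchronization step, and counter fields are illustrative choices, not taken from the patent.

```python
import threading

class LossyRingBuffer:
    """Single-producer ring buffer; slow consumers detect overruns and drop data."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = [None] * capacity   # each slot holds a (sequence, payload) pair
        self.write_seq = 0               # total items ever written by the producer
        self.lock = threading.Lock()

    def put(self, item):
        # The producer never blocks: when full, it overwrites the oldest slot.
        with self.lock:
            self.slots[self.write_seq % self.capacity] = (self.write_seq, item)
            self.write_seq += 1

class Consumer:
    def __init__(self, ring):
        self.ring = ring
        self.read_seq = 0
        self.read_count = 0   # successful reads
        self.miss_count = 0   # reads discarded because the location was overrun

    def poll(self):
        with self.ring.lock:
            if self.read_seq >= self.ring.write_seq:
                return None                      # nothing new to read
            seq, item = self.ring.slots[self.read_seq % self.ring.capacity]
            if seq != self.read_seq:
                # The producer lapped this slot: the location was overrun,
                # so the consumer discards the data and resynchronizes to
                # the oldest entry that is still valid.
                self.miss_count += 1
                self.read_seq = self.ring.write_seq - self.ring.capacity
                return None
            self.read_count += 1
            self.read_seq += 1
            return item
```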
PRESCRIPTIVE ANALYTICS-BASED PERFORMANCE-CENTRIC DYNAMIC SERVERLESS SIZING
A multi-layer serverless sizing stack may determine a compute sizing correction for a serverless function. The serverless sizing stack may analyze historical data to determine a base compute allocation and a compute buffer range. The serverless sizing stack may traverse the compute buffer range in an iterative analysis to determine a compute size for the serverless function that supports efficient computational operation when the serverless function is instantiated.
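A minimal sketch of such an iterative traversal: a base allocation is taken from a high percentile of historical memory usage, a buffer range extends above it, and the range is walked in fixed steps until an estimated latency target is met. The percentile, step size, thresholds, and the toy latency model are all invented for illustration.

```python
import statistics

def size_serverless_function(history_mb, step_mb=128, latency_target_ms=200):
    """history_mb: observed peak memory (MB) per past invocation."""
    # Base compute allocation: a high percentile of historical usage.
    base = sorted(history_mb)[int(0.95 * (len(history_mb) - 1))]
    # Compute buffer range: from the base up to base + 3 sigma of the history.
    sigma = statistics.pstdev(history_mb)
    upper = base + 3 * sigma

    def estimated_latency_ms(alloc_mb):
        # Toy model: more memory (and the CPU tied to it) lowers latency.
        return 50 + 40000 / alloc_mb

    # Iteratively traverse the buffer range and stop at the first (cheapest)
    # allocation that satisfies the latency target.
    alloc = base
    while alloc <= upper:
        if estimated_latency_ms(alloc) <= latency_target_ms:
            return alloc
        alloc += step_mb
    return upper  # fall back to the top of the buffer range

print(size_serverless_function([180, 210, 195, 230, 205, 220] * 10))
```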
NEURAL NETWORK PROCESSING ASSIST INSTRUCTION
A first processor processes an instruction configured to perform a plurality of functions. The plurality of functions includes one or more functions to operate on one or more tensors. A determination is made of a function of the plurality of functions to be performed. The first processor provides to a second processor information related to the function. The second processor is to perform the function. The first processor and the second processor share memory providing memory coherence.
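One way to picture this arrangement in software: a "general" processor decodes a multi-function instruction, determines which function was requested, and passes a descriptor through shared memory to an "accelerator" that carries out the tensor operation. The function codes, descriptor layout, and the locally handled query function are assumptions made for the sketch.

```python
# Shared, coherent memory modeled as a plain dict both "processors" can see.
shared_memory = {}

# Function codes the multi-function instruction can request (illustrative).
QUERY, MATMUL, RELU = 0, 1, 2

def accelerator_execute(descriptor):
    """Second processor: performs the tensor function named in the descriptor."""
    fn = descriptor["function"]
    if fn == MATMUL:
        a, b = shared_memory[descriptor["a"]], shared_memory[descriptor["b"]]
        rows, inner, cols = len(a), len(b), len(b[0])
        shared_memory[descriptor["out"]] = [
            [sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)
        ]
    elif fn == RELU:
        t = shared_memory[descriptor["a"]]
        shared_memory[descriptor["out"]] = [[max(0, x) for x in row] for row in t]

def general_processor(instruction):
    """First processor: decodes the instruction, determines the function,
    and provides the second processor the information related to it."""
    function, operands = instruction
    if function == QUERY:
        return {"available": [MATMUL, RELU]}   # handled locally, no offload
    accelerator_execute({"function": function, **operands})

shared_memory["x"] = [[1, -2], [3, 4]]
shared_memory["w"] = [[1, 0], [0, 1]]
general_processor((MATMUL, {"a": "x", "b": "w", "out": "y"}))
general_processor((RELU, {"a": "y", "out": "z"}))
print(shared_memory["z"])   # [[1, 0], [3, 4]]
```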
TECHNIQUES TO ENABLE STATEFUL DECOMPRESSION ON HARDWARE DECOMPRESSION ACCELERATION ENGINES
A hardware decompression acceleration engine including: an input buffer for receiving to-be-decompressed data from a software layer of a host computer; a decompression processing unit coupled to the input buffer for decompressing the to-be-decompressed data, the decompression processing unit further receiving first and second flags from the software layer of the host computer, wherein the first flag is indicative of a location of the to-be-decompressed data in a to-be-decompressed data block and the second flag is indicative of a presence of an intermediate state; and an output buffer for storing decompressed data from the decompression processing unit.
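One way to picture the two flags is as a chunked streaming interface: the first flag tells the engine where the chunk falls within the data block, and the second tells it whether to restore saved intermediate state before decompressing. The sketch below emulates such an engine with zlib's streaming decompressor; the flag values, method names, and engine class are illustrative.

```python
import zlib

FIRST, MIDDLE, LAST = 0, 1, 2   # illustrative values for the position flag

class DecompressionEngine:
    """Models a hardware engine that suspends/resumes across data chunks."""

    def __init__(self):
        self.saved_state = None   # intermediate state held between submissions

    def submit(self, chunk, position_flag, has_state_flag):
        # The input buffer receives the to-be-decompressed chunk; the flags
        # tell the processing unit how to treat it.
        if has_state_flag:
            d = self.saved_state               # resume from intermediate state
        else:
            d = zlib.decompressobj()           # fresh stream (position == FIRST)
        out = d.decompress(chunk)              # goes to the output buffer
        if position_flag == LAST:
            out += d.flush()
            self.saved_state = None            # block complete, drop the state
        else:
            self.saved_state = d               # keep state for the next chunk
        return out

engine = DecompressionEngine()
blob = zlib.compress(b"stateful decompression across chunk boundaries" * 20)
mid = len(blob) // 2
part1 = engine.submit(blob[:mid], FIRST, has_state_flag=False)
part2 = engine.submit(blob[mid:], LAST, has_state_flag=True)
assert part1 + part2 == b"stateful decompression across chunk boundaries" * 20
```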
FLEXIBLE SHARING IN SHARED COMPUTER ENVIRONMENTS
A sharable resource of a first user's environment is identified. The sharable resource is configured as sharable in a shared computer environment. A matching resource that is sufficiently similar to the sharable resource is located. The matching resource is used by pre-existing users of the shared computer environment. Agreement from the pre-existing users for the first user to access the matching resource is obtained. The first user is then provided access to the matching resource.
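A compact sketch of the matching-and-consent flow, assuming resources are described by attribute dictionaries, "sufficiently similar" means a simple attribute-overlap score above a threshold, and consent is gathered through a caller-supplied callback; all of those modeling choices are assumptions, not from the disclosure.

```python
def similarity(a, b):
    """Jaccard-style similarity over attribute dicts (illustrative metric)."""
    keys = set(a) | set(b)
    return sum(1 for k in keys if a.get(k) == b.get(k)) / len(keys)

def request_sharing(sharable, shared_env, get_consent, threshold=0.8):
    """sharable: attribute dict of the first user's sharable resource.
    shared_env: {resource_name: (attribute_dict, [pre_existing_users])}."""
    for name, (attrs, users) in shared_env.items():
        if similarity(sharable, attrs) < threshold:
            continue  # not sufficiently similar to be a matching resource
        # Obtain agreement from every pre-existing user of the match.
        if all(get_consent(user, name) for user in users):
            return name               # agreement obtained: grant access
    return None                       # no match, or a pre-existing user declined

env = {"db-cluster": ({"type": "postgres", "version": "15", "tier": "m"},
                      ["alice", "bob"])}
match = request_sharing({"type": "postgres", "version": "15", "tier": "m"},
                        env, get_consent=lambda user, res: True)
print(match)   # db-cluster
```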
COOPERATIVE INPUT/OUTPUT OF ADDRESS MODES FOR INTEROPERATING PROGRAMS
Aspects of the invention include creating a first file control block in a primary runtime environment with a first addressing mode and a second file control block in a secondary runtime environment with a second addressing mode, where both the first file control block and the second file control block describe a status of a first file of a caller program in the primary runtime environment. The parameters of the first file of the caller program in the primary runtime environment are passed to a target callee program in the secondary runtime environment. An anchor is added in the first file control block as a link to the second file control block. The first file control block and the second file control block are synchronized with updates to the first file in the primary runtime environment and with the passed parameters of the first file in the secondary runtime environment.
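A sketch of the linked control-block arrangement, assuming the anchor is simply a reference from the primary block to the secondary one and that synchronization merges status fields in both directions; the field names and addressing-mode values are invented for illustration.

```python
class FileControlBlock:
    def __init__(self, addressing_mode):
        self.addressing_mode = addressing_mode   # e.g. 31-bit vs. 64-bit
        self.status = {}                         # open state, position, size...
        self.anchor = None                       # link to the paired block

def open_across_runtimes(file_params):
    primary = FileControlBlock(addressing_mode=31)    # caller's runtime
    secondary = FileControlBlock(addressing_mode=64)  # callee's runtime
    # Pass the caller's file parameters to the target callee program.
    primary.status.update(file_params)
    secondary.status.update(file_params)
    # Add an anchor in the first block as the link to the second.
    primary.anchor = secondary
    return primary, secondary

def synchronize(primary):
    """Propagate updates so both blocks describe the same file status."""
    secondary = primary.anchor
    merged = {**primary.status, **secondary.status}
    primary.status = dict(merged)
    secondary.status = dict(merged)

primary, secondary = open_across_runtimes({"name": "PAYROLL.DAT", "open": True})
secondary.status["position"] = 4096       # the callee advanced the file
synchronize(primary)
print(primary.status["position"])          # 4096: both views now agree
```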
SYSTEMS AND METHODS FOR REDUCING CONGESTION ON NETWORK-ON-CHIP
Systems or methods of the present disclosure may provide a programmable logic device including a network-on-chip (NoC) to facilitate data transfer between one or more main intellectual property components (main IP) and one or more secondary intellectual property components (secondary IP). To reduce or prevent excessive congestion on the NoC, the NoC may include one or more traffic throttlers that may receive feedback from a data buffer, a main bridge, or both, and adjust the data injection rate based on the feedback. Additionally, the NoC may include a data mapper to enable data transfer to be remapped from a first destination to a second destination if congestion is detected at the first destination.
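The feedback loop might look like the sketch below, where buffer-occupancy feedback lowers or raises the injection rate and a data mapper redirects traffic when a destination reports congestion. The thresholds and the AIMD-style adjustment are illustrative choices, not taken from the disclosure.

```python
class TrafficThrottler:
    """Adjusts a source's injection rate from buffer-occupancy feedback."""

    def __init__(self, max_rate=1.0):
        self.rate = max_rate
        self.max_rate = max_rate

    def on_feedback(self, occupancy):
        # occupancy in [0, 1], reported by a data buffer or the main bridge.
        if occupancy > 0.75:
            self.rate *= 0.5                                  # back off quickly
        else:
            self.rate = min(self.max_rate, self.rate + 0.05)  # recover slowly

class DataMapper:
    """Remaps data transfer away from a congested first destination."""

    def __init__(self, routes):
        self.routes = dict(routes)   # logical destination -> physical endpoint

    def on_congestion(self, dest, alternate):
        self.routes[dest] = alternate

throttler = TrafficThrottler()
mapper = DataMapper({"dram0": "noc-port-2"})
for occupancy in (0.2, 0.8, 0.9, 0.3):
    throttler.on_feedback(occupancy)
mapper.on_congestion("dram0", alternate="noc-port-5")
print(round(throttler.rate, 3), mapper.routes["dram0"])   # 0.3 noc-port-5
```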
TIME AWARE CACHING
The present disclosure relates to time aware caching. One method includes: receiving an API request for data from a database, wherein the request defines a time window associated with the data; creating a first query and a second query based on the request, wherein the first query corresponds to a first chunk of the time window and the second query corresponds to a second chunk of the time window; hashing a first statement associated with the first query to produce a first key and hashing a second statement associated with the second query to produce a second key; retrieving a first portion of the data, corresponding to the first chunk of the time window, from cache responsive to a determination that the first key is in the cache; and retrieving a second portion of the data, corresponding to the second chunk of the time window, from the database responsive to a determination that the second key is not in the cache.
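The scheme lends itself to a compact sketch: the time window is split into chunks, each chunk's query statement is hashed into a cache key, and each portion is served from cache on a key hit or from the database on a miss. The table schema, chunk size, and hash choice below are illustrative assumptions.

```python
import hashlib

cache = {}

def fetch_time_window(table, start, end, chunk, run_query):
    """Split [start, end) into chunks; hash each chunk's statement into a
    cache key; serve the chunk from cache on a hit, from the database on a miss."""
    results = []
    t = start
    while t < end:
        chunk_end = min(t + chunk, end)
        statement = f"SELECT * FROM {table} WHERE ts >= {t} AND ts < {chunk_end}"
        key = hashlib.sha256(statement.encode()).hexdigest()
        if key not in cache:                     # miss: go to the database
            cache[key] = run_query(t, chunk_end)
        results.extend(cache[key])               # hit, or freshly cached portion
        t = chunk_end
    return results

# Hypothetical database: one row per hourly timestamp.
db = {t: f"row@{t}" for t in range(24)}
run = lambda a, b: [db[t] for t in range(a, b)]

print(len(fetch_time_window("metrics", 0, 12, chunk=6, run_query=run)))  # 12 rows
print(len(cache))   # 2: one key per chunk of the time window
```

Because the key is derived from the statement rather than the raw request, a later request whose window overlaps an already-fetched chunk reuses that chunk's cached portion and only queries the database for the uncovered chunks.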
ASYNCHRONOUS COMPLETION NOTIFICATION IN A MULTI-CORE DATA PROCESSING SYSTEM
Asynchronous completion notification is provided in a data processing system including one or more cores each executing one or more threads. A hardware unit of the data processing system receives and enqueues a request for processing and a source tag indicating at least a thread and core that issued the request. The hardware unit maintains a pointer to a completion area in a memory space. The completion area includes a completion granule for the hardware unit and thread. The hardware unit performs the processing requested by the request and computes an address of the completion granule based on the pointer and the source tag. The hardware unit then provides completion notification for the request by updating the completion granule with a value indicating a completion status.
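A software model of the notification path, assuming a fixed granule size and a completion area laid out by (core, thread): the hardware unit is played by a worker thread, the source tag indexes the granule, and the completion status is the computed result. All names, sizes, and the layout are illustrative.

```python
import queue, threading

GRANULE_SIZE = 8
THREADS_PER_CORE = 4

# Completion area: one granule per (core, thread), modeled as a bytearray.
completion_area = bytearray(GRANULE_SIZE * THREADS_PER_CORE * 2)  # 2 cores
completion_base = 0   # the pointer the hardware unit maintains

def granule_address(base, core, thread):
    # Address computed from the area pointer and the request's source tag.
    return base + GRANULE_SIZE * (core * THREADS_PER_CORE + thread)

def hardware_unit(requests):
    while True:
        item = requests.get()
        if item is None:
            return
        payload, (core, thread) = item           # request + its source tag
        result = sum(payload)                    # the requested processing
        addr = granule_address(completion_base, core, thread)
        # Provide completion notification by updating the granule with a
        # value indicating the completion status.
        completion_area[addr:addr + GRANULE_SIZE] = result.to_bytes(8, "little")

requests = queue.Queue()
worker = threading.Thread(target=hardware_unit, args=(requests,))
worker.start()
requests.put(([1, 2, 3], (1, 2)))     # request issued by core 1, thread 2
requests.put(None)                    # shut the model unit down
worker.join()
addr = granule_address(completion_base, 1, 2)
print(int.from_bytes(completion_area[addr:addr + GRANULE_SIZE], "little"))  # 6
```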
Memory protection circuit and memory protection method
To enable quick data transfer between a plurality of virtual machines via a common memory, a memory protection circuit and a memory protection method are provided. According to an embodiment, the memory protection circuit includes: a first ID storing register that stores an ID of any of a plurality of virtual machines managed by a hypervisor; an access determination circuit that permits the virtual machine having the ID stored in the first ID storing register to access a memory; a second ID storing register that stores an ID of any of the virtual machines; and an ID update control circuit that permits the virtual machine having the ID stored in the second ID storing register to rewrite the ID stored in the first ID storing register.
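The two-register gate can be modeled in a few lines: one register names the virtual machine allowed to touch the common memory, the other names the virtual machine allowed to hand that right to someone else, so ownership of a shared buffer can move between VMs without a hypervisor round-trip. The class and method names are invented for the sketch.

```python
class MemoryProtectionCircuit:
    """Two ID registers gate memory access and the right to hand access over."""

    def __init__(self, access_vm_id, update_vm_id):
        self.first_id_register = access_vm_id    # VM permitted to access memory
        self.second_id_register = update_vm_id   # VM permitted to rewrite reg 1
        self.memory = bytearray(64)              # the common memory region

    def access(self, vm_id, offset, data=None):
        # Access determination circuit: only the ID in register 1 may proceed.
        if vm_id != self.first_id_register:
            raise PermissionError(f"VM {vm_id} denied access")
        if data is None:
            return self.memory[offset]
        self.memory[offset] = data

    def update_access_id(self, vm_id, new_id):
        # ID update control circuit: only the ID in register 2 may rewrite
        # register 1, handing memory access to another virtual machine.
        if vm_id != self.second_id_register:
            raise PermissionError(f"VM {vm_id} may not transfer access")
        self.first_id_register = new_id

mpu = MemoryProtectionCircuit(access_vm_id=1, update_vm_id=1)
mpu.access(1, 0, data=0x42)          # VM 1 writes into the common memory
mpu.update_access_id(1, new_id=2)    # VM 1 hands access over to VM 2
print(hex(mpu.access(2, 0)))         # VM 2 reads the data directly
```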