Patent classifications
G06F9/544
Methods and apparatuses for generating redo records for cloud-based database
Methods and apparatuses in a cloud-based database management system are described. Data in a database are stored in a plurality of pages in a page store of the database. A plurality of redo log records are received to be applied to the database. The redo log records within a predefined boundary are parsed to determine, for each given redo log record, a corresponding page to which the given redo log record is to be applied. The redo log records are reordered by corresponding page. The reordered redo log records are stored to be applied to the page store of the database.
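As an illustrative sketch of the reordering step described above (record layout and names are hypothetical, not taken from the patent), records within one parsing boundary can be grouped by target page while preserving their original order within each page:

```python
from collections import defaultdict

def reorder_redo_records(records):
    """Group redo log records within one parsing boundary by target page.

    Each record is a (page_id, payload) tuple; "parsing" here is simply
    reading the page_id field. Records for the same page keep their
    original arrival order so replay remains correct.
    """
    by_page = defaultdict(list)
    for page_id, payload in records:
        by_page[page_id].append((page_id, payload))
    # Emit records page by page, ready to be applied to the page store.
    reordered = []
    for page_id in sorted(by_page):
        reordered.extend(by_page[page_id])
    return reordered
```

Grouping by page means each page in the page store can be updated with one contiguous batch of its redo records rather than scattered individual applications.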
Register sharing mechanism to equally allocate disabled thread registers to active threads
An apparatus is disclosed. The apparatus includes one or more processors comprising: register sharing circuitry to receive meta-information indicating a number of threads that are to be disabled and to provide an indication that an associated thread is disabled; a plurality of General Purpose Register Files (GRFs), wherein one or more of the plurality of GRFs is associated with one of a plurality of threads; and a plurality of multiplexers coupled to the one or more GRFs to receive the indication from the register sharing circuitry and disable thread access to an associated GRF based on an indication that a thread is to be disabled.
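The allocation policy named in the title ("equally allocate disabled thread registers to active threads") can be sketched as follows; the function and its arguments are illustrative assumptions, not the patent's interface:

```python
def share_registers(num_threads, disabled, regs_per_thread):
    """Split the GRF space of disabled threads equally among active threads.

    Returns a dict mapping each active thread id to its register budget:
    its own GRF plus an equal share of the freed GRFs. Any remainder
    registers are left unallocated in this simple equal-split model.
    """
    active = [t for t in range(num_threads) if t not in disabled]
    freed = len(disabled) * regs_per_thread
    extra = freed // len(active) if active else 0
    return {t: regs_per_thread + extra for t in active}
```

For example, disabling 2 of 8 threads with 128 registers each frees 256 registers, giving each of the 6 remaining threads 42 extra registers.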
Heuristics for selecting subsegments for entry in and entry out operations in an error cache system with coarse and fine grain segments
A memory device comprises a memory bank comprising a plurality of addressable memory cells, wherein the memory bank is divided into a plurality of segments. Further, the device comprises a cache memory operable for storing a second plurality of data words, wherein each data word of the second plurality of data words is either awaiting write verification associated with the memory bank or is to be re-written into the memory bank. The cache memory is divided into a plurality of primary segments, wherein each primary segment of the cache memory is direct mapped to a corresponding segment of the plurality of segments, wherein each primary segment is sub-divided into a plurality of secondary segments, and wherein each of the plurality of secondary segments comprises at least one counter for tracking a number of entries stored therein.
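A toy model of the structure described above may clarify the direct mapping and per-subsegment counters; the class, its parameters, and the "emptiest subsegment" entry-in heuristic are illustrative assumptions:

```python
class SegmentedErrorCache:
    """Toy model of a cache split into primary segments, each sub-divided
    into secondary segments that count their own entries.

    Each memory-bank segment direct-maps to the primary segment of the
    same index; an entry-in heuristic then picks a secondary segment.
    """
    def __init__(self, num_primary, num_secondary, capacity):
        self.counters = [[0] * num_secondary for _ in range(num_primary)]
        self.num_primary = num_primary
        self.capacity = capacity

    def select_subsegment(self, bank_segment):
        # Direct map: bank segment -> primary segment of the same index.
        counts = self.counters[bank_segment % self.num_primary]
        # Entry-in heuristic: choose the emptiest secondary segment.
        return min(range(len(counts)), key=lambda s: counts[s])

    def insert(self, bank_segment):
        s = self.select_subsegment(bank_segment)
        counts = self.counters[bank_segment % self.num_primary]
        if counts[s] < self.capacity:
            counts[s] += 1
            return s
        return None  # all subsegments of this primary segment are full
```

The per-subsegment counters are what make a heuristic like "emptiest first" cheap to evaluate on every entry-in or entry-out operation.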
Memory pipeline control in a hierarchical memory system
In described examples, a processor system includes a processor core generating memory transactions, a lower level cache memory with a lower memory controller, and a higher level cache memory with a higher memory controller having a memory pipeline. The higher memory controller is connected to the lower memory controller by a bypass path that skips the memory pipeline. The higher memory controller: determines whether a memory transaction is a bypass write, which is a memory write request indicated not to result in a corresponding write being directed to the higher level cache memory; if the memory transaction is determined to be a bypass write, determines whether a memory transaction that prevents passing is in the memory pipeline; and if no transaction that prevents passing is determined to be in the memory pipeline, sends the memory transaction to the lower memory controller using the bypass path.
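The routing decision above reduces to a small predicate. In this sketch (field names are hypothetical, and "prevents passing" is modeled simply as an in-flight transaction on the same address):

```python
def route_write(transaction, pipeline):
    """Decide whether a bypass write may skip the memory pipeline.

    A bypass write goes straight to the lower memory controller only if
    nothing currently in the pipeline prevents passing; here that is
    modeled as an older transaction touching the same address.
    """
    if not transaction.get("bypass_write"):
        return "pipeline"
    blocked = any(t["addr"] == transaction["addr"] for t in pipeline)
    return "pipeline" if blocked else "bypass_path"
```

Letting such writes overtake the pipeline avoids stalling them behind unrelated higher-level cache traffic they would never touch.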
Telematics system for identifying manufacturer-specific controller-area network data
Methods and systems for identifying manufacturer-specific controller-area network (CAN) data for a vehicle type are provided. Manufacturer-specific CAN data may be identified by processing defined CAN data, known to have a correlation relationship with the target data, together with undefined manufacturer-specific CAN data to determine whether a correlation relationship exists therebetween. Also provided are methods and systems for identifying and automatically collecting manufacturer-specific CAN data for a vehicle type.
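One plausible way to realize the correlation test is to compare a known, defined CAN signal against each undefined candidate and keep the best match above a threshold. Everything below (function names, the Pearson measure, the 0.9 threshold) is an illustrative assumption:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def identify_candidate(defined_signal, undefined_signals, threshold=0.9):
    """Return the id of the undefined CAN signal best correlated with the
    known signal, if its correlation magnitude clears the threshold."""
    best_id, best_r = None, 0.0
    for sig_id, samples in undefined_signals.items():
        r = abs(pearson(defined_signal, samples))
        if r > best_r:
            best_id, best_r = sig_id, r
    return best_id if best_r >= threshold else None
```

A candidate that tracks the defined signal (e.g. a proprietary wheel-speed message tracking a standard speed signal) would stand out against uncorrelated traffic.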
Technologies for providing shared memory for accelerator sleds
Technologies for providing shared memory for accelerator sleds include an accelerator sled to receive, with a memory controller, a memory access request from an accelerator device to access a region of memory. The request is to identify the region of memory with a logical address. Additionally, the accelerator sled is to determine, from a map of logical addresses and associated physical addresses, the physical address associated with the region of memory. In addition, the accelerator sled is to route the memory access request to a memory device associated with the determined physical address.
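The translate-and-route step might be modeled as below; the map layout (logical base to device id and physical base, with fixed-size regions) is an illustrative assumption rather than the sled's actual format:

```python
class SledMemoryController:
    """Sketch of a sled memory controller that maps the logical address
    in a request to a physical address and routes it to the owning
    memory device."""
    def __init__(self, address_map, region_size):
        # address_map: logical region base -> (device_id, physical base)
        self.address_map = address_map
        self.region_size = region_size

    def route(self, logical_addr):
        base = (logical_addr // self.region_size) * self.region_size
        device_id, phys_base = self.address_map[base]
        # Preserve the offset within the region when translating.
        return device_id, phys_base + (logical_addr - base)
```

The single logical address space is what lets multiple accelerator devices share memory spread across different physical devices on the sled.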
DATA TRANSMISSION METHOD AND APPARATUS
A data transmission method and apparatus are provided. The data transmission method is applied to a computer system including at least two coprocessors, for example, including a first coprocessor and a second coprocessor. A shared memory is deployed between the first coprocessor and the second coprocessor, and is configured to store data generated when subtasks are separately executed. Further, the shared memory stores a storage address of data generated when a subtask is executed, and a mapping relationship between each subtask and a coprocessor that executes the subtask. Therefore, a storage address of data to be read by a coprocessor may be found based on the mapping relationship, and the data may be read directly from the shared memory without being copied by using a system bus. This improves efficiency of data transmission between the coprocessors.
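The scheme amounts to a shared region plus a subtask-to-location table. This sketch models it with dictionaries (names and the addressing scheme are illustrative, not the apparatus's actual layout):

```python
class SharedMemory:
    """Model of the described scheme: a shared region holds subtask
    outputs, and a table maps each subtask to the coprocessor that ran
    it and the address of its output, so a consumer reads results in
    place instead of copying them over the system bus."""
    def __init__(self):
        self.region = {}        # address -> data
        self.mapping = {}       # subtask -> (coprocessor, address)
        self._next_addr = 0

    def write_result(self, subtask, coprocessor, data):
        addr = self._next_addr
        self._next_addr += len(data)
        self.region[addr] = data
        self.mapping[subtask] = (coprocessor, addr)
        return addr

    def read_result(self, subtask):
        _, addr = self.mapping[subtask]
        return self.region[addr]   # direct read, no bus copy
```

The second coprocessor looks up the first coprocessor's subtask in the mapping and reads the result at the recorded address, with no intermediate copy.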
AGING MITIGATION
Aspects of the present disclosure control aging of a signal path in an idle mode to mitigate aging. In one example, an input of the signal path is alternately parked low and high over multiple idle periods to balance the aging of devices (e.g., transistors) in the signal path. In another example, a clock signal (e.g., a clock signal with a low frequency) is input to the signal path during idle periods to balance the aging of devices (e.g., transistors) in the signal path. In another example, the input of the signal path is parked high or low during each idle period based on an aging pattern.
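The parking policy in the first and third examples can be written as a simple per-idle-period decision; the function and the encoding (0 = park low, 1 = park high) are illustrative assumptions:

```python
def park_state(idle_index, aging_pattern=None):
    """Choose the park level for a signal-path input in a given idle
    period. With no aging pattern supplied, alternate low/high across
    successive idle periods to balance stress on the path's devices;
    otherwise follow the supplied pattern.
    """
    if aging_pattern is not None:
        return aging_pattern[idle_index % len(aging_pattern)]
    return idle_index % 2  # 0 = park low, 1 = park high
```

Alternating roughly halves the time any one device spends under a single stress polarity, which is the balancing effect the disclosure targets.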
OPTIMIZATION OF MEMORY USE FOR EFFICIENT NEURAL NETWORK EXECUTION
Implementations disclosed describe methods and systems to perform the methods of optimizing a size of memory used for accumulation of neural node outputs and for supporting multiple computational paths in neural networks. In one example, a size of memory used to perform neural layer computations is reduced by performing nodal computations in multiple batches, followed by rescaling and accumulation of nodal outputs. In another example, execution of parallel branches of neural node computations includes evaluating, prior to the actual execution, the amount of memory resources needed to execute a particular order of branches sequentially, and selecting the order that minimizes this amount or keeps it below a target threshold.
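The branch-ordering example can be sketched as an ahead-of-time search over sequential orders. In this hypothetical model, each branch has a scratch (working) memory cost and an output that must stay resident until all branches finish:

```python
from itertools import permutations

def peak_memory(order, branches):
    """Peak memory when branches run sequentially in the given order:
    outputs of finished branches stay resident while the current
    branch uses its scratch memory."""
    peak, held = 0, 0
    for name in order:
        scratch, output = branches[name]
        peak = max(peak, held + scratch)
        held += output
    return peak

def best_branch_order(branches):
    """Evaluate every sequential order before execution and pick the
    one with the smallest peak, as in the described evaluation step."""
    return min(permutations(branches), key=lambda o: peak_memory(o, branches))
```

Running the scratch-heavy branch first, while little else is resident, can substantially lower the peak compared with the reverse order.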
Data race detection with per-thread memory protection
Data race detection in multi-threaded programs can be achieved by leveraging per-thread memory protection technology in conjunction with a custom dynamic memory allocator to protect shared memory objects with unique memory protection keys, allowing data races to be turned into inter-thread memory access violations. Threads may acquire or release the keys used for accessing protected memory objects at the entry and exit points of critical sections within the program. An attempt by a thread to access a protected memory object within a critical section without the associated key triggers a protection fault, which may be indicative of a data race.
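The key-per-object discipline can be illustrated with a pure-Python model; the class and its methods are a stand-in for the hardware mechanism (e.g. per-thread memory protection keys), not the allocator's real interface:

```python
class ProtectedHeap:
    """Model of the scheme: each shared object is allocated with its own
    protection key; a thread may touch the object only while it holds
    that key, acquired at critical-section entry and released at exit.
    Access without the key raises PermissionError, standing in for the
    hardware protection fault that flags a possible data race."""
    def __init__(self):
        self.objects = {}       # name -> (key, value)
        self.held = {}          # thread id -> set of held keys
        self._next_key = 0

    def alloc(self, name, value):
        self.objects[name] = (self._next_key, value)
        self._next_key += 1

    def enter_critical(self, tid, name):
        self.held.setdefault(tid, set()).add(self.objects[name][0])

    def exit_critical(self, tid, name):
        self.held.get(tid, set()).discard(self.objects[name][0])

    def read(self, tid, name):
        key, value = self.objects[name]
        if key not in self.held.get(tid, set()):
            raise PermissionError(
                f"thread {tid}: fault on {name} (possible data race)")
        return value
```

A thread that reads the object inside its critical section succeeds; a thread that touches it without first acquiring the key faults immediately, turning a silent race into a detectable access violation.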