Patent classifications
G06F16/24569
Hardware accelerator performing search using inverted index structure and search system including the hardware accelerator
A hardware accelerator includes a block processing circuit configured to read a block from a list stored in an inverted index structure, and a search core configured to extract a document number from the block read by the block processing circuit and to calculate a score corresponding to that document number.
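The pipeline above can be sketched in software: the "block processing circuit" corresponds to reading a postings block from the inverted index, and the "search core" extracts document numbers and scores them. The index contents, names, and the TF-IDF scoring choice below are illustrative assumptions, not taken from the patent.

```python
import math

inverted_index = {                 # term -> postings list of (doc_id, term_freq)
    "database": [(1, 3), (2, 1), (4, 2)],
    "accelerator": [(2, 2), (3, 1)],
}
TOTAL_DOCS = 5

def search(term):
    postings = inverted_index.get(term, [])            # block read from the list
    idf = math.log(TOTAL_DOCS / (1 + len(postings)))   # inverse document frequency
    # "search core": extract each document number and compute its score
    return {doc_id: tf * idf for doc_id, tf in postings}

scores = search("database")        # doc_id -> score for the matching documents
```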
PROCESSING LARGE QUERY RESULTS IN A DATABASE ACCELERATOR ENVIRONMENT
A computer-implemented method for facilitating large data transfers from a first data management system to a second data management system is disclosed. The method comprises: receiving data from the first data management system by a first buffer component; dynamically rerouting the received data to a second buffer component once the first buffer component reaches a predefined fill level, wherein the second buffer component is adapted to process the rerouted data; forwarding, by the second buffer component, the rerouted data once the first buffer component is again ready to receive it; and sending, by a sending component, the data buffered in the first buffer component to the second data management system.
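The two-buffer scheme can be sketched as follows: items fill a primary buffer until it reaches its fill level, after which they are rerouted to a secondary buffer that forwards them back as the primary drains. Class and method names, and the capacity value, are assumptions for illustration.

```python
from collections import deque

class OverflowBuffer:
    """Sketch of the rerouting scheme: a primary buffer with a fill level,
    backed by a secondary buffer that holds and later forwards overflow."""

    def __init__(self, capacity):
        self.primary = deque()
        self.secondary = deque()
        self.capacity = capacity

    def receive(self, item):
        # reroute to the secondary buffer once the fill level is reached
        # (or while overflow is still pending, to preserve ordering)
        if len(self.primary) < self.capacity and not self.secondary:
            self.primary.append(item)
        else:
            self.secondary.append(item)

    def send(self):
        # sending component drains the primary buffer; the secondary
        # forwards one item back now that the primary has room again
        item = self.primary.popleft()
        if self.secondary:
            self.primary.append(self.secondary.popleft())
        return item
```

Note that overflow items keep arriving at the secondary buffer until it is fully drained, so the original arrival order is preserved end to end.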
STORAGE OF DATA STRUCTURES
A method, a system, and a computer program product for placement or storage of data structures in memory/storage locations. A type of a data structure for storing data and a type of data access to the data structure are determined. The type of data access includes a first and a second type of data access. A frequency of each type of access to each type of data structure accessed by a query is determined. Using the determined frequency, a number of first type of data accesses to the data structure is compared to a number of second type of accesses to the data structure. The numbers of first and second types of data access are compared to a predetermined threshold percentage of a total number of data accesses to the data structure. Based on the comparisons, a physical memory location for storing data is determined.
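The placement decision above can be sketched as a comparison of per-type access counts against a threshold percentage of total accesses. The two access types (sequential vs. random), the threshold value, and the location names are assumptions for illustration.

```python
def choose_location(sequential_accesses, random_accesses, threshold=0.7):
    """Pick a physical location for a data structure based on which
    access type dominates, relative to a threshold percentage of the
    total number of accesses observed for the structure."""
    total = sequential_accesses + random_accesses
    if total == 0:
        return "default"
    if sequential_accesses / total >= threshold:
        return "disk"    # sequential-heavy: a cheaper sequential medium suffices
    if random_accesses / total >= threshold:
        return "dram"    # random-heavy: favor low-latency memory
    return "default"     # no clear winner: leave placement to the default policy
```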
Methods and systems for integrating machine learning/analytics accelerators and relational database systems
A method for database management is disclosed. The method may include receiving an algorithm from a user. Based on the algorithm, a hierarchical dataflow graph (hDFG) may be generated. The method may further include generating an architecture for a chip based on the hDFG. The architecture for a chip may retrieve a data table from a database. The data table may be associated with the architecture for a chip. Finally, the algorithm may be executed against the data table, such that an action included in the algorithm is performed.
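The algorithm-to-dataflow-graph step can be illustrated with a small (flat, rather than hierarchical) dataflow graph executed in dependency order; the hardware-generation step is out of scope for a sketch. Node names and operations are assumptions.

```python
# Each node declares its dependencies and an operation over their results,
# mirroring how an algorithm is decomposed into a dataflow graph (hDFG).
graph = {
    "load":   {"deps": [],         "op": lambda inputs: [1, 2, 3, 4]},
    "square": {"deps": ["load"],   "op": lambda inputs: [x * x for x in inputs[0]]},
    "sum":    {"deps": ["square"], "op": lambda inputs: sum(inputs[0])},
}

def execute(graph, node):
    """Recursively evaluate a node after evaluating its dependencies."""
    deps = [execute(graph, d) for d in graph[node]["deps"]]
    return graph[node]["op"](deps)

result = execute(graph, "sum")   # 1 + 4 + 9 + 16
```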
Search result ranking according to inventory information
A method for returning a results page responsive to a user search query, such as a search query on a web site, may include receiving a search query from a user, determining, responsive to the query, a set of relevant products from a plurality of product listings based on a similarity of the user query to the respective product listings, retrieving inventory information respective of each of the relevant products, the inventory information comprising one or more available fulfillment channels respective of each of the relevant products, ranking the relevant products with respect to each other according to the inventory information, and returning, to the user, a search result comprising a list of the relevant products, ordered according to the ranking.
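The ranking step can be sketched as a sort over the relevant products that folds in inventory information. Using the number of available fulfillment channels as a tie-breaker after query similarity is one plausible reading; the field names and weighting are assumptions.

```python
def rank_products(products):
    """Rank relevant products by query similarity, breaking ties with
    the number of available fulfillment channels (inventory information)."""
    return sorted(
        products,
        key=lambda p: (p["similarity"], len(p["channels"])),
        reverse=True,
    )

results = rank_products([
    {"name": "widget-a", "similarity": 0.9, "channels": ["ship"]},
    {"name": "widget-b", "similarity": 0.9, "channels": ["ship", "pickup"]},
    {"name": "widget-c", "similarity": 0.7, "channels": ["ship", "pickup"]},
])
```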
Near-memory acceleration for database operations
Despite increases in memory capacity and CPU computing power, memory performance remains the bottleneck of in-memory database management systems due to ever-increasing data volumes and application demands. Because the scale of data workloads has outpaced traditional CPU caches and memory bandwidth, improving data movement from memory to computing units can improve performance in in-memory database scenarios. A near-memory database accelerator framework offloads data-intensive database operations to a near-memory computation engine. The database accelerator's system architecture can include a database accelerator software module/driver and a memory module with a database accelerator engine. An application programming interface (API) can be provided to support database accelerator functionality. Memory of the database accelerator can be directly accessible by the CPU.
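The driver/API split can be sketched as a dispatcher that offloads supported data-intensive operations to the engine and falls back to the CPU otherwise. The operation names and the API surface below are assumptions, not the patent's actual interface.

```python
class NearMemoryEngine:
    """Stand-in for the near-memory computation engine: it handles a
    fixed set of data-intensive operations close to where the data lives."""
    SUPPORTED = {"filter", "aggregate"}

    def offload(self, op, data, predicate=None):
        if op == "filter":
            return [x for x in data if predicate(x)]
        if op == "aggregate":
            return sum(data)
        raise NotImplementedError(op)

def run_op(op, data, predicate=None):
    """Driver-style API: route supported operations to the near-memory
    engine, and run everything else on the CPU."""
    engine = NearMemoryEngine()
    if op in NearMemoryEngine.SUPPORTED:
        return engine.offload(op, data, predicate)   # near-memory path
    if op == "sort":                                 # CPU fallback path
        return sorted(data)
    raise ValueError(f"unknown operation: {op}")
```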
STORAGE CLUSTER CONFIGURATION
Storage cluster configuration for computing resources of a storage system is disclosed. A cluster configuration can be based on client-indicated cluster criteria. Further, a cluster configuration can be based on non-client-indicated criteria, such as system requirements, regulatory compliance, industry best practices, etc. Candidate cluster configurations that satisfy client criteria can be organized according to a selection preference, enabling selection of a preferred cluster configuration from among the candidates. Candidate cluster configurations can result from recursive combinatorial searching, with pruning, of an entity space resulting from an ontological analysis of storage system computing resources. Pruning can be accelerated based on heuristic selection of a fork attribute. A K-D tree subjected to dimensional normalization can be employed to interpolate an attribute value. Interpolation can be performed from predetermined sets of data, for example from storage system models or historical storage system performance.
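The interpolation step can be sketched as follows: attribute dimensions are normalized so that no single attribute dominates the distance metric, and the value is estimated from the nearest known data points. A K-D tree would accelerate the neighbour search; brute force suffices for a sketch, and the example data points are assumptions.

```python
def interpolate(known, query, k=2):
    """Estimate an attribute value for `query` from `known` (point, value)
    pairs. Each dimension is normalized to [0, 1] before computing
    distances; the result averages the values of the k nearest points."""
    dims = len(query)
    lo = [min(p[d] for p, _ in known) for d in range(dims)]
    hi = [max(p[d] for p, _ in known) for d in range(dims)]

    def norm(p):
        # dimensional normalization: scale each coordinate into [0, 1]
        return [(p[d] - lo[d]) / ((hi[d] - lo[d]) or 1) for d in range(dims)]

    nq = norm(query)

    def dist(p):
        return sum((a - b) ** 2 for a, b in zip(norm(p), nq))

    nearest = sorted(known, key=lambda pv: dist(pv[0]))[:k]
    return sum(v for _, v in nearest) / k

# hypothetical storage-system models: (node_count, capacity_tb) -> throughput
models = [((0, 0), 10.0), ((10, 100), 20.0), ((5, 50), 15.0)]
estimate = interpolate(models, (5, 50))
```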
Data filtering using a plurality of hardware accelerators
Techniques are provided for data filtering using hardware accelerators. An apparatus comprises a processor, a memory and a plurality of hardware accelerators. The processor is configured to stream data from the memory to a first one of the hardware accelerators and to receive filtered data from a second one of the hardware accelerators. The plurality of hardware accelerators are configured to filter the streamed data utilizing at least one bit vector partitioned across the plurality of hardware accelerators. The hardware accelerators may be field-programmable gate arrays.
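The partitioned bit-vector filter can be sketched as follows: each "accelerator" owns one slice of a Bloom-style bit vector and filters the values whose hash falls in its partition. The vector size, partition count, and hash choice are assumptions, and like any Bloom-style filter this admits false positives but no false negatives.

```python
BITS = 64
PARTS = 2            # stand-ins for the two hardware accelerators
SLICE = BITS // PARTS

def build(keys):
    """Build one bit-vector slice per accelerator from the key set."""
    vectors = [0] * PARTS
    for k in keys:
        pos = hash(k) % BITS
        vectors[pos // SLICE] |= 1 << (pos % SLICE)   # set bit in owning slice
    return vectors

def passes(vectors, key):
    """Route the probe to the accelerator owning the bit position."""
    pos = hash(key) % BITS
    return bool(vectors[pos // SLICE] >> (pos % SLICE) & 1)

vectors = build(["alice", "bob"])
filtered = [k for k in ["alice", "bob", "carol"] if passes(vectors, k)]
```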
Massively parallel computing system for processing of values in a distributed ledger blockchain system
A computer that operates within a distributed ledger system and stores a copy of a distributed ledger file that is also stored by multiple different client computers. The distributed ledger file has plural values, along with encryption values that verify the values in the file. The computer processes the values to verify at least some of them using the encryption values, in a way that ascertains the cryptographic accuracy of the values, and creates a report indicating which values have been verified. The computer can use its GPU to process these values in parallel. The computer can also set new sequence numbers, using a distributed system, for new values to be added to the distributed ledger.
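The verification step can be sketched as follows: each ledger entry carries an "encryption value" (here, a SHA-256 digest of the value), entries are checked in parallel, and the results are gathered into a report. A thread pool stands in for the patent's GPU parallelism, and the ledger contents are assumptions.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def digest(value):
    return hashlib.sha256(value.encode()).hexdigest()

# ledger entries: (value, encryption value); "tx3" carries a bad digest
ledger = [("tx1", digest("tx1")), ("tx2", digest("tx2")), ("tx3", "bad")]

def verify(entry):
    """Check one value against its encryption value."""
    value, enc = entry
    return value, digest(value) == enc

# verify entries in parallel and build the report of verified values
with ThreadPoolExecutor() as pool:
    report = dict(pool.map(verify, ledger))
```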