G06F16/24569

Methods and systems for autonomous memory searching

Methods and systems operate to receive a plurality of search requests for searching a database in a memory system. The search requests can be stored in a FIFO queue and searches can be subsequently generated for each search request. The resulting plurality of searches can be executed substantially in parallel on the database. A respective indication is transmitted to a requesting host when each respective search is complete or has generated search results.
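The queue-then-parallel-search flow can be sketched in a few lines, assuming an in-memory list stands in for the database and a `notify` callback stands in for the indication sent to the requesting host; both names are illustrative, not from the patent.

```python
import queue
from concurrent.futures import ThreadPoolExecutor

def run_searches(requests, database, notify):
    # store incoming search requests in a FIFO queue
    fifo = queue.Queue()
    for req in requests:
        fifo.put(req)

    def do_search(pattern):
        # generate and execute one search over the database
        results = [rec for rec in database if pattern in rec]
        notify(pattern, results)   # indicate completion/results to the host
        return results

    # drain the queue and execute the searches substantially in parallel
    with ThreadPoolExecutor() as pool:
        futures = []
        while not fifo.empty():
            futures.append(pool.submit(do_search, fifo.get()))
        return [f.result() for f in futures]
```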

SELF-LEARNING DEVICE CLASSIFIER
20170250879 · 2017-08-31 ·

Systems and methods classify unknown devices communicating over a packet-switched network based on traffic-borne characteristics comprising packet parameters, flow parameters, and/or operating-system parameters. Embodiments utilize “self-learning” to optimize the level of classification accuracy.
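The classification idea can be sketched as a nearest-centroid model over numeric traffic features (packet, flow, and operating-system parameters flattened into a vector) that "self-learns" by folding its own outputs back into the centroids; the feature encoding and update rule are illustrative assumptions, not the patent's method.

```python
class SelfLearningClassifier:
    def __init__(self):
        self.centroids = {}   # device class -> (feature sums, sample count)

    def learn(self, label, features):
        # accumulate features so the centroid is a running mean
        sums, n = self.centroids.get(label, ([0.0] * len(features), 0))
        self.centroids[label] = ([s + f for s, f in zip(sums, features)], n + 1)

    def classify(self, features):
        def dist(label):
            sums, n = self.centroids[label]
            centroid = [s / n for s in sums]
            return sum((c - f) ** 2 for c, f in zip(centroid, features))
        label = min(self.centroids, key=dist)
        self.learn(label, features)   # self-learning: refine from its own output
        return label
```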

ON-CHIP NON-VOLATILE MEMORY (NVM) SEARCH

The disclosure relates in some aspects to on-chip processing circuitry formed within the die of a non-volatile memory (NVM) array to perform data searches. In some aspects, the die includes components configured to sense wordlines of stored data in the NVM array by applying voltages on the wordlines serially, and then search for an input data pattern within the serially-sensed wordlines. In some examples, the components of the die include latches and circuits configured to perform bitwise latch logic search operations. In other examples, the search components are configured with under-the-array or next-to-the-array dedicated search circuitry that uses registers and/or random access memory (RAM). Other aspects relate to a separate controller device for controlling the on-chip NVM search operations. For example, the controller may determine whether to search for data using search components of the NVM die or processors of the controller based, e.g., on a degree of fragmentation of data.
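A software model of the serial-sensing and bitwise-latch search, plus the controller's fragmentation-based decision, might look as follows; the 16-bit word width and the fragmentation threshold are illustrative assumptions, and the latch is modeled as a plain integer.

```python
WORD_BITS = 16

def sense_wordlines(array):
    # serial sensing: one wordline at a time into the latch
    for word in array:
        yield word

def search_pattern(array, pattern, pattern_bits):
    mask = (1 << pattern_bits) - 1
    hits = []
    for row, latch in enumerate(sense_wordlines(array)):
        for shift in range(WORD_BITS - pattern_bits + 1):
            # XOR is zero exactly where the latched bits equal the pattern
            if ((latch >> shift) ^ pattern) & mask == 0:
                hits.append((row, shift))
    return hits

def choose_search_site(fragmentation, threshold=0.5):
    # controller-side decision: heavily fragmented data is searched on the
    # controller's processors, contiguous data on the NVM die itself
    return "controller" if fragmentation > threshold else "nvm_die"
```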

Techniques of heterogeneous hardware execution for SQL analytic queries for high volume data processing

The present invention relates to optimized access of a database. Herein are techniques to accelerate execution of any combination of ad hoc query, heterogeneous hardware, and fluctuating workload. In an embodiment, a computer receives a data access request for data tuples and compiles the data access request into relational operators. A particular implementation of a particular relational operator is dynamically selected from multiple interchangeable implementations. Each interchangeable implementation contains respective physical operators. A particular hardware operator for a particular physical operator is selected from multiple interchangeable hardware operators that include: a first hardware operator that executes on first processing hardware, and a second hardware operator that executes on second processing hardware that is functionally different from the first processing hardware. A response to the data access request is generated based on: the data tuples, the particular implementation of the particular relational operator, and the particular hardware operator.
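The layered selection (relational operator → physical operators → hardware operator) can be sketched as nested lookup tables with a cost-based choice at the hardware level; the operator names, hardware targets, and cost numbers below are all made up for illustration.

```python
HARDWARE_OPERATORS = {
    # physical operator -> [(hardware target, estimated cost), ...]
    "hash_probe": [("cpu", 5.0), ("gpu", 2.0)],
    "scan":       [("cpu", 1.0), ("fpga", 0.5)],
}

IMPLEMENTATIONS = {
    # relational operator -> its physical operators
    "join":   ["hash_probe", "scan"],
    "filter": ["scan"],
}

def select_hardware(physical_op):
    # pick the cheapest among functionally different hardware operators
    return min(HARDWARE_OPERATORS[physical_op], key=lambda hw: hw[1])[0]

def compile_request(relational_ops):
    # bind each relational operator to an implementation and each of its
    # physical operators to a hardware operator
    plan = []
    for op in relational_ops:
        for phys in IMPLEMENTATIONS[op]:
            plan.append((op, phys, select_hardware(phys)))
    return plan
```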

Selective utilization of graphics processing unit (GPU) based acceleration in database management

A method for the selective utilization of graphics processing unit (GPU) acceleration of database queries in database management is provided. The method includes receiving a database query in a database management system executing in memory of a host computing system. The method also includes estimating a time to complete processing of one or more operations of the database query using GPU accelerated computing in a GPU and also a time to complete processing of the operations using central processing unit (CPU) sequential computing of a CPU. Finally, the method includes routing the operations for processing using GPU accelerated computing if the estimated time to complete processing of the operations using GPU accelerated computing is less than an estimated time to complete processing of the operations using CPU sequential computing, but otherwise routing the operations for processing using CPU sequential computing.
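The routing rule reduces to comparing two time estimates. A minimal sketch, assuming simple linear cost models whose constants (GPU transfer overhead, per-row costs) are illustrative rather than from the patent:

```python
def estimate_gpu_time(rows):
    # fixed transfer/launch overhead, but a fast per-row cost
    return 0.5 + rows * 0.001

def estimate_cpu_time(rows):
    # no overhead, slower sequential per-row cost
    return rows * 0.01

def route_operations(rows):
    # route to the GPU only when its estimated completion time is lower
    if estimate_gpu_time(rows) < estimate_cpu_time(rows):
        return "gpu"
    return "cpu"
```

Under these constants the break-even point sits near 56 rows: small operations stay on the CPU, large ones are worth the GPU's fixed overhead.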

METHOD FOR PROVIDING SCHEDULERS IN A DISTRIBUTED STORAGE NETWORK
20170272539 · 2017-09-21 ·

A method for selecting a substantially optimized scheduler from a plurality of schedulers for executing dispersed storage error functions on a distributed storage network begins with a computing device receiving a dispersed storage error function along with an indication of measured throughput and measured latency from a requesting device. The method continues when a scheduler is selected from the plurality of schedulers based on desired latency and throughput, while considering the characteristics of the dispersed storage error function being executed. The method continues with the computing device receiving a different dispersed storage error function and selecting a different scheduler.
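The selection step can be sketched by scoring each scheduler against the requester's desired latency and throughput; the scheduler profiles and the weighting (which stands in for "characteristics of the function being executed") are illustrative assumptions.

```python
SCHEDULERS = {
    # scheduler name -> (typical latency in ms, typical throughput in MB/s)
    "low_latency": (2.0, 50.0),
    "bulk":        (20.0, 400.0),
}

def select_scheduler(desired_latency, desired_throughput, latency_weight=1.0):
    def score(name):
        lat, thr = SCHEDULERS[name]
        # penalize latency above the target and throughput below it;
        # latency_weight models how latency-sensitive the function is
        return latency_weight * max(0.0, lat - desired_latency) \
             + max(0.0, desired_throughput - thr)
    return min(SCHEDULERS, key=score)
```

A latency-sensitive request then lands on one scheduler and a throughput-heavy request on another, matching the abstract's point that different functions select different schedulers.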

Fine-grained capacity management of computing environments that may support a database

Computing capacity of a computing environment can be managed by controlling its associated processing capacity based on a target (or desired) capacity. In addition, fine-grained control over the processing capacity can be exercised. For example, a computing system can change the processing capacity (e.g., processing rate) of at least one processor operating based on a target capacity. The computing system may also be operable to change the processing capacity based on a measured processing capacity (e.g., a measured average of various processing rates of a processor taken over a period of time when a processor may have been operating at different processing rates over that period). By way of example, the processing rate of a processor can be switched between 1/8 and 2/8 of a maximum processing rate to achieve virtually any effective processing rate between them.
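The closing example is a duty-cycle calculation: spending a fraction f of each period at the high rate and 1-f at the low rate gives an effective rate of f·high + (1-f)·low, so any target in between is reachable. A short sketch using the 1/8 and 2/8 rates from the text:

```python
def duty_cycle(target_rate, low=1/8, high=2/8):
    """Fraction of each period to spend at `high` so that the time-averaged
    rate equals `target_rate`:  target = f*high + (1-f)*low."""
    if not low <= target_rate <= high:
        raise ValueError("target outside achievable range")
    return (target_rate - low) / (high - low)
```

For instance, an effective rate of 3/16 of maximum requires running at the 2/8 rate exactly half the time.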

MESSAGE MATCHING TABLE LOOKUP METHOD, SYSTEM, STORAGE MEDIUM, AND TERMINAL
20220231945 · 2022-07-21 ·

Disclosed are a method for message match table lookup, a system, a non-transitory computer-readable storage medium and a terminal. The method for message match table lookup includes: performing on-demand data bit width compression on information of a specified part of an input message; extracting N groups of data from compressed data, performing intra-group data comparison to obtain N groups of comparison results, and performing true value splicing on the N groups of comparison results, where N is an integer greater than 1; performing match searching of a ternary content addressable memory (TCAM) by using the true value splicing result as a keyword; and searching an Action Random Access Memory (Action RAM) according to a match hit result of the TCAM, the Action RAM outputting the table lookup result.
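The pipeline's last three stages can be modeled in software, with the TCAM as a list of (value, mask) entries and the Action RAM as a plain list indexed by the hit entry; the comparison sets and entry contents are illustrative assumptions.

```python
def splice_truth_values(fields, compare_sets):
    # intra-group comparison yields one truth bit per group; splice the
    # bits into a single keyword (here: an integer bit string)
    key = 0
    for field, compare in zip(fields, compare_sets):
        key = (key << 1) | (1 if field in compare else 0)
    return key

def tcam_lookup(key, tcam, action_ram):
    # first-match ternary search: masked-out bits are "don't care"
    for index, (value, mask) in enumerate(tcam):
        if key & mask == value & mask:
            return action_ram[index]   # Action RAM entry for the hit index
    return None
```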

INTERACTIVE CONTINUOUS IN-DEVICE TRANSACTION PROCESSING USING KEY-VALUE (KV) SOLID STATE DRIVES (SSDS)
20210390091 · 2021-12-16 ·

Various aspects include an interactive continuous in-device KV transaction processing system and method. The system includes a host device and a KV-SSD. The KV-SSD includes a command handler module to receive and process command packets from the host device, to identify KV input/output (I/O) requests associated with a KV transaction, and to prepare a per-transaction index structure. The method includes receiving a command packet from a host device, and determining, by the command handler module, whether a transaction tag associated with the KV transaction is embedded in the command packet. Based on determining that the transaction tag is not embedded in the command packet, the method includes processing one or more KV I/O requests using a main KV index structure. Based on determining that the transaction tag is embedded in the command packet, the method includes individually processing the one or more KV I/O requests using a per-transaction index structure.
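The dispatch rule in the command handler reduces to a branch on whether a transaction tag is embedded in the packet. A minimal sketch, assuming a command packet is a dict and both index structures are plain dicts; the field names are illustrative.

```python
class CommandHandler:
    def __init__(self):
        self.main_index = {}    # main KV index structure
        self.txn_indexes = {}   # transaction tag -> per-transaction index

    def handle(self, packet):
        tag = packet.get("txn_tag")   # transaction tag, if embedded
        if tag is None:
            # no tag embedded: process against the main KV index
            index = self.main_index
        else:
            # tag embedded: isolate the requests in a per-transaction index
            index = self.txn_indexes.setdefault(tag, {})
        for key, value in packet["writes"]:
            index[key] = value
        return index
```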

ASSET TRACKING OF COMMUNICATION EQUIPMENT VIA MIXED REALITY BASED LABELING

A machine vision system is provided including a camera, a processor, and a device memory including computer program code stored thereon. The computer program code is configured, when executed by the processor, to receive an image from the camera that includes at least one readable digital label associated with communication equipment, determine whether an anchor label is present in the image, receive equipment information based on the anchor label, and generate a search matrix based on the equipment information and the anchor label. The search matrix includes one or more search matrix locations of assets associated with the communication equipment.
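The anchor-label-to-search-matrix step might be sketched as a lookup that turns equipment information into a grid of expected asset locations; the record layout, the `EQUIPMENT_DB` name, and the row/column coordinates are all hypothetical, used only to illustrate the data flow.

```python
EQUIPMENT_DB = {
    # anchor label -> equipment information for the labeled unit
    "ANCHOR-42": {"type": "patch_panel", "rows": 2, "cols": 3},
}

def generate_search_matrix(anchor_label):
    info = EQUIPMENT_DB[anchor_label]   # equipment info keyed by the anchor
    # one search-matrix location per asset position, laid out row by row
    return [(r, c) for r in range(info["rows"]) for c in range(info["cols"])]
```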