Patent classifications
G06F16/902
ADAPTIVE MATCH INDEXES
A first count of first records storing a first value in a first field, a second count of second records storing a second value in a second field, and a third count of third records storing a third value in a third field are determined. A count threshold is determined using the first, second, and third counts, along with a dispersion measure based on the dispersion of values stored in the second field by the first records and another dispersion measure based on the dispersion of values stored in the third field by the first records. A machine-learning model is trained to determine a dispersion measure threshold based on the two dispersion measures. If the first count is greater than the count threshold and the dispersion measure is greater than the dispersion measure threshold, a match index is created based on the first and second fields. When a prospective record storing the first value in the first field and the second value in the second field is received, the match index is used to identify a record storing the first value in the first field and the second value in the second field as matching the prospective record.
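A minimal sketch of the index-creation decision described above, in Python. The field names, the sample records, and the fixed `dispersion_threshold` are all illustrative assumptions; in the patent the dispersion threshold is produced by a trained machine-learning model, which is taken here as a given constant, and Shannon entropy is one plausible choice of dispersion measure.

```python
from collections import Counter
import math

def shannon_entropy(values):
    """Dispersion of a value distribution, measured as Shannon entropy."""
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def should_create_match_index(records, f1, v1, f2,
                              count_threshold, dispersion_threshold):
    """Create a match index on (f1, f2) only if enough records store v1
    in f1 AND the values those records store in f2 are dispersed enough
    to discriminate between them."""
    first_records = [r for r in records if r[f1] == v1]
    if len(first_records) <= count_threshold:
        return False
    dispersion = shannon_entropy(r[f2] for r in first_records)
    return dispersion > dispersion_threshold

records = [
    {"city": "Austin", "name": "Ann"},
    {"city": "Austin", "name": "Bob"},
    {"city": "Austin", "name": "Cat"},
    {"city": "Reno",   "name": "Ann"},
]
# Three distinct names among the Austin records: entropy log2(3) > 1.0
print(should_create_match_index(records, "city", "Austin", "name",
                                count_threshold=2, dispersion_threshold=1.0))
```

The idea is that an index on a field pair only pays off when the first value is common and the second field actually distinguishes those records.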
Computer architecture for training a correlithm object processing system
A correlithm object processing system includes a trainer configured to send a node entry request to a node engine that triggers the node engine to generate an entry in a node table. The trainer is further configured to receive a source correlithm object and a target correlithm object in response to sending the node entry request. The trainer is further configured to send a real world input value and the source correlithm object to a sensor engine which triggers the sensor engine to generate an entry in a sensor table linking the real world input value and the source correlithm object. The trainer is further configured to send a real world output value and the target correlithm object to an actor engine which triggers the actor engine to generate an entry in an actor table linking the real world output value and the target correlithm object.
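A toy sketch of this training flow, assuming correlithm objects are random points in an n-dimensional binary space (how the source literature defines them); the class names, the table representations, and the sample input/output strings are illustrative, not from the patent.

```python
import random

def random_correlithm(n=64):
    """A correlithm object: a random point in n-dimensional binary space."""
    return tuple(random.randint(0, 1) for _ in range(n))

class NodeEngine:
    """On each node entry request, generates a source/target pair in a node table."""
    def __init__(self):
        self.node_table = []
    def create_entry(self):
        src, tgt = random_correlithm(), random_correlithm()
        self.node_table.append((src, tgt))
        return src, tgt

class Trainer:
    """One node-entry request yields a source/target correlithm pair,
    which is then linked to a real-world input value in a sensor table
    and a real-world output value in an actor table."""
    def __init__(self, node_engine):
        self.node_engine = node_engine
        self.sensor_table = {}   # real-world input  -> source correlithm
        self.actor_table = {}    # target correlithm -> real-world output
    def train(self, real_input, real_output):
        src, tgt = self.node_engine.create_entry()
        self.sensor_table[real_input] = src
        self.actor_table[tgt] = real_output

trainer = Trainer(NodeEngine())
trainer.train("temperature=21C", "fan_speed=low")
print(len(trainer.node_engine.node_table))  # → 1
```

The sensor and actor tables together let a later lookup travel input → source object → target object → output.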
Neural network output layer for machine learning
Techniques for a neural network output layer for machine learning are disclosed. A plurality of processing elements within a reconfigurable fabric is configured to implement a data flow graph, where the data flow graph implements a neural network. The data flow graph can include machine learning or deep learning. A layer is implemented, within the neural network, that maps a first vector of real values to a second vector of real values bounded by zero and one, where the second vector sums to a value of one using fixed-point calculations. The layer can include a final layer within the neural network. The layer that maps the first vector includes a Softmax function. Results of the neural network are classified based on a value of the second vector. The classifying can include part of a machine learning or a deep learning process.
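A simplified sketch of the described output layer: a Softmax that maps real-valued logits to a vector in [0, 1] summing to one, with the exponentials quantized to scaled-integer (Q16) arithmetic as a stand-in for true hardware fixed-point math. The scale factor and the example logits are assumptions for illustration.

```python
import math

SCALE = 1 << 16  # Q16 fixed-point scale (an assumed precision)

def fixed_point_softmax(logits):
    """Map a vector of real values to probabilities bounded by zero and
    one that sum to one, quantizing each exponential to Q16 integers."""
    m = max(logits)                                       # numerical stability
    fx = [int(math.exp(x - m) * SCALE) for x in logits]   # e^x quantized to Q16
    total = sum(fx)
    return [v / total for v in fx]

def classify(logits):
    """Classify by the index of the largest output-layer value."""
    probs = fixed_point_softmax(logits)
    return probs.index(max(probs))

print(classify([0.5, 2.0, 1.0]))  # → 1
```

Subtracting the maximum logit before exponentiating keeps the quantized values in range, which matters even more in fixed point than in floating point.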
Grand unified file indexing
Systems and methods are disclosed for a unified file index for a file system. In one example, a Grand Unified File Index (GUFI) includes a tree replicating the directory hierarchy of one or more primary filesystems, and individual metadata stores for each directory. The GUFI tree permits fast traversal, efficient user space access controls, and simple tree directed operations such as renames, moves, or permission changes. In some examples, the individual metadata stores can be implemented as embedded databases on flash storage for speed. In some examples, use of summary tables at the directory or subtree level can eliminate wasteful executions, prune tree traversal, and further improve performance. In various examples, efficient operation can be achieved from laptop to supercomputer scale, across a wide mix of file distributions and filesystems.
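A minimal sketch of the GUFI idea: an index tree replicating the directory hierarchy, a per-directory metadata store, and a subtree summary that lets a query prune whole branches. The class, the `size` metadata, and the query are illustrative assumptions; the real GUFI uses embedded databases per directory.

```python
class DirNode:
    """Index node replicating one source directory, carrying its own
    metadata store plus a subtree summary used to prune traversal."""
    def __init__(self, name, files):
        self.name = name
        self.files = files            # per-directory store: {fname: size}
        self.children = []
        self.summary_max_size = max(files.values(), default=0)

    def add_child(self, child):
        self.children.append(child)
        # Roll the child's summary up into this subtree's summary.
        self.summary_max_size = max(self.summary_max_size,
                                    child.summary_max_size)

    def find_larger_than(self, threshold):
        """Find files above a size threshold, skipping subtrees whose
        summary proves nothing below can match."""
        if self.summary_max_size <= threshold:
            return []                 # prune: wasteful traversal avoided
        hits = [f for f, sz in self.files.items() if sz > threshold]
        for c in self.children:
            hits.extend(c.find_larger_than(threshold))
        return hits

root = DirNode("/", {"a.log": 10})
root.add_child(DirNode("/big", {"huge.bin": 5000}))
print(root.find_larger_than(100))   # → ['huge.bin']
```

The summary table is what turns a full-tree walk into a walk of only the subtrees that can possibly contain answers.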
Storage handling guidance for host input/output operations
Method and system are provided for storage handling guidance for host input/output (I/O) operations. The method includes: providing a guidance array of indications of storage operations handling instructions, the guidance array having multiple dimensions of performance characteristics with each dimension having multiple levels; and associating a reference vector with one or more I/O operations, wherein the reference vector points to a level for each dimension of the array to obtain an indication of a storage operations handling instruction at an intercept of the dimension levels for application by a storage system controller for the one or more I/O operations.
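A small sketch of the guidance array and reference vector, assuming two dimensions with two levels each; the dimension names, level names, and instruction strings are invented for illustration, not taken from the patent.

```python
# Dimensions of performance characteristics, each with discrete levels.
DIMENSIONS = {"latency": ["low", "high"], "durability": ["normal", "strict"]}

# Guidance array: one handling instruction at each intercept of levels.
guidance = {
    ("low", "normal"):  "cache-write-back",
    ("low", "strict"):  "write-through",
    ("high", "normal"): "batch",
    ("high", "strict"): "sync-mirror",
}

def instruction_for(reference_vector):
    """A reference vector picks one level per dimension; the intercept
    of those levels yields the storage-handling instruction the
    controller applies to the associated I/O operations."""
    key = tuple(reference_vector[d] for d in DIMENSIONS)
    return guidance[key]

print(instruction_for({"latency": "low", "durability": "strict"}))
```

Associating the compact reference vector with an I/O operation, rather than the full instruction, is what keeps the per-operation overhead small.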
Closing a plurality of webpages in a browser
Embodiments of the present disclosure relate to a method for closing a plurality of webpages in a browser. According to the method, one or more records are acquired in response to receiving an instruction to close a first webpage. Each record comprises at least two URLs having a parent-child relationship. A URL chain of the first webpage is acquired based on the acquired one or more records in response to receiving an instruction to close a plurality of webpages related to the URL chain. The URL chain consists of a plurality of URLs having a multi-level parent-child relationship. The plurality of webpages related to the URL chain are closed.
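A sketch of the URL-chain mechanism, assuming each record is a (parent URL, child URL) pair captured during navigation; the example URLs and tab list are illustrative.

```python
# Records of parent->child navigation; a URL chain is the multi-level
# ancestry of the page being closed.
records = [
    ("https://a.example/home", "https://a.example/list"),
    ("https://a.example/list", "https://a.example/item/1"),
]

def url_chain(url, records):
    """Walk parent links upward from `url` to build its URL chain."""
    parents = {child: parent for parent, child in records}
    chain = [url]
    while chain[-1] in parents:
        chain.append(parents[chain[-1]])
    return chain

def close_related(open_tabs, url, records):
    """Close every open tab whose URL lies on the chain of `url`,
    returning the tabs that remain open."""
    chain = set(url_chain(url, records))
    return [t for t in open_tabs if t not in chain]

tabs = ["https://a.example/home", "https://a.example/list",
        "https://a.example/item/1", "https://b.example/other"]
print(close_related(tabs, "https://a.example/item/1", records))
```

One close instruction on the leaf page takes out the whole ancestry while leaving unrelated tabs untouched.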
METHOD, APPARATUS, AND COMPUTER-READABLE MEDIUM FOR DATA ASSET RANKING
Systems, methods, and related techniques and apparatus containing instructions which, when executed by one or more computing devices, determine dataset rankings by: determining a lineage ordering requirement for a collection of datasets; determining, from the lineage ordering requirement, one or more first lineage level datasets from the collection of datasets; generating one or more first lineage level asset ranks respectively for each of the one or more first lineage level datasets; determining at least one second lineage level dataset having an outflow to the one or more first lineage level datasets; and generating a first dataset rank for the at least one second lineage level dataset as a first function of the outflow and at least one of the one or more first lineage level asset ranks.
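A sketch of the upstream rank propagation, assuming the "first function" is a weighted sum of downstream ranks by outflow weight (one plausible choice; the patent leaves the function abstract). Dataset names and rank values are invented for illustration.

```python
def lineage_ranks(first_level_ranks, outflows):
    """Propagate asset rank upstream: a second-lineage-level dataset's
    rank is a function of its outflow weights and the ranks of the
    first-lineage-level datasets it feeds (a weighted sum here)."""
    ranks = {}
    for dataset, edges in outflows.items():
        ranks[dataset] = sum(weight * first_level_ranks[target]
                             for target, weight in edges)
    return ranks

# Ranks already generated for the first lineage level.
first_level = {"report_a": 0.9, "report_b": 0.4}
# A second-level dataset with outflows (target, weight) to the first level.
outflows = {"raw_events": [("report_a", 0.7), ("report_b", 0.3)]}
print(lineage_ranks(first_level, outflows))   # raw_events ≈ 0.75
```

A raw dataset that feeds highly ranked downstream assets inherits a high rank, which is the intuition behind lineage-based asset ranking.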
Efficient trickle updates in large databases using persistent memory
Systems, methods, and computer-readable media for storing data in a data storage system using a child table. In some examples, a trickle update to first data in a parent table is received at a data storage system storing the first data in the parent table. A child table storing second data can be created in persistent memory for the parent table. Subsequently, the trickle update can be stored in the child table as part of the second data stored in the child table. The second data including the trickle update stored in the child table can be used to satisfy, at least in part, one or more data queries for the parent table.
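A minimal in-memory sketch of the parent/child split: small trickle updates land in a child table (standing in for a persistent-memory store) instead of rewriting the bulk parent table, and queries consult the child first. The class and key/value model are illustrative assumptions.

```python
class ParentTable:
    """Bulk data lives in the parent table; trickle updates go to a
    child table, and queries merge the two with child rows winning."""
    def __init__(self, rows):
        self.rows = dict(rows)        # parent table, bulk-loaded
        self.child = {}               # child table: trickle updates

    def trickle_update(self, key, value):
        # Cheap: a small write to the child, no rewrite of the parent.
        self.child[key] = value

    def query(self, key):
        if key in self.child:         # child satisfies the query first
            return self.child[key]
        return self.rows.get(key)     # fall back to the parent table

t = ParentTable({"k1": "old", "k2": "v2"})
t.trickle_update("k1", "new")
print(t.query("k1"), t.query("k2"))  # → new v2
```

Deferring small updates into a fast side store and merging at read time is the same pattern log-structured storage engines use for write amplification.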
Computer architecture for processing correlithm objects using a selective context input
A device configured to emulate a correlithm object processing system comprises a memory and one or more processors. The memory stores a mapping table that includes multiple context value entries, multiple corresponding source value entries, and multiple corresponding target value entries. Each context value entry includes a correlithm object. The one or more processors receive at least one input source value and a context input value. The one or more processors identify a context value entry from the mapping table that matches the context input value based at least in part upon n-dimensional distances between the context input value and each of the context value entries. The one or more processors identify a portion of the source value entries corresponding to the identified context value entry, and further identify a source value entry that matches the input source value. The one or more processors identify a target value entry corresponding to the identified source value entry.
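A sketch of the context-matching step, assuming correlithm objects are bit tuples and the n-dimensional distance is Hamming distance (the usual metric in correlithm systems); the table contents and dimension count are illustrative.

```python
def hamming(a, b):
    """n-dimensional distance between two correlithm objects (bit tuples)."""
    return sum(x != y for x, y in zip(a, b))

def closest_entry(context_input, entries):
    """Identify the context value entry nearest the context input value."""
    return min(entries, key=lambda e: hamming(context_input, e["context"]))

# Tiny mapping table: each context entry selects its own
# source-value -> target-value sub-mapping.
table = [
    {"context": (0, 0, 0, 0), "mapping": {"s1": "t1"}},
    {"context": (1, 1, 1, 1), "mapping": {"s2": "t2"}},
]

context_input = (1, 1, 0, 1)          # noisy version of the second context
entry = closest_entry(context_input, table)
print(entry["mapping"]["s2"])         # → t2
```

Matching by distance rather than equality is what lets the context input be noisy yet still select the right portion of the source entries.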
RANKING RESULTS OF SEARCHES OF DATABASES
A computer system is configured to receive a plurality of previous user selections by a user of previous database entries, each of which has a plurality of database fields. The computer system is configured to analyze the plurality of previous user selections to identify how frequently the same values are included in the various previous database entries. The computer system is configured to determine weights for the various database fields and rank subsequent search results for a subsequent search of the database based on the determined weights.
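A sketch of the weighting-and-ranking idea: a field whose values repeat often across the user's past selections gets a higher weight, and later results matching previously selected values in heavy fields rank first. The weighting formula, field names, and sample data are illustrative assumptions.

```python
from collections import Counter

def field_weights(previous_selections, fields):
    """Weight each field by how often its most common value repeats
    across the user's previously selected entries."""
    weights = {}
    for f in fields:
        counts = Counter(sel[f] for sel in previous_selections)
        most_common_count = counts.most_common(1)[0][1]
        weights[f] = most_common_count / len(previous_selections)
    return weights

def rank_results(results, previous_selections, fields):
    """Score a result by the weight of every field whose value matches
    some previously selected value; sort best-first."""
    w = field_weights(previous_selections, fields)
    seen = {f: {sel[f] for sel in previous_selections} for f in fields}
    score = lambda r: sum(w[f] for f in fields if r[f] in seen[f])
    return sorted(results, key=score, reverse=True)

history = [{"genre": "jazz", "year": 1959},
           {"genre": "jazz", "year": 1965}]
results = [{"genre": "rock", "year": 1959},
           {"genre": "jazz", "year": 1970}]
print(rank_results(results, history, ["genre", "year"])[0]["genre"])  # → jazz
```

Here `genre` earns weight 1.0 (every past pick was jazz) while `year` earns only 0.5, so a jazz result outranks one that merely matches a past year.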