G06F7/36

High performance merge sort with scalable parallelization and full-throughput reduction

Disclosed herein is a novel multi-way merge network, referred to herein as a Hybrid Comparison Look Ahead Merge (HCLAM), which consumes significantly fewer resources when scaled to larger problems. In addition, a parallelization scheme is disclosed, referred to herein as Parallelization by Radix Pre-sorter (PRaP), which increases the streaming throughput of the merge network. Furthermore, a high-performance reduction scheme is disclosed to achieve full throughput.
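The abstract does not give the HCLAM or PRaP internals, but the radix pre-sorter idea can be illustrated in miniature: partition each sorted input run by its top radix digits, so that each digit's bucket can be merged independently (and hence in parallel), and concatenate the buckets in digit order. All names and parameters below are illustrative, not from the patent.

```python
import heapq

def radix_presort_parallel_merge(runs, bits=2, key_bits=32):
    """Sketch of parallelization by radix pre-sorting: split each sorted
    run by its top `bits` radix digits, merge each digit's sub-runs
    independently (these merges could run in parallel), and concatenate
    the merged buckets in digit order. Assumes unsigned keys < 2**key_bits."""
    buckets = 1 << bits
    shift = key_bits - bits
    # Split every sorted run into per-digit sub-runs (order preserved).
    parts = [[[] for _ in range(buckets)] for _ in runs]
    for i, run in enumerate(runs):
        for v in run:
            parts[i][v >> shift].append(v)
    out = []
    for d in range(buckets):  # each digit's merge is independent of the others
        out.extend(heapq.merge(*(parts[i][d] for i in range(len(runs)))))
    return out
```

Because a key's top digit bounds its position in the final output, no element in bucket d can belong before an element in bucket d-1, which is what makes the per-bucket merges independent.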

Processor instruction to store indexes of source data elements in positions representing a sorted order of the source data elements
09766888 · 2017-09-19

A processor of an aspect includes packed data registers, and a decode unit to decode an instruction. The instruction may indicate a first source packed data to include at least four data elements, indicate a second source packed data to include at least four data elements, and indicate a destination storage location. An execution unit is coupled with the packed data registers and the decode unit. The execution unit, in response to the instruction, is to store a result packed data in the destination storage location. The result packed data may include at least four indexes that may identify corresponding data element positions in the first and second source packed data. The indexes may be stored in positions in the result packed data that are to represent a sorted order of corresponding data elements in the first and second source packed data.
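The instruction's behavior can be emulated in software: concatenate the two packed sources and emit, at each result position, the index of the element that belongs there in sorted order (indexes 0–3 addressing the first source, 4–7 the second). This is only a functional sketch of the described semantics, not the hardware implementation.

```python
def sort_index_result(src1, src2):
    """Emulate the described instruction for two 4-element packed sources:
    the result holds, in sorted order of the data, the index of each
    corresponding element position across both sources."""
    combined = list(src1) + list(src2)
    # Position k of the result holds the index of the k-th smallest element.
    return sorted(range(len(combined)), key=lambda i: combined[i])
```

For example, `sort_index_result([5, 1, 9, 3], [8, 2, 7, 4])` yields `[1, 5, 3, 7, 0, 6, 4, 2]`: position 0 holds index 1 because `src1[1] == 1` is the smallest element overall.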

Adaptive user interface for directories
09767207 · 2017-09-19

Systems and techniques are utilized to cluster data entries. The data entries may be part of a hierarchical organization or may be categorized based on a set of attributes (e.g., directory of contacts, catalog of multimedia files, digital books). A disclosed method comprises accessing one or more data entries and determining a number of groupings to identify. Individual ones of the data entries are assigned weights. Ranges for individual groupings are determined and the one or more data entries are placed in a grouping based on the assigned weights. The individual groupings are presented to a user for selection, represented by indicia. The groupings may change dynamically based on a change in the one or more data entries, the display space, a user-defined parameter, and/or other factors. A table corresponding to the data entries may be used to determine ranges for the groupings.
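One simple way to realize weight-based range grouping, as a sketch only (the patent does not specify this heuristic): walk the entries in order and close a group each time the accumulated weight reaches the per-group target.

```python
def group_entries(entries, weights, num_groups):
    """Sketch: place weighted entries into num_groups contiguous groupings
    of roughly equal total weight -- a simple range-partition heuristic."""
    total = sum(weights)
    target = total / num_groups
    groups, current, acc = [], [], 0.0
    for entry, w in zip(entries, weights):
        current.append(entry)
        acc += w
        # Close this grouping once it carries its share of the weight.
        if acc >= target and len(groups) < num_groups - 1:
            groups.append(current)
            current, acc = [], 0.0
    groups.append(current)  # last grouping takes the remainder
    return groups
```

Recomputing the ranges whenever the entries, display space, or a user-defined parameter changes gives the dynamic regrouping behavior the abstract describes.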

System and method of reduction of irrelevant information during search
09760642 · 2017-09-12

A system including a context-entity factory configured to build a data model defining an ontology of data objects that are context-aware, the model further defining metadata tags for the data objects. The system further includes a storage device storing the data objects as stored data objects, the device further storing associated contexts for corresponding ones of the stored objects. The system further includes a reduction component configured to capture a current context value of a first data object defined in the ontology, the component further configured to compare the current context value of the first data object with stored values of the associated contexts, and wherein when the current context value does not match a particular stored value of a particular associated context, the component is further configured to remove a corresponding particular stored data object and the particular associated context from the stored data objects.
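The reduction component's eviction rule can be sketched as follows, assuming a plain dictionary store of `(data, context)` pairs; the data-model and ontology layers of the patent are omitted.

```python
def reduce_irrelevant(stored, current_context):
    """Sketch of the reduction component: compare the current context value
    against each stored object's associated context, and evict the object
    (and its context) when they do not match."""
    for obj_id in list(stored):  # copy keys so we can delete while iterating
        _, ctx = stored[obj_id]
        if ctx != current_context:
            del stored[obj_id]
    return stored
```

A usage example: with `stored = {"doc1": ("report", "work"), "pic1": ("photo", "home")}`, calling `reduce_irrelevant(stored, "work")` leaves only `"doc1"`, so later searches never see the context-irrelevant entries.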

Methods and apparatus for a multi-graph search and merge engine

Aspects of the disclosure relate to a system for amalgamating a plurality of graphs. The system may include a set of graphs. A user may input search criteria via a user interface (“UI”) module. The system may search the set of graphs for a subset of qualifying graphs that satisfy the search criteria. The subset of graphs may be merged into an amalgamated graph. Merging the graphs may include superimposing the qualifying graphs over each other at a locus. The locus may be a node or a sub-graph. The amalgamated graph may be displayed via the UI module.
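The superimpose-at-a-locus merge can be sketched with adjacency-set graphs: keep only the graphs that contain the locus (the "qualifying" graphs), then take the union of their nodes and edges so the shared locus fuses them into one amalgamated graph. The representation below is an assumption; the patent covers a node or a sub-graph as the locus, while this sketch handles only a single node.

```python
def amalgamate(graphs, locus):
    """Sketch: merge the graphs that contain `locus` by superimposing them.
    Each graph is a dict mapping a node to a set of neighbor nodes."""
    qualifying = [g for g in graphs if locus in g]
    merged = {}
    for g in qualifying:
        for node, neighbors in g.items():
            # Union of node sets and edge sets; the shared locus fuses the graphs.
            merged.setdefault(node, set()).update(neighbors)
    return merged
```

For example, two graphs that each contain node `"L"` merge into a single graph in which `"L"` carries the neighbors from both, while a graph without `"L"` is filtered out as non-qualifying.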

System and method for sorting data elements of slabs of registers using a parallelized processing pipeline
11740868 · 2023-08-29

Aspects of the disclosure relate to determining relevant content in response to a request for information. One or more computing devices (170) may load data elements into registers (385A-385B), wherein each register is associated with at least one parallel processor in a group of parallel processors (380A-380B). For each of the parallel processors, the data elements loaded in its associated registers may be sorted, in parallel, in descending order. The sorted data elements, for each of the parallel processors, may be merged with the sorted data elements of other processors in the group. The merged and sorted data elements may be transposed and stored.
