Patent classifications
G06F16/24569
In-cloud and constant time scanners
The technology disclosed relates to in-cloud, constant-time content scanning. In particular, it relates to obtaining administrative access to a cloud environment account for bulk content scanning of storage resources, and deploying serverless, containerized scanners to run locally on the cloud environment account, including queuing objects in the cloud environment account, partitioning the objects into M object chunks, and, depending on the number M of chunks, initializing N instances of the serverless, containerized scanners, where M >> N. Each initialized serverless, containerized scanner scans a corresponding object chunk exactly once to detect a multiplicity of different data patterns.
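A minimal sketch of the chunk-partitioning scheme described in the abstract, with in-memory strings standing in for cloud storage objects and threads standing in for the serverless scanner instances; all names and patterns are illustrative assumptions, not the disclosed implementation:

```python
import queue
import re
import threading

# Multiple data patterns each scanner checks per chunk (illustrative).
PATTERNS = [re.compile(p) for p in (r"\b\d{3}-\d{2}-\d{4}\b",   # SSN-like
                                    r"\b4\d{15}\b")]            # card-like

def partition(objects, chunk_size):
    """Partition the queued objects into M chunks (M >> N scanners)."""
    return [objects[i:i + chunk_size] for i in range(0, len(objects), chunk_size)]

def scanner(chunks: queue.Queue, findings: list):
    """One scanner instance: pulls chunks, scans each exactly once."""
    while True:
        try:
            chunk = chunks.get_nowait()
        except queue.Empty:
            return
        for obj in chunk:
            for pat in PATTERNS:
                if pat.search(obj):
                    findings.append((pat.pattern, obj))
        chunks.task_done()

objects = ["id 123-45-6789", "nothing here", "card 4000123412341234"] * 4
chunks = queue.Queue()
for c in partition(objects, chunk_size=2):   # M chunks
    chunks.put(c)
findings = []
workers = [threading.Thread(target=scanner, args=(chunks, findings))
           for _ in range(2)]                # N scanner instances, M >> N
for w in workers: w.start()
for w in workers: w.join()
print(findings)
```

Because each chunk is dequeued exactly once, no object is scanned twice regardless of how many scanner instances run.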
Robotics application development and monitoring over distributed networks
A robotic device management service obtains, from a client device operating in a first network, a request to obtain data from a robotic device operating in a second network. In response to the request, the robotic device management service issues a token to the client device that can be provided in future queries to obtain the data. The robotic device management service provides parameters of the request to the robotic device to cause the robotic device to generate and provide the data to the robotic device management service. In response to another request to obtain the data, where the other request includes the token, the robotic device management service queries a database to determine whether the data is available from a storage location of the service. If the data is available, the service provides the data to the client device to fulfill the other request.
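A hedged sketch of the token-based request flow the abstract describes; the class and method names are invented for illustration, not the service's actual API, and the robot's delivery is synchronous here where in reality it would be asynchronous across networks:

```python
import uuid

class RobotDeviceManagementService:
    def __init__(self):
        self._store = {}          # token -> data, once the robot delivers it

    def request_data(self, robot, params):
        """Issue a token immediately; forward the parameters to the robot."""
        token = str(uuid.uuid4())
        robot.generate_and_deliver(self, token, params)   # async in reality
        return token

    def deliver(self, token, data):
        """Called by the robot (in the second network) with the result."""
        self._store[token] = data

    def poll(self, token):
        """A later request carrying the token: is the data available yet?"""
        return self._store.get(token)   # None means "not ready"

class Robot:
    def generate_and_deliver(self, service, token, params):
        service.deliver(token, {"telemetry": params})

service = RobotDeviceManagementService()
token = service.request_data(Robot(), params={"sensor": "lidar"})
print(service.poll(token))   # fulfilled via the earlier-issued token
```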
High-speed graph processor for graph searching and simultaneous frontier determination
A computer architecture for graph processing employs a high-bandwidth memory closely coupled to independent processing elements. A graph is searched using a first set of processing elements that operate simultaneously to determine the neighbors of the current frontier, and a second set of processing elements that operate simultaneously to determine the next frontier; this process repeats to search through the graph's nodes.
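The two-phase frontier scheme corresponds to level-synchronous breadth-first search. A minimal sequential sketch, with the two sets of processing elements modeled as the two phases of each iteration:

```python
def bfs_frontiers(adj, source):
    """Repeatedly expand the current frontier and derive the next one."""
    visited = {source}
    frontier = {source}
    levels = [frontier]
    while frontier:
        # Phase 1: gather all neighbors of the current frontier.
        neighbors = {v for u in frontier for v in adj.get(u, ())}
        # Phase 2: the next frontier is the not-yet-visited neighbors.
        frontier = neighbors - visited
        visited |= frontier
        if frontier:
            levels.append(frontier)
    return levels

adj = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5]}
print(bfs_frontiers(adj, 0))   # [{0}, {1, 2}, {3, 4}, {5}]
```

In the disclosed architecture both phases run in parallel across processing elements backed by high-bandwidth memory; the sketch only shows the control structure.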
Effective and scalable building and probing of hash tables using multiple GPUs
Described approaches provide for effectively and scalably using multiple GPUs to build and probe hash tables and to materialize the results of probes. Random memory accesses by the GPUs to build and/or probe a hash table may be distributed across the GPUs and executed concurrently using global location identifiers. A global location identifier may be computed from the data of an entry and identify a global location for an insertion and/or probe using the entry. The global location identifier may be used by a GPU to determine whether to perform an insertion or probe using an entry, and/or where the insertion or probe is to be performed. To coordinate the GPUs in materializing the results of probing a hash table, a global offset into a global output buffer may be maintained in memory accessible to each of the GPUs, or the GPUs may compute global offsets using an exclusive sum of the local output buffer sizes.
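A plain-Python sketch of the global-location-identifier idea: each "GPU" owns a partition of the hash table, an entry's key hashes to a global location, and that location decides which GPU performs the insert or probe; an exclusive prefix sum of local buffer sizes coordinates materialization. The data shapes are assumptions for illustration:

```python
from itertools import accumulate

NUM_GPUS, SLOTS_PER_GPU = 2, 8

def global_location(key):
    """Global location identifier -> (owning GPU, local slot)."""
    h = hash(key) % (NUM_GPUS * SLOTS_PER_GPU)
    return h // SLOTS_PER_GPU, h % SLOTS_PER_GPU

tables = [dict() for _ in range(NUM_GPUS)]         # one partition per GPU
for key, val in [("a", 1), ("b", 2), ("c", 3)]:    # build phase
    gpu, _slot = global_location(key)
    tables[gpu][key] = val

local_results = [[] for _ in range(NUM_GPUS)]      # probe phase
for key in ["a", "c", "z"]:
    gpu, _slot = global_location(key)
    if key in tables[gpu]:
        local_results[gpu].append((key, tables[gpu][key]))

# Materialize: exclusive prefix sum of local buffer sizes -> global offsets
# into a single global output buffer, so GPUs write without overlapping.
sizes = [len(r) for r in local_results]
offsets = [0] + list(accumulate(sizes))[:-1]
output = [None] * sum(sizes)
for gpu, res in enumerate(local_results):
    output[offsets[gpu]:offsets[gpu] + len(res)] = res
print(output)   # the two hits, grouped by owning GPU
```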
Dynamic reassignment in a search and indexing system
Dynamic reassignment of search processes into workload pools includes receiving a search query to search at least one data store, assigning the search query to a first workload pool, and executing the search query using a first hardware resource in the first workload pool, the first hardware resource corresponding to a first portion of a hardware device. Dynamic reassignment further includes receiving, while executing the search query, an update command to move the search query to a second workload pool, moving the search query to the second workload pool while it executes, and continuing execution of the search query using a second hardware resource in the second workload pool. The second hardware resource corresponds to a second portion of the hardware device.
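An illustrative sketch of moving a running query between pools; the pools here are just named slices of one "hardware device", and the query is a resumable generator so its execution state carries across the move. All names are assumptions:

```python
def search_query(data, needle):
    """A long-running scan whose progress survives reassignment."""
    for i, row in enumerate(data):
        if needle in row:
            yield i, row

pools = {"pool_1": "cores 0-3", "pool_2": "cores 4-7"}   # two resource slices
data = ["alpha", "needle one", "beta", "needle two"]

query = search_query(data, "needle")
assignment = "pool_1"
results = []
for n, hit in enumerate(query):
    results.append((assignment, hit))
    if n == 0:                      # update command arrives mid-execution:
        assignment = "pool_2"       # move the query; the generator's scan
                                    # position carries over unchanged
print(results)
# [('pool_1', (1, 'needle one')), ('pool_2', (3, 'needle two'))]
```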
Storage system of key-value store which executes retrieval in processor and control circuit, and control method of the same
According to one embodiment, a storage system includes a processor, a storage device, and a first memory. The storage device includes a nonvolatile memory, a control circuit, and a second memory. The processor retrieves, based on a retrieval key and retrieval information stored in the first memory, location information of data including the retrieval key and a value, and transmits the location information and the retrieval key to the control circuit. The control circuit reads the data from the nonvolatile memory based on the location information and the retrieval key, stores the data in the second memory, retrieves the value corresponding to the retrieval key from the data, and transmits the value to the processor.
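A rough functional model of the split retrieval path: the host processor resolves key-to-location from retrieval information in its own (first) memory, then the storage device's control circuit reads the record from nonvolatile memory into its (second) memory and extracts the value in-device. Class names and data shapes are assumptions:

```python
class ControlCircuit:
    def __init__(self, nonvolatile):
        self.nonvolatile = nonvolatile     # location -> raw record
        self.buffer = None                 # the device's second memory

    def read_value(self, location, key):
        self.buffer = self.nonvolatile[location]   # read into second memory
        assert self.buffer["key"] == key           # record includes the key
        return self.buffer["value"]                # value extracted in-device

class HostProcessor:
    def __init__(self, index, circuit):
        self.index = index                 # retrieval info in first memory
        self.circuit = circuit

    def get(self, key):
        location = self.index[key]         # key -> location information
        return self.circuit.read_value(location, key)

nvm = {0x10: {"key": "user:42", "value": "Ada"}}
host = HostProcessor(index={"user:42": 0x10}, circuit=ControlCircuit(nvm))
print(host.get("user:42"))   # -> 'Ada'
```

Only the location information and key cross from the processor to the control circuit, and only the value comes back, which is the division of labor the abstract describes.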
Near-memory acceleration for database operations
Despite increases in memory capacity and CPU computing power, memory performance remains the bottleneck of in-memory database management systems due to ever-increasing data volumes and application demands. Because the scale of data workloads has outpaced traditional CPU caches and memory bandwidth, improving data movement from memory to the computing units can improve performance in in-memory database scenarios. A near-memory database accelerator framework offloads data-intensive database operations to a near-memory computation engine. The database accelerator's system architecture can include a database accelerator software module/driver and a memory module with a database accelerator engine. An application programming interface (API) can be provided to support the database accelerator functionality. The database accelerator's memory can be directly accessible by the CPU.
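A hedged sketch of the offload shape the abstract describes; the class and method names are invented for illustration, not the framework's API:

```python
class NearMemoryEngine:
    """Stands in for the accelerator engine on the memory module."""
    def scan_filter(self, column, predicate):
        # The data-intensive scan runs "near memory": no bulk column
        # transfer to the CPU, only the qualifying row ids come back.
        return [i for i, v in enumerate(column) if predicate(v)]

class DbAcceleratorDriver:
    """Stands in for the software module/driver that routes operations."""
    def __init__(self, engine):
        self.engine = engine

    def execute_scan(self, column, predicate):
        if self.engine is not None:           # offload when an engine exists
            return self.engine.scan_filter(column, predicate)
        return [i for i, v in enumerate(column) if predicate(v)]  # CPU path

driver = DbAcceleratorDriver(NearMemoryEngine())
prices = [5, 42, 17, 99, 3]
print(driver.execute_scan(prices, lambda v: v > 10))   # -> [1, 2, 3]
```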
Managing queries to non-relational databases with multiple paths to storage system
A computer-implemented method, system, and computer program product for managing queries to non-relational databases. A query to retrieve an object from a storage system of a non-relational database system is received from an application. A disk volume of the storage system is then identified from a record of the requested object. Upon identifying the disk volume, the logical and/or physical input/output connections to the identified disk volume that are tagged to different CPU cores of the storage system are identified. One of the identified logical and/or physical input/output connections is then selected based on the input/output access characteristics of the identified connections, the CPU core speeds for the CPU cores, and a determined level of urgency for the retrieval of the object. The request is then sent to the storage system over the selected input/output connection.
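An illustrative scoring sketch for picking one of several I/O connections. The weighting is an invented assumption; the selection inputs are the ones the abstract names: connection load, core speed, and the request's urgency level:

```python
def select_connection(connections, urgency):
    """connections: dicts with queue_depth and core_ghz; urgency in [0, 1]."""
    def score(conn):
        # Urgent requests weight core speed more; others prefer idle queues.
        return urgency * conn["core_ghz"] - (1 - urgency) * conn["queue_depth"]
    return max(connections, key=score)

connections = [
    {"id": "path-A", "queue_depth": 2, "core_ghz": 3.8},   # fast core, busy
    {"id": "path-B", "queue_depth": 0, "core_ghz": 2.2},   # slow core, idle
]
print(select_connection(connections, urgency=0.9)["id"])   # -> 'path-A'
print(select_connection(connections, urgency=0.1)["id"])   # -> 'path-B'
```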
Cloud infrastructure detection with resource path tracing
The technology disclosed relates to streamlined analysis of infrastructure posture of a cloud environment. In particular, it relates to accessing permissions data and access control data for pairs of compute resources and storage resources in the cloud environment, tracing network communication paths between the pairs of the compute resources and the storage resources based on the permissions data and the access control data, and constructing a cloud infrastructure map that graphically depicts the pairs of the compute resources and the storage resources as nodes, and the network communication paths as edges between the nodes.
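A minimal sketch of the map construction: compute and storage resources become nodes, and an edge is traced only when both the compute side's permission grant and the storage side's access control admit the pair. The data shapes are assumptions for illustration:

```python
permissions = {          # compute resource -> storage it is granted
    "vm-web": ["bucket-logs"],
    "vm-etl": ["bucket-logs", "bucket-raw"],
}
access_control = {       # storage resource -> principals its policy admits
    "bucket-logs": {"vm-web", "vm-etl"},
    "bucket-raw":  {"vm-etl"},
}

nodes, edges = set(), []
for compute, grants in permissions.items():
    nodes.add(compute)
    for storage in grants:
        nodes.add(storage)
        if compute in access_control.get(storage, set()):
            edges.append((compute, storage))   # a traced communication path

print(sorted(nodes))
print(edges)   # edges of the cloud infrastructure map
```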
Systems and methods for improving cache efficiency and utilization
- Altug Koker
- Joydeep Ray
- Ben Ashbaugh
- Jonathan Pearce
- Abhishek Appu
- Vasanth Ranganathan
- Lakshminarayanan Striramassarma
- Elmoustapha Ould-Ahmed-Vall
- Aravindh Anantaraman
- Valentin Andrei
- Nicolas Galoppo von Borries
- Varghese George
- Yoav Harel
- Arthur Hunter, JR.
- Brent Insko
- Scott Janus
- Pattabhiraman K
- Mike Macpherson
- Subramaniam Maiyuran
- Marian Alin Petre
- Murali Ramadoss
- Shailesh Shah
- Kamal Sinha
- Prasoonkumar Surti
- Vikranth Vemulapalli
Systems and methods for improving cache efficiency and utilization are disclosed. In one embodiment, a graphics processor includes processing resources to perform graphics operations and a cache controller of a cache coupled to the processing resources. The cache controller is configured to control cache priority by determining whether default settings or an instruction will control cache operations for the cache.
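A toy model of the priority decision the abstract describes: the cache controller honors a per-instruction cache-control hint when one is present, and otherwise the default settings control the cache. The field names are assumptions, not the hardware interface:

```python
DEFAULTS = {"policy": "LRU", "priority": "normal"}

def resolve_cache_controls(instruction):
    """instruction: dict that may carry an optional 'cache_hint' field."""
    hint = instruction.get("cache_hint")
    if hint is not None:
        return {**DEFAULTS, **hint}    # the instruction overrides defaults
    return DEFAULTS                    # default settings control the cache

load = {"op": "load", "cache_hint": {"priority": "high"}}
store = {"op": "store"}
print(resolve_cache_controls(load))    # instruction-controlled
print(resolve_cache_controls(store))   # default-controlled
```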