Patent classifications
G06F9/544
VECTOR PROCESSING EMPLOYING BUFFER SUMMARY GROUPS
A vector entry of a signaling vector is registered to a buffer summary group. The buffer summary group includes one or more summary indicators for one or more buffers assigned to the buffer summary group. A command is processed that sets a vector indicator in the vector entry, and, based on setting the vector indicator, a summary indicator of the one or more summary indicators is set in the buffer summary group.
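As a rough illustration of the abstract above, the following Python sketch models the register-and-set flow; the class and method names (BufferSummaryGroup, SignalingVector, process_set_command) are illustrative assumptions, not terms from the patent.

    class BufferSummaryGroup:
        """One summary indicator per buffer assigned to the group."""
        def __init__(self, buffer_ids):
            self.summary = {b: False for b in buffer_ids}

        def set_summary(self, buffer_id):
            self.summary[buffer_id] = True

    class SignalingVector:
        def __init__(self):
            self.indicators = {}   # entry_id -> vector indicator
            self.registry = {}     # entry_id -> (group, buffer_id)

        def register(self, entry_id, group, buffer_id):
            # Register a vector entry to a buffer summary group.
            self.indicators[entry_id] = False
            self.registry[entry_id] = (group, buffer_id)

        def process_set_command(self, entry_id):
            # Setting the vector indicator also sets the corresponding
            # summary indicator in the buffer summary group.
            self.indicators[entry_id] = True
            group, buffer_id = self.registry[entry_id]
            group.set_summary(buffer_id)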
Processing data between data stores
A non-transitory computer readable medium can store machine-readable instructions that, when accessed and executed by a processing resource, cause a computing device to perform operations. The operations can include establishing a connection between data stores (such as a relational data store and a graph engine), wherein the connection includes a shared memory buffer storing data in a data format according to internal structures of the graph engine. The connection between the data stores is bi-directional and enables data stored in the shared memory to be processed by either the graph engine or the relational data store. Upon receiving a query, one of the graph engine and the relational data store can be selected based on the query, and the data can be processed by the selected engine.
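A minimal Python sketch of the query-routing step described above, assuming a keyword heuristic for choosing between engines; the function names and GRAPH_KEYWORDS are hypothetical, and the real selection criteria are not specified in the abstract.

    GRAPH_KEYWORDS = ("MATCH", "TRAVERSE", "PATH")   # assumed heuristic

    def select_engine(query, graph_engine, relational_store):
        # Route traversal-style queries to the graph engine,
        # everything else to the relational data store.
        if query.strip().upper().startswith(GRAPH_KEYWORDS):
            return graph_engine
        return relational_store

    def process(query, shared_buffer, graph_engine, relational_store):
        engine = select_engine(query, graph_engine, relational_store)
        # The data stays in the shared memory buffer, already laid out in
        # the graph engine's internal format; the selected engine
        # processes it in place over the bi-directional connection.
        return engine.execute(query, shared_buffer)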
TECHNIQUES FOR PROVIDING SYNCHRONOUS AND ASYNCHRONOUS DATA PROCESSING
Techniques discussed herein include dynamically providing synchronous and/or asynchronous data processing by a machine-learning model service. The machine-learning model service (“the service”) executes a stream manager application, a web interface, and a machine-learning model via a common container. The stream manager application can obtain input data (e.g., from an input data stream, a partition of an input data stream, etc.) and provide the data to the machine-learning model through the web interface using a local communication channel (e.g., a loopback interface that bypasses local network interface hardware of the computing device on which the model executes). Prediction results from the model may be provided as output data (e.g., to an output data stream, to a partition of an output data stream, etc.).
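A small sketch of the stream manager's inner loop, assuming the web interface is an HTTP endpoint on the loopback address; the URL and JSON payloads are assumptions, using only the Python standard library.

    import json
    import urllib.request

    LOOPBACK_URL = "http://127.0.0.1:8080/predict"   # hypothetical web interface

    def process_partition(records):
        # Send each input record to the co-located model over the loopback
        # channel and yield prediction results for the output stream.
        for record in records:
            req = urllib.request.Request(
                LOOPBACK_URL,
                data=json.dumps(record).encode("utf-8"),
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                yield json.loads(resp.read())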
System and method for determining an amount of virtual machines for use with extract, transform, load (ETL) processes
In accordance with an embodiment, described herein are systems and methods for determining or allocating an amount, quantity, or number of compute instances or virtual machines for use with extract, transform, load (ETL) processes. In an example embodiment, a particular (e.g., optimal) number of virtual machines (VMs) can be determined by predicting ETL completion times for customers using historical data. ETL processes can be simulated with an initial/particular number of virtual machines; if the predicted duration is greater than the desired duration, the number of virtual machines can be incremented and the simulation repeated. Actual completion times from ETL processes can be fed back to update the determined number of compute instances or virtual machines. In accordance with an embodiment, the system can be used, for example, to generate alerts associated with customer service level agreements (SLAs).
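The simulate-and-increment loop described above can be sketched as follows; predict_duration stands in for the simulation driven by historical data, and all names and the cap are illustrative.

    def determine_vm_count(predict_duration, desired_duration,
                           initial_vms=1, max_vms=64):
        # Increase the VM count until the simulated ETL completion time
        # meets the desired duration, or a cap is reached.
        vms = initial_vms
        while vms < max_vms and predict_duration(vms) > desired_duration:
            vms += 1
        return vms

Actual completion times would then be fed back into the model behind predict_duration, and an alert could be raised for the customer's SLA when even max_vms misses the desired duration.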
MANAGING SERVICES ACROSS CONTAINERS
Services can be managed across containers. A management service can obtain or compile configuration information for containerized applications and containerized services that are hosted on a computing device. The configuration information can define how a containerized application is dependent on a containerized service. Using the configuration information, the management service can establish data paths between containers to enable container services running in the containers to perform cross-container communications by which a containerized application in one container can access a containerized service in another container. The management service may also enable a container service to perform communications by which a containerized application can access services provided by the host operating system.
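A toy sketch of turning the compiled configuration into cross-container data paths; the dependency map and socket-path scheme below are assumptions for illustration only.

    # Hypothetical dependency map compiled by the management service:
    # containerized application -> containerized services it depends on.
    CONFIG = {
        "web-app": ["auth-service", "logging-service"],
    }

    def establish_data_paths(config):
        # One channel per declared dependency, modeled here as a Unix
        # socket path shared between the two containers.
        paths = []
        for app, services in config.items():
            for service in services:
                paths.append((app, service,
                              f"/var/run/containers/{app}-{service}.sock"))
        return paths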
DATA STREAMING ACCELERATOR
Methods and apparatus relating to data streaming accelerators are described. In an embodiment, a hardware accelerator such as a Data Streaming Accelerator (DSA) logic circuitry provides high-performance data movement and/or data transformation for data to be transferred between a processor (having one or more processor cores) and a storage device. Other embodiments are also disclosed and claimed.
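The descriptor-based offload model can be sketched generically as below; this is not the actual DSA programming interface (the descriptor layout, opcodes, and submission path are all simplified assumptions).

    from dataclasses import dataclass

    @dataclass
    class Descriptor:
        opcode: str      # e.g. "memmove"; illustrative, not real DSA opcodes
        src: int         # source offset
        dst: int         # destination offset
        length: int      # bytes to move

    class Accelerator:
        # Toy model of a streaming accelerator's work queue.
        def __init__(self):
            self.queue = []

        def submit(self, desc):
            # The CPU enqueues a descriptor instead of copying data itself.
            self.queue.append(desc)

        def drain(self, memory):
            # The accelerator performs the data movement asynchronously.
            while self.queue:
                d = self.queue.pop(0)
                memory[d.dst:d.dst + d.length] = memory[d.src:d.src + d.length]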
Parameter server and method for sharing distributed deep learning parameter using the same
Disclosed herein are a parameter server and a method for sharing distributed deep-learning parameters using the parameter server. The method includes initializing a global weight parameter in response to an initialization request by a master process; performing an update by receiving a learned local gradient parameter from a worker process, which performs deep-learning training after updating its local weight parameter using the global weight parameter; accumulating the gradient parameters of the one or more worker processes in response to a request by the master process; and performing an update by receiving the global weight parameter from the master process, which calculates the global weight parameter using the accumulated gradient parameters.
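The master/worker exchange above maps onto a simple server object; this Python sketch assumes dense NumPy parameters and a plain gradient step, neither of which is specified by the abstract.

    import numpy as np

    class ParameterServer:
        def __init__(self, dim):
            self.global_w = np.zeros(dim)   # master: initialization request
            self.acc_grad = np.zeros(dim)

        def push_gradient(self, local_grad):
            # Worker sends its learned local gradient; server accumulates.
            self.acc_grad += local_grad

        def update_global(self, lr=0.01):
            # Master calculates the new global weight from the
            # accumulated gradients of the one or more workers.
            self.global_w -= lr * self.acc_grad
            self.acc_grad[:] = 0.0

        def pull_weights(self):
            # Worker refreshes its local weight with the global weight.
            return self.global_w.copy()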
Data reuse method based on convolutional neural network accelerator
A data reuse method based on a convolutional neural network (CNN) accelerator includes: a tile scanning module receiving command information from a command module, the command information comprising the size of a CNN job to be divided into tile blocks; the tile scanning module generating the coordinates of each tile block according to the tile size and sending them to a memory request module; the memory request module generating memory read requests and sending them to a memory module; and the memory module sequentially returning the tile block data to an input-activation/weight buffer unit, which saves the received tile block data to implement data reuse and transmits it to a computation processing element (PE).
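The tile scanning step can be sketched as a coordinate generator; the function name and scan order below are assumptions, as the abstract does not fix them.

    def tile_coordinates(job_h, job_w, tile_h, tile_w):
        # Divide a job of size job_h x job_w into tile blocks and yield
        # each block's coordinates (with clipped edge sizes) in scan order.
        for y in range(0, job_h, tile_h):
            for x in range(0, job_w, tile_w):
                yield (y, x, min(tile_h, job_h - y), min(tile_w, job_w - x))

Each coordinate would be turned into a memory read request; the returned tile data is held in the input-activation/weight buffer and reused across PE computations rather than re-fetched from memory.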
Virtual switch scaling for networking applications
Examples include a method of switching a packet by a virtual switch: receiving a system call to transmit a packet from a first application running in a first container on a first core; determining a destination for the packet; obtaining a buffer in an application memory space of the destination; copying the packet to the destination application memory space; and writing an entry for the packet to a queue assigned to the destination, the destination queue being held in a queue manager. The packet may then be obtained by an entity at the destination.
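A toy model of this switching path in Python; the class and queue structure are illustrative, and the copy into a bytes object stands in for obtaining a buffer in the destination's memory space.

    from collections import deque

    class VirtualSwitch:
        def __init__(self):
            self.queues = {}              # destination -> assigned queue

        def register(self, dest):
            self.queues[dest] = deque()   # queue held by the queue manager

        def transmit(self, packet, dest):
            buf = bytes(packet)           # copy into destination memory space
            self.queues[dest].append(buf) # write an entry to the dest queue

        def receive(self, dest):
            # The entity at the destination obtains the packet.
            q = self.queues[dest]
            return q.popleft() if q else None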
DATA BACKUP AND RECOVERY MANAGEMENT USING ALLOCATED DATA BLOCKS
A data backup and recovery method and system using allocated data blocks include: identifying a first snapshot associated with a virtual machine; accessing changed block tracking data associated with data changes that occurred in the virtual machine, the data changes corresponding to a set of changed data blocks; accessing block allocation status data associated with the set of changed data blocks; identifying, based on the block allocation status data, one or more allocated data blocks from the set of changed data blocks that have an allocated status; and storing the one or more allocated data blocks to a storage device.
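The allocation filter at the heart of this method reduces to a few lines; the block identifiers and status map below are illustrative.

    def blocks_to_back_up(changed_blocks, allocation_status):
        # Keep only changed blocks that are actually allocated;
        # unallocated changed blocks need not be stored.
        return [b for b in changed_blocks if allocation_status.get(b, False)]

    # Example: blocks 7 and 9 changed, but only block 7 is allocated.
    assert blocks_to_back_up([7, 9], {7: True, 9: False}) == [7]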