Patent classifications: G06F9/544
Virtual trusted platform modules
In some examples, a storage medium stores a plurality of information elements that relate to corresponding virtual trusted platform module (TPM) interfaces, where each respective information element of the plurality of information elements corresponds to a respective virtual machine (VM). A controller provides virtual TPMs for respective security operations. A processor resource executes the VMs to use the information elements to access the corresponding virtual TPM interfaces to invoke the security operations of the virtual TPMs, where a first VM is to access a first virtual TPM interface of the virtual TPM interfaces to request that a security operation of a respective virtual TPM be performed.
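The per-VM mapping above can be sketched in a few lines. This is an illustrative model only, assuming a single hash-based "PCR extend" security operation; the class and method names (`VTPMController`, `VirtualTPM`, `register_vm`, `invoke`) are not from the patent.

```python
import hashlib

class VirtualTPM:
    """One virtual TPM instance; here it offers only a hash-based 'extend' op."""
    def __init__(self):
        self.pcr = b"\x00" * 32  # a single simulated Platform Configuration Register

    def extend(self, measurement: bytes) -> bytes:
        # Security operation: PCR extend (hash-chain the new measurement).
        self.pcr = hashlib.sha256(self.pcr + measurement).digest()
        return self.pcr

class VTPMController:
    """Maps each VM's stored information element to its own virtual TPM interface."""
    def __init__(self):
        self._interfaces = {}

    def register_vm(self, vm_id: str) -> str:
        info_element = f"vtpm-if-{vm_id}"  # information element stored for this VM
        self._interfaces[info_element] = VirtualTPM()
        return info_element

    def invoke(self, info_element: str, measurement: bytes) -> bytes:
        # A VM presents its information element to reach its corresponding vTPM only.
        return self._interfaces[info_element].extend(measurement)

controller = VTPMController()
ie_first_vm = controller.register_vm("vm0")   # first VM's information element
pcr = controller.invoke(ie_first_vm, b"bootloader-image")
```

The point of the sketch is the indirection: a VM never holds the vTPM itself, only the information element that the controller resolves to the right interface.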
Authentication key-based DLL service
Systems and methods are provided for implementing an authentication-key-based DLL service. For example, the system can expose a list of functionalities and a request format, where a byte string denotes a functionality corresponding to an API. The user receives output after the system loads a DLL library maintained by a DLL provider. The system can generate a key corresponding to a functionality and transmit the key to the user; invocation of the functionality is then performed using the key. A shared memory space may be used for inputs from the user and outputs of the DLL. The system can perform an action based on authentication of the keys. When a functionality is upgraded, the system can notify the user to unload the old DLL and reload the new one in order to make use of the improvements.
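The request-key-invoke flow can be sketched as follows. This is a minimal model, not the patent's actual API: the service, the single `sum_bytes` functionality, and the dict standing in for the shared memory region are all assumptions.

```python
import secrets

class DLLService:
    def __init__(self):
        # The provider's functionalities; a real service would load them from a DLL.
        self._functions = {"sum_bytes": lambda data: bytes([sum(data) % 256])}
        self._keys = {}  # issued key -> functionality it authorizes

    def list_functionalities(self):
        # Step 1: expose the list of functionalities and their request format.
        return sorted(self._functions)

    def request_key(self, functionality: str) -> str:
        # Step 2: generate a key bound to one functionality and hand it to the user.
        key = secrets.token_hex(16)
        self._keys[key] = functionality
        return key

    def invoke(self, key: str, shared_memory: dict):
        # Step 3: authenticate the key, then run the function over the shared
        # memory space (user input in, DLL output out).
        functionality = self._keys.get(key)
        if functionality is None:
            raise PermissionError("key not authenticated")
        shared_memory["output"] = self._functions[functionality](shared_memory["input"])

svc = DLLService()
key = svc.request_key("sum_bytes")
shm = {"input": b"\x01\x02\x03"}
svc.invoke(key, shm)
```

An unknown or revoked key never reaches the DLL, which is the "action based on authentication of the keys" step.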
INFORMATION PROCESSING APPARATUS, COMPUTER-READABLE RECORDING MEDIUM HAVING STORED THEREIN INFORMATION PROCESSING PROGRAM, AND METHOD FOR PROCESSING INFORMATION
An apparatus includes: a storing device including regions allocated to virtual machines (VMs); a processing device executing the VMs; a relay device executing a relaying process; and a transfer processor transferring data between the regions. The processing device stores a first number and a second number, associated respectively with a used entry among first entries allocated to the transfer processor and a used entry among second entries allocated to the relay device, the first and second numbers being included in numbers associated with entries of a reception buffer in a first region allocated to a first VM; and sets the smaller of the first and second numbers as a number that is set in the first region and represents an entry of data read from the reception buffer.
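The core of the read-pointer update is a minimum over the two per-device progress numbers. A sketch under assumed names (the patent does not name this function):

```python
def readable_entry(first_number: int, second_number: int) -> int:
    """Return the entry number set in the first VM's region as the entry of
    data read from the reception buffer.

    first_number: number associated with the used entry on the transfer-processor side.
    second_number: number associated with the used entry on the relay-device side.
    Taking the smaller of the two ensures the pointer never runs ahead of
    whichever consumer is furthest behind.
    """
    return min(first_number, second_number)
```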
SYSTOLIC ARRAY OF ARBITRARY PHYSICAL AND LOGICAL DEPTH
A processing apparatus includes a processing resource including a general-purpose parallel processing engine and a matrix accelerator. The matrix accelerator includes first circuitry to receive a command to perform operations associated with an instruction, second circuitry to configure the matrix accelerator according to a physical depth of a systolic array within the matrix accelerator and a logical depth associated with the instruction, third circuitry to read operands for the instruction from a register file associated with the systolic array, fourth circuitry to perform operations for the instruction via one or more passes through one or more physical pipeline stages of the systolic array based on a configuration performed by the second circuitry, and fifth circuitry to write output of the operations to the register file associated with the systolic array.
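The physical-versus-logical depth relationship can be illustrated numerically. This is a back-of-the-envelope model, not the accelerator's actual configuration logic; both function names are assumptions.

```python
import math

def passes_required(logical_depth: int, physical_depth: int) -> int:
    # When the instruction's logical depth exceeds the systolic array's
    # physical depth, the operands make multiple passes through the
    # physical pipeline stages: e.g. logical depth 32 on a depth-8 array.
    return math.ceil(logical_depth / physical_depth)

def systolic_dot(a, b, physical_depth: int) -> float:
    """Accumulate a dot product in chunks of at most `physical_depth`
    elements, mimicking one result flowing through repeated passes
    of the array with partial sums carried between passes."""
    acc = 0.0
    for start in range(0, len(a), physical_depth):
        acc += sum(x * y for x, y in zip(a[start:start + physical_depth],
                                         b[start:start + physical_depth]))
    return acc
```

The configuration step in the abstract (second circuitry) corresponds to fixing `physical_depth` and `logical_depth` before the passes begin.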
CLUSTER COMPUTING SYSTEM AND OPERATING METHOD THEREOF
A cluster computing system is provided. The cluster computing system includes: a host including a first processor and a first buffer memory; computing nodes, each of which includes a second processor and a second buffer memory configured to store data received from the host; a network configured to connect the host and the computing nodes; and storage devices respectively corresponding to the computing nodes. The first processor is configured to control a task allocator to monitor a task performance state of each of the computing nodes, select at least one of the computing nodes as a task node based on the task performance state of each of the computing nodes, and distribute a background task to the task node, and the second processor of the task node is configured to perform the background task on sorted files stored in the second buffer memory, the sorted files being received by the second buffer memory from the first buffer memory via the network.
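The allocator's selection-and-distribution step can be sketched as follows, assuming the task performance state is a single load figure per node (the patent does not specify its form) and the background task is a merge over sorted files.

```python
def select_task_node(performance_states: dict) -> str:
    # Pick the node whose task performance state indicates the least load.
    return min(performance_states, key=performance_states.get)

def distribute_background_task(performance_states, sorted_files):
    node = select_task_node(performance_states)
    # The selected task node performs the background task on the sorted
    # files that arrived in its buffer memory from the host over the network;
    # here the task is simply merging the sorted batches.
    merged = sorted(f for batch in sorted_files for f in batch)
    return node, merged

states = {"node0": 0.7, "node1": 0.2, "node2": 0.9}
node, merged = distribute_background_task(states, [[3, 1], [2]])
```

Offloading the merge to a lightly loaded node keeps the host's first processor free for foreground work, which is the motivation the abstract implies.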
Methods and systems for managing prioritized database transactions
A database management system for controlling prioritized transactions, comprising: a processor adapted to: receive from a client module a request to write into a database item as part of a high-priority transaction; check a lock status and an injection status of the database item; and, when the lock status of the database item includes a lock owned by a low-priority transaction and the injection status is not-injected: change the injection status of the database item to injected; copy the current content of the database item to an undo buffer of the low-priority transaction; and write the requested content into a storage engine for the database item.
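The high-priority write path can be sketched directly from the steps above. All names here (`DatabaseItem`, `high_priority_write`, the string priorities) are illustrative, and the storage engine is reduced to the item's `content` field.

```python
class DatabaseItem:
    def __init__(self, content):
        self.content = content
        self.lock_owner_priority = None  # "low", "high", or None
        self.injected = False            # the item's injection status

def high_priority_write(item, new_content, undo_buffers):
    # Check lock status and injection status of the database item.
    if item.lock_owner_priority == "low" and not item.injected:
        item.injected = True                       # mark as injected
        undo_buffers["low"].append(item.content)   # copy current content to the
                                                   # low-priority txn's undo buffer
        item.content = new_content                 # write into the storage engine
        return True
    return False  # conditions not met; the write does not inject

item = DatabaseItem("v1")
item.lock_owner_priority = "low"
undo = {"low": []}
ok = high_priority_write(item, "v2", undo)
```

The undo buffer is what lets the low-priority transaction be rolled back cleanly after the high-priority write barges past its lock.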
Asynchronous execution of creative generator and trafficking workflows and components therefor
A creative development platform includes an input interface that receives input data defining creative properties; a workflow definition store that stores creative generation workflow definitions defining a workflow related to generating a creative; a creative generation server, communicatively coupled to the workflow definition store, to (i) receive the input data, (ii) retrieve at least one of the creative generation workflow definitions from the workflow definition store based on the input data, and (iii) generate the creative containing one or more media objects based on the input data and using the at least one workflow definition; and a network communications device operable to communicate the creative to target devices.
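The retrieve-then-generate flow can be sketched as a lookup plus step execution. The workflow name, steps, and input fields below are hypothetical stand-ins, not definitions from the patent.

```python
# A toy workflow definition store: one named workflow, defined as a step list.
WORKFLOW_DEFINITIONS = {
    "banner": ["render_background", "place_logo", "add_headline"],
}

def generate_creative(input_data):
    # Retrieve the workflow definition selected by the input data.
    steps = WORKFLOW_DEFINITIONS[input_data["type"]]
    # Run each step; here a step just records a media-object placeholder
    # parameterized by the creative properties in the input data.
    media_objects = [f"{step}({input_data['headline']})" for step in steps]
    return {"media_objects": media_objects}

creative = generate_creative({"type": "banner", "headline": "Sale"})
```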
Information processing apparatus, information processing circuit, information processing system, and information processing method
An information processing apparatus according to an aspect of the present invention includes an information processing circuit configured to generate a finite state machine based on a predetermined matching condition with respect to sequence data of an event that is input to the information processing apparatus; to process the sequence data so as to substantially remove data that does not match the matching condition from the sequence data; and to output the processed sequence data.
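The filtering behavior can be sketched with a small finite state machine whose state is how much of the matching condition has been satisfied. The two-event pattern and the event names below are assumptions for illustration, not the patent's matching condition.

```python
def filter_sequence(events, pattern=("A", "B")):
    """FSM-based filter: keep only events that complete an occurrence of
    `pattern` in order; all other events are substantially removed."""
    state = 0              # FSM state = length of the pattern prefix matched so far
    kept, pending = [], []
    for event in events:
        if event == pattern[state]:
            pending.append(event)  # tentatively keep events that advance the FSM
            state += 1
            if state == len(pattern):   # full match reached: emit and reset
                kept.extend(pending)
                pending, state = [], 0
        # events that do not advance the match are dropped from the output
    return kept

out = filter_sequence(["A", "x", "B", "y", "A", "B"])
```

Generating the FSM from a declared matching condition (rather than hard-coding it) is the apparatus's job; this sketch fixes the condition for brevity.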
Non-volatile memory based processors and dataflow techniques
A monolithic integrated circuit (IC) including one or more compute circuits, one or more non-volatile memory circuits, one or more communication channels, and one or more communication interfaces. The one or more communication channels can communicatively couple the one or more compute circuits, the one or more non-volatile memory circuits, and the one or more communication interfaces together. The one or more communication interfaces can communicatively couple one or more circuits of the monolithic integrated circuit to one or more circuits external to the monolithic integrated circuit.
Reduction mode of planar engine in neural processor
Embodiments relate to a neural processor that includes one or more neural engine circuits and planar engine circuits. The neural engine circuits can perform convolution operations of input data with one or more kernels to generate outputs. The planar engine circuit is coupled to the plurality of neural engine circuits and can be configured into one of multiple modes. In a reduction mode, the planar engine circuit may process values arranged in one or more dimensions of the input to generate a reduced value, and the reduced values across multiple input data may be accumulated. The planar engine circuit may program a filter circuit as a reduction tree to gradually reduce the data into a reduced value. The reduction operation reduces the size of one or more dimensions of a tensor.
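A pairwise reduction tree of the kind described can be sketched as follows: each level combines adjacent values, halving the count until a single reduced value remains. The function name and the handling of odd-length levels are assumptions for illustration.

```python
def reduction_tree(values, op=lambda a, b: a + b):
    """Gradually reduce `values` to one value via a pairwise tree,
    mirroring a filter circuit programmed as a reduction tree."""
    level = list(values)
    while len(level) > 1:
        nxt = [op(level[i], level[i + 1]) for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:
            nxt.append(level[-1])   # an unpaired element passes through to the next level
        level = nxt
    return level[0]
```

With `op` as addition this computes a sum reduction; swapping in `max` gives a max reduction, which is how one filter circuit can serve multiple reduction modes.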