Patent classifications
G06F9/223
Systems and methods for processing software application notifications
Methods and systems for managing notifications relating to execution of microservices are described herein. A format of notifications relating to execution of a plurality of microservices may be defined. The format may provide that all notifications generated based on the format comprise code. The code may indicate, for example, an identity of one of a plurality of microservices, a version of the code, an occurrence of an issue in execution of the one of the plurality of microservices, and/or one or more scripts which may be executed to address an issue of the notification. Two or more notifications may be received, and the two or more notifications may be formatted based on the defined format. A third notification may be generated based on a comparison of the two or more notifications. The third notification may be transmitted to a computing device.
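The notification flow above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the field names (`service_id`, `code_version`, `issue`, `remediation_scripts`) and the comparison rule (merge two notifications that report the same issue from the same microservice) are assumptions.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical notification format; field names are assumptions.
@dataclass(frozen=True)
class Notification:
    service_id: str                       # identity of the microservice
    code_version: str                     # version of the code that emitted this
    issue: str                            # issue occurring during execution
    remediation_scripts: Tuple[str, ...]  # scripts that may address the issue

def compare_and_merge(a: Notification, b: Notification) -> Optional[Notification]:
    """One possible comparison: generate a third notification when two
    notifications describe the same issue in the same microservice,
    pooling their remediation scripts."""
    if a.service_id == b.service_id and a.issue == b.issue:
        return Notification(
            service_id=a.service_id,
            code_version=a.code_version,
            issue=a.issue,
            remediation_scripts=tuple(
                sorted(set(a.remediation_scripts) | set(b.remediation_scripts))
            ),
        )
    return None
```

The third notification could then be serialized and transmitted to a computing device by any transport.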
Apparatus and methods for vector operations
Aspects for vector operations in a neural network are described herein. The aspects may include a vector caching unit configured to store a first vector and a second vector, wherein the first vector includes one or more first elements and the second vector includes one or more second elements. The aspects may further include one or more adders and a combiner. The one or more adders may be configured to respectively add each of the first elements to a corresponding one of the second elements to generate one or more addition results. The combiner may be configured to combine the one or more addition results into an output vector.
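The adder/combiner dataflow above can be mirrored in a short software sketch. The claim covers hardware units; this is only an illustration of the element-wise behavior they implement.

```python
def vector_add(first: list, second: list) -> list:
    """Each adder adds one first element to the corresponding second
    element; the combiner collects the addition results into an output
    vector. (Software sketch of the hardware dataflow described above.)"""
    if len(first) != len(second):
        raise ValueError("vectors must have the same number of elements")
    # One addition result per adder, one result per element pair.
    addition_results = [a + b for a, b in zip(first, second)]
    # The combiner step: results combined into the output vector.
    return addition_results
```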
MICROPROCESSOR WITH TIME COUNTER FOR STATICALLY DISPATCHING INSTRUCTIONS WITH PHANTOM REGISTERS
A processor includes a time counter and provides a method for statically dispatching fused instructions with first operation and second operation with preset execution times for forwarding of result data from the first operation to the second operation without writing to a register, and where the preset execution times are based on a time count from the time counter provided to an execution pipeline.
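A toy simulation can make the timing idea concrete: the first operation's result is scheduled for a preset time derived from the time counter, and the second operation consumes it from a forwarding buffer (a "phantom register") at that time, with no architectural register write. The latency value and buffer shape here are illustrative assumptions, not the processor's actual microarchitecture.

```python
class TimeCounterPipeline:
    """Toy model of statically dispatched fused operations."""

    def __init__(self):
        self.time = 0       # the time counter
        self.forward = {}   # preset time -> forwarded result (phantom register)

    def dispatch_fused(self, op1, op2, x, latency1=2):
        # The first operation's result is preset to be available at a
        # known time count (here: current time + an assumed latency).
        ready_at = self.time + latency1
        self.forward[ready_at] = op1(x)
        # The second operation is statically scheduled to fire at that
        # time, reading the forwarded value without a register write.
        while self.time < ready_at:
            self.time += 1  # time counter advances each cycle
        return op2(self.forward.pop(ready_at))
```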
INSTRUCTION TRANSMITTING UNIT, INSTRUCTION EXECUTION UNIT, AND RELATED APPARATUS AND METHOD
This disclosure provides an instruction transmitting unit, an instruction execution unit, and a related apparatus and method. The instruction transmitting unit includes: an instruction splitter adapted to split a to-be-executed vector instruction into microinstructions; a microinstruction index fetcher adapted to acquire a number-of-effective-elements index of the microinstructions resulting from the splitting based on an element range involved in the microinstructions; an index comparison subunit adapted to compare the acquired number-of-effective-elements index with a first index, where the first index is a number-of-effective-elements index of a fault-only-first microinstruction whose processing has not been completed; and a microinstruction transmission controller adapted to transmit the microinstructions resulting from the splitting to a vector execution unit for execution when the number-of-effective-elements index is less than the first index. This disclosure improves operating efficiency of subsequent vector instructions when a fault-only-first vector loading instruction is involved in chaining.
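The transmit gate at the heart of the mechanism can be sketched as a single comparison. The index semantics here are an assumption based on the abstract: a split microinstruction may be issued only while its number-of-effective-elements index stays below that of the pending fault-only-first microinstruction.

```python
from typing import Optional

def may_transmit(micro_index: int, first_index: Optional[int]) -> bool:
    """Transmit a microinstruction resulting from splitting only when its
    number-of-effective-elements index is less than the first index (the
    index of the uncompleted fault-only-first microinstruction).
    first_index is None when no fault-only-first microinstruction is
    pending, in which case nothing blocks transmission."""
    if first_index is None:
        return True
    return micro_index < first_index
```

Elements below the gate can thus chain onto the fault-only-first load without waiting for it to fully complete, which is the claimed efficiency gain.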
Data sharing system and data sharing method therefor
The present disclosure provides a processing device for implementing a generative adversarial network and a method for machine creation applying the processing device. The processing device includes a memory configured to receive input data including random noise and reference data, and to store a discriminator neural network parameter and a generator neural network parameter. The processing device further includes a computation device configured to transmit the random noise input data into a generator neural network and perform an operation to obtain a noise generation result, to input both the noise generation result and the reference data into a discriminator neural network and perform an operation to obtain a discrimination result, and further configured to update the discriminator neural network parameter and the generator neural network parameter according to the discrimination result.
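The described flow can be sketched with trivial stand-in "networks" (single scalar parameters) so the update loop is runnable. The learning rule below is a deliberately simplified illustration of "update both parameters according to the discrimination result", not the patent's computation device.

```python
import random

def gan_step(gen_param: float, disc_param: float, reference: float, lr: float = 0.1):
    """One schematic pass: noise -> generator -> discriminator -> updates.
    Scalar 'networks' and the update rule are illustrative assumptions."""
    noise = random.random()             # random-noise input data
    generated = gen_param * noise       # generator output: noise generation result
    # Discriminator scores both the noise generation result and the reference.
    score_fake = disc_param * generated
    score_real = disc_param * reference
    # Update both parameters according to the discrimination result.
    disc_param += lr * (score_real - score_fake)
    gen_param += lr * score_fake
    return gen_param, disc_param
```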
SYSTEMS AND METHODS FOR DISTRIBUTED BUSINESS PROCESS MANAGEMENT
Systems and methods for distributed business process management are disclosed. In one embodiment, in an information processing apparatus comprising at least one computer processor, a method for configuration-driven distributed orchestration using different software components to execute a complex business process may include: (1) receiving a request for a runtime flow from a flow management adapter; (2) reading a flow configuration from the request; (3) creating an instance of the runtime flow; (4) initiating a service call to each component in the runtime flow; (5) creating a runtime instance in a database along with a state of each dependency in the runtime flow; and, in response to external dependencies being met: (6) building and sending messages to the components using a message builder; (7) initiating flow actions via an event-driven scheduler; and (8) making a service call to at least one of the components using the message builders.
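Steps (3) through (8) can be condensed into a short sketch. The flow-configuration shape, the dependency-state store, and the message-builder signature below are all assumptions made for illustration; a real embodiment would persist state in a database and fire actions from an event-driven scheduler.

```python
def run_flow(flow_config: dict, call_service, message_builder):
    """Create a runtime flow instance, record per-component dependency
    state, then build and send a message to each component once its
    external dependencies are met (simplified, synchronous sketch)."""
    state = {}    # stand-in for the runtime instance rows in a database
    results = {}
    for component in flow_config["components"]:
        deps = flow_config["dependencies"].get(component, [])
        state[component] = "pending"                 # (5) dependency state recorded
        if all(results.get(d) == "ok" for d in deps):  # external dependencies met
            message = message_builder(component)     # (6) build the message
            results[component] = call_service(component, message)  # (4)/(8) service call
            state[component] = "done"
    return state, results
```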
Flexible command pointers to microcode operations
Disclosed are apparatuses, methods, and computer-readable media for providing flexible command pointers to microcodes in a memory device. In one embodiment, a method is disclosed comprising receiving a command to access a memory device; accessing a configuration parameter; identifying a program counter value based on the configuration parameter and the command; and loading and executing a microcode based on the program counter value.
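The flexibility described above amounts to a two-key lookup: the same command can point at different microcode entry points under different configurations. The table layout and the values below are illustrative assumptions, not an actual device's microcode map.

```python
def resolve_program_counter(command: str, config_parameter: int,
                            pointer_table: dict) -> int:
    """Identify the microcode program counter value from the received
    command and the accessed configuration parameter; the device would
    then load and execute the microcode at that counter."""
    return pointer_table[(command, config_parameter)]
```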
Microkernel-based software optimization of neural networks
Disclosed are systems and methods related to providing for the optimized software implementations of artificial intelligence (“AI”) networks. The system receives operations (“ops”) consisting of a set of instructions to be performed within an AI network. The system then receives microkernels implementing one or more instructions to be performed within the AI network for a specific hardware component. Next, the system generates a kernel for each of the operations. Generating the kernel for each of the operations includes configuring input data to be received from the AI network; detecting a specific hardware component to be used; selecting one or more microkernels to be invoked by the kernel based on the detection of the specific hardware component; and configuring output data to be sent to the AI network as a result of the invocation of the microkernel(s).
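The kernel-generation step can be sketched as a closure that binds an op to the microkernel registry and defers hardware detection to invocation time. The registry keying and the hardware names here are invented for illustration.

```python
def generate_kernel(op_name: str, microkernels: dict, detect_hardware):
    """Generate a kernel for an op: when invoked with input data from the
    network, it detects the specific hardware component, selects the
    microkernel registered for it, and returns the output data."""
    def kernel(input_data):
        hardware = detect_hardware()                     # detect hardware component
        microkernel = microkernels[(op_name, hardware)]  # select matching microkernel
        return microkernel(input_data)                   # output sent back to the network
    return kernel
```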
ELECTRONIC DEVICE AND METHOD FOR FABRICATING THE SAME
An electronic device may include a semiconductor memory structured to include a plurality of memory cells, wherein each of the plurality of memory cells may comprise: a first electrode layer; a second electrode layer; and a selection element layer disposed between the first electrode layer and the second electrode layer to electrically couple or decouple an electrical connection between the first electrode layer and the second electrode layer based on a magnitude of an applied voltage or an applied current with respect to a threshold magnitude, wherein the selection element layer has a dopant concentration profile which decreases from an interface between the selection element layer and the first electrode layer toward an interface between the selection element layer and the second electrode layer.
Configuring DevOps Pipelines Across Domains And Thresholds
The present invention extends configuring development and operations pipelines across domains. A pipeline manager can form and manage pipelines that span any combination of domains and any combination of public cloud resources, private cloud resources, user on-premise resources, etc., in accordance with appropriate (cloud and/or on-premise) profile information. The pipeline manager can (re)configure a pipeline as appropriate to address alterations to workflows, upgrades to DevOps tools, removal of functionality from a workflow, etc. A pipeline framework enables customers to build no-code pipelines spanning domains for various use cases in a plug-and-play manner (software engineering and SDLC pipelines, Salesforce CI/CD pipelines, AI/ML, SaaS applications, Infrastructure as Code (IaC), etc.). The pipeline framework enables users to integrate collaboration tools, notifications, and approval gates offering thresholds at every step. In addition, the pipeline framework captures logs and provides a summary via livestream, upon completion of each pipeline activity, and after each pipeline.
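The per-step approval gates and per-activity logging described above can be sketched in a few lines. The step/approval shapes and log format are assumptions for illustration; the actual framework spans domains, tools, and cloud profiles well beyond this toy.

```python
def run_pipeline(steps: list, approvals: dict, log: list) -> list:
    """Run each configured step only if its approval gate passes; capture
    a log entry per pipeline activity and a summary after the pipeline."""
    for step in steps:
        if not approvals.get(step["name"], True):  # approval gate / threshold
            log.append(f"blocked: {step['name']}")
            continue
        step["action"]()                           # execute the pipeline activity
        log.append(f"done: {step['name']}")        # log captured per activity
    log.append(f"summary: {len(steps)} steps configured")  # summary after the pipeline
    return log
```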