G06F17/153

Dynamic processing element array expansion

A computer-implemented method includes receiving a neural network model that includes a tensor operation, and dividing the tensor operation into sub-operations. The sub-operations include at least two sub-operations that have no data dependency between them. The computer-implemented method further includes assigning a first sub-operation of the two sub-operations to a first computing engine, assigning a second sub-operation of the two sub-operations to a second computing engine, and generating instructions for performing, in parallel, the first sub-operation by the first computing engine and the second sub-operation by the second computing engine. An inference is then made based on a result of the first sub-operation, a result of the second sub-operation, or both. The first computing engine and the second computing engine are in a same integrated circuit device or in two different integrated circuit devices.
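
As an illustration of the independence the abstract relies on, the sketch below (a hypothetical Python rendering; the row-wise split and thread-based "engines" are assumptions, not the patent's scheme) divides a matrix multiply into two row-block sub-operations with no data dependency and runs them on two worker threads standing in for the two computing engines.

```python
from concurrent.futures import ThreadPoolExecutor

def matmul(a, b):
    """Plain dense matrix multiply on nested lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def split_rows(a, parts=2):
    """Divide the operation: each row block of A yields an independent
    sub-operation, since output rows share no intermediate results."""
    step = (len(a) + parts - 1) // parts
    return [a[i:i + step] for i in range(0, len(a), step)]

def run_parallel(a, b):
    sub_ops = split_rows(a)  # two sub-operations with no data dependency
    with ThreadPoolExecutor(max_workers=2) as engines:  # stand-ins for two engines
        results = list(engines.map(lambda blk: matmul(blk, b), sub_ops))
    out = []
    for r in results:        # concatenate the partial outputs in order
        out.extend(r)
    return out
```

Because the output row blocks share no intermediate values, the concatenated partial results equal the undivided product.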

Environmental risk management system

The present disclosure describes devices and methods for monitoring a technology environment. In particular, a computing device includes a processor with computer-readable instructions to access a plurality of indicators (e.g., variables) that have corresponding stored historical information. The indicators are then used to calculate a summed-weights table of relative risk for each time period in the past. The summed-weights table is then correlated to a target variable (e.g., a variable that documents major issues, incidents, or disruptions that occurred in the technology environment in the past). The correlation coefficient between the summed-weights table and the target variable is then used to implement a machine learning algorithm in order to better determine current risk levels (e.g., relative values that predict issues, incidents, or disruptions).
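
The summed-weights idea can be sketched as follows; the indicator names, weight values, and target-variable figures here are illustrative assumptions, and the correlation step uses an ordinary Pearson coefficient rather than whatever the patent's machine learning algorithm consumes.

```python
from statistics import mean

def summed_weights(indicators, weights):
    """indicators: {name: [value per past period]}; returns one summed,
    weighted risk value per time period (one row of the table)."""
    periods = len(next(iter(indicators.values())))
    return [sum(weights[name] * series[t] for name, series in indicators.items())
            for t in range(periods)]

def pearson(xs, ys):
    """Correlation coefficient between the risk table and the target."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

indicators = {"patch_backlog": [1, 2, 3, 4], "alert_volume": [2, 2, 4, 6]}
weights = {"patch_backlog": 0.7, "alert_volume": 0.3}
risk = summed_weights(indicators, weights)  # summed-weights table, one value per period
incidents = [0, 1, 1, 2]                    # target variable: recorded disruptions
r = pearson(risk, incidents)                # high r suggests the weights are predictive
```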

Cognitive-defined network management
11568345 · 2023-01-31

Techniques are described for cognitive-defined network management (CDNM) that seek to perform real-time collection and analysis of raw network data from across a disaggregated wireless network and to dynamically orchestrate network management functions substantially in real time, accordingly. For example, a multi-modal artificial intelligence (AI) engine is trained to normalize the heterogeneous raw network data into homogeneous so-called “golden record data.” A repository of historical golden records can be maintained for generating data models for use in training AI network management applications. An orchestrator can operate to direct execution of pre-developed network management workflows based on results obtained from querying the trained AI network management applications with newly received (real-time) golden records.
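
To show only the shape of the normalization step, the sketch below replaces the patent's trained multi-modal AI engine with hand-written per-source adapters (an assumption for illustration); the vendor formats and golden-record field names are likewise invented.

```python
# Homogeneous "golden record" schema that every raw source is mapped onto.
GOLDEN_FIELDS = ("node_id", "metric", "value", "timestamp")

def from_vendor_a(raw):
    """Adapter for a dict-style raw feed (hypothetical format)."""
    return {"node_id": raw["dev"], "metric": raw["kpi"],
            "value": float(raw["val"]), "timestamp": int(raw["ts"])}

def from_vendor_b(raw):
    """Adapter for a pipe-delimited raw feed (hypothetical format)."""
    node, metric, value, ts = raw.split("|")
    return {"node_id": node, "metric": metric,
            "value": float(value), "timestamp": int(ts)}

def normalize(source, raw):
    """Map heterogeneous raw data into one homogeneous golden record."""
    adapters = {"a": from_vendor_a, "b": from_vendor_b}
    record = adapters[source](raw)
    assert set(record) == set(GOLDEN_FIELDS)  # schema is uniform across sources
    return record
```

A repository of such records, keyed by timestamp, would then supply both the historical training data and the real-time queries the orchestrator acts on.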

APPARATUS FOR SOLVING CIRCUIT EQUATIONS OF PROCESSING ELEMENTS USING NEURAL NETWORK AND METHOD FOR CONTROLLING THE SAME
20230229729 · 2023-07-20

According to various embodiments of the present disclosure, a method for solving circuit equations of a processing element (PE) using a neural network by a graphics processing unit (GPU) comprises: forming M aligned virtual cell arrays in the neural network, wherein the cell arrays have a height value N and a width value O, each virtual cell array corresponding to a crossbar array circuit included in a processing element; performing 3D convolution on each of the virtual cell arrays until the height value N of each of the virtual cell arrays becomes 1; inputting parameters of the memory cells of the crossbar array to each of the virtual cell arrays on which the 3D convolution is performed; and solving a circuit equation of the processing element using the output values of the virtual cell arrays.
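
A loose sketch of the height-reduction step, shown on a single 2-D cell array for readability: a convolution along the height axis is applied repeatedly until the height value reaches 1. The averaging kernel and cell values are placeholders, not the patent's trained parameters.

```python
def conv_height(array, kernel=(0.5, 0.5)):
    """array: N x O nested list; one convolution step along the height
    axis reduces the height from N to N - 1."""
    n = len(array)
    return [[kernel[0] * array[i][j] + kernel[1] * array[i + 1][j]
             for j in range(len(array[0]))]
            for i in range(n - 1)]

def reduce_to_height_one(array):
    """Repeat the convolution until the height value becomes 1."""
    while len(array) > 1:
        array = conv_height(array)
    return array

cell_array = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # N=3, O=2
out = reduce_to_height_one(cell_array)             # height collapsed to 1
```

In the patent's setting this would run as a batched 3D convolution over all M virtual cell arrays at once; the collapsed output row then feeds the circuit-equation solve.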

Convolution accelerator using in-memory computation

A method for accelerating a convolution of a kernel matrix over an input matrix, for computation of an output matrix using in-memory computation, involves storing, in different sets of cells in an array of cells, respective combinations of elements of the kernel matrix or of multiple kernel matrices. To perform the convolution, a sequence of input vectors from the input matrix is applied to the array. Each of the input vectors is applied to the different sets of cells in parallel for computation during the same time interval. The outputs from each of the different sets of cells generated in response to each input vector are sensed to produce a set of data representing the contributions of that input vector to multiple elements of the output matrix. The sets of data generated across the input matrix are used to produce the output matrix.
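
A toy digital emulation of the array, on a 1-D example: each "set of cells" is one column holding a shifted copy of the kernel, so a single input vector applied to the array yields its contributions to several output elements in the same time interval. The sizes, and the reduction of analog sensing to a dot product, are simplifying assumptions.

```python
def build_cell_array(kernel, out_len, in_len):
    """Column c holds the kernel aligned at offset c, zeros elsewhere;
    each column is one 'set of cells' feeding one output element."""
    cols = []
    for c in range(out_len):
        col = [0.0] * in_len
        for k, w in enumerate(kernel):
            col[c + k] = w
        cols.append(col)
    return cols

def apply_input(cell_array, x):
    """One cycle: the input vector drives every column in parallel;
    sensing each column yields that column's output-element value."""
    return [sum(w * xi for w, xi in zip(col, x)) for col in cell_array]

kernel = [1.0, 2.0]
x = [1.0, 2.0, 3.0]                      # one input vector from the input matrix
arr = build_cell_array(kernel, out_len=2, in_len=3)
y = apply_input(arr, x)                  # sliding-window products, all at once
```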

METHOD AND APPARATUS FOR DISTRIBUTED AND COOPERATIVE COMPUTATION IN ARTIFICIAL NEURAL NETWORKS

An apparatus and method are described for distributed and cooperative computation in artificial neural networks. For example, one embodiment of an apparatus comprises: an input/output (I/O) interface; a plurality of processing units communicatively coupled to the I/O interface to receive data for input neurons and synaptic weights associated with each of the input neurons, each of the plurality of processing units to process at least a portion of the data for the input neurons and synaptic weights to generate partial results; and an interconnect communicatively coupling the plurality of processing units, each of the processing units to share the partial results with one or more other processing units over the interconnect, the other processing units using the partial results to generate additional partial results or final results. The processing units may share data including input neurons and weights over a shared input bus.
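
A minimal sketch of the partial-result scheme, assuming a two-unit split of a single output neuron's accumulation (the unit count, slicing, and values are illustrative, not the embodiment's):

```python
def partial_result(inputs, weights):
    """One processing unit's share of the weighted accumulation."""
    return sum(i * w for i, w in zip(inputs, weights))

neurons = [1.0, 2.0, 3.0, 4.0]    # input neuron data
weights = [0.5, -1.0, 0.25, 2.0]  # synaptic weights, one per input neuron

# Unit 0 and unit 1 each process half of the neurons and weights.
p0 = partial_result(neurons[:2], weights[:2])
p1 = partial_result(neurons[2:], weights[2:])

# Partials are shared over the interconnect and combined into a final result.
output = p0 + p1
```

Summing the partials recovers exactly the undistributed dot product, which is why the units can cooperate without exchanging raw inputs.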

Convolutional dynamic Boltzmann Machine for temporal event sequence

A computer-implemented method is provided for machine prediction. The method includes forming, by a hardware processor, a Convolutional Dynamic Boltzmann Machine (C-DyBM) by extending a non-convolutional DyBM with a convolutional operation. The method further includes generating, by the hardware processor using the convolutional operation of the C-DyBM, a prediction of a future event at time t from a past patch of time-series of observations. The method also includes performing, by the hardware processor, a physical action responsive to the prediction of the future event at time t.
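
As a greatly simplified stand-in for the predict-then-act loop (not the DyBM's eligibility-trace machinery), the sketch below convolves a fixed filter over a patch of the most recent observations and triggers a stub action on the forecast; the filter values and threshold are assumptions.

```python
def predict_next(series, patch_filter):
    """Convolve the filter over the trailing patch of the time series to
    produce a prediction of the next value."""
    patch = series[-len(patch_filter):]
    return sum(w * x for w, x in zip(patch_filter, patch))

history = [1.0, 2.0, 3.0, 4.0]    # past observations
filt = [0.1, 0.3, 0.6]            # weights recent observations more heavily

forecast = predict_next(history, filt)

# The "physical action responsive to the prediction," reduced to a stub.
action = "raise_alarm" if forecast > 3.0 else "none"
```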

Integrated circuit chip apparatus

Provided are an integrated circuit chip apparatus and a related product, the integrated circuit chip apparatus being used for executing a multiplication operation, a convolution operation, or a training operation of a neural network. The present technical solution has the advantages of a small amount of computation and low power consumption.

DECOMPOSING A DECONVOLUTION INTO MULTIPLE CONVOLUTIONS
20230016455 · 2023-01-19

A deconvolution can be decomposed into multiple convolutions whose results together constitute the output of the deconvolution. Zeros may be added to an input tensor of the deconvolution to generate an upsampled input tensor. Subtensors having the same size as the kernel of the deconvolution may be identified from the upsampled input tensor. A subtensor may include one or more input activations and one or more zeros. Subtensors having the same distribution pattern of input activations may be used to generate a reduced kernel, which includes a subset of the kernel. The position of a weight in the reduced kernel may be the same as the position of an input activation in the subtensor. Multiple reduced kernels may be generated based on multiple subtensors having different distribution patterns of activations. Each of the convolutions may use the input tensor and a different one of the reduced kernels.
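
The decomposition can be checked on a small 1-D example (assumptions: stride-2 zero insertion, "valid" sliding windows, cross-correlation convention). Subtensors of the zero-upsampled input fall into two distribution patterns, so the length-3 kernel splits into two reduced kernels whose convolution results interleave into the deconvolution output.

```python
def upsample_with_zeros(x, stride=2):
    """Insert stride-1 zeros between input activations."""
    u = []
    for v in x[:-1]:
        u.extend([v] + [0.0] * (stride - 1))
    u.append(x[-1])
    return u

def conv1d(x, k):
    """Valid cross-correlation of kernel k over x."""
    return [sum(k[j] * x[i + j] for j in range(len(k)))
            for i in range(len(x) - len(k) + 1)]

def interleave(even, odd):
    out = []
    for i, e in enumerate(even):
        out.append(e)
        if i < len(odd):
            out.append(odd[i])
    return out

x = [1.0, 2.0, 3.0]
k = [1.0, 10.0, 100.0]

# Direct route: convolve the full kernel over the zero-upsampled input.
full = conv1d(upsample_with_zeros(x), k)

# Decomposed route: subtensor pattern (activation, 0, activation) selects the
# reduced kernel [k0, k2]; pattern (0, activation, 0) selects [k1]. Both
# convolutions run on the original (non-upsampled) input tensor.
decomposed = interleave(conv1d(x, [k[0], k[2]]),
                        [k[1] * v for v in x[1:-1]])
```

The interleaved reduced-kernel results match the direct convolution of the upsampled tensor, while never multiplying by the inserted zeros.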

Apparatus and method for generating ciphertext data with maintained structure for analytics capability
11558176 · 2023-01-17

A method for providing ciphertext data by a first computing device having memory includes obtaining, from the memory, plaintext data having a structure; providing the plaintext data to a structure preserving encryption network (SPEN) to generate the ciphertext data, where the structure of the plaintext data corresponds to a structure of the ciphertext data; and communicating, from the first computing device to a second computing device, the ciphertext data to permit analysis on the ciphertext data.