G06F9/28

Microprocessor using compressed and uncompressed microcode storage

A microprocessor includes compressed and uncompressed microcode memory storages, having N-bit wide and M-bit wide addressable words, respectively, where N<M. The microprocessor also includes a fetch unit, an instruction translator, and an execution stage. When the instruction translator receives an architectural instruction, it writes information identifying source and destination registers specified by the architectural instruction to an indirection register. It also issues one or more fetch addresses to retrieve a sequence of one or more microcode instructions from one of the uncompressed microcode memory storage and the compressed microcode memory storage to implement the architectural instruction. It merges information in the indirection register with the sequence of one or more microcode instructions to generate a sequence of one or more implementing microinstructions.
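The merge step described in this abstract can be sketched in software. The following is a minimal, hypothetical illustration, not the patent's actual hardware design: microcode templates carry placeholder fields, and the translator fills them from an "indirection register" holding the architectural instruction's source and destination registers. All field names and the ROM layout are assumptions for illustration.

```python
SRC_PLACEHOLDER = "SRC"
DST_PLACEHOLDER = "DST"

def translate(arch_instr, microcode_rom):
    """Return implementing microinstructions for one architectural instruction."""
    # Step 1: record the registers named by the architectural instruction
    # in the indirection register.
    indirection_reg = {
        SRC_PLACEHOLDER: arch_instr["src"],
        DST_PLACEHOLDER: arch_instr["dst"],
    }
    # Step 2: fetch the microcode sequence that implements the opcode.
    template = microcode_rom[arch_instr["opcode"]]
    # Step 3: merge indirection-register contents into each template word,
    # leaving non-placeholder fields untouched.
    return [
        tuple(indirection_reg.get(field, field) for field in uop)
        for uop in template
    ]

rom = {
    # A generic "add" routine stored once, reusable for any register pair.
    "ADD": [("load_tmp", "SRC"), ("add_tmp", "DST"), ("store", "DST")],
}
uops = translate({"opcode": "ADD", "src": "r3", "dst": "r7"}, rom)
```

Because the template never names concrete registers, one stored routine serves every register combination, which is part of what makes the microcode compressible.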

AUTOMATIC SCALING OF MICROSERVICES APPLICATIONS
20220365779 · 2022-11-17

A device may receive information identifying a set of tasks to be executed by a microservices application that includes a plurality of microservices. The device may determine an execution time of the set of tasks based on a set of parameters and a model. The set of parameters may include a first parameter that identifies a first number of instances of a first microservice of the plurality of microservices, and a second parameter that identifies a second number of instances of a second microservice of the plurality of microservices. The device may compare the execution time and a threshold. The threshold may be associated with a service level agreement. The device may selectively adjust the first number of instances or the second number of instances based on comparing the execution time and the threshold.
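The scaling loop in this abstract can be illustrated with a toy model. The per-instance cost constants and the strategy of growing the slower microservice first are assumptions for the sketch, not details from the patent:

```python
COST1, COST2 = 2.0, 3.0  # assumed per-task cost of each microservice

def predicted_time(tasks, n1, n2):
    """Toy model: each microservice's work divides across its instances."""
    return tasks * COST1 / n1 + tasks * COST2 / n2

def autoscale(tasks, n1, n2, sla_threshold):
    """Add instances until the modeled execution time meets the SLA threshold,
    growing whichever microservice currently dominates the execution time."""
    while predicted_time(tasks, n1, n2) > sla_threshold:
        if COST1 * tasks / n1 >= COST2 * tasks / n2:
            n1 += 1   # first microservice is the bottleneck
        else:
            n2 += 1   # second microservice is the bottleneck
    return n1, n2

n1, n2 = autoscale(tasks=10, n1=1, n2=1, sla_threshold=10.0)
```

The comparison against the SLA-derived threshold drives the selective adjustment of one instance count at a time, as the abstract describes.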

PROCESSOR WITH MEMORY CONTROLLER INCLUDING DYNAMICALLY PROGRAMMABLE FUNCTIONAL UNIT

A processor including a memory controller for interfacing with an external memory and a programmable functional unit (PFU). The PFU is programmed by a PFU program to modify operation of the memory controller, in which the PFU includes programmable logic elements and programmable interconnectors. For example, the PFU is programmed by the PFU program to add a function or otherwise modify an existing function of the memory controller to enhance its functionality during operation of the processor. In this manner, the functionality and/or operation of the memory controller is not fixed once the processor is manufactured; instead the memory controller may be modified after manufacture to improve efficiency and/or enhance performance of the processor, such as when executing a corresponding process.
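The reprogrammable-after-manufacture idea can be modeled in software by treating the PFU program as a replaceable function hook inside a memory-controller model. This is purely a hypothetical analogy; the address-swizzling example is an assumption, not from the patent:

```python
class MemoryController:
    """Toy model: a memory controller whose address mapping can be
    reprogrammed after 'manufacture' by loading a new PFU program."""

    def __init__(self):
        self.pfu_program = lambda addr: addr   # default: identity mapping

    def load_pfu(self, program):
        """Reprogram the functional unit without changing the controller."""
        self.pfu_program = program

    def physical_address(self, addr):
        return self.pfu_program(addr)

mc = MemoryController()
assert mc.physical_address(0x40) == 0x40
# Later, load a PFU program that XOR-swizzles low bank bits, a common
# trick to spread accesses across memory banks.
mc.load_pfu(lambda addr: addr ^ ((addr >> 8) & 0x7))
```

In the patent's hardware setting the "program" configures logic elements and interconnectors rather than replacing a Python callable, but the separation between fixed controller and loadable behavior is the same.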

Analyzing data using a hierarchical structure
09785847 · 2017-10-10

Apparatus, systems, and methods for analyzing data are described. The data can be analyzed using a hierarchical structure. One such hierarchical structure can comprise a plurality of layers, where each layer performs an analysis on input data and provides an output based on the analysis. The output from lower layers in the hierarchical structure can be provided as inputs to higher layers. In this manner, lower layers can perform a lower level of analysis (e.g., more basic/fundamental analysis), while a higher layer can perform a higher level of analysis (e.g., more complex analysis) using the outputs from one or more lower layers. In an example, the hierarchical structure performs pattern recognition.
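The layer-chaining described here reduces to feeding each layer's output to the next. A minimal sketch, where the two example layers (feature extraction, then a decision) are assumptions chosen only to show the lower-level/higher-level split:

```python
def run_hierarchy(layers, data):
    """Run input through a stack of layers; each layer analyzes the
    output of the layer below it."""
    for layer in layers:
        data = layer(data)
    return data

layers = [
    lambda xs: [abs(x) for x in xs],  # lower layer: basic/fundamental analysis
    lambda xs: max(xs),               # higher layer: more complex decision
]
result = run_hierarchy(layers, [-3, 2, -5])
```

A pattern-recognition hierarchy would use richer layers, but the dataflow — lower outputs becoming higher inputs — is exactly this loop.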

METHOD FOR HIGH-SPEED PARALLEL PROCESSING FOR ULTRASONIC SIGNAL BY USING SMART DEVICE

The present invention relates to a method for high-speed parallel processing of an ultrasonic signal, used for generation of an ultrasonic image by a smart device that is provided with a mobile graphics processing unit (GPU) and receives an ultrasonic signal as input. The method comprises the steps of: receiving an input of a beam-formed ultrasonic signal by means of a first rendering cycle, removing a DC component from the ultrasonic signal, then separating an in-phase component and a quadrature component from the ultrasonic signal from which the DC component has been removed, and outputting each separately; the smart device performing quadrature demodulation and envelope detection processing on the ultrasonic signal, having the in-phase component and the quadrature component, by means of a second rendering cycle; and the smart device performing scan conversion on the ultrasonic signal obtained as the result of the second rendering cycle, by means of a fifth rendering cycle, wherein the rendering cycles are formed as a graphics pipeline structure comprising a vertex shader procedure, a rasterizer procedure, and a fragment shader procedure. A method for high-speed parallel processing of an ultrasonic signal using a smart device, according to the present invention, enables high-speed parallel processing of an ultrasonic signal by means of a mobile GPU inside a smart device even in a mobile-based environment rather than a PC-based environment, thereby enabling provision of an image with a frame rate that is useful for medical diagnosis.
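The signal-processing stages named in the abstract (DC removal, I/Q separation, quadrature demodulation, envelope detection) can be sketched on the CPU to show what each rendering cycle computes. This is a scalar reference sketch, not the GPU shader implementation; the carrier frequency, sample rate, and moving-average low-pass filter are assumptions:

```python
import math

def remove_dc(signal):
    """Subtract the mean (DC) component from an RF line."""
    mean = sum(signal) / len(signal)
    return [s - mean for s in signal]

def envelope(signal, f0, fs, win=8):
    """Quadrature demodulation + envelope detection:
    mix to baseband (I and Q), low-pass filter, take the magnitude."""
    i = [s * math.cos(2 * math.pi * f0 * n / fs) for n, s in enumerate(signal)]
    q = [-s * math.sin(2 * math.pi * f0 * n / fs) for n, s in enumerate(signal)]

    def lpf(x):  # simple moving-average low-pass filter
        return [sum(x[max(0, k - win + 1):k + 1]) / min(k + 1, win)
                for k in range(len(x))]

    return [2 * math.hypot(a, b) for a, b in zip(lpf(i), lpf(q))]

# A 1 MHz carrier sampled at 8 MHz with amplitude 3 should yield an
# envelope near 3 once the filter has warmed up.
sig = remove_dc([3.0 * math.cos(2 * math.pi * 1e6 * n / 8e6) for n in range(64)])
env = envelope(sig, f0=1e6, fs=8e6)
```

On the GPU, each per-sample loop above maps naturally onto a fragment shader pass, which is what makes the graphics-pipeline formulation fast.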

Host-directed multi-layer neural network processing via per-layer work requests

In disclosed approaches of neural network processing, a host computer system copies an input data matrix from host memory to a shared memory for performing neural network operations of a first layer of a neural network by a neural network accelerator. The host instructs the neural network accelerator to perform neural network operations of each layer of the neural network beginning with the input data matrix. The neural network accelerator performs neural network operations of each layer in response to the instruction from the host. The host waits until the neural network accelerator signals completion of performing neural network operations of layer i before instructing the neural network accelerator to commence performing neural network operations of layer i+1, for i≥1. The host instructs the neural network accelerator to use a results data matrix in the shared memory from layer i as an input data matrix for layer i+1 for i≥1.
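The host/accelerator handshake in this abstract — copy input to shared memory, issue one work request per layer, wait for layer i before starting layer i+1, and reuse layer i's result matrix as layer i+1's input — can be sketched as a simple loop. The accelerator here is a toy class doing a matrix-vector product; its interface is an assumption for illustration:

```python
class Accelerator:
    """Toy neural network accelerator operating on a shared buffer."""

    def __init__(self, shared):
        self.shared = shared

    def run_layer(self, weights):
        # Overwrite the shared buffer with this layer's result matrix.
        self.shared["data"] = [
            sum(w * x for w, x in zip(row, self.shared["data"]))
            for row in weights
        ]
        return True  # completion signal back to the host

def host_run(network, input_vec):
    shared = {"data": list(input_vec)}       # host copies input to shared memory
    acc = Accelerator(shared)
    for layer_weights in network:            # one work request per layer
        done = acc.run_layer(layer_weights)  # wait for layer i to complete
        assert done                          # before commencing layer i+1
    return shared["data"]
```

Because the result stays in shared memory between requests, the host never copies intermediate matrices back and forth, which is the point of the per-layer work-request design.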

COMPUTER MANAGEMENT OF MICROSERVICES FOR MICROSERVICE BASED APPLICATIONS

A plurality of executing microservices associated with respective features of an application are managed using a computer. The microservices are operating within a container orchestrator platform. Calls made to a plurality of microservices related to an application running on a container orchestrator platform are traced by the computer. A status map of the plurality of microservices related to the application is generated by the computer based on the tracing of the calls. The status map is published such that it is accessible to the plurality of microservices, and an action by one of the microservices of the plurality of microservices in response to the status map is initiated.
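The trace-then-publish flow can be sketched as follows. The call-record shape, the health fields, and the circuit-breaker reaction are assumptions chosen to illustrate the mechanism, not details from the patent:

```python
from collections import defaultdict

def trace_calls(calls):
    """Build a status map from traced inter-microservice calls:
    for each service, which services it calls and whether the most
    recent call to it succeeded."""
    status = defaultdict(lambda: {"calls": set(), "healthy": True})
    for caller, callee, ok in calls:
        status[caller]["calls"].add(callee)
        status[callee]["healthy"] = ok
    return dict(status)

# Traced calls: (caller, callee, succeeded)
calls = [("gateway", "orders", True), ("orders", "payments", False)]
status_map = trace_calls(calls)

# Once published, a microservice can react to the map, e.g. by
# tripping a circuit breaker toward an unhealthy dependency.
action = None
if not status_map["payments"]["healthy"]:
    action = "circuit_break:payments"
```

In a real deployment the map would be published through the orchestrator (e.g. as shared configuration) rather than returned in-process, but the generate/publish/react cycle is the same.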
