G06F9/28

Pivot rack
11249816 · 2022-02-15

Racks and rack systems to support a plurality of sleds are disclosed herein. A rack comprises an elongated support post and a plurality of support chassis. The elongated support post extends vertically. The plurality of support chassis are coupled to the elongated support post. Each support chassis of the plurality of support chassis is sized to house a corresponding sled of the plurality of sleds.

Managing parallel microservice requests

A method, computer program product, and system for managing parallel microservices are provided. The method may include identifying information pertaining to each of a plurality of target microservices to be invoked by an issuer microservice, a predefined condition associated with the plurality of target microservices, and an action to be executed by the issuer microservice in response to the predefined condition being satisfied. The method may also include sending a first request to available target microservices of the plurality of target microservices based on the information pertaining to the respective available target microservices. The method may also include, in response to receiving a response to the first request from an available target microservice of the available target microservices, determining whether the predefined condition is satisfied, and in response to determining that the predefined condition is satisfied, causing the action to be executed by the issuer microservice.
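The flow in this abstract can be sketched in ordinary application code: an issuer fans a request out to the available target microservices, checks the predefined condition as each response arrives, and executes the action as soon as the condition holds. The sketch below is an illustrative assumption, not the patented method; the names (`call_target`, the "at least two responses" condition) are invented for the example.

```python
import asyncio
import random

async def call_target(name: str) -> str:
    """Stand-in for an HTTP/RPC call to one target microservice."""
    await asyncio.sleep(random.uniform(0.01, 0.05))
    return f"{name}:ok"

async def issue_parallel(targets, condition, action):
    """Send a first request to all available targets in parallel."""
    responses = []
    tasks = [asyncio.create_task(call_target(t)) for t in targets]
    for fut in asyncio.as_completed(tasks):
        responses.append(await fut)
        if condition(responses):          # predefined condition satisfied?
            for t in tasks:
                t.cancel()                # remaining calls no longer needed
            return action(responses)      # action executed by the issuer
    return None                           # condition never satisfied

# Example condition: at least 2 targets have answered.
result = asyncio.run(issue_parallel(
    targets=["svc-a", "svc-b", "svc-c"],
    condition=lambda rs: len(rs) >= 2,
    action=lambda rs: sorted(rs),
))
```

Checking the condition per arriving response (rather than waiting for all targets) is what lets the issuer act early when only a subset of responses is needed.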

NPU implemented for artificial neural networks to process fusion of heterogeneous data received from heterogeneous sensors
11731656 · 2023-08-22

A neural processing unit (NPU) includes a controller including a scheduler, the controller configured to receive from a compiler a machine code of an artificial neural network (ANN) including a fusion ANN, the machine code including data locality information of the fusion ANN, and receive heterogeneous sensor data from a plurality of sensors corresponding to the fusion ANN; at least one processing element configured to perform fusion operations of the fusion ANN including a convolution operation and at least one special function operation; a special function unit (SFU) configured to perform a special function operation of the fusion ANN; and an on-chip memory configured to store operation data of the fusion ANN, wherein the scheduler is configured to control the at least one processing element and the on-chip memory such that all operations of the fusion ANN are processed in a predetermined sequence according to the data locality information.
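The scheduling idea here (run every operation in the fixed order dictated by the compiler's data-locality information, keeping activations in on-chip memory) can be illustrated with a toy software model. This is a sketch under invented assumptions, not the patented hardware: the op names, placeholder conv/SFU functions, and the dict standing in for on-chip memory are all illustrative.

```python
ON_CHIP = {}  # stands in for the on-chip memory holding operation data

def convolution(x):
    return [v * 2 for v in x]          # placeholder for the PE's conv op

def special_function(x):
    return [max(v, 0) for v in x]      # placeholder SFU op (ReLU-like)

OPS = {"CONV": convolution, "SFU": special_function}

def run_fusion_ann(locality_order, sensor_data):
    """Process all fusion-ANN ops in the predetermined sequence."""
    ON_CHIP["act"] = sensor_data        # fused heterogeneous sensor input
    for op_name in locality_order:      # order from data-locality info
        ON_CHIP["act"] = OPS[op_name](ON_CHIP["act"])
    return ON_CHIP["act"]

out = run_fusion_ann(["CONV", "SFU", "CONV"], [-1, 0, 2])
# step by step: [-1,0,2] → [-2,0,4] → [0,0,4] → [0,0,8]
```

The point of the fixed sequence is that intermediate results never leave the on-chip buffer between operations; only the schedule, not the data, changes per compiled network.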

Memory expander, heterogeneous computing device using memory expander, and operation method of heterogeneous computing

A memory expander includes a memory device that stores a plurality of task data. A controller controls the memory device. The controller receives metadata and a management request from an external central processing unit (CPU) through a compute express link (CXL) interface and operates in a management mode in response to the management request. In the management mode, the controller receives a read request and a first address from an accelerator through the CXL interface and transmits one of the plurality of task data to the accelerator based on the metadata in response to the read request.
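The control flow described (enter a management mode on the CPU's request, then serve an accelerator's read by mapping the requested address to one of the stored task data via metadata) can be sketched in software. The real device exchanges these messages over a hardware CXL interface; the class and field names below are assumptions for illustration only.

```python
class MemoryExpanderController:
    """Toy model of the memory expander's controller."""

    def __init__(self, task_data):
        self.task_data = task_data      # contents of the memory device
        self.metadata = {}
        self.management_mode = False

    def handle_management_request(self, metadata):
        """Management request from the CPU (over CXL): store metadata."""
        self.metadata = metadata        # e.g. address → task-data index
        self.management_mode = True

    def handle_read(self, address):
        """Read request from the accelerator (over CXL)."""
        if not self.management_mode:
            raise RuntimeError("not in management mode")
        index = self.metadata[address]  # select task data via metadata
        return self.task_data[index]

ctrl = MemoryExpanderController(task_data=[b"task0", b"task1"])
ctrl.handle_management_request({0x1000: 1, 0x2000: 0})
payload = ctrl.handle_read(0x1000)
```

The design choice worth noting is the indirection: the accelerator issues plain reads, and the metadata installed by the CPU decides which task data each address resolves to.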

Building automation system with an algorithmic interface application designer

A method for generating and updating a live dashboard of a building management system for a building includes generating a dashboard designer interface and causing the dashboard designer interface to be displayed on a user device of a user, receiving a graphic element from the user, wherein the graphic element supports animation and user interaction, generating a widget by binding the graphic element received from the user to a widget of the live dashboard, binding a data point to the widget based on a user selection via the dashboard designer interface, wherein the data point is a data point of building equipment of the building, receiving a value for the data point from the building equipment, and displaying the widget in the live dashboard, the widget including an indication of the value for the data point.
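The binding chain in this abstract (graphic element → widget → equipment data point → displayed value) can be shown with a minimal sketch. The class, the `AHU-1/supply_temp` point name, and the `gauge.svg` graphic are all invented for the example; the real system renders interactive graphics rather than strings.

```python
class Widget:
    """A dashboard widget wrapping a user-supplied graphic element."""

    def __init__(self, graphic_element):
        self.graphic = graphic_element   # supports animation/interaction
        self.data_point = None
        self.value = None

    def bind(self, data_point):
        """Bind a building-equipment data point (designer selection)."""
        self.data_point = data_point

    def on_value(self, point, value):
        """Handle a value received from the building equipment."""
        if point == self.data_point:
            self.value = value

    def render(self):
        """Display the widget with an indication of the current value."""
        return f"{self.graphic}[{self.data_point}={self.value}]"

w = Widget("gauge.svg")
w.bind("AHU-1/supply_temp")
w.on_value("AHU-1/supply_temp", 18.5)
```

Because the graphic, the widget, and the data point are bound in separate steps, the same graphic can be reused across widgets and the same widget can be rebound to a different data point without redesigning the dashboard.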

Technologies for allocating resources across data centers
20220012105 · 2022-01-13

Technologies for allocating resources across data centers include a compute device to obtain resource utilization data indicative of a utilization of resources for a managed node to execute a workload. The compute device is also to determine whether a set of resources presently available to the managed node in a data center in which the compute device is located satisfies the resource utilization data. Additionally, the compute device is to allocate, in response to a determination that the set of resources presently available to the managed node does not satisfy the resource utilization data, a supplemental set of resources to the managed node. The supplemental set of resources are located in an off-premises data center that is different from the data center in which the compute device is located. Other embodiments are also described and claimed.
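The decision in this abstract reduces to: compare the managed node's resource utilization against what the local data center presently offers, and cover any shortfall from an off-premises data center. The sketch below makes that concrete under invented assumptions (resource names, units, and the dict-based plan are all illustrative).

```python
def allocate(utilization, local_available, off_prem_available):
    """Split each resource need between local and off-prem capacity."""
    plan = {}
    for res, needed in utilization.items():
        local = min(needed, local_available.get(res, 0))
        shortfall = needed - local       # supplemental amount required
        if shortfall > off_prem_available.get(res, 0):
            raise RuntimeError(f"cannot satisfy {res}")
        plan[res] = {"local": local, "off_prem": shortfall}
    return plan

# Example: local capacity covers memory fully but not all CPU cores.
plan = allocate(
    utilization={"cpu_cores": 16, "memory_gb": 64},
    local_available={"cpu_cores": 12, "memory_gb": 64},
    off_prem_available={"cpu_cores": 32, "memory_gb": 128},
)
```

Off-prem resources are touched only for the shortfall, matching the abstract's condition that supplemental allocation happens only when local availability does not satisfy the utilization data.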

Parallel copying database transaction processing

A network device obtains, from a transaction queue, a plurality of transactions that do not conflict with each other, and performs reverse shallow copying in parallel for the transactions that do not conflict, to generate a plurality of temporary trees corresponding to the plurality of transactions. Because the plurality of transactions does not conflict with each other, processing the transactions in parallel can ensure accurate and proper transaction processing. In addition, generating the temporary trees in a reverse shallow copying manner can effectively reduce consumption of time and memory. Further, processing of the plurality of transactions is implemented by merging the plurality of temporary trees.
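The copying scheme described here is essentially path copying: each transaction's temporary tree duplicates only the nodes from the modified leaf back to the root and shares every untouched subtree, which is why it is cheap in both time and memory. A toy sketch, assuming a nested-dict tree layout and thread-based parallelism (both assumptions, not the patented implementation):

```python
from concurrent.futures import ThreadPoolExecutor

def path_copy(tree, path, value):
    """Copy only the nodes along `path`; other subtrees are shared."""
    new = dict(tree)                     # shallow copy of this node
    if len(path) == 1:
        new[path[0]] = value
    else:
        new[path[0]] = path_copy(tree.get(path[0], {}), path[1:], value)
    return new

def process_parallel(tree, txns):
    """Build temporary trees in parallel, then merge them."""
    with ThreadPoolExecutor() as pool:
        temps = list(pool.map(lambda t: path_copy(tree, *t), txns))
    merged = temps[0]
    for temp, (path, _) in zip(temps[1:], txns[1:]):
        # transactions do not conflict, so grafting each transaction's
        # changed top-level branch onto the merged tree is safe
        merged = path_copy(merged, path[:1], temp[path[0]])
    return merged

base = {"a": {"x": 1}, "b": {"y": 2}}
txns = [(["a", "x"], 10), (["b", "z"], 3)]   # disjoint paths: no conflict
new_tree = process_parallel(base, txns)       # base stays untouched
```

The non-conflict precondition from the abstract is what makes the final merge trivial: each temporary tree differs from the base along a disjoint path, so the merged tree is just the union of those paths over the shared base.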