G06F9/544

USER-LEVEL SERVICES FOR MULTITENANT ISOLATION
20230054696 · 2023-02-23

A shared computing system for serving a plurality of tenants using container pools. Each container pool has a filesystem service configured to serve one or more applications within the container pool. Shared memory facilitates interprocess communication between an application and the filesystem service; the application, the filesystem service, and the interprocess communication itself all run at user level.
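
As a concrete illustration of the arrangement in this abstract, the minimal sketch below passes one request from an application to a filesystem service through a shared-memory slot, with both sides polling at user level. The slot protocol, the state bytes, and the names (fs_service, REQUEST) are assumptions made for illustration, not details from the patent.

```python
from multiprocessing import Process, shared_memory

EMPTY, REQUEST, RESPONSE = 0, 1, 2  # state byte at offset 0 of the slot

def fs_service(shm_name: str) -> None:
    """User-level filesystem service: answers a single request from the slot."""
    shm = shared_memory.SharedMemory(name=shm_name)
    while shm.buf[0] != REQUEST:          # spin until the app publishes
        pass
    length = shm.buf[1]
    path = bytes(shm.buf[2:2 + length]).decode()
    reply = f"contents of {path}".encode()
    shm.buf[1] = len(reply)
    shm.buf[2:2 + len(reply)] = reply
    shm.buf[0] = RESPONSE                 # hand the slot back to the app
    shm.close()

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(create=True, size=256)
    Process(target=fs_service, args=(shm.name,)).start()
    request = b"/tmp/example.txt"
    shm.buf[1] = len(request)
    shm.buf[2:2 + len(request)] = request
    shm.buf[0] = REQUEST                  # publish the request
    while shm.buf[0] != RESPONSE:         # spin until the service replies
        pass
    print(bytes(shm.buf[2:2 + shm.buf[1]]).decode())
    shm.close()
    shm.unlink()
```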

SUB-QUEUE INSERTION SCHEMES EXECUTABLE BY QUEUE MANAGERS AND RELATED SYSTEMS AND OPERATIONS
20220365815 · 2022-11-17

Introduced here are insertion schemes in which queues can be branched into one or more sub-queues for more effective management of queuing elements. Often, a computing device will have a primary buffer into which queuing elements are populated for execution by a processor. However, the amount of contiguous memory space allocated for the primary buffer may be fixed. To address this, a queue manager may insert indicators that link to secondary buffers into the primary buffer in order to expand the number of effective entries in the primary buffer.
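
The linking scheme can be sketched in a few lines: when the fixed-capacity primary buffer is full, the queue manager converts its last entry into an indicator that points at a secondary buffer, so the number of effective entries grows without enlarging the contiguous allocation. The class and method names below are illustrative assumptions.

```python
from collections import deque

class Link:
    """Indicator entry that points at a secondary buffer."""
    def __init__(self, sub_queue):
        self.sub_queue = sub_queue

class QueueManager:
    def __init__(self, capacity: int):
        self.primary = deque()
        self.capacity = capacity  # fixed contiguous space for the primary buffer

    def enqueue(self, element) -> None:
        if len(self.primary) < self.capacity:
            self.primary.append(element)
            return
        # Primary buffer is full: turn the last slot into a link to a sub-queue.
        if not isinstance(self.primary[-1], Link):
            tail = self.primary.pop()
            self.primary.append(Link(deque([tail])))
        self.primary[-1].sub_queue.append(element)

    def drain(self):
        while self.primary:
            entry = self.primary.popleft()
            if isinstance(entry, Link):   # follow the indicator
                yield from entry.sub_queue
            else:
                yield entry

qm = QueueManager(capacity=4)
for i in range(7):
    qm.enqueue(i)
print(list(qm.drain()))  # 0..6 despite a 4-entry primary buffer
```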

APPLICATION PROGRAMMING INTERFACE TO DECOMPRESS DATA

Apparatuses, systems, and techniques to perform an operation to indicate one or more non-zero values within one or more matrices of data; to perform an API to compress one or more matrices of data; to perform a matrix multiply accumulate (MMA) operation on two or more matrices of data, wherein at least one of the two or more matrices contains compressed data; and/or to perform an API to decompress one or more matrices of data. In at least one embodiment, one or more circuits are configured to receive and compile one or more instructions to perform computational operations for a sparse matrix multiplication.
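
A hedged sketch of the compress / MMA / decompress flow follows, using a simple indices-plus-values format in NumPy. The function names (compress_api, mma_api, decompress_api) and the storage format are assumptions for illustration, not the actual API.

```python
import numpy as np

def compress_api(m):
    """Indicate the non-zero values: keep their (row, col) indices and values."""
    idx = np.nonzero(m)
    return m.shape, np.stack(idx, axis=1), m[idx]

def decompress_api(shape, indices, values):
    """Rebuild the dense matrix from the compressed representation."""
    out = np.zeros(shape, dtype=values.dtype)
    out[indices[:, 0], indices[:, 1]] = values
    return out

def mma_api(compressed_a, b, c):
    """D = A @ B + C with A kept in compressed form."""
    _, idx, vals = compressed_a
    d = c.copy()
    for (i, k), v in zip(idx, vals):
        d[i] += v * b[k]          # accumulate only A's non-zero terms
    return d

a = np.array([[0., 2., 0.], [1., 0., 0.]])
b = np.random.rand(3, 4)
c = np.zeros((2, 4))
assert np.allclose(mma_api(compress_api(a), b, c), a @ b + c)
```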

Machine learning model updates to ML accelerators
11586578 · 2023-02-21

Examples herein describe a peripheral I/O device with a hybrid gateway that permits the device to have both I/O and coherent domains. As a result, the compute resources in the coherent domain of the peripheral I/O device can communicate with the host in a similar manner as CPU-to-CPU communication in the host. The dual domains in the peripheral I/O device can be leveraged for machine learning (ML) applications. While an I/O device can be used as an ML accelerator, these accelerators previously only used an I/O domain. In the embodiments herein, compute resources can be split between the I/O domain and the coherent domain, where an ML engine is in the I/O domain and an ML model is in the coherent domain. An advantage of doing so is that the ML model can be coherently updated using a reference ML model stored in the host.
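
The split the abstract describes can be mimicked in software: in the sketch below, an engine process keeps computing while the host overwrites the model weights in a shared memory region standing in for the coherent domain, so the update takes effect without restarting the engine. All names, the timing, and the use of multiprocessing.Array as a stand-in for hardware coherency are illustrative assumptions.

```python
from multiprocessing import Process, Array
import time

def ml_engine(weights) -> None:
    """Stands in for the I/O-domain ML engine: uses whatever weights are current."""
    x = [1.0, 2.0, 3.0]
    for _ in range(3):
        snapshot = weights[:]             # read the coherent-domain copy
        y = sum(w * v for w, v in zip(snapshot, x))
        print("inference with", snapshot, "->", y)
        time.sleep(0.1)

if __name__ == "__main__":
    model = Array("d", [0.1, 0.2, 0.3])   # coherent-domain copy of the model
    engine = Process(target=ml_engine, args=(model,))
    engine.start()
    time.sleep(0.15)
    reference_model = [1.0, 1.0, 1.0]     # host's reference ML model
    model[:] = reference_model            # in-place update; no engine restart
    engine.join()
```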

Method, system, and apparatus for processing parking, and vehicle controller

The present disclosure provides a method, system, and apparatus for processing parking, and a vehicle controller, relating to the field of intelligent transportation technology and specifically to automated parking. The method is executed by a parking system deployed on a vehicle controller. The parking system includes a perception module and other modules; the perception module is deployed on a first operating system in the vehicle controller, and the other modules are deployed on a second operating system in the vehicle controller. The method includes: processing an image collected by an image collector through the perception module to obtain perception result data; and controlling, by the other modules, a vehicle based on the perception result data obtained from the perception module.
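
A minimal sketch of that split is shown below, with the perception module and the control-side modules as separate processes standing in for the two operating systems on the vehicle controller. The message fields and thresholds are illustrative assumptions.

```python
from multiprocessing import Process, Queue

def perception_module(frames: Queue, results: Queue) -> None:
    """Runs on the first OS: turns collected images into perception results."""
    while (frame := frames.get()) is not None:
        results.put({"frame": frame, "free_slot": True, "distance_m": 1.8})
    results.put(None)                     # propagate end of stream

def control_modules(results: Queue) -> None:
    """Run on the second OS: plan and actuate from the perception results."""
    while (r := results.get()) is not None:
        if r["free_slot"] and r["distance_m"] > 1.5:
            print(f"frame {r['frame']}: steering into detected slot")

if __name__ == "__main__":
    frames, results = Queue(), Queue()
    Process(target=perception_module, args=(frames, results)).start()
    Process(target=control_modules, args=(results,)).start()
    for i in range(3):
        frames.put(i)                     # images from the image collector
    frames.put(None)                      # end of stream
```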

Devices, Methods, and Graphical User Interfaces for Automatically Providing Shared Content to Applications

A computer system receives, in a first messaging conversation by a first messaging application of a plurality of applications, information identifying a first shared content item. In response to receiving the information identifying the first shared content item, in accordance with a determination that the first shared content item is of a first type, the computer system automatically makes the first shared content item available within a first application of the plurality of applications, wherein the first application is associated with content of the first type. In accordance with a determination that the first shared content item is of a second type, the computer system automatically makes the first shared content item available within a second application of the plurality of applications, wherein the second application is associated with content of the second type.
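
The routing logic reduces to a dispatch table from content type to the application associated with that type; the sketch below assumes hypothetical names (SharedContentItem, TYPE_TO_APP, on_message_received) purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class SharedContentItem:
    url: str
    content_type: str   # e.g. "photo", "music", "news"

# Each content type maps to the application associated with it.
TYPE_TO_APP = {"photo": "Photos", "music": "Music", "news": "News"}

def on_message_received(item: SharedContentItem) -> None:
    """Automatically make a shared item available in its associated app."""
    app = TYPE_TO_APP.get(item.content_type)
    if app is not None:
        # No user action needed; the item surfaces inside the target app.
        print(f"{item.url} now available in {app}")

on_message_received(SharedContentItem("https://example.com/a.jpg", "photo"))
on_message_received(SharedContentItem("https://example.com/song", "music"))
```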

Near-memory acceleration for database operations

Despite increases in memory capacity and CPU computing power, memory performance remains the bottleneck of in-memory database management systems due to ever-increasing data volumes and application demands. Because the scale of data workloads has outpaced traditional CPU caches and memory bandwidth, improving data movement from memory to computing units can improve performance in in-memory database scenarios. A near-memory database accelerator framework offloads data-intensive database operations to a near-memory computation engine. The database accelerator's system architecture can include a database accelerator software module/driver and a memory module with a database accelerator engine. An application programming interface (API) can be provided to support database accelerator functionality. Memory of the database accelerator can be directly accessible by the CPU.
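
To make the offload idea concrete, the hedged sketch below models the near-memory engine as an object holding a CPU-addressable column and accepting a data-intensive operation (a filtered scan) through an API call. DbAccelerator and offload_scan are assumed names, not the framework's actual interface.

```python
import array

class DbAccelerator:
    """Stands in for the near-memory computation engine on the memory module."""
    def __init__(self, column):
        self.column = column              # CPU-addressable accelerator memory

    def offload_scan(self, predicate):
        """The filter runs near the data; only qualifying values move out."""
        return [v for v in self.column if predicate(v)]

prices = array.array("i", [5, 42, 17, 99, 3])   # in-memory column
engine = DbAccelerator(prices)
print(engine.offload_scan(lambda v: v > 10))    # [42, 17, 99]
```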

Storage and access of neural network models of automotive predictive maintenance

Systems, methods, and apparatus for optimizing neural network computations in predictive maintenance of vehicles. For example, a data storage device of a vehicle includes: a host interface configured to receive a sensor data stream from at least one sensor configured on the vehicle; at least one storage media component having non-volatile memory; and a controller. The non-volatile memory is configured into multiple partitions (e.g., namespaces) having different sets of memory operation settings for different types of data related to an artificial neural network (ANN). The partitions include a model partition configured to store model data of the ANN. The sensor data stream is applied to the ANN to predict a maintenance service of the vehicle. The memory units of the model partition can be configured for reads, infrequent updates, improved storage capacity, and/or access in parallel with input/output for the ANN.
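
A configuration-style sketch of such partitioning is below: each namespace carries its own set of memory operation settings, with the model partition tuned for reads, infrequent updates, and access in parallel with ANN input/output. The field names and values are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class PartitionSettings:
    read_optimized: bool
    update_frequency: str   # "infrequent" vs "streaming"
    parallel_access: bool   # overlap with ANN input/output

NAMESPACES = {
    # Model data: read-mostly, rarely updated, accessed alongside I/O.
    "model":  PartitionSettings(read_optimized=True,
                                update_frequency="infrequent",
                                parallel_access=True),
    # Sensor stream: written continuously while the vehicle runs.
    "sensor": PartitionSettings(read_optimized=False,
                                update_frequency="streaming",
                                parallel_access=True),
}

print(NAMESPACES["model"])
```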

Scalable hardware thread scheduler

A device includes a hardware data processing node configured to execute a respective task, and a hardware thread scheduler including a hardware task scheduler. The hardware task scheduler is coupled to the hardware data processing node and has a producer socket, a consumer socket, and a spare socket. The spare socket is configured to provide the data control signals also provided by a first socket of the producer and consumer sockets responsive to a memory-mapped register having a first value, and to provide the data control signals also provided by a second socket of the producer and consumer sockets responsive to the memory-mapped register having a second value.
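
Behaviorally, the spare socket is a register-selected mirror of one of the two other sockets. The short simulation below captures that selection; the register values and signal names are illustrative assumptions, not the device's actual encoding.

```python
class TaskScheduler:
    MMR_MIRROR_PRODUCER = 0x0   # "first value" in the abstract (assumed)
    MMR_MIRROR_CONSUMER = 0x1   # "second value" (assumed)

    def __init__(self):
        self.mmr = self.MMR_MIRROR_PRODUCER      # memory-mapped register
        self.producer_signals = {"pend": 1, "dec": 0}
        self.consumer_signals = {"pend": 0, "dec": 1}

    def spare_socket(self):
        """Return the control signals the spare socket currently drives."""
        if self.mmr == self.MMR_MIRROR_PRODUCER:
            return self.producer_signals
        return self.consumer_signals

ts = TaskScheduler()
print(ts.spare_socket())            # mirrors the producer socket
ts.mmr = ts.MMR_MIRROR_CONSUMER
print(ts.spare_socket())            # now mirrors the consumer socket
```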

Operating a controller in a motor vehicle according to different time slots

A method for operating a controller, including: executing a first task program in a first time pattern of first time slots; executing a second task program in a second time pattern of second time slots; and ascertaining a status variable indicating whether a result of the first task program is released. The result of the first task program is ascertained in a current first time slot and transmitted, within this slot, to a memory area assigned to the second task program. The second task program ascertains a second result as a function of the status variable's value and the result of the first task program. The status variable's value is transmitted in the current time slot to a status memory area assigned to the second task program. The result of the first task program and the status variable's value are ascertained and transmitted after the start of one execution of the first task program and before its next execution.
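
The handoff can be sketched as follows: within one of its time slots, the first task program writes its result and a release status into memory areas assigned to the second task program, and the second task program branches on the status variable. The slot structure and names below are illustrative assumptions.

```python
result_area = {"value": None}     # memory area assigned to the second task
status_area = {"released": False} # status memory area assigned to the second task

def first_task(slot: int) -> None:
    """Runs in its time slot; transmits result and status within the same slot."""
    value = slot * slot                  # some computation for this slot
    result_area["value"] = value         # transmit the result
    status_area["released"] = True       # status variable: result is released

def second_task() -> None:
    """Computes its second result as a function of the status variable."""
    if status_area["released"]:
        print("second result:", result_area["value"] + 1)
    else:
        print("result not released; using fallback")

second_task()        # before the first task's slot: fallback path
first_task(slot=3)   # first time pattern: first task runs in its slot
second_task()        # second time pattern: consumes the released result
```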