Patent classifications
G06F9/544
Memory management methods and systems
A method and an apparatus are described for determining a usage level of a memory device and notifying a running application to perform memory reduction operations selected based on the memory usage level. An application calls APIs (Application Programming Interfaces) integrated with the application code in the system to perform memory reduction operations. A memory usage level is determined according to a memory usage status received from the kernel of a system. A running application is associated with application priorities that rank multiple running applications statically or dynamically; selecting memory reduction operations and notifying a running application are based on these priorities. Alternatively, a running application may determine a mode of operation to directly reduce memory usage in response to a notification for reducing memory usage, without using API calls to other software.
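As a loose illustration of the scheme above, the sketch below maps a kernel-reported usage status to a usage level and selects a reduction operation per application priority. All names, thresholds, and operation strings are hypothetical, not the patent's actual API.

```python
# Hypothetical sketch: derive a memory usage level from a kernel-style
# usage status, then pick a memory reduction operation for an
# application based on its priority. Thresholds are illustrative.
from enum import IntEnum

class UsageLevel(IntEnum):
    NORMAL = 0
    WARNING = 1
    CRITICAL = 2

def usage_level(used_bytes: int, total_bytes: int) -> UsageLevel:
    """Map a used/total ratio onto a usage level."""
    ratio = used_bytes / total_bytes
    if ratio < 0.70:
        return UsageLevel.NORMAL
    if ratio < 0.90:
        return UsageLevel.WARNING
    return UsageLevel.CRITICAL

def choose_operation(level: UsageLevel, app_priority: int) -> str:
    """Lower-priority apps (priority < 5 here) take more aggressive actions."""
    if level is UsageLevel.NORMAL:
        return "none"
    if level is UsageLevel.WARNING:
        return "flush_caches" if app_priority < 5 else "none"
    # CRITICAL: low-priority apps release everything they can.
    return "release_all" if app_priority < 5 else "flush_caches"
```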
Transmit power control
This disclosure describes systems, methods, and devices related to transmit power control (TPC). A device may identify a link measurement request frame from a first station device. The device may determine, for each transmit chain of the first station device, a TPC action to be performed by the first station device. The device may cause a link measurement report frame comprising a value indicative of the TPC action for each transmit chain to be sent. The device may identify an acknowledgement from the first station device.
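A minimal sketch of the per-chain idea, assuming a TPC action is chosen from a reported link margin and one action value is encoded per transmit chain; the thresholds and action names are illustrative, not from the 802.11 TPC encoding.

```python
# Hypothetical per-transmit-chain TPC action selection from link margin.
# Margin thresholds (in dB) and action labels are illustrative only.
def tpc_action(link_margin_db: float) -> str:
    """Pick one TPC action for a single transmit chain."""
    if link_margin_db > 6:
        return "decrease"   # plenty of margin: lower transmit power
    if link_margin_db < 0:
        return "increase"   # negative margin: raise transmit power
    return "keep"

def report(per_chain_margin_db):
    """Build one action value per transmit chain for the report frame."""
    return [tpc_action(m) for m in per_chain_margin_db]
```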
First-in first-out function for segmented data stream processing
A method of segmented media data processing can include receiving a first sequence of first segments partitioned from a first data stream of a streaming media, and storing the first segments into a first first-in first-out (FIFO) buffer. In the first FIFO buffer, each first segment and attributes associated with each first segment form an entry of the first FIFO buffer. The attributes associated with each first segment can include a start time of the respective first segment, a duration of the respective first segment, and a length of the respective first segment indicating a number of bytes in the respective first segment. The first segments received from the first FIFO buffer can be processed using a first media processing task of a workflow in a network-based media processing (NBMP) system. The first segments received from the first FIFO buffer can be processed independently from each other.
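The entry structure described above can be sketched as follows: each FIFO entry pairs a segment's bytes with its attributes (start time, duration, and byte length). The class and field names are ours for illustration, not NBMP identifiers.

```python
# Illustrative segmented FIFO: each entry holds a segment plus its
# attributes (start time, duration, length in bytes).
from collections import deque
from dataclasses import dataclass

@dataclass
class SegmentEntry:
    data: bytes
    start_time: float   # start time of the segment (seconds)
    duration: float     # duration of the segment (seconds)
    length: int         # number of bytes in the segment

class SegmentFIFO:
    def __init__(self):
        self._q = deque()

    def put(self, data: bytes, start_time: float, duration: float):
        # The length attribute is derived from the segment itself.
        self._q.append(SegmentEntry(data, start_time, duration, len(data)))

    def get(self) -> SegmentEntry:
        return self._q.popleft()   # first in, first out
```

Because each entry carries its own timing attributes, a downstream media processing task can handle segments independently of one another.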
Chained buffers in neural network processor
Embodiments of the present disclosure relate to chained buffers in a neural processor circuit. The neural processor circuit includes multiple neural engines, a planar engine, a buffer memory, and a flow control circuit. At least one neural engine operates as a first producer of first data or a first consumer of second data. The planar engine operates as a second consumer receiving the first data from the first producer or a second producer sending the second data to the first consumer. Data flow between the at least one neural engine and the planar engine is controlled using at least a subset of buffers in the buffer memory operating as at least one chained buffer that chains flow of the first data and the second data between the at least one neural engine and the planar engine.
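A loose, single-threaded sketch of the chained data flow above, with bounded queues standing in for the subset of buffer-memory buffers between a "neural engine" and a "planar engine"; this is purely illustrative and not the circuit's flow-control mechanism.

```python
# Illustrative chained buffers: the neural engine produces first data
# that the planar engine consumes, and the planar engine produces
# second data that the neural engine consumes. Bounded queues stand in
# for the chained buffers in buffer memory.
from queue import Queue

first_data = Queue(maxsize=4)    # neural engine (producer) -> planar engine
second_data = Queue(maxsize=4)   # planar engine (producer) -> neural engine

def neural_engine(x):
    first_data.put(x * x)        # produce first data (toy computation)
    # Consume second data if the planar engine has produced any.
    return second_data.get() if not second_data.empty() else None

def planar_engine():
    y = first_data.get()         # consume first data
    second_data.put(y + 1)       # produce second data (toy computation)
```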
Arbitration scheme for coherent and non-coherent memory requests
A processor in a system is responsive to a coherent memory request buffer having a plurality of entries to store coherent memory requests from a client module and a non-coherent memory request buffer having a plurality of entries to store non-coherent memory requests from the client module. The client module buffers coherent and non-coherent memory requests and releases them based on one or more conditions of the processor or one of its caches. The memory requests are released to a central data fabric and into the system based on a first watermark associated with the coherent memory request buffer and a second watermark associated with the non-coherent memory request buffer.
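A hedged sketch of the watermark-based release: coherent and non-coherent requests are buffered separately, and a buffer releases its requests to the fabric once its occupancy reaches its watermark. The class, watermark values, and release policy are illustrative assumptions.

```python
# Illustrative watermark-gated request buffer: requests accumulate
# until occupancy reaches the watermark, then all are released at once.
class RequestBuffer:
    def __init__(self, watermark: int):
        self.watermark = watermark
        self.pending = []

    def enqueue(self, req):
        self.pending.append(req)

    def release_if_ready(self):
        """Release buffered requests once occupancy hits the watermark."""
        if len(self.pending) >= self.watermark:
            released, self.pending = self.pending, []
            return released
        return []

# Separate buffers (and watermarks) for the two request classes.
coherent = RequestBuffer(watermark=4)
noncoherent = RequestBuffer(watermark=8)
```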
Line interleaving controller, image signal processor and application processor including the same
An image signal processor includes a line interleaving controller and an image signal processor core. The line interleaving controller receives a plurality of image data lines included in an image frame, generates one or more virtual data lines corresponding to the image frame, and outputs the plurality of image data lines and the virtual data lines sequentially line by line. The image signal processor core includes at least one pipeline circuit. The pipeline circuit includes a plurality of processing modules serially connected to sequentially process data lines received from the line interleaving controller. The line interleaving controller processes one or more end image data lines included in an end portion of the image frame based on the virtual data lines. Interference or collision between channels is reduced or prevented by processing the end image data lines in synchronization with the virtual data lines.
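The virtual-line idea can be sketched with a toy serial pipeline: a depth-d pipeline emits a line d steps after receiving it, so appending d virtual lines after the frame's real lines drains the end lines out. The function and depth are illustrative assumptions, not the patent's circuit.

```python
# Toy serial pipeline of `depth` stages: each input shifts through the
# stages, and a line emerges `depth` steps after it enters. Appending
# `depth` virtual lines flushes the frame's end lines.
def run_pipeline(image_lines, depth=2):
    virtual = ["virtual"] * depth          # generated virtual data lines
    stages = [None] * depth                # pipeline stage registers
    output = []
    for line in list(image_lines) + virtual:
        output.append(stages[-1])          # oldest stage emerges first
        stages = [line] + stages[:-1]      # shift the pipeline by one
    # Keep only real lines that made it out of the pipeline.
    return [l for l in output if l is not None and l != "virtual"]
```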
A ring buffer with multiple head pointers
Apparatuses and methods of operating such apparatuses are disclosed, where the apparatus provides ring buffer storage to hold queued elements. Multiple head pointers are stored and maintained with respect to the ring buffer, wherein the multiple head pointers have a multiplicity N. When a dequeuing operation is performed with respect to an element queued in the ring buffer, reference is made to a selected head pointer of the multiple head pointers and a slot index value is derived. An element held in a slot corresponding to the slot index value is dequeued and the value of the selected head pointer is increased by N. Support for concurrent dequeuing operations is thus provided, in that write contention for a single head pointer is avoided.
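The dequeue scheme above can be sketched as follows: N head pointers share one ring, a dequeuer derives its slot index from its selected head pointer and then advances that head by N, so concurrent dequeuers never contend on a single head pointer. This is a simplified single-threaded illustration; the class and method names are ours.

```python
# Illustrative ring buffer with N head pointers. Head i starts at slot
# index i and advances by N on each dequeue, so the N heads cover
# disjoint, interleaved slot sequences.
class MultiHeadRing:
    def __init__(self, capacity: int, n_heads: int):
        self.slots = [None] * capacity
        self.capacity = capacity
        self.n = n_heads
        self.heads = list(range(n_heads))  # head i starts at index i
        self.tail = 0

    def enqueue(self, value):
        self.slots[self.tail % self.capacity] = value
        self.tail += 1

    def dequeue(self, head_id: int):
        head = self.heads[head_id]
        value = self.slots[head % self.capacity]  # slot index from head
        self.heads[head_id] += self.n             # advance this head by N
        return value
```

With two heads, head 0 dequeues slots 0, 2, 4, ... while head 1 dequeues slots 1, 3, 5, ..., which is why write contention on a single head pointer is avoided.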
Systems, methods, and apparatus for coordinating computation systems
A method for computation may include performing a first computation using a first system, wherein the first computation may be based, at least in part, on a first computation basis, performing a second computation using a second system, wherein the second computation may be based, at least in part, on a second computation basis, and coordinating the first computation and the second computation. The first computation basis may include a clock basis, and the second computation basis may include an event basis. The first computation may include a first operation, the second computation may include a second operation, and the coordinating the first computation and the second computation may include coordinating the first computation and the second computation based on the first operation and the second operation. The first operation may include an application computation operation, and the second operation may include a device computation operation.
Artificial intelligence accelerators
An artificial intelligence (AI) accelerator includes memory circuits configured to output weight data and vector data, and a multiplication circuit/adder tree that performs a multiplying/adding calculation on the weight data and the vector data to generate multiplication/addition result data. A first accumulator is synchronized with an odd clock signal to perform an accumulative adding calculation on odd-numbered multiplication/addition result data of the multiplication/addition result data and first latched data. A second accumulator is synchronized with an even clock signal to perform an accumulative adding calculation on even-numbered multiplication/addition result data of the multiplication/addition result data and second latched data.
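The dual-accumulator pattern can be sketched in software: multiply/add results are steered alternately to two accumulators (one per clock phase), each keeping its own latched partial sum. This is an illustrative model of the steering, not the circuit itself.

```python
# Illustrative odd/even accumulation: alternate results go to separate
# accumulators, modeling two accumulators clocked on opposite phases.
def dual_accumulate(results):
    acc_odd = 0    # partial sum of odd-numbered results (1st, 3rd, ...)
    acc_even = 0   # partial sum of even-numbered results (2nd, 4th, ...)
    for i, r in enumerate(results):
        if i % 2 == 0:
            acc_odd += r    # odd-numbered result (1-based)
        else:
            acc_even += r   # even-numbered result (1-based)
    return acc_odd, acc_even
```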
Interprocessor procedure calls
A firewall host uses a shared memory to pass arguments to, and receive results from, a remote procedure executing on a locally coupled network processing unit that offloads processing for the firewall.
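A minimal single-process sketch of the shared-memory call convention described above: the caller writes arguments into a shared region, the "remote" procedure reads them and writes back a result. The memory layout, offsets, and function are hypothetical assumptions for illustration.

```python
# Illustrative shared-memory argument passing: two u32 arguments at
# offset 0, a u32 result at offset 8. Layout is an assumption.
import struct
from multiprocessing import shared_memory

def remote_add(shm_name: str):
    """Offload side: read two u32 args from the region, write the sum back."""
    shm = shared_memory.SharedMemory(name=shm_name)
    a, b = struct.unpack_from("<II", shm.buf, 0)
    struct.pack_into("<I", shm.buf, 8, a + b)
    shm.close()

# Caller side: pass arguments and receive the result via the region.
shm = shared_memory.SharedMemory(create=True, size=12)
struct.pack_into("<II", shm.buf, 0, 7, 35)    # write arguments
remote_add(shm.name)                          # the "remote" procedure
(result,) = struct.unpack_from("<I", shm.buf, 8)
shm.close()
shm.unlink()
```

In the patent's setting the two sides would run on the firewall host and the network processing unit; here both run in one process purely to show the argument/result convention.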