Patent classifications
G06F15/7842
Chip and interface conversion device
A chip and an interface conversion device are provided. The chip includes first, second, third, fourth, fifth and sixth pads. The first and second pads are coupled to first and second SBU pins of a USB connector respectively. The fourth and the sixth pads are coupled to first and second pins of an AUX channel of a DP connector respectively. When the chip operates in a first mode, first and second AUX channel signals generated by the chip are transmitted to the third and fifth pads respectively, a voltage of the fourth pad is weakly pulled down, and a voltage of the sixth pad is weakly pulled up. When the chip operates in a second mode, one of the first and second pads is connected to the fourth pad, and the other one of the first and second pads is connected to the sixth pad.
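The two operating modes above amount to a small routing truth table. The following sketch models it in software; all signal names (aux1, sbu1, the swap flag) are mine for illustration and do not come from the patent:

```python
def pad_states(mode, aux1=None, aux2=None, sbu1=None, sbu2=None, swap=False):
    """Toy model of the chip's two modes; returns the value seen on pads 3-6."""
    if mode == 1:
        # The chip's own AUX channel signals drive pads 3 and 5, while the
        # DP-side pads are only weakly biased: pull-down on pad 4, pull-up on 6.
        return {3: aux1, 5: aux2, 4: "weak_low", 6: "weak_high"}
    # Mode 2: the USB SBU pads are passed through to the DP AUX pads.  The
    # swap flag models which SBU pad lands on which AUX pad ("one of the
    # first and second pads ... and the other one").
    a, b = (sbu2, sbu1) if swap else (sbu1, sbu2)
    return {4: a, 6: b}
```

For example, `pad_states(2, sbu1="S1", sbu2="S2")` routes SBU1 to the fourth pad and SBU2 to the sixth.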
DYNAMIC PROCESSING MEMORY CORE ON A SINGLE MEMORY CHIP
Embodiments of the present invention provide a method for incorporating a dynamic processing memory core into a single memory chip to enable computational processing and memory storage from the single memory chip. The method includes storing data elements by memory storage devices positioned on the single memory chip. The method also includes executing, by a processing device positioned on the single memory chip, memory instructions. The method also includes transitioning the dynamic processing memory core from a memory storage device to a processing device by instructing the processing device to execute the memory instructions. The method also includes transitioning the dynamic processing memory core from the processing device to the memory storage device by instructing the processing device not to execute the memory instructions, thereby terminating the computational processing of the dynamic processing memory core while maintaining the memory storage provided by the memory storage device.
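The storage/processing transitions described above can be sketched as a simple state machine. This is an illustrative software model, not the patented hardware; the instruction format and the `add` operation are assumptions of mine:

```python
class DynamicProcessingMemoryCore:
    """Toy model: a core that acts as plain storage until it is instructed
    to execute memory instructions, and reverts when told to stop."""

    def __init__(self, size):
        self.cells = [0] * size   # the memory storage devices
        self.processing = False   # whether the core is in its processing role
        self.instructions = []    # queued memory instructions

    # --- memory storage role ---
    def store(self, addr, value):
        self.cells[addr] = value

    def load(self, addr):
        return self.cells[addr]

    # --- transitions from the abstract ---
    def enter_processing(self, instructions):
        """Storage -> processing: instruct the core to execute instructions."""
        self.instructions = list(instructions)
        self.processing = True

    def exit_processing(self):
        """Processing -> storage: stop executing, but keep the stored data."""
        self.processing = False
        self.instructions = []

    def step(self):
        """Execute one queued instruction while in processing mode."""
        if self.processing and self.instructions:
            op, a, b, dst = self.instructions.pop(0)
            if op == "add":  # hypothetical in-place add of two cells
                self.cells[dst] = self.cells[a] + self.cells[b]
```

Note that `exit_processing` only clears the compute role; the cells keep their contents, matching the abstract's point that memory storage is maintained.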
METHOD AND NON-TRANSITORY COMPUTER READABLE MEDIUM FOR COMPUTE-IN-MEMORY MACRO ARRANGEMENT, AND ELECTRONIC DEVICE APPLYING THE SAME
A method and a non-transitory computer readable medium for CIM arrangement, and an electronic device applying the same, are proposed. The method for CIM arrangement includes obtaining information on the number of CIM macros and the dimension of each of the CIM macros, obtaining information on the number of input channels and the number of output channels of a designated convolutional layer of a designated neural network, and determining a CIM macro arrangement for arranging the CIM macros according to the number of the CIM macros, the dimension of each of the CIM macros, and the numbers of input and output channels of the designated convolutional layer of the designated neural network, for applying a convolution operation to the input channels to generate the output channels.
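One plausible arrangement rule consistent with the inputs listed above is to tile the layer's input-channel x output-channel weight matrix across macros. The abstract does not publish the exact algorithm, so the tiling rule below is a hypothetical sketch:

```python
import math

def cim_macro_arrangement(num_macros, macro_rows, macro_cols, in_ch, out_ch):
    """Hypothetical CIM macro arrangement: assume each macro maps macro_rows
    input channels to macro_cols output channels, and tile the layer's
    in_ch x out_ch mapping across a grid of macros."""
    tiles_in = math.ceil(in_ch / macro_rows)    # macros stacked along inputs
    tiles_out = math.ceil(out_ch / macro_cols)  # macros stacked along outputs
    needed = tiles_in * tiles_out
    if needed > num_macros:
        raise ValueError(f"layer needs {needed} macros, only {num_macros} available")
    return tiles_in, tiles_out
```

For instance, a 128-input, 64-output layer on 64x32 macros needs a 2x2 grid of macros: `cim_macro_arrangement(8, 64, 32, 128, 64)` returns `(2, 2)`.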
Architecture to support synchronization between core and inference engine for machine learning
A system to support a machine learning (ML) operation comprises a core configured to receive and interpret commands into a set of instructions for the ML operation and a memory unit configured to maintain data for the ML operation. The system further comprises an inference engine having a plurality of processing tiles, each comprising an on-chip memory (OCM) configured to maintain data for local access by components in the processing tile and one or more processing units configured to perform tasks of the ML operation on the data in the OCM. The system also comprises an instruction streaming engine configured to distribute the instructions to the processing tiles to control their operations and to synchronize data communication between the core and the inference engine so that data transmitted between them correctly reaches the corresponding processing tiles while ensuring coherence of data shared and distributed among the core and the OCMs.
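The streaming-and-synchronization behavior described above can be sketched as a queue with a barrier: instructions are routed to their target tiles, and a barrier flushes all pending ones in order so core-to-tile data lands before dependent work runs. Class names, the instruction tuples, and the barrier mechanism are my illustrative assumptions, not the patented design:

```python
class ProcessingTile:
    """Toy tile: an on-chip memory (OCM) plus a trivial processing unit."""
    def __init__(self):
        self.ocm = {}  # data held for local access by this tile

    def execute(self, instr):
        op, key, value = instr
        if op == "write":
            self.ocm[key] = value
        elif op == "add":
            self.ocm[key] = self.ocm.get(key, 0) + value

class InstructionStreamingEngine:
    """Distributes instructions to tiles and synchronizes with a barrier."""
    def __init__(self, num_tiles):
        self.tiles = [ProcessingTile() for _ in range(num_tiles)]
        self.pending = []  # (tile_id, instruction) pairs not yet executed

    def stream(self, tile_id, instr):
        self.pending.append((tile_id, instr))

    def barrier(self):
        # Flush every queued instruction, in program order, so each tile's
        # OCM is coherent before anything after the barrier proceeds.
        for tile_id, instr in self.pending:
            self.tiles[tile_id].execute(instr)
        self.pending.clear()
```

The barrier stands in for the abstract's guarantee that data "correctly reaches the corresponding processing tiles" before dependent instructions consume it.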
POWER MANAGEMENT CIRCUIT AND SYSTEM THEREOF
A power management circuit and system thereof are provided. The power management circuit includes M×N computing units, a first power supply unit, a second power supply unit and N-1 connection interfaces. M and N are both natural numbers greater than 1. The first power supply unit supplies power to the computing units of the Nth row, the computing units of the Nth row supply power to the computing units of the (N-1)th row, respectively, and so on, until the computing units of the 2nd row supply power to the computing units of the 1st row, respectively. The second power supply unit supplies power to the M×N computing units, and the N-1 connection interfaces are coupled to corresponding computing units of the 1st column of the M×N computing units, respectively.
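The row-to-row supply chain above can be modeled as a daisy chain from row N down to row 1. The per-row voltage drop in this sketch is purely illustrative (the patent does not give one); it only shows the ordering of the chain:

```python
def row_supply_chain(n_rows, supply_voltage, drop_per_row):
    """Toy model of the first supply unit's daisy chain: it feeds row N,
    row N feeds row N-1, and so on down to row 1.  drop_per_row is an
    assumed IR drop per hop, not a figure from the patent."""
    voltages = {}
    v = supply_voltage
    for row in range(n_rows, 0, -1):  # row N first, then N-1, ... down to 1
        voltages[row] = v
        v -= drop_per_row             # each hop in the chain loses a little
    return voltages
```

With `row_supply_chain(3, 1.0, 0.05)`, row 3 sees the full 1.0 V and row 1 the least, illustrating why the abstract's second supply unit also powers all M×N units directly.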
PROCESSOR MEMORY ACCESS
A computing device comprising: a plurality of ALUs; a set of registers; a memory; a memory interface between the registers and the memory; and a control unit controlling the ALUs by generating: at least one cycle i that includes both implementing at least one first computing operation by way of an arithmetic logic unit and downloading a first dataset from the memory to at least one register; and at least one cycle ii, following the at least one cycle i, that includes implementing a second computing operation by way of an arithmetic logic unit, for which second computing operation at least part of the first dataset forms at least one operand.
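The cycle-i / cycle-ii structure above is essentially software pipelining: each step computes on data downloaded in the previous step while the next dataset is fetched. A minimal sketch of that overlap, using a dot product as the assumed workload:

```python
def pipelined_sum_of_products(a, b):
    """Sketch of the cycle i / cycle ii overlap: every iteration computes on
    the previously downloaded dataset while 'downloading' the next one, so
    the load for step k+1 overlaps the compute of step k."""
    if not a:
        return 0
    result = 0
    # cycle i: download the first dataset into the 'registers'
    current = (a[0], b[0])
    for k in range(len(a)):
        x, y = current
        # cycle ii: second computing operation, operand from the first dataset
        result += x * y
        # ...while the next dataset is downloaded for the following cycle
        if k + 1 < len(a):
            current = (a[k + 1], b[k + 1])
    return result
```

In hardware the multiply and the next load happen in the same cycle; Python executes them sequentially, but the dependency structure is the same.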
Secure system on chip
Disclosed is a secure semiconductor chip. The semiconductor chip is, for example, a system-on-chip. The system-on-chip is operated by connecting normal IPs to a processor core included therein via a system bus. A secure bus, which is a hidden bus physically separated from the system bus, is separately provided. Security IPs for performing a security function or handling security data are connected to the secure bus. The secure semiconductor chip can perform required authentication while shifting between a normal mode and a secure mode.
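The mode-shift-with-authentication behavior described above can be sketched as an access-control gate in front of the security IPs. The class, credential check, and method names are my illustrative assumptions; the real separation is a physical secure bus, not software:

```python
class SystemOnChip:
    """Toy model: normal IPs are always reachable over the system bus, while
    security IPs sit behind a separate secure bus usable only in secure mode."""

    def __init__(self, secret):
        self._secret = secret   # stands in for the required authentication
        self.mode = "normal"

    def enter_secure_mode(self, credential):
        """Shift from normal mode to secure mode after authentication."""
        if credential == self._secret:
            self.mode = "secure"
        return self.mode == "secure"

    def access(self, ip_name, secure_ip=False):
        # Security IPs are only reachable over the (hidden) secure bus.
        if secure_ip and self.mode != "secure":
            raise PermissionError("security IPs require secure mode")
        return f"{ip_name} accessed"
```

The point of the physical separation in the abstract is that, unlike this software gate, a normal-mode bus master cannot even address the secure bus.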
Technologies for providing a scalable architecture for performing compute operations in memory
Technologies for providing a scalable architecture to efficiently perform compute operations in memory include a memory having media access circuitry coupled to a memory media. The media access circuitry is to access data from the memory media to perform a requested operation, perform, with each of multiple compute logic units included in the media access circuitry, the requested operation concurrently on the accessed data, and write, to the memory media, resultant data produced from execution of the requested operation.
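The "same operation, performed concurrently by multiple compute logic units" pattern above is a data-parallel split. A sketch of that split, with the unit count and slicing scheme as my assumptions (hardware would run the slices in parallel; this model runs them in sequence):

```python
import math

def in_memory_compute(media, op, num_compute_units):
    """Toy model of the media access circuitry: split the accessed data
    across compute logic units that each apply the same requested op,
    then return the resultant data to be written back to the media."""
    chunk = math.ceil(len(media) / num_compute_units)
    results = []
    for u in range(num_compute_units):        # one slice per compute unit
        part = media[u * chunk:(u + 1) * chunk]
        results.extend(op(x) for x in part)   # same op applied to each slice
    return results
```

For example, squaring five values with two units gives each unit a slice of the data but the identical operation.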
Caching for heterogeneous processors
A multi-core processor providing heterogeneous processor cores and a shared cache is presented.
Secure semiconductor chip and operating method thereof
A semiconductor chip may comprise: a processor for processing data; a shield, which includes a metal line and is arranged over an upper portion of the processor; a detection unit for comparing a reference signal with an output signal, which is output when the reference signal passes through the shield, so as to detect whether there has been a wiring change within the shield; and a controller for configuring the routing topology of the metal line to be in a first state and changing the routing topology from the first state to a second state.
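The detection unit's comparison can be sketched as a check that the signal returned through the shield matches the reference that was driven into it. Representing signals as sample lists and allowing a tolerance are my simplifications, not details from the patent:

```python
def shield_intact(reference, observed, tolerance=0):
    """Toy detection unit: the reference signal is driven through the shield's
    metal line; if what comes back differs beyond tolerance, the wiring was
    changed (e.g. by a probing attack)."""
    if len(reference) != len(observed):
        return False  # a missing or extra sample also indicates tampering
    return all(abs(r - o) <= tolerance for r, o in zip(reference, observed))
```

Periodically changing the routing topology, as the controller does, means an attacker cannot learn one fixed shield pattern and replay it to fool this comparison.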