Patent classifications
G06F9/30003
Data processing method and apparatus, and related product
The present disclosure provides a data processing method, an apparatus, and a related product. The product includes a control module comprising an instruction caching unit, an instruction processing unit, and a storage queue unit. The instruction caching unit is configured to store computation instructions associated with an artificial neural network operation; the instruction processing unit is configured to parse the computation instructions to obtain a plurality of operation instructions; and the storage queue unit is configured to store an instruction queue that includes a plurality of operation instructions or computation instructions to be executed in queue order. With this method, the present disclosure can improve the operation efficiency of related products when performing operations of a neural network model.
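The control-module flow above (cache computation instructions, parse them into operation instructions, queue them in execution order) can be sketched roughly as follows. All class and method names, and the example instruction format, are illustrative assumptions, not taken from the patent:

```python
from collections import deque

class ControlModule:
    """Hypothetical sketch of the control module described above."""

    def __init__(self):
        self.instruction_cache = []   # computation instructions for the NN operation
        self.store_queue = deque()    # operation instructions, FIFO order

    def cache_instruction(self, computation_instruction):
        # Instruction caching unit: store a computation instruction.
        self.instruction_cache.append(computation_instruction)

    def parse(self, computation_instruction):
        # Instruction processing unit: decompose a computation instruction
        # such as "MATMUL A B C" into finer-grained operation instructions
        # (format invented purely for illustration).
        opcode, *operands = computation_instruction.split()
        return [f"LOAD {op}" for op in operands[:-1]] + \
               [f"{opcode} {' '.join(operands)}", f"STORE {operands[-1]}"]

    def dispatch(self):
        # Storage queue unit: enqueue the parsed operation instructions
        # in the order they are to be executed.
        for ci in self.instruction_cache:
            self.store_queue.extend(self.parse(ci))
        self.instruction_cache.clear()
        return list(self.store_queue)
```

A matrix-multiply computation instruction would thus expand to loads, the operation itself, and a store, all held in queue order until execution.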
VIDEO DISPLAY DEVICE AND COOPERATIVE CONTROL METHOD IN VIDEO DISPLAY DEVICE
To establish cooperation among an external device 30, a video display device, and a remote control terminal, and to achieve cooperative operation among these devices that affords a user a high level of operability with little burden, the video display device, to which an external device and a remote control terminal that remotely operates the external device can be connected, comprises: a connection detection section configured to detect a connection of the external device to the video display device; a cooperation establishment section configured to generate a start signal for initiating, on the remote control terminal, application software that accepts operation instructions for the external device when the connection detection section detects the connection of the external device; and a communication interface configured to transmit the start signal to the remote control terminal.
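The cooperation-establishment flow above (detect connection, generate a start signal, transmit it so the terminal launches the controlling app) might be modeled like this. Class names, the signal dictionary format, and the app-naming convention are all invented for the sketch:

```python
class RemoteControlTerminal:
    """Receives start signals and launches the corresponding app."""

    def __init__(self):
        self.running_apps = []

    def receive(self, signal):
        if signal["type"] == "start":
            # Initiate the application software that accepts operation
            # instructions for the newly connected external device.
            self.running_apps.append(signal["app"])

class VideoDisplayDevice:
    """Sketch of the video display device's cooperation sections."""

    def __init__(self, terminal):
        self.terminal = terminal  # connected remote control terminal

    def on_device_connected(self, device_name):
        # Connection detection section has detected the external device;
        # the cooperation establishment section generates a start signal.
        start_signal = {"type": "start", "app": f"{device_name}-controller"}
        # Communication interface transmits the start signal.
        self.terminal.receive(start_signal)
```

The point of the arrangement is that the user never has to launch the remote-control app manually; plugging in the external device triggers it.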
Method, apparatus and system for data stream processing with a programmable accelerator
Techniques and mechanisms for programming an accelerator device to enable performance of a data processing algorithm. In an embodiment, an accelerator of a computer platform is programmed based on programming information received from a host processor of the computer platform. In another embodiment, programming of the accelerator enables data-driven execution of an instruction by a data stream processing engine of the accelerator.
EXECUTION OF AN INSTRUCTION FOR PERFORMING A CONFIGURATION VIRTUAL TOPOLOGY CHANGE
In a logically partitioned host computer system comprising host processors (host CPUs) partitioned into a plurality of guest processors (guest CPUs) of a guest configuration, a perform-topology-function instruction is executed by a guest processor to specify a topology change of the guest configuration. The topology change preferably changes the polarization of the guest CPUs, the polarization being related to the amount of host CPU resource provided to a guest CPU.
Microprocessor with secure execution mode and store key instructions
A microprocessor conditionally grants a request to switch from a normal execution mode in which encrypted instructions cannot be executed, into a secure execution mode (SEM). Thereafter, the microprocessor executes a plurality of instructions, including a store-key instruction to write a set of one or more cryptographic key values into a secure memory of the microprocessor. After fetching an encrypted program from an instruction cache, the microprocessor decrypts the encrypted program into plaintext instructions using decryption logic within the microprocessor's instruction-processing pipeline.
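The secure-execution flow above (conditionally enter SEM, execute a store-key instruction to place key values in secure memory, then decrypt fetched instructions in the pipeline) can be modeled at a toy level. The patent does not specify the cipher; the repeating-XOR keystream here, and all names, are illustrative stand-ins only:

```python
class SecureCore:
    """Toy model of the secure execution mode (SEM) flow described above."""

    def __init__(self):
        self.secure_mode = False
        self.secure_memory = b""  # cryptographic key values

    def enter_sem(self, request_granted=True):
        # The processor conditionally grants the switch into SEM.
        self.secure_mode = request_granted
        return self.secure_mode

    def store_key(self, key_bytes):
        # Store-key instruction: only valid once in secure execution mode.
        if not self.secure_mode:
            raise PermissionError("store-key only valid in secure execution mode")
        self.secure_memory = key_bytes

    def decrypt(self, encrypted_bytes):
        # Decryption logic in the instruction-processing pipeline:
        # XOR each fetched byte with the repeating key (toy cipher,
        # NOT what the patent specifies).
        key = self.secure_memory
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(encrypted_bytes))
```

In the patent's arrangement the key never leaves the microprocessor's secure memory, which is why decryption happens inside the pipeline rather than in software.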
IN-PIPE ERROR SCRUBBING WITHIN A PROCESSOR CORE
A supervisory hardware device in a processor core detects a flush instruction that, when executed, flushes content of one or more general purpose registers (GPRs) within the processor core. The content of the one or more GPRs is moved to a history buffer (HB) and an instruction sequencing queue (ISQ) within the processor core, where the content includes data, an instruction tag (iTag) that identifies an instruction that generated the data, and error correction code (ECC) bits for the data. In response to receiving a restore instruction, the supervisory hardware device error checks the data in the ISQ using the ECC bits stored in the ISQ. In response to detecting an error in the data in the ISQ, the supervisory hardware device sends the data and the ECC bits from the ISQ to an ECC scrubber to generate corrected data, which is restored into the one or more GPRs.
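The error-check-and-scrub step above stores ECC bits alongside the data and, on restore, uses them to detect and correct corruption. As a minimal stand-in for whatever ECC the hardware actually uses, a Hamming(7,4) code (4 data bits, 3 parity bits, single-bit correction) illustrates the idea; the patent does not name a specific code:

```python
def hamming74_encode(nibble):
    # Encode 4 data bits into a 7-bit codeword (Hamming(7,4)) -- a simple
    # stand-in for the data-plus-ECC-bits the patent stores with GPR content.
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    # Codeword bit positions 1..7: p1 p2 d0 p3 d1 d2 d3
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def ecc_scrub(bits):
    # The "ECC scrubber": recompute the parity checks; a non-zero syndrome
    # names the flipped bit position, which is corrected before restore.
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]  # covers positions 1,3,5,7
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]  # covers positions 2,3,6,7
    s3 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]  # covers positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3
    corrected = bits[:]
    if syndrome:
        corrected[syndrome - 1] ^= 1  # flip the erroneous bit
    # Extract the corrected data bits and return them as an integer.
    data = [corrected[2], corrected[4], corrected[5], corrected[6]]
    return sum(b << i for i, b in enumerate(data))
```

Any single-bit error in the 7-bit word, whether in a data bit or a parity bit, is located by the syndrome and corrected, which is exactly the property the scrubber needs before restoring data into the GPRs.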
Instruction and Logic for Configurable Arithmetic Logic Unit Pipeline
A processor includes a front end with circuitry to decode a first instruction, which sets a performance register for an execution unit, and a second instruction, as well as an allocator with circuitry to assign the second instruction to the execution unit for execution. The execution unit includes circuitry to select between a normal computation and an accelerated computation based on a mode field of the performance register, to perform the selected computation, and to select between a normal result associated with the normal computation and an accelerated result associated with the accelerated computation based on the mode field.
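The mode-field selection described above can be sketched as follows. The example computations (a full multiply versus a truncated, reduced-precision multiply standing in for an "accelerated" path) and all names are illustrative assumptions:

```python
NORMAL, ACCELERATED = 0, 1

class ExecutionUnit:
    """Sketch of an execution unit steered by a performance register."""

    def __init__(self):
        self.performance_register_mode = NORMAL  # set by the first instruction

    def set_mode(self, mode):
        # Models the first instruction writing the performance register.
        self.performance_register_mode = mode

    def execute(self, a, b):
        # Select between the normal and the accelerated computation based
        # on the mode field, perform it, and return the selected result.
        if self.performance_register_mode == ACCELERATED:
            return self._accelerated_multiply(a, b)
        return self._normal_multiply(a, b)

    def _normal_multiply(self, a, b):
        # Full-precision multiply.
        return a * b

    def _accelerated_multiply(self, a, b):
        # Truncated multiply as a stand-in for an accelerated path:
        # drop 4 low bits of each operand, multiply, rescale.
        return ((a >> 4) * (b >> 4)) << 8
```

The design point is that the same second instruction produces either result depending only on the mode field, so software can trade accuracy for speed without changing the instruction stream.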
COMPILER FOR TRANSLATING BETWEEN A VIRTUAL IMAGE PROCESSOR INSTRUCTION SET ARCHITECTURE (ISA) AND TARGET HARDWARE HAVING A TWO-DIMENSIONAL SHIFT ARRAY STRUCTURE
A method is described that includes translating higher-level program code, including higher-level instructions whose instruction format identifies pixels to be accessed from a memory with first and second coordinates of an orthogonal coordinate system, into lower-level instructions that target a hardware architecture having an array of execution lanes and a shift register array structure able to shift data along two different axes. The translating includes replacing the higher-level instructions having the instruction format with lower-level shift instructions that shift data within the shift register array structure.
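The core of the translation above is lowering a pixel access at a relative offset (dx, dy) into a sequence of shifts of the two-dimensional shift register array. A sketch, with invented mnemonics and an arbitrarily chosen shift-direction convention (the actual ISA and conventions are not given here):

```python
def lower_pixel_access(dx, dy):
    """Translate a relative (dx, dy) pixel access into shift instructions.

    Convention (chosen for the sketch): shifting the array contents left
    brings the pixel at x+dx into each lane for positive dx, and shifting
    down brings the pixel at y+dy into each lane for negative dy.
    """
    shifts = []
    horizontal = "SHIFT_LEFT" if dx > 0 else "SHIFT_RIGHT"
    shifts += [horizontal] * abs(dx)
    vertical = "SHIFT_UP" if dy > 0 else "SHIFT_DOWN"
    shifts += [vertical] * abs(dy)
    return shifts
```

Because every execution lane shifts in lockstep, one shift sequence serves the same relative access for all pixels in the array at once, which is what makes the lowering profitable.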
Migrating threads between asymmetric cores in a multiple core processor
Some implementations provide techniques and arrangements to migrate threads from a first core of a processor to a second core of the processor. For example, some implementations may identify one or more threads scheduled for execution at a processor. The processor may include a plurality of cores, including a first core having a first characteristic and a second core having a second characteristic different from the first characteristic. Execution of the one or more threads by the first core may be initiated. A determination may be made whether to apply a migration policy. In response to determining to apply the migration policy, migration of the one or more threads from the first core to the second core may be initiated.
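The migration flow above (start threads on the first core, evaluate a policy, migrate to the differently characterized core) can be sketched as below. The load-threshold policy, the "efficient"/"fast" characteristics, and all names are invented for illustration; the abstract does not specify the policy:

```python
class Core:
    """A core with a distinguishing characteristic and a thread list."""

    def __init__(self, name, characteristic):
        self.name = name
        self.characteristic = characteristic
        self.threads = []

def should_migrate(thread_load, threshold=0.75):
    # Example migration policy: move a thread whose load exceeds a
    # threshold to the higher-performance core (threshold is made up).
    return thread_load > threshold

def schedule(threads, first_core, second_core):
    # Initiate execution of all threads on the first core.
    for name, load in threads:
        first_core.threads.append(name)
    # Apply the migration policy and migrate qualifying threads.
    for name, load in threads:
        if should_migrate(load):
            first_core.threads.remove(name)
            second_core.threads.append(name)
    return first_core.threads, second_core.threads
```

This mirrors asymmetric ("big.LITTLE"-style) scheduling: light threads stay on the efficient core while demanding threads migrate to the performant one.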
Battery life extension via changes in transmission rates
Disclosed are techniques to conserve the battery of an endpoint device. Example techniques include adjusting the size of messages transmitted by the endpoint device and/or adjusting its transmission rate. In some configurations, one or more criteria are used by the endpoint device to determine which data fields to include within a message and/or to adjust the transmission rate associated with the transmission of messages. For instance, the one or more criteria may include the battery level of the device, the time of year, whether the data has already been transmitted by the endpoint device, whether the data has been acknowledged as received by another device, whether the endpoint device has been instructed by another device to reduce the message size and/or adjust the transmission rate, and the like.
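Two of the criteria above, battery level and previously acknowledged data, might shape messages and transmission rate as in the sketch below. The field names, the "essential fields" choice, and the thresholds are all illustrative assumptions:

```python
def build_message(readings, battery_level, acked_fields):
    # Adjust message size: skip fields another device has already
    # acknowledged, and at low battery keep only essential fields.
    message = {k: v for k, v in readings.items() if k not in acked_fields}
    if battery_level < 0.2:
        message = {k: v for k, v in message.items() if k in ("id", "alarm")}
    return message

def transmission_interval_s(battery_level, base_interval_s=60):
    # Adjust transmission rate: transmit less often as the battery drains
    # (thresholds and multipliers are made up for the sketch).
    if battery_level < 0.2:
        return base_interval_s * 4
    if battery_level < 0.5:
        return base_interval_s * 2
    return base_interval_s
```

Both levers reduce radio time, which typically dominates the energy budget of battery-powered endpoints.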