Patent classifications
G06F9/302
Display apparatus and controlling method thereof
A display apparatus includes a cabinet; a light emitting diode module provided in the cabinet; a communicator configured to receive an ambient atmospheric measurement value of the cabinet from an external device; a temperature sensor provided in the light emitting diode module; a temperature controller configured to control a temperature of the light emitting diode module; and a controller configured to control the temperature controller so that the temperature of the light emitting diode module is kept above a dew point temperature when, in a standby mode in a power saving state, the measured temperature based on the output of the temperature sensor is equal to or lower than the dew point temperature derived from the ambient atmospheric measurement value, and to drive the light emitting diode module so that its temperature rises above the dew point temperature when the measured temperature is below the dew point temperature upon switching to an active mode in a normal power supply state.
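As a rough behavioral illustration of the standby-mode check, here is a minimal Python sketch. The abstract does not specify how the dew point is derived from the ambient measurement; the Magnus approximation used below, and the function names `dew_point` and `needs_heating`, are assumptions for illustration only.

```python
import math

def dew_point(ambient_temp_c, relative_humidity_pct):
    # Magnus approximation for the dew point temperature (deg C);
    # one common choice, not necessarily the patent's.
    a, b = 17.62, 243.12
    gamma = (math.log(relative_humidity_pct / 100.0)
             + a * ambient_temp_c / (b + ambient_temp_c))
    return b * gamma / (a - gamma)

def needs_heating(measured_temp_c, ambient_temp_c, relative_humidity_pct):
    # Standby-mode rule: heat the LED module whenever its measured
    # temperature is at or below the dew point derived from the
    # ambient atmospheric measurement.
    return measured_temp_c <= dew_point(ambient_temp_c, relative_humidity_pct)
```

For example, at 25 °C ambient and 60 % relative humidity the dew point is roughly 17 °C, so a module measured at 10 °C would be heated while one at 20 °C would not.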
Vector processing unit
A vector processing unit is described, and includes processor units that each include multiple processing resources. The processor units are each configured to perform arithmetic operations associated with vectorized computations. The vector processing unit includes a vector memory in data communication with each of the processor units and their respective processing resources. The vector memory includes memory banks configured to store data used by each of the processor units to perform the arithmetic operations. The processor units and the vector memory are tightly coupled within an area of the vector processing unit such that data communications are exchanged at a high bandwidth based on the placement of respective processor units relative to one another, and based on the placement of the vector memory relative to each processor unit.
Systems and methods for executing a fused multiply-add instruction for complex numbers
Disclosed embodiments relate to executing a vector-complex fused multiply-add instruction. In one example, a method includes fetching an instruction whose format includes an opcode, a first source operand identifier, a second source operand identifier, and a destination operand identifier, wherein each of the identifiers identifies a location storing packed data comprising at least one complex number; decoding the instruction; retrieving data associated with the first and second source operand identifiers; and executing the decoded instruction to, for each packed data element position of the identified first and second source operands, cross-multiply the real and imaginary components to generate four products (a product of real components, a product of imaginary components, and two mixed products), generate a complex result from the four products according to the instruction, and store the result to the corresponding position of the identified destination operand.
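The cross-multiplication step can be sketched per element position as follows; the function name `complex_fma` and the `(real, imag)` tuple packing are illustrative choices, not the instruction's actual encoding.

```python
def complex_fma(src1, src2, acc):
    # Each operand is a list of complex numbers packed as (real, imag)
    # tuples, one per packed-data element position.
    result = []
    for (a_re, a_im), (b_re, b_im), (c_re, c_im) in zip(src1, src2, acc):
        # Cross-multiply to obtain the four products:
        rr = a_re * b_re   # product of real components
        ii = a_im * b_im   # product of imaginary components
        ri = a_re * b_im   # mixed product 1
        ir = a_im * b_re   # mixed product 2
        # Combine per the complex multiply, then add the accumulator.
        result.append((c_re + rr - ii, c_im + ri + ir))
    return result
```

For instance, multiplying (1 + 2i) by (3 + 4i) with a zero accumulator yields -5 + 10i.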
Architecture to support color scheme-based synchronization for machine learning
A system to support a machine learning (ML) operation comprises an array-based inference engine comprising a plurality of processing tiles, each comprising an on-chip memory (OCM) configured to maintain data for local access by components in the processing tile and one or more processing units configured to perform one or more computation tasks on the data in the OCM by executing a set of task instructions. The system also comprises a data streaming engine configured to stream data between a memory and the OCMs, and an instruction streaming engine configured to distribute said set of task instructions to the corresponding processing tiles to control their operations and to synchronize the task instructions executed by each processing tile, respectively, waiting for the current task at each processing tile to finish before starting a new one.
Tile subsystem and method for automated data flow and data processing within an integrated circuit architecture
A system and method for a computing tile of a multi-tiled integrated circuit includes a plurality of distinct tile computing circuits, wherein each of the plurality of distinct tile computing circuits is configured to receive fixed-length instructions; a token-informed task scheduler that: tracks one or more of a plurality of distinct tokens emitted by one or more of the plurality of distinct tile computing circuits; and selects a distinct computation task of a plurality of distinct computation tasks based on the tracking; and a work queue buffer that: contains a plurality of distinct fixed-length instructions, wherein each one of the fixed-length instructions is associated with one of the plurality of distinct computation tasks; and transmits one of the plurality of distinct fixed-length instructions to one or more of the plurality of distinct tile computing circuits based on the selection of the distinct computation task by the token-informed task scheduler.
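The token-tracking and task-selection behavior described above can be sketched as a small state machine; the class and method names below are illustrative, not the patent's own terminology, and a Python list stands in for the hardware work queue buffer.

```python
class TokenInformedScheduler:
    """Sketch of a token-informed task scheduler for a computing tile."""

    def __init__(self):
        self.tokens = set()    # tokens emitted by tile computing circuits
        self.work_queue = []   # (required_tokens, fixed_length_instruction)

    def emit_token(self, token):
        # A tile computing circuit emits a token on completing some work;
        # the scheduler tracks every emitted token.
        self.tokens.add(token)

    def enqueue(self, required_tokens, instruction):
        # Buffer a fixed-length instruction behind its token dependencies.
        self.work_queue.append((frozenset(required_tokens), instruction))

    def select(self):
        # Select the first buffered task whose tokens have all been
        # emitted, remove it from the queue, and hand its instruction
        # to a tile computing circuit.
        for i, (required, instruction) in enumerate(self.work_queue):
            if required <= self.tokens:
                del self.work_queue[i]
                return instruction
        return None
```

A task enqueued behind a not-yet-emitted token is skipped by `select` until the token arrives, which is the data-flow property the abstract describes.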
Arithmetic logic unit with normal and accelerated performance modes using differing numbers of computational circuits
A processor includes a front end including circuitry to decode a first instruction to set a performance register for an execution unit and a second instruction, and an allocator including circuitry to assign the second instruction to the execution unit to execute the second instruction. The execution unit includes circuitry to select between a normal computation and an accelerated computation based on a mode field of the performance register, perform the selected computation, and select between a normal result associated with the normal computation and an accelerated result associated with the accelerated computation based on the mode field.
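A behavioral sketch of the mode selection follows. The function name, register representation, and arithmetic are assumptions: real hardware would run both paths through differing numbers of computational circuits, whereas here the normal path is modeled as iterative repeated addition and the accelerated path as a direct multiply.

```python
def alu_execute(a, b, performance_register):
    # Normal path: a single computational circuit used iteratively
    # (modeled here as repeated addition).
    normal_result = 0
    for _ in range(b):
        normal_result += a
    # Accelerated path: additional circuits working in parallel
    # (modeled here as a direct multiply).
    accelerated_result = a * b
    # The mode field of the performance register selects which
    # result the execution unit retires.
    if performance_register["mode"] == "accelerated":
        return accelerated_result
    return normal_result
```

Both paths produce the same value; the mode field trades circuit count against latency, not correctness.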
Decimal load immediate instruction
An instruction generates a value for use in processing within a computing environment. The instruction obtains a sign control associated with the instruction, and shifts an input value of the instruction in a specified direction by a selected amount to provide a result. The result is placed in a first designated location in a register, and the sign, which is based on the sign control, is placed in a second designated location of the register. The result and the sign provide a signed value to be used in processing within the computing environment.
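In behavioral terms, the shift-and-sign step might look like the sketch below. The function name, argument names, and the choice of decimal-digit shift granularity are assumptions; the abstract only states that the input is shifted in a specified direction by a selected amount and combined with a sign derived from the sign control.

```python
def decimal_load_immediate(value, shift_amount, shift_left=True,
                           sign_negative=False):
    # Shift the input value by the selected number of decimal
    # digit positions in the specified direction.
    if shift_left:
        magnitude = value * 10 ** shift_amount
    else:
        magnitude = value // 10 ** shift_amount
    # Apply the sign derived from the instruction's sign control;
    # the result and the sign together form the signed value
    # placed in the register.
    return -magnitude if sign_negative else magnitude
```

For example, loading 42 with a left shift of two digits yields 4200, while a right shift of two digits with a negative sign control turns 4200 into -42.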
Computation engine with strided dot product
In an embodiment, a computation engine may perform dot product computations on input vectors. The dot product operation may have a first operand and a second operand, and the dot product may be performed on a subset of the vector elements in the first operand and each of the vector elements in the second operand. The subset of vector elements may be separated in the first operand by a stride that skips one or more elements between each element to which the dot product operation is applied. More particularly, in an embodiment, the input operands of the dot product operation may be a first vector having second vectors as elements, and the stride may select a specified element of each second vector.
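For flat vectors, the strided selection can be sketched as follows; the function and parameter names are illustrative.

```python
def strided_dot(first, second, stride, offset=0):
    # Select every `stride`-th element of the first operand, starting
    # at `offset`, skipping stride - 1 elements between selections.
    subset = first[offset::stride]
    # Dot the selected subset against every element of the second operand.
    return sum(a * b for a, b in zip(subset, second))
```

When the first operand is a vector whose elements are themselves second vectors, the same stride mechanism picks out one specified element from each inner vector.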
Neural network optimization mechanism
- Narayan Srinivasa
- Joydeep Ray
- Nicolas C. Galoppo Von Borries
- Ben Ashbaugh
- Prasoonkumar Surti
- Feng Chen
- Barath Lakshmanan
- Elmoustapha Ould-Ahmed-Vall
- Liwei Ma
- Linda L. Hurd
- Abhishek R. Appu
- John C. Weast
- Sara S. Baghsorkhi
- Justin E. Gottschlich
- Chandrasekaran Sakthivel
- Farshad Akhbari
- Dukhwan Kim
- Altug Koker
- Nadathur Rajagopalan Satish
An apparatus to facilitate optimization of a neural network (NN) is disclosed. The apparatus includes optimization logic to define an NN topology having one or more macro layers, adjust the one or more macro layers to adapt to input and output components of the NN, and train the NN based on the one or more macro layers.