Patent classifications
G06F30/331
Integrated sensor device with deep learning accelerator and random access memory
Systems, devices, and methods related to a Deep Learning Accelerator and memory are described. For example, an integrated sensor device may have a Deep Learning Accelerator configured to execute instructions with matrix operands and may be configured with: a sensor to generate measurements of stimuli; random access memory to store instructions executable by the Deep Learning Accelerator and to store matrices of an Artificial Neural Network; a host interface connectable to a host system; and a controller to store the measurements generated by the sensor into the random access memory as an input to the Artificial Neural Network. After the Deep Learning Accelerator generates in the random access memory an output of the Artificial Neural Network by executing the instructions to process the input, the controller may communicate the output to the host system through the host interface.
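The data flow in this abstract can be sketched in a few lines. This is a minimal, hypothetical model (class and method names are assumptions, and the "instructions" are reduced to a single matrix-vector product): the controller stores a sensor measurement into shared RAM as the ANN input, the accelerator writes the ANN output back into RAM, and the host reads it through the host interface.

```python
def matvec(matrix, vector):
    """One 'instruction with matrix operands': a matrix-vector product."""
    return [sum(w * x for w, x in zip(row, vector)) for row in matrix]

class IntegratedSensorDevice:
    def __init__(self, weights):
        self.ram = {}           # random access memory shared with the accelerator
        self.weights = weights  # ANN matrices held in RAM at load time

    def store_measurement(self, measurement):
        # Controller stores the sensor measurement as the ANN input.
        self.ram["input"] = measurement

    def run_accelerator(self):
        # Accelerator executes its instructions and writes the output into RAM.
        self.ram["output"] = matvec(self.weights, self.ram["input"])

    def host_read(self):
        # Host interface exposes the ANN output to the host system.
        return self.ram["output"]

device = IntegratedSensorDevice(weights=[[1, 0], [0, 2]])
device.store_measurement([3, 4])   # sensor measurement of a stimulus
device.run_accelerator()
print(device.host_read())          # [3, 8]
```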
Compiler-driven programmable device virtualization in a computing system
Examples provide a method of virtualizing a hardware accelerator in a virtualized computing system. The virtualized computing system includes a hypervisor supporting execution of a plurality of virtual machines (VMs). The method includes: receiving a plurality of sub-programs at a compiler in the hypervisor from a plurality of compilers in the respective plurality of VMs, each of the sub-programs including a hardware-description language (HDL) description; combining, at the compiler in the hypervisor, the plurality of sub-programs into a monolithic program; generating, by the compiler in the hypervisor, a circuit implementation for the monolithic program, the circuit implementation including a plurality of sub-circuits for the respective plurality of sub-programs; and loading, by the compiler in the hypervisor, the circuit implementation to a programmable device of the hardware accelerator.
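The combining step can be illustrated with a toy sketch. This is a loose assumption-laden model (function names and the Verilog-flavored snippets are illustrative, not from the patent): each VM's compiler hands the hypervisor compiler an HDL sub-program, and the hypervisor compiler merges them into one monolithic program in which each sub-program becomes a sub-circuit.

```python
def combine_subprograms(subprograms):
    """Merge per-VM HDL sub-programs into one monolithic HDL module."""
    body = []
    for vm_id, hdl in subprograms.items():
        # Each sub-program becomes an isolated sub-circuit of the
        # monolithic program, attributed to its originating VM.
        body.append(f"  // sub-circuit for {vm_id}")
        body.append(f"  {hdl}")
    return "module monolithic;\n" + "\n".join(body) + "\nendmodule"

# Sub-programs received from the compilers in two VMs (illustrative HDL).
subprograms = {
    "vm0": "adder u_adder(.a(a0), .b(b0), .y(y0));",
    "vm1": "fifo  u_fifo(.clk(clk), .din(d1), .dout(q1));",
}
print(combine_subprograms(subprograms))
```

In the patented flow, the hypervisor compiler would then generate a single circuit implementation from this monolithic program and load it onto the programmable device.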
Systems and methods for enhanced compression of trace data in an emulation system
A trace subsystem of an emulation system may generate differential frame data based upon successive frames. In one compression mode, the trace subsystem may set a flag bit and store the differential frame data if there is at least one non-zero bit in the differential frame data. If the differential frame data includes only zero bits, the trace subsystem may set the flag bit without storing the frame data. In another compression mode, the trace subsystem may further compress the differential data if the frame data includes one (one-hot) or two (two-hot) non-zero bits. The controller may set flag bits to indicate one of all-zeroes, one-hot, two-hot, and random data conditions (more than two non-zero bits). For one-hot or two-hot conditions, the controller may store bits indicating the positions of the non-zero bits. For random data conditions, the controller may store the entire differential frame.
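The encoding side of this scheme is concrete enough to sketch. Below is a minimal model under stated assumptions (the flag values 0 through 3 and the frame representation as bit lists are choices of this sketch, not the patent's encoding): the differential frame is the XOR of successive frames, and the compressor classifies it as all-zeroes, one-hot, two-hot, or random.

```python
def diff_frame(prev, curr):
    """Differential frame data: bitwise XOR of two successive frames."""
    return [p ^ c for p, c in zip(prev, curr)]

def compress(diff):
    """Classify a differential frame and store only what is needed."""
    ones = [i for i, bit in enumerate(diff) if bit]
    if len(ones) == 0:
        return (0, [])      # all-zeroes: flag only, no data stored
    if len(ones) == 1:
        return (1, ones)    # one-hot: store position of the set bit
    if len(ones) == 2:
        return (2, ones)    # two-hot: store both bit positions
    return (3, diff)        # random: store the entire differential frame

prev = [0, 1, 0, 0, 1, 0, 0, 0]
curr = [0, 1, 0, 0, 0, 0, 0, 0]
print(compress(diff_frame(prev, curr)))  # (1, [4]) -- one-hot at bit 4
```

For a long trace with few toggling signals, most frames fall into the all-zeroes or one-hot cases, which is where the compression gain comes from.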
INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING APPARATUS, USE METHOD OF INFORMATION PROCESSING APPARATUS, USER TERMINAL, AND PROGRAM THEREFOR
According to this invention, it is possible to reduce the burden on a user operating a model described in a hardware description language, and to allow the user to readily make changes. This invention provides an information processing apparatus including a hardware processor that emulates, by hardware, operations corresponding to a model described in a hardware description language, and a control unit that controls, in accordance with instructions of a user received from a user terminal, at least one of inputs to the hardware processor and outputs from the hardware processor.
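The apparatus structure can be sketched loosely. In this hypothetical model (the inverter model, class names, and the instruction format are all assumptions of the sketch), the control unit relays a user instruction from the terminal to the input of a hardware processor emulating an HDL model, and routes the output back to the user.

```python
class HardwareProcessor:
    """Stands in for hardware emulating a trivial HDL model: an inverter."""
    def step(self, value):
        return 1 - value  # NOT gate on a single bit

class ControlUnit:
    def __init__(self, processor):
        self.processor = processor

    def handle(self, instruction):
        # A user instruction from the terminal drives the processor input;
        # the corresponding output is returned to the user.
        if instruction["op"] == "drive_input":
            return {"output": self.processor.step(instruction["value"])}
        raise ValueError("unknown instruction")

unit = ControlUnit(HardwareProcessor())
print(unit.handle({"op": "drive_input", "value": 1}))  # {'output': 0}
```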
Simulation generation using temporal logic
Techniques for determining simulations to confirm programmatic logic are discussed herein. Such simulations may be used to identify errors in programmatic logic. As an example, a system may simulate an autonomous vehicle operating in an environment by setting various initialization parameters. Temporal logic, such as Linear Temporal Logic (LTL) and/or Signal Temporal Logic (STL), may be used to determine a numeric cost associated with how closely one or more policies are violated for each simulation of a group of simulations. Based on the costs computed, additional sets of simulations may be created using an evolutionary algorithm. Flaws in the programmatic logic controlling the system may be identified based on the evolutionary algorithm and the costs so defined.
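One generation of this cost-guided loop can be sketched as follows. This is an illustrative toy, not the patented method: the policy, the stand-in simulator, and the mutation rule are all assumptions. An STL-style robustness value for "the speed always stays below the limit" is negated into a cost (positive means violated, near zero means close to violation), and the candidate with the worst cost seeds the next set of simulations.

```python
import random

def policy_cost(speeds, limit=30.0):
    """Cost for G(speed < limit): worst margin over the trace."""
    return max(s - limit for s in speeds)

def simulate(params):
    """Stand-in simulator: derive a speed trace from init parameters."""
    return [p * 1.1 for p in params]

def mutate(params):
    """Evolutionary step (hypothetical): jitter initialization params."""
    return [p + random.uniform(-1.0, 1.0) for p in params]

random.seed(0)
population = [[25.0, 27.0, 28.0], [20.0, 22.0, 21.0]]
costs = [policy_cost(simulate(p)) for p in population]

# Breed the next set of simulations from the near-violating candidate.
parent = population[costs.index(max(costs))]
next_generation = [mutate(parent) for _ in range(3)]
print(max(costs) > 0)  # True: the first candidate violates the policy
```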
METHOD AND SYSTEM FOR PROCESSING SIMULATION DATA
The present invention discloses a method and system for processing simulation data. The method includes: simultaneously collecting the simulation waveform data of multiple FPGAs, adding a time stamp to the waveform data of each FPGA collected in each collection period, and storing the waveform data of the multiple FPGAs in the form of a linked list ordered by time stamp. The technical solution of the present invention ensures that the waveform data of the multiple FPGAs do not fall out of order.
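The timestamped linked list can be sketched directly. This is a hedged illustration (node fields and the sorted-insert routine are assumptions of the sketch): each waveform sample carries the time stamp of its collection period, and insertion keeps the chain in stamp order even when samples from different FPGAs arrive interleaved.

```python
class Node:
    """One waveform sample from one FPGA, tagged with its time stamp."""
    def __init__(self, timestamp, fpga_id, waveform):
        self.timestamp = timestamp
        self.fpga_id = fpga_id
        self.waveform = waveform
        self.next = None

def insert_sorted(head, node):
    """Insert a node into the linked list, preserving timestamp order."""
    if head is None or node.timestamp < head.timestamp:
        node.next = head
        return node
    cur = head
    while cur.next is not None and cur.next.timestamp <= node.timestamp:
        cur = cur.next
    node.next = cur.next
    cur.next = node
    return head

# Samples arrive interleaved from two FPGAs, possibly out of order.
head = None
for ts, fpga, data in [(2, "fpga1", 0xA), (1, "fpga0", 0xB), (3, "fpga0", 0xC)]:
    head = insert_sorted(head, Node(ts, fpga, data))

order = []
cur = head
while cur:
    order.append(cur.timestamp)
    cur = cur.next
print(order)  # [1, 2, 3]
```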
MODULAR SYSTEM (SWITCHBOARDS AND MID-PLANE) FOR SUPPORTING 50G OR 100G ETHERNET SPEEDS OF FPGA+SSD
A chassis front-end is disclosed. The chassis front-end may include a switchboard including an Ethernet switch, a Baseboard Management Controller, and a mid-plane connector. The chassis front-end may also include a mid-plane including at least one storage device connector and a speed logic to inform at least one storage device of an Ethernet speed of the chassis front-end. The Ethernet speeds may vary.
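The "speed logic" idea can be sketched speculatively. In this toy model (the register-style interface and method names are assumptions; only the 50G/100G values come from the title), the mid-plane holds the chassis Ethernet speed and each attached storage device reads it on connection to configure its link.

```python
SUPPORTED_SPEEDS_GBPS = (50, 100)

class MidPlane:
    def __init__(self, chassis_speed_gbps):
        if chassis_speed_gbps not in SUPPORTED_SPEEDS_GBPS:
            raise ValueError(f"unsupported speed: {chassis_speed_gbps}G")
        # Speed logic: a value every connected storage device can query.
        self.speed_gbps = chassis_speed_gbps

class StorageDevice:
    def connect(self, mid_plane):
        # The device learns the Ethernet speed of the chassis front-end
        # from the mid-plane when it attaches.
        self.link_speed_gbps = mid_plane.speed_gbps
        return self.link_speed_gbps

mid_plane = MidPlane(chassis_speed_gbps=100)
ssd = StorageDevice()
print(ssd.connect(mid_plane))  # 100
```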