G01R31/318357

Multiple defect diagnosis method and machine readable media

A multiple defect diagnosis method includes: receiving a gate-level netlist of a chip, a plurality of test patterns and a plurality of test failure reports; deriving a plurality of seed nets from the gate-level netlist according to the plurality of test patterns and the plurality of test failure reports; utilizing a processor to compute similarity between the plurality of seed nets, and accordingly merging the plurality of seed nets to obtain a single seed net tree; and deriving at least one suspected seed net according to the single seed net tree.
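The merging step described above resembles agglomerative clustering. A minimal Python sketch of the idea, assuming (since the abstract does not specify them) that each seed net is modeled as a set of gate names and that Jaccard similarity is the merging criterion:

```python
# Illustrative sketch only: the abstract does not define the similarity
# metric, so Jaccard similarity over sets of gate names is assumed here.

def jaccard(a, b):
    """Similarity between two seed nets, modeled as sets of gate names."""
    return len(a & b) / len(a | b) if a | b else 0.0

def merge_seed_nets(seed_nets):
    """Greedily merge the most similar pair until a single tree remains.

    Each tree node is (merged_set, left_child, right_child);
    leaves are (net, None, None).
    """
    forest = [(frozenset(n), None, None) for n in seed_nets]
    while len(forest) > 1:
        # Find the pair of subtrees with the highest similarity.
        i, j = max(
            ((a, b) for a in range(len(forest)) for b in range(a + 1, len(forest))),
            key=lambda ab: jaccard(forest[ab[0]][0], forest[ab[1]][0]),
        )
        left, right = forest[i], forest[j]
        merged = (left[0] | right[0], left, right)
        forest = [t for k, t in enumerate(forest) if k not in (i, j)] + [merged]
    return forest[0]

nets = [{"g1", "g2"}, {"g2", "g3"}, {"g7", "g8"}]
tree = merge_seed_nets(nets)
```

Suspected seed nets could then be derived by cutting this tree at a similarity threshold.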

System and method for predicting compatibility of a new unit for an existing system

A prediction system predicts compatibility of an existing test and measurement setup with a potential extension unit based on digital signatures. The prediction system includes a receiving unit, a processing unit, and a display unit. The receiving unit is configured to receive a digital signature from the potential extension unit and to forward the digital signature to the processing unit. The processing unit is configured to process the digital signature in order to predict whether the existing test and measurement setup is compatible with the potential extension unit. The processing unit is further configured to forward the result of the prediction to the display unit so that the result of the prediction is displayed. In addition, a method for predicting compatibility of an existing test and measurement setup with a potential extension unit based on digital signatures is described.
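A hypothetical sketch of the processing unit's prediction step. The abstract does not define the digital signature format, so a signature is modeled here as a dict of capability fields checked against the existing setup's requirements; all field names are illustrative:

```python
# Hypothetical model: a "digital signature" as capability fields, and the
# compatibility prediction as a rule check against setup requirements.

def predict_compatibility(setup_requirements, extension_signature):
    """Return (compatible, reasons) by checking each required field."""
    reasons = []
    for field, required in setup_requirements.items():
        offered = extension_signature.get(field)
        if offered is None or offered < required:
            reasons.append(f"{field}: need >= {required}, got {offered}")
    return (not reasons, reasons)

setup = {"bandwidth_mhz": 500, "channels": 4}       # existing setup's needs
unit = {"bandwidth_mhz": 1000, "channels": 2}       # extension's signature
ok, why = predict_compatibility(setup, unit)
# ok is False: the unit offers only 2 of the 4 required channels.
```

The `reasons` list corresponds to what the display unit would show alongside the pass/fail prediction.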

Integrating machine learning delay estimation in FPGA-based emulation systems

A method or system for estimating delays in designs under test (DUTs) using machine learning. The system accesses multiple DUTs, each comprising various logic blocks. For each DUT, a combinatorial path is identified, connecting one or more logic blocks. A feature vector is generated, including values of orthogonal features representing the combinatorial path's characteristics. Each DUT is compiled for emulation, and the delay of its combinatorial path is measured. These measured delays, along with the corresponding feature vectors, are used to train a machine learning delay model. The trained model is designed to receive a combinatorial path of a DUT as input and generate an estimated wire delay as output. This approach leverages machine learning to predict delays in electronic designs, improving the efficiency and accuracy of delay estimations in complex circuits.
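The train-then-estimate flow can be illustrated with a deliberately minimal stand-in for the delay model: a one-feature least-squares fit on (feature, measured delay) pairs. The feature choice (number of logic blocks on the path) and the linear form are assumptions, not from the abstract:

```python
# Minimal illustration, not the patented flow: a single-feature linear
# model standing in for the trained machine learning delay model.

def fit_delay_model(features, delays):
    """Least-squares fit: delay ≈ a * feature + b."""
    n = len(features)
    mx = sum(features) / n
    my = sum(delays) / n
    var = sum((x - mx) ** 2 for x in features)
    cov = sum((x - mx) * (y - my) for x, y in zip(features, delays))
    a = cov / var
    b = my - a * mx
    return lambda feature: a * feature + b

# Assumed feature: number of logic blocks on the combinatorial path.
path_features = [2, 4, 6, 8]
measured_ns = [1.1, 2.1, 3.1, 4.1]   # delays measured after compiling for emulation
estimate = fit_delay_model(path_features, measured_ns)
```

A real implementation would use a vector of orthogonal features and a richer regressor; the structure (measure on compiled DUTs, fit, then estimate for new paths) is the same.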

DATA ANALYSIS APPARATUS AND DATA ANALYSIS METHOD
20180060464 · 2018-03-01

There is provided a data analysis apparatus, comprising: an event occurrence setting module configured to cause a prescribed event to occur in a simulation for a work order that includes a process at which the prescribed event is to occur; an event occurrence detection timing setting module configured to store, in a storage module, an event occurrence detection timing indicating a time period between an occurrence of an event and detection of the event; a simulation executing processing module configured, when an occurrence of the event is detected, to execute a simulation that reflects an effect on the process when the event is addressed in accordance with the event occurrence detection timing recorded in the storage module; and a KPI calculating module configured to calculate a KPI of the process for the event occurrence detection timing, based on results of the simulation.
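The KPI-per-detection-timing idea can be sketched as follows. The abstract leaves the simulation and the KPI unspecified, so KPI is modeled here as process throughput over a shift, reduced by the downtime between event occurrence and completed handling; all numbers are illustrative:

```python
# Hedged sketch: KPI = units produced, with production halted from the
# event's occurrence until detection plus repair are complete (assumed).

def simulate_kpi(shift_minutes, rate_per_minute, detection_delay, repair_minutes):
    """Simulate one shift; downtime runs from occurrence to end of repair."""
    downtime = detection_delay + repair_minutes
    productive = max(shift_minutes - downtime, 0)
    return productive * rate_per_minute

# Compare the KPI across candidate detection timings (minutes).
kpis = {d: simulate_kpi(480, 2.0, d, 30) for d in (5, 15, 60)}
```

Running the simulation for each stored detection timing, as the apparatus does, makes the cost of slow detection directly comparable.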

TIMING-AWARE TEST GENERATION AND FAULT SIMULATION

Disclosed herein are exemplary methods, apparatus, and systems for performing timing-aware automatic test pattern generation (ATPG) that can be used, for example, to improve the quality of a test set generated for detecting delay defects or hold-time defects. In certain embodiments, timing information derived from various sources (e.g., from Standard Delay Format (SDF) files) is integrated into an ATPG tool. The timing information can be used to guide the test generator to detect the faults through certain paths (e.g., paths having a selected length, or range of lengths, such as the longest or shortest paths). To avoid repeatedly propagating the faults through similar paths, a weighted random method can be used to improve the path coverage during test generation. Experimental results show that significant test quality improvement can be achieved when applying embodiments of timing-aware ATPG to industrial designs.
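A sketch of the weighted-random idea only: candidate propagation paths are sampled with weights that favor longer paths but are down-weighted each time a path is reused, spreading test generation across distinct paths. The concrete weight formula is assumed, not taken from the patent:

```python
import random

# Assumed weighting: length / (1 + times already used). Longer paths are
# preferred, but repeated reuse of the same path is penalized.

def pick_path(paths, lengths, use_counts, rng):
    """Sample a path name with weight lengths[p] / (1 + use_counts[p])."""
    weights = [lengths[p] / (1 + use_counts[p]) for p in paths]
    r = rng.random() * sum(weights)
    acc = 0.0
    for p, w in zip(paths, weights):
        acc += w
        if r <= acc:
            use_counts[p] += 1
            return p
    use_counts[paths[-1]] += 1   # guard against float rounding
    return paths[-1]

rng = random.Random(0)
lengths = {"p0": 10.0, "p1": 9.5, "p2": 1.0}
counts = {p: 0 for p in lengths}
chosen = [pick_path(list(lengths), lengths, counts, rng) for _ in range(200)]
```

Over many picks the reuse penalty pulls the effective weights toward equality, so even the short path `p2` is eventually exercised while the long paths still dominate.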

Automated waveform analysis using a parallel automated development system

A mixed signal testing system capable of testing differently configured units under test (UUT) includes a controller, a test station and an interface system that support multiple UUTs. The test station includes independent sets of channels configured to send signals to and receive signals from each UUT being tested and signal processing subsystems that direct stimulus signals to a respective set of channels and receive signals in response thereto. The signal processing subsystems enable simultaneous and independent directing of stimulus signals through the sets of channels to each UUT and reception of signals from each UUT in response to the stimulus signals. Received signals responsive to stimulus signals provided to a fully functional UUT (with and without induced faults) are used to assess presence or absence of faults in the UUT being tested which may be determined to include one or more faults or be fault-free, i.e., fully functional.
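The fault-assessment step can be sketched as signature matching: a UUT's received signals are compared against responses captured from a fully functional reference unit, both fault-free and with induced faults. The tuple representation and tolerance are simplifying assumptions:

```python
# Hypothetical sketch: responses reduced to tuples of channel levels, and
# fault assessment as nearest-match against stored reference signatures.

def classify_uut(response, golden, fault_library, tol=0.05):
    """Return 'fault-free', the name of a matching induced fault, or 'unknown fault'."""
    def close(a, b):
        return all(abs(x - y) <= tol for x, y in zip(a, b))
    if close(response, golden):
        return "fault-free"
    for name, signature in fault_library.items():
        if close(response, signature):
            return name
    return "unknown fault"

golden = (1.00, 0.50, 0.25)                       # fully functional UUT
faults = {"open_ch2": (1.00, 0.00, 0.25),         # responses with induced faults
          "short_ch3": (1.00, 0.50, 0.95)}
verdict = classify_uut((1.01, 0.01, 0.26), golden, faults)
```

Because the channel sets are independent per UUT, this comparison can run simultaneously for every unit on the test station.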

SYSTEM AND METHOD FOR DECIMATED SWEEP MEASUREMENTS OF A DEVICE UNDER TEST USING MACHINE LEARNING
20250102573 · 2025-03-27

A test and measurement instrument includes one or more ports to allow the test and measurement instrument to receive a signal from a device under test (DUT), a user interface to allow the user to send inputs to the test and measurement instrument and receive results, and one or more processors configured to acquire the signal from the DUT, make measurements on the signal to create a decimated measurement set, convert the decimated measurement set into a tensor, send the tensor to a machine learning network, and receive a pass/fail value from the machine learning network. A method includes acquiring a signal from a device under test (DUT), making measurements on the signal to create a decimated measurement set, converting the decimated measurement set into a tensor, sending the tensor to a machine learning network, and receiving a pass/fail value from the machine learning network.
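The pipeline (measure, decimate, tensorize, classify) can be sketched under stated assumptions: the decimation stride, tensor shape, and classifier are all placeholders, with a fixed-weight scorer standing in for the trained machine learning network:

```python
# Sketch only: a mean-threshold scorer replaces the real ML network.

def decimate(measurements, stride):
    """Keep every `stride`-th measurement to form the decimated set."""
    return measurements[::stride]

def to_tensor(decimated, width):
    """Pack the decimated measurement set into a rank-2 tensor (list of rows)."""
    return [decimated[i:i + width] for i in range(0, len(decimated), width)]

def ml_pass_fail(tensor, threshold=0.6):
    """Stand-in network: pass if the mean of all entries clears a threshold."""
    flat = [v for row in tensor for v in row]
    return "pass" if sum(flat) / len(flat) >= threshold else "fail"

signal = [0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.9, 0.1]   # measurements on the DUT signal
tensor = to_tensor(decimate(signal, 2), width=2)
result = ml_pass_fail(tensor)
```

The value of the decimation step is that the network sees a fixed, small tensor regardless of the raw sweep length.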

SOC CHIP DISTRIBUTED SIMULATION AND VERIFICATION PLATFORM AND METHOD
20250076378 · 2025-03-06

The present disclosure provides an SoC chip distributed simulation and verification platform and method, and relates to the field of chip verification technologies. The distributed simulation and verification platform includes the component modules of an SoC chip; each module has its own verification platform, and each verification platform runs in a separate simulation process. Virtual connections between the modules are implemented through their respective verification platforms to achieve system-level function simulation and verification. A virtual connection technology is used to connect the Testbench platforms of the modules or IPs, implementing virtual integration of the modules or IPs and thereby completing distributed simulation and verification of the system function of the SoC chip.
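An illustrative model of the virtual connection idea: per-module testbench objects are bound to named ports on a router that forwards transactions between them. In a real platform each testbench would run in its own simulation process and the connection would use inter-process transport (sockets or shared memory); the in-process router below is purely a sketch:

```python
# Illustrative only: an in-process stand-in for the virtual connection
# between per-module Testbench platforms.

class Testbench:
    """One module's verification platform, reduced to a transaction inbox."""
    def __init__(self, name):
        self.name = name
        self.inbox = []
    def receive(self, payload):
        self.inbox.append(payload)

class VirtualConnection:
    """Routes transactions between module testbenches by port name."""
    def __init__(self):
        self.endpoints = {}
    def bind(self, port, testbench):
        self.endpoints[port] = testbench
    def send(self, port, payload):
        self.endpoints[port].receive(payload)

cpu_tb, bus_tb = Testbench("cpu"), Testbench("bus")
link = VirtualConnection()
link.bind("cpu.axi", cpu_tb)
link.bind("bus.axi", bus_tb)
link.send("bus.axi", {"op": "write", "addr": 0x1000, "data": 0xAB})
link.send("cpu.axi", {"op": "ack"})
```

The binding step is what "virtually integrates" independently running module platforms into one system-level simulation.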

Identifying defect sensitive codes for testing devices with input or output code

In one embodiment, a method of operating a computational system to evaluate a device under test, where the device under test is operable to receive a digital code input and to produce a corresponding output in response. The method injects a plurality of simulated faults into a pre-silicon model of the device under test. For each injected simulated fault, the method inputs a plurality of digital codes to the model. For each input digital code, the method selectively stores the input digital code if a difference between the corresponding output for the input digital code and a no-fault output for that input exceeds a predetermined threshold value.
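The fault-sweep loop above can be sketched directly. The ideal code-to-output model, the two example fault models, and the absolute-difference metric are all assumptions for illustration:

```python
# Sketch: for each simulated fault, keep the input codes whose faulty
# output deviates from the no-fault output by more than a threshold.

def golden_model(code):
    """Assumed no-fault pre-silicon model: ideal code-to-output mapping."""
    return code * 1.0

def find_sensitive_codes(fault_models, codes, threshold):
    """Map each fault name to the list of defect-sensitive input codes."""
    sensitive = {}
    for fault_name, faulty_model in fault_models.items():
        sensitive[fault_name] = [
            c for c in codes
            if abs(faulty_model(c) - golden_model(c)) > threshold
        ]
    return sensitive

faults = {
    "stuck_bit2": lambda c: (c | 0b100) * 1.0,   # bit 2 stuck at 1
    "gain_err": lambda c: c * 1.02,              # 2% gain error
}
result = find_sensitive_codes(faults, range(8), threshold=0.5)
```

The stored codes form a compact production test set: only the codes that actually expose some modeled defect need to be applied to silicon.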

DYNAMIC SELECTION METHOD, SYSTEM AND DEVICE FOR DATA REGION APPLIED TO INTEGRATED CIRCUIT DEVICE AND COMPUTER-READABLE STORAGE MEDIUM
20250258223 · 2025-08-14

The present application provides a dynamic selection method, system and device for a data region applied to an integrated circuit device, and a computer-readable storage medium. Through the technical solutions provided by the present application, dynamic selection of the data region can be achieved, the strict dependence on an original dataset during data region selection is removed, the complexity of data region selection for different integrated circuit devices is reduced, and the universality and reusability of data region selection are improved. The approach has a wide application range and is well suited to broad adoption.