ADAPTIVE TEST GENERATION FOR FUNCTIONAL COVERAGE CLOSURE

    Publication No.: US20250165689A1

    Publication Date: 2025-05-22

    Application No.: US18840532

    Application Date: 2023-02-28

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus for adaptively generating test stimuli for testing a hardware design for an integrated circuit. In one aspect, a system comprises one or more computers configured to obtain graph data representing a coverage dependency graph associated with a hardware design for an integrated circuit. At each of a plurality of simulation cycles, the one or more computers obtain a set of coverage statistics as of the simulation cycle and update respective distribution constraints for one or more random variables in a set of random variables using the coverage dependency graph and the coverage statistics. After the updating, the one or more computers generate one or more test stimuli by, for each test stimulus, sampling a respective value for each of the random variables based on the respective distribution constraints. The one or more computers simulate a performance of the integrated circuit for each of the test stimuli.
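    As a rough illustration of the loop this abstract describes, the sketch below (plain Python) biases the sampling of random variables toward values feeding coverage bins that are still uncovered. The coverage dependency graph, the constraint-update rule, and the simulator stub are made-up placeholders for illustration, not the patented method.

```python
import random
from collections import defaultdict

# Coverage dependency graph: each coverage bin depends on certain random variables.
COVERAGE_GRAPH = {
    "bin_load_imm": ["opcode", "addr_mode"],
    "bin_mul_any": ["opcode"],
}

# Random variables of the test stimulus and their legal values (placeholders).
VARIABLES = {
    "opcode": ["ADD", "MUL", "LOAD"],
    "addr_mode": ["IMM", "REG"],
}

sample_counts = defaultdict(lambda: defaultdict(int))  # var -> value -> times sampled

def update_constraints(coverage_stats):
    """Recompute per-variable distribution constraints from the coverage statistics.

    Variables that feed still-uncovered bins get weights inversely proportional to
    how often each value has already been tried, pushing sampling toward
    under-exercised values (an illustrative policy, not the patented one).
    """
    constraints = {}
    for var, values in VARIABLES.items():
        feeds_uncovered = any(
            var in deps and not coverage_stats[b] for b, deps in COVERAGE_GRAPH.items()
        )
        if feeds_uncovered:
            constraints[var] = {v: 1.0 / (1 + sample_counts[var][v]) for v in values}
        else:
            constraints[var] = {v: 1.0 for v in values}  # uniform once covered
    return constraints

def sample_stimulus(constraints):
    """Draw one test stimulus by sampling each random variable under its constraints."""
    stimulus = {}
    for var, weights in constraints.items():
        values, w = zip(*weights.items())
        stimulus[var] = random.choices(values, weights=w)[0]
        sample_counts[var][stimulus[var]] += 1
    return stimulus

def simulate(stimulus):
    """Stand-in for simulating the hardware design: report which bins were hit."""
    return {
        "bin_load_imm": stimulus["opcode"] == "LOAD" and stimulus["addr_mode"] == "IMM",
        "bin_mul_any": stimulus["opcode"] == "MUL",
    }

coverage_stats = {b: False for b in COVERAGE_GRAPH}
for cycle in range(20):                                   # simulation cycles
    constraints = update_constraints(coverage_stats)      # update distribution constraints
    hits = simulate(sample_stimulus(constraints))         # generate and simulate a stimulus
    coverage_stats = {b: coverage_stats[b] or hits[b] for b in coverage_stats}
```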

    Computational graph optimization
    Invention Grant

    Publication No.: US12205038B2

    Publication Date: 2025-01-21

    Application No.: US18321691

    Application Date: 2023-05-22

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for optimizing the execution of the operations of a neural network. One of the methods includes obtaining data representing a graph characterizing a plurality of operations of a neural network, wherein each node of the graph characterizes an operation of the neural network and each edge of the graph characterizes data dependency between the operations; processing the data representing the graph using a graph embedding neural network to generate an embedding of the graph; and processing the embedding of the graph using a policy neural network to generate a task output, wherein the task output comprises, for each of the plurality of operations of the neural network, a respective decision for a particular optimization task.
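    The PyTorch sketch below illustrates the general shape of the described pipeline: a simple message-passing graph embedder produces per-operation embeddings, and a policy head emits one decision per operation (here, a toy device assignment). The module names, dependency matrix, and sizes are assumptions for illustration; the patent does not specify these networks.

```python
import torch
import torch.nn as nn

class GraphEmbedder(nn.Module):
    """Toy message-passing embedder over the operation dependency graph."""
    def __init__(self, feat_dim, hidden_dim, rounds=2):
        super().__init__()
        self.encode = nn.Linear(feat_dim, hidden_dim)
        self.message = nn.Linear(hidden_dim, hidden_dim)
        self.rounds = rounds

    def forward(self, node_feats, adjacency):
        # node_feats: [num_ops, feat_dim]; adjacency: [num_ops, num_ops] dependency matrix
        h = torch.relu(self.encode(node_feats))
        for _ in range(self.rounds):
            # aggregate neighbour states along data-dependency edges
            h = torch.relu(self.message(adjacency @ h) + h)
        return h                                  # per-operation embeddings

class PolicyHead(nn.Module):
    """One categorical decision per operation for a particular optimization task."""
    def __init__(self, hidden_dim, num_choices):
        super().__init__()
        self.out = nn.Linear(hidden_dim, num_choices)

    def forward(self, embeddings):
        return torch.softmax(self.out(embeddings), dim=-1)

num_ops, feat_dim, hidden_dim, num_devices = 5, 8, 16, 4
node_feats = torch.randn(num_ops, feat_dim)          # per-op features (type, shape, ...)
adjacency = torch.eye(num_ops)                       # toy dependency structure
embedder, policy = GraphEmbedder(feat_dim, hidden_dim), PolicyHead(hidden_dim, num_devices)
decisions = policy(embedder(node_feats, adjacency)).argmax(dim=-1)
print(decisions)                                     # a placement decision for each operation
```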

    Mixture of experts neural networks

    Publication No.: US12067476B2

    Publication Date: 2024-08-20

    Application No.: US18244171

    Application Date: 2023-09-08

    Applicant: Google LLC

    CPC classification number: G06N3/045 G06N3/08

    Abstract: A system includes a neural network that includes a Mixture of Experts (MoE) subnetwork between a first neural network layer and a second neural network layer. The MoE subnetwork includes multiple expert neural networks. Each expert neural network is configured to process a first layer output generated by the first neural network layer to generate a respective expert output. The MoE subnetwork further includes a gating subsystem that selects, based on the first layer output, one or more of the expert neural networks and determines a respective weight for each selected expert neural network, provides the first layer output as input to each of the selected expert neural networks, combines the expert outputs generated by the selected expert neural networks in accordance with the weights for the selected expert neural networks to generate an MoE output, and provides the MoE output as input to the second neural network layer.
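    A minimal MoE layer of the kind the abstract describes might look like the PyTorch sketch below. The expert architecture, the top-k gating rule, and the dimensions are assumptions for illustration, not the claimed design.

```python
import torch
import torch.nn as nn

class MixtureOfExperts(nn.Module):
    """Gating subsystem selects a few experts per input and combines their outputs."""
    def __init__(self, dim, num_experts=8, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        self.gate = nn.Linear(dim, num_experts)   # gating subsystem
        self.top_k = top_k

    def forward(self, first_layer_output):
        # first_layer_output: [batch, dim] from the first neural network layer
        scores = self.gate(first_layer_output)                 # [batch, num_experts]
        weights, indices = scores.topk(self.top_k, dim=-1)     # select experts and weights
        weights = torch.softmax(weights, dim=-1)
        moe_output = torch.zeros_like(first_layer_output)
        for slot in range(self.top_k):
            for b in range(first_layer_output.size(0)):
                expert = self.experts[indices[b, slot].item()]
                # weight each selected expert's output and accumulate
                moe_output[b] += weights[b, slot] * expert(first_layer_output[b])
        return moe_output                                      # fed to the second layer

x = torch.randn(3, 32)                         # stand-in for the first layer's output
print(MixtureOfExperts(32)(x).shape)           # torch.Size([3, 32])
```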

    GENERATING INTEGRATED CIRCUIT FLOORPLANS USING NEURAL NETWORKS

    Publication No.: US20230394203A1

    Publication Date: 2023-12-07

    Application No.: US18310427

    Application Date: 2023-05-01

    Applicant: Google LLC

    CPC classification number: G06F30/27 G06F30/392

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating a computer chip floorplan. One of the methods includes obtaining netlist data for a computer chip; and generating a computer chip floorplan, comprising placing a respective node at each time step in a sequence comprising a plurality of time steps, the placing comprising, for each time step: generating an input representation for the time step; processing the input representation using a node placement neural network having a plurality of network parameters, wherein the node placement neural network is configured to process the input representation in accordance with current values of the network parameters to generate a score distribution over a plurality of positions on the surface of the computer chip; and assigning the node to be placed at the time step to a position from the plurality of positions using the score distribution.
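    The placement loop described here can be pictured with the PyTorch sketch below: at each time step a placeholder policy network scores every cell of a discretised chip surface, occupied cells are masked out, and the next netlist node is assigned greedily from the resulting score distribution. The grid size, input representation, and network are illustrative assumptions, not the patented node placement neural network.

```python
import torch
import torch.nn as nn

GRID = 8                                   # chip surface discretised into GRID x GRID cells
NODE_FEAT = 16

policy = nn.Sequential(                    # stand-in for the node placement neural network
    nn.Linear(NODE_FEAT + GRID * GRID, 128),
    nn.ReLU(),
    nn.Linear(128, GRID * GRID),           # one score per candidate position
)

num_nodes = 5
node_features = torch.randn(num_nodes, NODE_FEAT)    # per-node netlist features (placeholder)
occupied = torch.zeros(GRID * GRID)                  # which cells are already taken

placements = []
for t in range(num_nodes):                           # one node placed per time step
    # input representation: current node's features plus the partial placement so far
    inp = torch.cat([node_features[t], occupied])
    scores = policy(inp)
    scores = scores.masked_fill(occupied.bool(), float("-inf"))  # forbid occupied cells
    probs = torch.softmax(scores, dim=-1)            # score distribution over positions
    pos = torch.argmax(probs).item()                 # greedy assignment from the distribution
    occupied[pos] = 1.0
    placements.append(divmod(pos, GRID))             # (row, col) on the chip surface

print(placements)
```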

    Generating integrated circuit floorplans using neural networks

    Publication No.: US11100266B2

    Publication Date: 2021-08-24

    Application No.: US16889130

    Application Date: 2020-06-01

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating a computer chip floorplan. One of the methods includes obtaining netlist data for a computer chip; and generating a computer chip floorplan, comprising placing a respective node at each time step in a sequence comprising a plurality of time steps, the placing comprising, for each time step: generating an input representation for the time step; processing the input representation using a node placement neural network having a plurality of network parameters, wherein the node placement neural network is configured to process the input representation in accordance with current values of the network parameters to generate a score distribution over a plurality of positions on the surface of the computer chip; and assigning the node to be placed at the time step to a position from the plurality of positions using the score distribution.

    Mixture of experts neural networks

    Publication No.: US11790214B2

    Publication Date: 2023-10-17

    Application No.: US16879187

    Application Date: 2020-05-20

    Applicant: Google LLC

    CPC classification number: G06N3/045 G06N3/08

    Abstract: A system includes a neural network that includes a Mixture of Experts (MoE) subnetwork between a first neural network layer and a second neural network layer. The MoE subnetwork includes multiple expert neural networks. Each expert neural network is configured to process a first layer output generated by the first neural network layer to generate a respective expert output. The MoE subnetwork further includes a gating subsystem that selects, based on the first layer output, one or more of the expert neural networks and determines a respective weight for each selected expert neural network, provides the first layer output as input to each of the selected expert neural networks, combines the expert outputs generated by the selected expert neural networks in accordance with the weights for the selected expert neural networks to generate an MoE output, and provides the MoE output as input to the second neural network layer.
