Neuromorphic accelerator multitasking

    Publication Number: US11366998B2

    Publication Date: 2022-06-21

    Application Number: US15937486

    Application Date: 2018-03-27

    Abstract: Systems and techniques for neuromorphic accelerator multitasking are described herein. A neuron address translation unit (NATU) may receive a spike message. Here, the spike message includes a physical neuron identifier (PNID) of a neuron causing the spike. The NATU may then translate the PNID into a network identifier (NID) and a local neuron identifier (LNID). The NATU locates synapse data based on the NID and communicates the synapse data and the LNID to an axon processor.
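
    The NATU step described above amounts to an address translation. The C sketch below illustrates one plausible form of it, under the assumption that each multitasked network owns a contiguous range of physical neuron IDs, so a PNID maps to (NID, LNID) by a range lookup plus an offset subtraction; the table layout and names are illustrative, not the patented design.

```c
/*
 * Minimal sketch, not the patented design: a NATU table in which each
 * network owns a contiguous range of physical neuron IDs, so translating
 * a PNID is a range lookup plus an offset subtraction. Field names and
 * the linear table walk are assumptions for illustration.
 */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t nid;        /* network identifier (NID)              */
    uint32_t pnid_base;  /* first physical neuron of this network */
    uint32_t pnid_count; /* number of neurons allocated to it     */
} natu_entry_t;

/* Translate a physical neuron ID into (NID, LNID); returns 0 on a miss. */
static int natu_translate(const natu_entry_t *tbl, int entries, uint32_t pnid,
                          uint32_t *nid, uint32_t *lnid)
{
    for (int i = 0; i < entries; i++) {
        if (pnid >= tbl[i].pnid_base &&
            pnid < tbl[i].pnid_base + tbl[i].pnid_count) {
            *nid  = tbl[i].nid;
            *lnid = pnid - tbl[i].pnid_base; /* local ID is the offset */
            return 1;
        }
    }
    return 0;
}

int main(void)
{
    /* Two networks multitasked on one accelerator (example values only). */
    natu_entry_t table[] = {
        { .nid = 0, .pnid_base = 0,    .pnid_count = 1024 },
        { .nid = 1, .pnid_base = 1024, .pnid_count = 512  },
    };
    uint32_t nid, lnid;
    if (natu_translate(table, 2, 1100, &nid, &lnid))
        printf("spike from PNID 1100 -> NID %u, LNID %u\n",
               (unsigned)nid, (unsigned)lnid);
    return 0;
}
```

    In a scheme like this, looking up synapse data by NID is what lets several networks share the same physical neuron resources, which is consistent with the multitasking idea in the abstract.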

    SPIKING NEURAL NETWORK ACCELERATOR USING EXTERNAL MEMORY

    Publication Number: US20190042920A1

    Publication Date: 2019-02-07

    Application Number: US15853282

    Application Date: 2017-12-22

    Abstract: System configurations and techniques for implementation of a neural network in neuromorphic hardware with use of external memory resources are described herein. In an example, a system for processing spiking neural network operations includes: a plurality of neural processor clusters to maintain neurons of the neural network, with the clusters including circuitry to determine respective states of the neurons and internal memory to store the respective states of the neurons; and a plurality of axon processors to process synapse data of synapses in the neural network, with the processors including circuitry to retrieve synapse data of respective synapses from external memory, evaluate the synapse data based on a received spike message, and propagate another spike message to another neuron based on the synapse data. Further details for use and access of the external memory and processing configurations for such neural network operations are also disclosed.
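
    As a rough illustration of the split the abstract describes, the C sketch below keeps neuron state in a small "internal" array standing in for cluster memory and fetches synapse rows from a larger "external" array when a spike arrives; the data layouts and the simple threshold update are assumptions for illustration only.

```c
/*
 * Minimal sketch under assumed data layouts: neuron state lives in a small
 * "internal" array standing in for cluster memory, while synapse rows are
 * fetched from a larger "external" array when a spike arrives. The structs
 * and the threshold update are illustrative assumptions.
 */
#include <stdint.h>
#include <stdio.h>

#define N_NEURONS 4
#define FANOUT    2

typedef struct { uint32_t target; float weight; } synapse_t;
typedef struct { float potential; float threshold; } neuron_t;

/* Stand-in for external memory: one synapse row per presynaptic neuron. */
static synapse_t external_mem[N_NEURONS][FANOUT] = {
    { {1, 0.6f}, {2, 0.5f} },
    { {2, 0.7f}, {3, 0.4f} },
    { {3, 0.9f}, {0, 0.2f} },
    { {0, 0.3f}, {1, 0.8f} },
};

/* Stand-in for the neural processor cluster's internal memory. */
static neuron_t neurons[N_NEURONS] = {
    {0.0f, 1.0f}, {0.0f, 1.0f}, {0.0f, 1.0f}, {0.0f, 1.0f}
};

/* Axon-processor step: fetch the spiking neuron's synapses and apply them. */
static void deliver_spike(uint32_t src)
{
    for (int i = 0; i < FANOUT; i++) {
        synapse_t s = external_mem[src][i]; /* fetch from external memory  */
        neuron_t *n = &neurons[s.target];
        n->potential += s.weight;           /* integrate into neuron state */
        if (n->potential >= n->threshold) { /* threshold crossed: fire     */
            n->potential = 0.0f;
            printf("neuron %u -> neuron %u fires\n",
                   (unsigned)src, (unsigned)s.target);
        }
    }
}

int main(void)
{
    deliver_spike(0); /* 0 -> 1 (+0.6) and 0 -> 2 (+0.5): sub-threshold */
    deliver_spike(3); /* 3 -> 1 (+0.8): neuron 1 reaches 1.4 and fires  */
    return 0;
}
```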

    PROCEDURAL NEURAL NETWORK SYNAPTIC CONNECTION MODES

    Publication Number: US20190042915A1

    Publication Date: 2019-02-07

    Application Number: US15941621

    Application Date: 2018-03-30

    Abstract: Systems and techniques for procedural neural network synaptic connection modes are described herein. A synapse list header may be loaded based on a received spike indication. A spike target generator may then execute a generator function identified in the synapse list header to produce a spike message. Here, the generator function accepts a current synapse value as input to produce the spike message. The spike message may then be communicated to a neuron.
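
    The generator-function idea can be sketched as follows: a synapse list header stores a function pointer and its parameters, and the spike target generator expands it into spike messages on demand instead of walking an explicit target list. The header fields and the strided generator below are assumptions, and the abstract's "current synapse value" is represented here simply by the synapse index.

```c
/*
 * Illustrative sketch only: a synapse list header that names a generator
 * function plus its parameters, expanded on demand into spike messages
 * instead of reading an explicit target list. Header fields and the
 * strided generator are assumptions.
 */
#include <stdint.h>
#include <stdio.h>

typedef struct { uint32_t target; float weight; } spike_msg_t;

/* Generator signature: header parameters plus the current synapse index. */
typedef spike_msg_t (*generator_fn)(uint32_t base, uint32_t stride,
                                    float weight, uint32_t i);

typedef struct {
    generator_fn gen;    /* which generator function to run */
    uint32_t     count;  /* how many synapses to generate   */
    uint32_t     base;   /* generator parameters            */
    uint32_t     stride;
    float        weight;
} synapse_list_header_t;

/* A strided fan-out: targets base, base+stride, base+2*stride, ... */
static spike_msg_t strided_gen(uint32_t base, uint32_t stride,
                               float weight, uint32_t i)
{
    spike_msg_t m = { base + i * stride, weight };
    return m;
}

/* Spike target generator: run the header's generator for each synapse. */
static void expand(const synapse_list_header_t *h)
{
    for (uint32_t i = 0; i < h->count; i++) {
        spike_msg_t m = h->gen(h->base, h->stride, h->weight, i);
        printf("spike message -> neuron %u (weight %.2f)\n",
               (unsigned)m.target, m.weight);
    }
}

int main(void)
{
    /* Header loaded in response to a spike indication, in the abstract's terms. */
    synapse_list_header_t header = { strided_gen, 4, 100, 3, 0.25f };
    expand(&header);
    return 0;
}
```

    A procedural mode like this trades a little computation for memory, since regular connection patterns no longer need to be stored synapse by synapse.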

    NEUROMORPHIC ACCELERATOR MULTITASKING
    Invention Application

    Publication Number: US20190042930A1

    Publication Date: 2019-02-07

    Application Number: US15937486

    Application Date: 2018-03-27

    Abstract: Systems and techniques for neuromorphic accelerator multitasking are described herein. A neuron address translation unit (NATU) may receive a spike message. Here, the spike message includes a physical neuron identifier (PNID) of a neuron causing the spike. The NATU may then translate the PNID into a network identifier (NID) and a local neuron identifier (LNID). The NATU locates synapse data based on the NID and communicates the synapse data and the LNID to an axon processor.

    HARDWARE PROCESSOR HAVING MULTIPLE MEMORY PREFETCHERS AND MULTIPLE PREFETCH FILTERS

    Publication Number: US20240111679A1

    Publication Date: 2024-04-04

    Application Number: US17958334

    Application Date: 2022-10-01

    CPC classification number: G06F12/0862 G06F9/3455 G06F12/0882

    Abstract: Techniques for prefetching by a hardware processor are described. In certain examples, a hardware processor includes execution circuitry, cache memories, and prefetcher circuitry. The execution circuitry is to execute instructions to access data at a memory address. The cache memories include a first cache memory at a first cache level and a second cache memory at a second cache level. The prefetcher circuitry is to prefetch the data from a system memory to at least one of the cache memories, and it includes a first-level prefetcher to prefetch the data to the first cache memory, a second-level prefetcher to prefetch the data to the second cache memory, and a plurality of prefetch filters. One of the prefetch filters is to filter exclusively for the first-level prefetcher. Another of the prefetch filters is to maintain a history of demand and prefetch accesses to pages in the system memory and to use the history to provide training information to the second-level prefetcher.
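
    The two filter roles named in the abstract can be sketched in C as below: one filter screens requests for the first-level prefetcher only, and a per-page history filter turns observed accesses into training information for the second-level prefetcher. The table sizes, indexing, next-line candidate, and stride-based training rule are all assumptions, not the claimed design.

```c
/*
 * Loose sketch of the two filter roles in the abstract: a filter dedicated to
 * the first-level prefetcher, and a per-page history whose observations feed
 * back as training information to the second-level prefetcher. Sizes and the
 * stride rule are assumptions, not the claimed design.
 */
#include <stdint.h>
#include <stdio.h>

#define L1_FILTER_SLOTS 64
#define HIST_SLOTS      16
#define PAGE_SHIFT      12
#define LINE_SHIFT      6

/* First-level filter: drop a prefetch if the same line was issued recently. */
static uint64_t l1_seen[L1_FILTER_SLOTS];

static int l1_filter_allows(uint64_t line)
{
    unsigned slot = (unsigned)(line % L1_FILTER_SLOTS);
    if (l1_seen[slot] == line)
        return 0;          /* duplicate request: filter it out */
    l1_seen[slot] = line;  /* remember it for next time        */
    return 1;
}

/* Second filter: per-page history of accesses, reduced here to a stride
 * hint that "trains" the second-level prefetcher.                        */
typedef struct { uint64_t page; uint64_t last_addr; int64_t stride; } page_hist_t;
static page_hist_t hist[HIST_SLOTS];

static int64_t record_access(uint64_t addr)
{
    uint64_t page = addr >> PAGE_SHIFT;
    page_hist_t *h = &hist[page % HIST_SLOTS];
    h->stride = (h->page == page && h->last_addr)
                    ? (int64_t)addr - (int64_t)h->last_addr  /* training info */
                    : 0;
    h->page = page;
    h->last_addr = addr;
    return h->stride;      /* consumed by the second-level prefetcher */
}

int main(void)
{
    uint64_t demand[] = { 0x1000, 0x1040, 0x1080, 0x10C0 }; /* 64 B stride */
    for (int i = 0; i < 4; i++) {
        uint64_t line = demand[i] >> LINE_SHIFT;
        int64_t stride = record_access(demand[i]);
        if (l1_filter_allows(line + 1)) /* next-line candidate for L1 */
            printf("L1 prefetch line %#llx\n", (unsigned long long)(line + 1));
        if (stride)
            printf("train L2 prefetcher: page %#llx stride %lld\n",
                   (unsigned long long)(demand[i] >> PAGE_SHIFT), (long long)stride);
    }
    return 0;
}
```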

    Spiking neural network accelerator using external memory

    Publication Number: US11593623B2

    Publication Date: 2023-02-28

    Application Number: US15853282

    Application Date: 2017-12-22

    Abstract: System configurations and techniques for implementation of a neural network in neuromorphic hardware with use of external memory resources are described herein. In an example, a system for processing spiking neural network operations includes: a plurality of neural processor clusters to maintain neurons of the neural network, with the clusters including circuitry to determine respective states of the neurons and internal memory to store the respective states of the neurons; and a plurality of axon processors to process synapse data of synapses in the neural network, with the processors including circuitry to retrieve synapse data of respective synapses from external memory, evaluate the synapse data based on a received spike message, and propagate another spike message to another neuron based on the synapse data. Further details for use and access of the external memory and processing configurations for such neural network operations are also disclosed.
