-
Publication No.: GB2604963A
Publication Date: 2022-09-21
Application No.: GB202114617
Application Date: 2021-10-13
Applicant: IBM
Inventor: ALEXANDER ANDREOPOULOS , DHARMENDRA S MODHA , CARMELO DI NOLFO , MYRON D FLICKNER , ANDREW STEPHEN CASSIDY , BRIAN SEISHO TABA , PALLAB DATTA , RATHINAKUMAR APPUSWAMY , JUN SAWADA
IPC: G06N3/063 , G06F30/3308 , G06N3/04
Abstract: Simulation and validation of neural network systems are provided. In various embodiments, a description of an artificial neural network is read. A directed graph is constructed comprising a plurality of edges and a plurality of nodes, each of the plurality of edges corresponding to a queue and each of the plurality of nodes corresponding to a computing function of the neural network system. A graph state is updated over a plurality of time steps according to the description of the neural network, the graph state being defined by the contents of each of the plurality of queues. Each of a plurality of assertions is tested at each of the plurality of time steps, each of the plurality of assertions being a function of a subset of the graph state. Invalidity of the neural network system is indicated for each violation of one of the plurality of assertions.
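The mechanism the abstract describes can be sketched in a few lines: a graph whose edges are queues, whose nodes are computing functions, and whose state is checked against assertions at every time step. The class and method names below are illustrative assumptions, not the patent's actual interface.

```python
from collections import deque

class GraphSimulator:
    """Minimal sketch of a queue-based graph simulation: edges are
    queues, nodes are computing functions, and the graph state is
    the contents of all queues. All names here are assumptions."""

    def __init__(self):
        self.queues = {}   # edge name -> deque (together: the graph state)
        self.nodes = {}    # node name -> (compute_fn, in_edges, out_edges)

    def add_edge(self, edge):
        self.queues[edge] = deque()

    def add_node(self, name, fn, in_edges, out_edges):
        self.nodes[name] = (fn, in_edges, out_edges)

    def step(self):
        # One time step: each node whose input queues are all non-empty
        # consumes one item per input, applies its computing function,
        # and enqueues the result on each output edge.
        for fn, ins, outs in self.nodes.values():
            if all(self.queues[e] for e in ins):
                args = [self.queues[e].popleft() for e in ins]
                result = fn(*args)
                for e in outs:
                    self.queues[e].append(result)

    def check(self, assertions):
        # Each assertion is a predicate over (a subset of) the graph
        # state; any violation marks the modeled system invalid.
        return [name for name, pred in assertions.items()
                if not pred(self.queues)]
```

A caller would build the graph from the network description, then alternate `step()` and `check()` over the simulated time steps, reporting invalidity whenever `check()` returns a non-empty list.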
-
Publication No.: GB2586763B
Publication Date: 2021-08-11
Application No.: GB202018196
Application Date: 2019-03-28
Applicant: IBM
Inventor: ANDREW STEPHEN CASSIDY , MYRON DALE FLICKNER , PALLAB DATTA , HARTMUT PENNER , RATHINAKUMAR APPUSWAMY , JUN SAWADA , JOHN VERNON ARTHUR , DHARMENDRA SHANTILAL MODHA , STEVEN KYLE ESSER , BRIAN SEISHO TABA , JENNIFER KLAMO
IPC: G06N3/063
Abstract: Neural inference processors are provided. In various embodiments, a processor includes a plurality of cores. Each core includes a neural computation unit, an activation memory, and a local controller. The neural computation unit is adapted to apply a plurality of synaptic weights to a plurality of input activations to produce a plurality of output activations. The activation memory is adapted to store the input activations and the output activations. The local controller is adapted to load the input activations from the activation memory to the neural computation unit and to store the plurality of output activations from the neural computation unit to the activation memory. The processor includes a neural network model memory adapted to store network parameters, including the plurality of synaptic weights. The processor includes a global scheduler operatively coupled to the plurality of cores, adapted to provide the synaptic weights from the neural network model memory to each core.
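The per-core data flow the abstract describes (local controller loads activations, the computation unit applies weights, results are stored back; a global scheduler distributes weights from a shared model memory) can be sketched as follows. The class structure, method names, and the dense matrix-vector form of the computation are illustrative assumptions.

```python
class NeuralCore:
    """Sketch of one core: an activation memory plus a compute step
    in which the local controller loads inputs, the neural computation
    unit applies the synaptic weights, and outputs are stored back."""

    def __init__(self):
        self.activation_memory = {}   # key -> activation vector

    def compute(self, weights, in_key, out_key):
        # Load input activations from activation memory.
        x = self.activation_memory[in_key]
        # Neural computation unit: apply synaptic weights (here a
        # plain matrix-vector product, an assumption for illustration).
        y = [sum(w * xi for w, xi in zip(row, x)) for row in weights]
        # Store output activations back to activation memory.
        self.activation_memory[out_key] = y

class GlobalScheduler:
    """Sketch of the global scheduler: it holds the neural network
    model memory and provides each layer's weights to every core."""

    def __init__(self, model_memory, cores):
        self.model_memory = model_memory   # layer name -> weight matrix
        self.cores = cores

    def run_layer(self, layer, in_key, out_key):
        weights = self.model_memory[layer]
        for core in self.cores:
            core.compute(weights, in_key, out_key)
```

The key structural point mirrored here is the separation of concerns: weights live in a single model memory behind the scheduler, while activations stay local to each core.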
-
Publication No.: GB2590888A
Publication Date: 2021-07-07
Application No.: GB202106472
Application Date: 2019-09-25
Applicant: IBM
Inventor: JOHN VERNON ARTHUR , ANDREW STEPHEN CASSIDY , MYRON FLICKNER , PALLAB DATTA , HARTMUT PENNER , RATHINAKUMAR APPUSWAMY , JUN SAWADA , DHARMENDRA MODHA , STEVEN KYLE ESSER , BRIAN SEISHO TABA , JENNIFER KLAMO
Abstract: Systems for neural network computation are provided. A neural network processor comprises a plurality of neural cores. The neural network processor has one or more processor precisions per activation. The processor is configured to accept data having a processor feature dimension. A transformation circuit is coupled to the neural network processor, and is adapted to: receive an input data tensor having an input precision per channel at one or more features; transform the input data tensor from the input precision to the processor precision; divide the input data into a plurality of blocks, each block conforming to one of the processor feature dimensions; and provide each of the plurality of blocks to one of the plurality of neural cores. The neural network processor is adapted to compute, by the plurality of neural cores, the output of one or more neural network layers.
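The transformation circuit's two steps (precision conversion, then blocking along the feature dimension) can be sketched as below. The scale-based quantization scheme and the function name are assumptions for illustration; the patent does not specify the conversion.

```python
def prepare_for_cores(tensor, scale, processor_feature_dim):
    """Sketch of the transformation circuit: convert each value from
    the input precision to the processor precision (here, simple
    scale-and-round quantization, an assumed scheme), then split the
    feature axis into blocks matching the processor feature dimension,
    one block per neural core."""
    # Input precision -> processor precision.
    quantized = [round(v / scale) for v in tensor]
    # Divide into blocks conforming to the processor feature dimension.
    blocks = [quantized[i:i + processor_feature_dim]
              for i in range(0, len(quantized), processor_feature_dim)]
    return blocks
```

Each returned block would then be handed to one neural core, which computes its share of the layer output at the processor's native precision.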
-
Publication No.: GB2588719A
Publication Date: 2021-05-05
Application No.: GB202017726
Application Date: 2019-06-05
Applicant: IBM
Inventor: ANDREW STEPHEN CASSIDY , MYRON DALE FLICKNER , PALLAB DATTA , HARTMUT PENNER , RATHINAKUMAR APPUSWAMY , JUN SAWADA , JOHN VERNON ARTHUR , JENNIFER KLAMO , BRIAN SEISHO TABA , STEVEN KYLE ESSER , DHARMENDRA SHANTILAL MODHA
Abstract: Neural network processing hardware using parallel computational architectures with reconfigurable core-level and vector-level parallelism is provided. In various embodiments, a neural network model memory is adapted to store a neural network model comprising a plurality of layers. Each layer has at least one dimension and comprises a plurality of synaptic weights. A plurality of neural cores is provided. Each neural core includes a computation unit and an activation memory. The computation unit is adapted to apply a plurality of synaptic weights to a plurality of input activations to produce a plurality of output activations. The computation unit has a plurality of vector units. The activation memory is adapted to store the input activations and the output activations. The system is adapted to partition the plurality of cores into a plurality of partitions based on dimensions of the layer and the vector units.
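The reconfigurable core-level parallelism the abstract describes, partitioning the cores based on a layer's dimensions and the per-core vector units, can be sketched with a simple sizing rule. The rule below (enough vector lanes per partition to cover one layer dimension) is an illustrative assumption, not the patent's actual policy.

```python
def partition_cores(num_cores, layer_dim, vector_units_per_core):
    """Sketch of partitioning a plurality of cores based on a layer
    dimension and the vector units per core. Sizing rule (assumed):
    each partition gets enough cores for its combined vector lanes
    to cover the layer dimension in parallel."""
    # Ceiling division without floats: cores needed per partition.
    cores_per_partition = max(1, -(-layer_dim // vector_units_per_core))
    # Group core indices into consecutive partitions of that size.
    partitions = [list(range(i, min(i + cores_per_partition, num_cores)))
                  for i in range(0, num_cores, cores_per_partition)]
    return partitions
```

For example, 8 cores with 4 vector units each and a layer dimension of 8 would yield four partitions of two cores, so four slices of the layer can run concurrently; a wider layer would shift parallelism from the partition level to the vector level.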
-