-
Publication No.: GB2557780B
Publication Date: 2022-02-09
Application No.: GB201803975
Application Date: 2017-03-09
Applicant: IBM
Inventor: FILIPP AKOPYAN, RODRIGO ALVAREZ-ICAZA, JOHN VERNON ARTHUR, ANDREW STEPHEN CASSIDY, STEVEN KYLE ESSER, BRYAN LAWRENCE JACKSON, PAUL MEROLLA, DHARMENDRA SHANTILAL MODHA, JUN SAWADA
IPC: G06N3/063
Abstract: A multiplexed neural core circuit according to one embodiment comprises, for an integer multiplexing factor T that is greater than zero, T sets of electronic neurons, T sets of electronic axons, where each set of the T sets of electronic axons corresponds to one of the T sets of electronic neurons, and a synaptic interconnection network comprising a plurality of electronic synapses that each interconnect a single electronic axon to a single electronic neuron, where the interconnection network interconnects each set of the T sets of electronic axons to its corresponding set of electronic neurons.
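The time-multiplexing scheme in this abstract can be illustrated with a minimal Python sketch. This is not the patented circuit; the class name `MultiplexedCore`, the slot-selection rule `t mod T`, and the synapse layout are all assumptions for illustration.

```python
# Sketch of a time-multiplexed neural core: for multiplexing factor T,
# each of the T axon sets drives its corresponding neuron set through
# its own slice of the synaptic interconnection network.
class MultiplexedCore:
    def __init__(self, T, axons_per_set, neurons_per_set):
        self.T = T
        # synapses[s][a][n] == 1 means axon a of set s connects to neuron n of set s.
        self.synapses = [[[1 if (a + n + s) % 2 == 0 else 0
                           for n in range(neurons_per_set)]
                          for a in range(axons_per_set)]
                         for s in range(T)]

    def step(self, t, axon_spikes):
        """Integrate spikes on the axon set selected by time slot t mod T."""
        s = t % self.T
        return [sum(spike * self.synapses[s][a][n]
                    for a, spike in enumerate(axon_spikes))
                for n in range(len(self.synapses[s][0]))]

core = MultiplexedCore(T=2, axons_per_set=4, neurons_per_set=4)
print(core.step(0, [1, 1, 1, 1]))  # → [2, 2, 2, 2]
```

Because synapses only connect an axon set to its corresponding neuron set, one physical crossbar can serve all T sets by stepping through the slots in time.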
-
Publication No.: GB2586556A
Publication Date: 2021-02-24
Application No.: GB202018026
Application Date: 2019-03-28
Applicant: IBM
Inventor: DHARMENDRA SHANTILAL MODHA, JOHN VERNON ARTHUR, JUN SAWADA, STEVEN KYLE ESSER, RATHINAKUMAR APPUSWAMY, BRIAN SEISHO TABA, ANDREW STEPHEN CASSIDY, PALLAB DATTA, MYRON DALE FLICKNER, HARTMUT PENNER, JENNIFER KLAMO
Abstract: Neural inference chips and cores adapted to provide time, space, and energy efficient neural inference via parallelism and on-chip memory are provided. In various embodiments, the neural inference chips comprise: a plurality of neural cores interconnected by an on-chip network; a first on-chip memory for storing a neural network model, the first on-chip memory being connected to each of the plurality of cores by the on-chip network; a second on-chip memory for storing input and output data, the second on-chip memory being connected to each of the plurality of cores by the on-chip network.
-
Publication No.: GB2585615A
Publication Date: 2021-01-13
Application No.: GB202016300
Application Date: 2019-03-11
Applicant: IBM
Inventor: JUN SAWADA, DHARMENDRA SHANTILAL MODHA, JOHN VERNON ARTHUR, STEVEN KYLE ESSER, BRIAN SEISHO TABA, ANDREW STEPHEN CASSIDY, PALLAB DATTA, MYRON DALE FLICKNER, HARTMUT PENNER, JENNIFER KLAMO, RATHINAKUMAR APPUSWAMY
Abstract: Massively parallel neural inference computing elements are provided. A plurality of multipliers is arranged in a plurality of equal-sized groups. Each of the plurality of multipliers is adapted to, in parallel, apply a weight to an input activation to generate an output. A plurality of adders is operatively coupled to one of the groups of multipliers. Each of the plurality of adders is adapted to, in parallel, add the outputs of the multipliers within its associated group to generate a partial sum. A plurality of function blocks is operatively coupled to one of the plurality of adders. Each of the plurality of function blocks is adapted to, in parallel, apply a function to the partial sum of its associated adder to generate an output value.
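The multiply / group-add / activation pipeline described in this abstract can be sketched as follows. Group sizes, the weight layout, and the choice of ReLU for the function block are assumptions for illustration; the abstract does not fix a particular activation function.

```python
# Sketch of the pipeline: each group of multipliers weights the inputs,
# an adder forms the group's partial sum, and a function block applies
# an activation to that sum.
def neural_inference(activations, weight_groups):
    outputs = []
    for weights in weight_groups:            # one group of multipliers per adder
        products = [w * a for w, a in zip(weights, activations)]  # parallel multiplies
        partial_sum = sum(products)          # the group's adder
        outputs.append(max(0, partial_sum))  # function block (ReLU assumed)
    return outputs

# Two equal-sized groups over a 3-element activation vector:
print(neural_inference([1, 2, 3], [[1, 0, -1], [2, 2, 2]]))  # → [0, 12]
```

In hardware all groups would operate concurrently; the sequential loop here only models the dataflow, not the timing.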
-
Publication No.: GB2581904A
Publication Date: 2020-09-02
Application No.: GB202007034
Application Date: 2018-10-12
Applicant: IBM
Inventor: ANDREW STEPHEN CASSIDY, FILIPP AKOPYAN, JOHN VERNON ARTHUR, DHARMENDRA SHANTILAL MODHA, PAUL MEROLLA, JUN SAWADA, MICHAEL VINCENT DEBOLE
IPC: H04L45/02
Abstract: Memory-mapped interfaces for message passing computing systems are provided. According to various embodiments, a write request is received. The write request comprises write data and a write address. The write address is a memory address within a memory map. The write address is translated into a neural network address. The neural network address identifies at least one input location of a destination neural network. The write data is sent via a network according to the neural network address to the at least one input location of the destination neural network. A message is received via the network from a source neural network. The message comprises data and at least one address. A location in a buffer is determined based on the at least one address. The data is stored at the location in the buffer. The buffer is accessible via the memory map.
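The write path in this abstract can be sketched in Python. The linear address layout, the base address `MAP_BASE`, and the constant `AXONS_PER_CORE` are hypothetical; the patent does not disclose a specific encoding.

```python
# Sketch of the memory-mapped write path: a write address inside the
# memory map is translated to a (core, axon) neural-network address,
# and the write data is delivered to that input location.
AXONS_PER_CORE = 256
MAP_BASE = 0x40000000

def translate(write_addr):
    """Translate a memory-map address into a neural network address."""
    offset = write_addr - MAP_BASE
    core, axon = divmod(offset, AXONS_PER_CORE)
    return core, axon

def write(network, write_addr, data):
    core, axon = translate(write_addr)
    network.setdefault(core, {})[axon] = data  # deliver to the input location

net = {}
write(net, MAP_BASE + 256 * 3 + 7, 1)  # lands at core 3, axon 7
print(net)  # → {3: {7: 1}}
```

The inverse direction in the abstract (buffering messages from a source network for memory-mapped reads) would use the same address arithmetic in reverse.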
-
Publication No.: GB2585615B
Publication Date: 2021-05-19
Application No.: GB202016300
Application Date: 2019-03-11
Applicant: IBM
Inventor: JUN SAWADA, DHARMENDRA SHANTILAL MODHA, JOHN VERNON ARTHUR, STEVEN KYLE ESSER, BRIAN SEISHO TABA, ANDREW STEPHEN CASSIDY, PALLAB DATTA, MYRON DALE FLICKNER, HARTMUT PENNER, JENNIFER KLAMO, RATHINAKUMAR APPUSWAMY
Abstract: Massively parallel neural inference computing elements are provided. A plurality of multipliers is arranged in a plurality of equal-sized groups. Each of the plurality of multipliers is adapted to, in parallel, apply a weight to an input activation to generate an output. A plurality of adders is operatively coupled to one of the groups of multipliers. Each of the plurality of adders is adapted to, in parallel, add the outputs of the multipliers within its associated group to generate a partial sum. A plurality of function blocks is operatively coupled to one of the plurality of adders. Each of the plurality of function blocks is adapted to, in parallel, apply a function to the partial sum of its associated adder to generate an output value.
-
Publication No.: GB2569074A
Publication Date: 2019-06-05
Application No.: GB201904766
Application Date: 2017-09-26
Applicant: IBM
Inventor: DHARMENDRA SHANTILAL MODHA
IPC: G06N3/02
Abstract: A scalable stream synaptic supercomputer for extreme throughput neural networks is provided. The firing state of a plurality of neurons of a first neurosynaptic core is determined substantially in parallel. The firing state of the plurality of neurons is delivered to at least one additional neurosynaptic core substantially in parallel.
-
Publication No.: GB2553451A
Publication Date: 2018-03-07
Application No.: GB201716188
Application Date: 2016-01-22
Applicant: IBM
Inventor: ARNON AMIR, RATHINAKUMAR APPUSWAMY, PALLAB DATTA, BENJAMIN GORDON SHAW, MYRON DALE FLICKNER, PAUL MEROLLA, DHARMENDRA SHANTILAL MODHA
Abstract: One embodiment of the invention provides a system for mapping a neural network onto a neurosynaptic substrate. The system comprises a metadata analysis unit for analyzing metadata information associated with one or more portions of an adjacency matrix representation of the neural network, and a mapping unit for mapping the one or more portions of the matrix representation onto the neurosynaptic substrate based on the metadata information.
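The mapping idea in this abstract can be illustrated with a small sketch: tile the adjacency matrix into crossbar-sized blocks and use per-block metadata to decide placement. The tile size, the nonzero-count metadata, and the skip-empty-blocks policy are assumptions for illustration, not the patented mapping algorithm.

```python
# Sketch: tile an adjacency-matrix neural network into blocks the size of
# a substrate crossbar; metadata about each block (here, its synapse count)
# guides which blocks are mapped onto the substrate.
def map_to_substrate(adjacency, tile):
    placements = []
    rows, cols = len(adjacency), len(adjacency[0])
    for r in range(0, rows, tile):
        for c in range(0, cols, tile):
            block = [row[c:c + tile] for row in adjacency[r:r + tile]]
            nonzero = sum(v != 0 for row in block for v in row)
            if nonzero:                       # metadata decision: skip empty blocks
                placements.append({"row": r, "col": c, "synapses": nonzero})
    return placements

adj = [[1, 0, 0, 1],
       [0, 0, 0, 0],
       [0, 1, 0, 0],
       [0, 0, 1, 0]]
print(map_to_substrate(adj, tile=2))
```

Skipping all-zero blocks shows why metadata matters: sparse networks map onto far fewer physical crossbars than the raw matrix dimensions suggest.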
-
Publication No.: GB2581904B
Publication Date: 2022-11-16
Application No.: GB202007034
Application Date: 2018-10-12
Applicant: IBM
Inventor: ANDREW STEPHEN CASSIDY, FILIPP AKOPYAN, JOHN VERNON ARTHUR, DHARMENDRA SHANTILAL MODHA, PAUL MEROLLA, JUN SAWADA, MICHAEL VINCENT DEBOLE
IPC: G06N3/063
Abstract: Memory-mapped interfaces for message passing computing systems are provided. According to various embodiments, a write request is received. The write request comprises write data and a write address. The write address is a memory address within a memory map. The write address is translated into a neural network address. The neural network address identifies at least one input location of a destination neural network. The write data is sent via a network according to the neural network address to the at least one input location of the destination neural network. A message is received via the network from a source neural network. The message comprises data and at least one address. A location in a buffer is determined based on the at least one address. The data is stored at the location in the buffer. The buffer is accessible via the memory map.
-
Publication No.: GB2569074B
Publication Date: 2022-03-23
Application No.: GB201904766
Application Date: 2017-09-26
Applicant: IBM
Inventor: DHARMENDRA SHANTILAL MODHA
IPC: G06N3/02
Abstract: A scalable stream synaptic supercomputer for extreme throughput neural networks is provided. The firing state of a plurality of neurons of a first neurosynaptic core is determined substantially in parallel. The firing state of the plurality of neurons is delivered to at least one additional neurosynaptic core substantially in parallel.
-
Publication No.: GB2586763B
Publication Date: 2021-08-11
Application No.: GB202018196
Application Date: 2019-03-28
Applicant: IBM
Inventor: ANDREW STEPHEN CASSIDY, MYRON DALE FLICKNER, PALLAB DATTA, HARTMUT PENNER, RATHINAKUMAR APPUSWAMY, JUN SAWADA, JOHN VERNON ARTHUR, DHARMENDRA SHANTILAL MODHA, STEVEN KYLE ESSER, BRIAN SEISHO TABA, JENNIFER KLAMO
IPC: G06N3/063
Abstract: Neural inference processors are provided. In various embodiments, a processor includes a plurality of cores. Each core includes a neural computation unit, an activation memory, and a local controller. The neural computation unit is adapted to apply a plurality of synaptic weights to a plurality of input activations to produce a plurality of output activations. The activation memory is adapted to store the input activations and the output activations. The local controller is adapted to load the input activations from the activation memory to the neural computation unit and to store the plurality of output activations from the neural computation unit to the activation memory. The processor includes a neural network model memory adapted to store network parameters, including the plurality of synaptic weights. The processor includes a global scheduler operatively coupled to the plurality of cores, adapted to provide the synaptic weights from the neural network model memory to each core.
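The organization in this abstract can be sketched as follows: each core holds an activation memory and a neural computation unit, and a global scheduler pushes shared weights from the model memory to every core. The class names, the matrix-vector compute, and the broadcast policy are illustrative assumptions.

```python
# Sketch of the processor organization: cores apply scheduler-provided
# synaptic weights to locally stored activations, then write the output
# activations back to their activation memory.
class Core:
    def __init__(self, input_activations):
        self.activation_memory = list(input_activations)

    def compute(self, weights):
        # Neural computation unit: weights applied to the loaded activations.
        out = [sum(w * a for w, a in zip(row, self.activation_memory))
               for row in weights]
        self.activation_memory = out       # store outputs back to activation memory
        return out

def run(cores, model_memory):
    """Global scheduler: provide the shared weights to each core."""
    return [core.compute(model_memory) for core in cores]

model = [[1, 1], [1, -1]]                  # shared neural network model memory
cores = [Core([2, 3]), Core([5, 1])]
print(run(cores, model))                   # → [[5, -1], [6, 4]]
```

Keeping one model memory and distributing it to many cores is what lets different cores process different activations with the same network parameters.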