-
Publication No.: GB2588719A
Publication Date: 2021-05-05
Application No.: GB202017726
Application Date: 2019-06-05
Applicant: IBM
Inventor: ANDREW STEPHEN CASSIDY, MYRON DALE FLICKNER, PALLAB DATTA, HARTMUT PENNER, RATHINAKUMAR APPUSWAMY, JUN SAWADA, JOHN VERNON ARTHUR, JENNIFER KLAMO, BRIAN SEISHO TABA, STEVEN KYLE ESSER, DHARMENDRA SHANTILAL MODHA
Abstract: Neural network processing hardware using parallel computational architectures with reconfigurable core-level and vector-level parallelism is provided. In various embodiments, a neural network model memory is adapted to store a neural network model comprising a plurality of layers. Each layer has at least one dimension and comprises a plurality of synaptic weights. A plurality of neural cores is provided. Each neural core includes a computation unit and an activation memory. The computation unit is adapted to apply a plurality of synaptic weights to a plurality of input activations to produce a plurality of output activations. The computation unit has a plurality of vector units. The activation memory is adapted to store the input activations and the output activations. The system is adapted to partition the plurality of cores into a plurality of partitions based on dimensions of the layer and the vector units.
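The abstract describes partitioning a pool of neural cores based on a layer's dimensions and the width of each core's vector units. A minimal sketch of that idea, with the partitioning heuristic and all names being illustrative assumptions rather than the patented method:

```python
# Hypothetical sketch: split a pool of neural cores into partitions
# whose size is derived from a layer's channel dimension and the number
# of lanes in each core's vector unit. Not the actual patented scheme.

def partition_cores(num_cores, layer_dims, vector_width):
    """Group core indices into partitions for one layer.

    num_cores    -- total cores available on the chip
    layer_dims   -- (height, width, channels) of the layer
    vector_width -- number of lanes in each core's vector unit
    """
    height, width, channels = layer_dims
    # Cores needed so the vector lanes cover the channel dimension.
    cores_per_group = max(1, -(-channels // vector_width))  # ceil division
    partition_size = min(num_cores, cores_per_group)
    cores = list(range(num_cores))
    # Slice the core pool into contiguous partitions of that size.
    return [cores[i:i + partition_size]
            for i in range(0, num_cores, partition_size)]
```

With 8 cores, a layer of 64 channels, and 32-lane vector units, each partition holds 2 cores, so the chip is reconfigured into 4 partitions.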
-
Publication No.: GB2586556B
Publication Date: 2021-08-11
Application No.: GB202018026
Application Date: 2019-03-28
Applicant: IBM
Inventor: DHARMENDRA SHANTILAL MODHA, JOHN VERNON ARTHUR, JUN SAWADA, STEVEN KYLE ESSER, RATHINAKUMAR APPUSWAMY, BRIAN SEISHO TABA, ANDREW STEPHEN CASSIDY, PALLAB DATTA, MYRON DALE FLICKNER, HARTMUT PENNER, JENNIFER KLAMO
Abstract: Neural inference chips and cores adapted to provide time, space, and energy efficient neural inference via parallelism and on-chip memory are provided. In various embodiments, the neural inference chips comprise: a plurality of neural cores interconnected by an on-chip network; a first on-chip memory for storing a neural network model, the first on-chip memory being connected to each of the plurality of cores by the on-chip network; a second on-chip memory for storing input and output data, the second on-chip memory being connected to each of the plurality of cores by the on-chip network.
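The two-memory organization in this abstract — one on-chip memory for the model, a second for input/output data, both reachable from every core — can be modeled loosely as follows; the class names are assumptions and the on-chip network is stood in for by shared Python references:

```python
# Toy model (assumed structure, not the actual chip) of the abstract's
# organization: a first on-chip memory holds the neural network model,
# a second holds input/output data, and both are connected to every
# core, modeled here as references shared across all core objects.

class NeuralCore:
    def __init__(self, model_memory, data_memory):
        self.model_memory = model_memory  # shared weight store
        self.data_memory = data_memory    # shared activation store

    def infer(self, key):
        # Toy computation: scale the stored input by the stored weight.
        return self.model_memory[key] * self.data_memory[key]

class NeuralInferenceChip:
    def __init__(self, num_cores):
        self.model_memory = {}  # first on-chip memory: network model
        self.data_memory = {}   # second on-chip memory: input/output data
        self.cores = [NeuralCore(self.model_memory, self.data_memory)
                      for _ in range(num_cores)]
```

Because every core holds a reference to the same two memories, a weight written once into the model memory is immediately visible to all cores, mirroring the shared on-chip memory the claim describes.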
-
Publication No.: GB2587175A
Publication Date: 2021-03-17
Application No.: GB202100512
Application Date: 2019-06-13
Applicant: IBM
Inventor: ANDREW STEPHEN CASSIDY, PALLAB DATTA, JENNIFER KLAMO, JUN SAWADA, RATHINAKUMAR APPUSWAMY, STEVEN KYLE ESSER, DHARMENDRA SHANTILAL MODHA, BRIAN SEISHO TABA, JOHN VERNON ARTHUR, MYRON DALE FLICKNER, HARTMUT PENNER
Abstract: Hardware neural network processors are provided. A neural core includes a weight memory, an activation memory, a vector-matrix multiplier, and a vector processor. The vector-matrix multiplier is adapted to receive a weight matrix from the weight memory, receive an activation vector from the activation memory, and compute a vector-matrix multiplication of the weight matrix and the activation vector. The vector processor is adapted to receive one or more input vectors from one or more vector sources and perform one or more vector functions on the input vectors to yield an output vector. In some embodiments, a programmable controller is adapted to configure and operate the neural core.
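The datapath this abstract describes — a vector-matrix multiply followed by a vector processor applying elementwise functions — can be sketched in pure Python; the function names here are assumptions, not the patent's terminology:

```python
# Illustrative sketch of the core's datapath: a vector-matrix multiply
# of a weight matrix with an activation vector, then a vector processor
# that applies an elementwise function across one or more input vectors.

def vector_matrix_multiply(weights, activations):
    """Compute weights @ activations, with the matrix given as rows."""
    return [sum(w * a for w, a in zip(row, activations))
            for row in weights]

def vector_processor(vectors, func):
    """Apply an elementwise vector function across the input vectors."""
    return [func(*elems) for elems in zip(*vectors)]

# Example: multiply, then add a bias vector in the vector processor.
weights = [[1, 0], [0, 2]]
activations = [3, 4]
partial = vector_matrix_multiply(weights, activations)          # [3, 8]
bias = [1, 1]
output = vector_processor([partial, bias], lambda p, b: p + b)  # [4, 9]
```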
-
Publication No.: GB2586763A
Publication Date: 2021-03-03
Application No.: GB202018196
Application Date: 2019-03-28
Applicant: IBM
Inventor: ANDREW STEPHEN CASSIDY, MYRON DALE FLICKNER, PALLAB DATTA, HARTMUT PENNER, RATHINAKUMAR APPUSWAMY, JUN SAWADA, JOHN VERNON ARTHUR, DHARMENDRA SHANTILAL MODHA, STEVEN KYLE ESSER, BRIAN SEISHO TABA, JENNIFER KLAMO
IPC: G06N3/063
Abstract: Neural inference processors are provided. In various embodiments, a processor includes a plurality of cores. Each core includes a neural computation unit, an activation memory, and a local controller. The neural computation unit is adapted to apply a plurality of synaptic weights to a plurality of input activations to produce a plurality of output activations. The activation memory is adapted to store the input activations and the output activations. The local controller is adapted to load the input activations from the activation memory to the neural computation unit and to store the plurality of output activations from the neural computation unit to the activation memory. The processor includes a neural network model memory adapted to store network parameters, including the plurality of synaptic weights. The processor includes a global scheduler operatively coupled to the plurality of cores, adapted to provide the synaptic weights from the neural network model memory to each core.
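The division of labor in this abstract — a global scheduler distributing weights from a shared model memory to the cores, and a local controller per core moving activations between its activation memory and its computation unit — can be illustrated as follows (all class and method names are assumptions):

```python
# Assumed-structure sketch of the control flow described: a global
# scheduler pushes synaptic weights from a shared model memory to each
# core; each core's local controller loads input activations, runs the
# neural computation unit, and stores the outputs back.

class Core:
    def __init__(self):
        self.weights = None
        self.activation_memory = {}

    def load_weights(self, weights):
        self.weights = weights  # delivered by the global scheduler

    def compute(self, in_key, out_key):
        # Local controller: load inputs, compute, store outputs.
        inputs = self.activation_memory[in_key]
        outputs = [sum(w * x for w, x in zip(row, inputs))
                   for row in self.weights]
        self.activation_memory[out_key] = outputs
        return outputs

class GlobalScheduler:
    def __init__(self, model_memory, cores):
        self.model_memory = model_memory  # neural network model memory
        self.cores = cores

    def distribute(self, layer_name):
        # Provide the layer's synaptic weights to every core.
        for core in self.cores:
            core.load_weights(self.model_memory[layer_name])
```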
-
Publication No.: GB2557780A
Publication Date: 2018-06-27
Application No.: GB201803975
Application Date: 2017-03-09
Applicant: IBM
Inventor: FILIPP AKOPYAN, RODRIGO ALVAREZ-ICAZA, JOHN VERNON ARTHUR, ANDREW STEPHEN CASSIDY, STEVEN KYLE ESSER, BRYAN LAWRENCE JACKSON, PAUL MEROLLA, DHARMENDRA SHANTILAL MODHA, JUN SAWADA
IPC: G06N3/063
Abstract: A multiplexed neural core circuit (100) comprises, for an integer multiplexing factor T that is greater than zero, T sets of electronic neurons, T sets of electronic axons, where each set of the T sets of electronic axons corresponds to one of the T sets of electronic neurons, and a synaptic crossbar or interconnection network (110b) comprising a plurality of electronic synapses that each interconnects a single electronic axon to a single electronic neuron, where the synaptic crossbar or interconnection network (110b) interconnects each set of the T sets of electronic axons to its corresponding set of electronic neurons.
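The multiplexing the abstract describes — one physical synaptic crossbar reused across T time slots, each slot pairing one set of axons with its corresponding set of neurons — can be rendered as a toy loop; the sequential-slot interpretation and all names are assumptions:

```python
# Toy rendering of the time-multiplexed core: for multiplexing factor T,
# the same synaptic crossbar serves T axon sets in T slots, each slot
# driving the corresponding set of neurons. Assumed behavior, not the
# actual circuit.

def multiplexed_step(crossbar, axon_sets):
    """crossbar[i][j] == 1 iff axon j connects to neuron i.

    axon_sets is a list of T axon spike vectors; returns the T neuron
    input vectors, one per time slot, all computed with one crossbar.
    """
    results = []
    for axons in axon_sets:  # one slot per axon set, sharing the crossbar
        neuron_inputs = [sum(s * a for s, a in zip(row, axons))
                         for row in crossbar]
        results.append(neuron_inputs)
    return results
```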
-
Publication No.: GB2497008B
Publication Date: 2016-10-26
Application No.: GB201300773
Application Date: 2011-04-28
Applicant: IBM
Inventor: ANTHONY NDIRANGO, DHARMENDRA SHANTILAL MODHA, STEVEN KYLE ESSER
Abstract: Embodiments of the invention relate to canonical spiking neurons for spatiotemporal associative memory. An aspect of the invention provides a spatiotemporal associative memory including a plurality of electronic neurons having a layered neural net relationship with directional synaptic connectivity. The plurality of electronic neurons configured to detect the presence of a spatiotemporal pattern in a real-time data stream, and extract the spatiotemporal pattern. The plurality of electronic neurons are further configured to, based on learning rules, store the spatiotemporal pattern in the plurality of electronic neurons, and upon being presented with a version of the spatiotemporal pattern, retrieve the stored spatiotemporal pattern.
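The store-and-retrieve behavior of the associative memory this abstract describes can be sketched loosely: store a spatiotemporal pattern (a sequence of spike vectors), then retrieve the full stored pattern when presented with a noisy version. Hamming-distance matching here stands in for the patent's learning rules, and every name is an assumption:

```python
# Loose sketch of spatiotemporal associative memory: patterns are
# sequences of binary spike vectors; retrieval returns the stored
# pattern whose frames best match a (possibly degraded) cue.

def hamming(a, b):
    """Count positions where two spike vectors differ."""
    return sum(x != y for x, y in zip(a, b))

class SpatiotemporalMemory:
    def __init__(self):
        self.patterns = []  # each pattern: a list of spike vectors

    def store(self, pattern):
        # Store a copy of the spatiotemporal pattern.
        self.patterns.append([list(frame) for frame in pattern])

    def retrieve(self, cue):
        # Return the stored pattern closest to the cue, frame by frame.
        def distance(pattern):
            return sum(hamming(f, c) for f, c in zip(pattern, cue))
        return min(self.patterns, key=distance)
```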