-
Publication No.: GB2590888B
Publication Date: 2021-10-27
Application No.: GB202106472
Application Date: 2019-09-25
Applicant: IBM
Inventor: JOHN VERNON ARTHUR, ANDREW STEPHEN CASSIDY, MYRON FLICKNER, PALLAB DATTA, HARTMUT PENNER, RATHINAKUMAR APPUSWAMY, JUN SAWADA, DHARMENDRA MODHA, STEVEN KYLE ESSER, BRIAN SEISHO TABA, JENNIFER KLAMO
Abstract: Systems for neural network computation are provided. A neural network processor comprises a plurality of neural cores. The neural network processor has one or more processor precisions per activation. The processor is configured to accept data having a processor feature dimension. A transformation circuit is coupled to the neural network processor, and is adapted to: receive an input data tensor having an input precision per channel at one or more features; transform the input data tensor from the input precision to the processor precision; divide the input data into a plurality of blocks, each block conforming to one of the processor feature dimensions; provide each of the plurality of blocks to one of the plurality of neural cores. The neural network processor is adapted to compute, by the plurality of neural cores, output of one or more neural network layers.
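The transformation described above (quantize to the processor precision, then split along the feature dimension into core-sized blocks) can be sketched as follows. The function name, the rounding-and-clamping quantization scheme, and the default sizes are illustrative assumptions, not the patented circuit.

```python
def transform_and_block(features, block_size=32, precision_bits=8):
    """Quantize a list of feature values to the processor precision,
    then divide them along the feature dimension into blocks, one per
    neural core. (Illustrative sketch; quantization scheme assumed.)"""
    lo, hi = -(1 << (precision_bits - 1)), (1 << (precision_bits - 1)) - 1
    # Transform: round and clamp each value to the processor precision.
    q = [max(lo, min(hi, round(v))) for v in features]
    # Pad the feature dimension up to a multiple of the block size.
    q += [0] * ((-len(q)) % block_size)
    # Divide into blocks conforming to the processor feature dimension.
    return [q[i:i + block_size] for i in range(0, len(q), block_size)]
```

For example, 70 input features with a block size of 32 yield three blocks, the last zero-padded.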
-
Publication No.: GB2585615B
Publication Date: 2021-05-19
Application No.: GB202016300
Application Date: 2019-03-11
Applicant: IBM
Inventor: JUN SAWADA, DHARMENDRA SHANTILAL MODHA, JOHN VERNON ARTHUR, STEVEN KYLE ESSER, BRIAN SEISHO TABA, ANDREW STEPHEN CASSIDY, PALLAB DATTA, MYRON DALE FLICKNER, HARTMUT PENNER, JENNIFER KLAMO, RATHINAKUMAR APPUSWAMY
Abstract: Massively parallel neural inference computing elements are provided. A plurality of multipliers is arranged in a plurality of equal-sized groups. Each of the plurality of multipliers is adapted to, in parallel, apply a weight to an input activation to generate an output. A plurality of adders is operatively coupled to one of the groups of multipliers. Each of the plurality of adders is adapted to, in parallel, add the outputs of the multipliers within its associated group to generate a partial sum. A plurality of function blocks is operatively coupled to one of the plurality of adders. Each of the plurality of function blocks is adapted to, in parallel, apply a function to the partial sum of its associated adder to generate an output value.
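The multiplier-adder-function pipeline above can be modeled in a few lines. The group-wise loop stands in for hardware that runs all groups in parallel; the ReLU default is an assumption, since the abstract does not name a specific function.

```python
def inference_element(activations, weights, f=lambda s: max(s, 0.0)):
    """activations and weights are lists of equal-sized groups.
    Each group's multipliers apply a weight to an input activation;
    one adder per group reduces the products to a partial sum; one
    function block per adder transforms the partial sum. (Sketch.)"""
    outputs = []
    for a_group, w_group in zip(activations, weights):
        products = [a * w for a, w in zip(a_group, w_group)]  # multipliers
        partial_sum = sum(products)                           # adder
        outputs.append(f(partial_sum))                        # function block
    return outputs
```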
-
Publication No.: GB2606600A
Publication Date: 2022-11-16
Application No.: GB202116839
Application Date: 2021-11-23
Applicant: IBM
Inventor: JUN SAWADA , MYRON D FLICKNER , ANDREW STEPHEN CASSIDY , JOHN VERNON ARTHUR , PALLAB DATTA , DHARMENDRA S MODHA , STEVEN KYLE ESSER , BRIAN SEISHO TABA , JENNIFER KLAMO , RATHINAKUMAR APPUSWAMY , FILIPP AKOPYAN , CARLOS ORTEGA OTERO
Abstract: A neural inference chip is provided, including at least one neural inference core. The at least one neural inference core is adapted to apply a plurality of synaptic weights to a plurality of input activations to produce a plurality of intermediate outputs. The at least one neural inference core comprises a plurality of activation units configured to receive the plurality of intermediate outputs and produce a plurality of activations. Each of the plurality of activation units is configured to apply a configurable activation function to its input. The configurable activation function has at least a re-ranging term and a scaling term, the re-ranging term determining the range of the activations and the scaling term determining the scale of the activations. Each of the plurality of activation units is configured to obtain the re-ranging term and the scaling term from one or more look-up tables.
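A minimal sketch of such a configurable activation function, with both terms read from look-up tables indexed by a configuration value. The exact functional form used here (scale, then clamp to the range) is an assumption for illustration; the abstract specifies only that the function has a re-ranging term and a scaling term obtained from LUTs.

```python
def make_activation(rerange_lut, scale_lut):
    """Build an activation whose re-ranging and scaling terms are read
    from look-up tables. (Functional form assumed: scale, then clamp.)"""
    def activation(x, idx):
        scale = scale_lut[idx]   # scaling term: sets the activation scale
        rng = rerange_lut[idx]   # re-ranging term: bounds the output range
        y = x * scale
        return max(-rng, min(rng, y))
    return activation
```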
-
Publication No.: GB2557780B
Publication Date: 2022-02-09
Application No.: GB201803975
Application Date: 2017-03-09
Applicant: IBM
Inventor: FILIPP AKOPYAN, RODRIGO ALVAREZ-ICAZA, JOHN VERNON ARTHUR, ANDREW STEPHEN CASSIDY, STEVEN KYLE ESSER, BRYAN LAWRENCE JACKSON, PAUL MEROLLA, DHARMENDRA SHANTILAL MODHA, JUN SAWADA
IPC: G06N3/063
Abstract: A multiplexed neural core circuit according to one embodiment comprises, for an integer multiplexing factor T that is greater than zero, T sets of electronic neurons, T sets of electronic axons, where each set of the T sets of electronic axons corresponds to one of the T sets of electronic neurons, and a synaptic interconnection network comprising a plurality of electronic synapses that each interconnect a single electronic axon to a single electronic neuron, where the interconnection network interconnects each set of the T sets of electronic axons to its corresponding set of electronic neurons.
-
Publication No.: GB2586556B
Publication Date: 2021-08-11
Application No.: GB202018026
Application Date: 2019-03-28
Applicant: IBM
Inventor: DHARMENDRA SHANTILAL MODHA, JOHN VERNON ARTHUR, JUN SAWADA, STEVEN KYLE ESSER, RATHINAKUMAR APPUSWAMY, BRIAN SEISHO TABA, ANDREW STEPHEN CASSIDY, PALLAB DATTA, MYRON DALE FLICKNER, HARTMUT PENNER, JENNIFER KLAMO
Abstract: Neural inference chips and cores adapted to provide time, space, and energy efficient neural inference via parallelism and on-chip memory are provided. In various embodiments, the neural inference chips comprise: a plurality of neural cores interconnected by an on-chip network; a first on-chip memory for storing a neural network model, the first on-chip memory being connected to each of the plurality of cores by the on-chip network; a second on-chip memory for storing input and output data, the second on-chip memory being connected to each of the plurality of cores by the on-chip network.
-
Publication No.: GB2587175A
Publication Date: 2021-03-17
Application No.: GB202100512
Application Date: 2019-06-13
Applicant: IBM
Inventor: ANDREW STEPHEN CASSIDY, PALLAB DATTA, JENNIFER KLAMO, JUN SAWADA, RATHINAKUMAR APPUSWAMY, STEVEN KYLE ESSER, DHARMENDRA SHANTILAL MODHA, BRIAN SEISHO TABA, JOHN VERNON ARTHUR, MYRON DALE FLICKNER, HARTMUT PENNER
Abstract: Hardware neural network processors are provided. A neural core includes a weight memory, an activation memory, a vector-matrix multiplier, and a vector processor. The vector-matrix multiplier is adapted to receive a weight matrix from the weight memory, receive an activation vector from the activation memory, and compute a vector-matrix multiplication of the weight matrix and the activation vector. The vector processor is adapted to receive one or more input vectors from one or more vector sources and perform one or more vector functions on the input vectors to yield an output vector. In some embodiments, a programmable controller is adapted to configure and operate the neural core.
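The core just described (weight memory, activation memory, vector-matrix multiplier, vector processor) can be sketched as a small class. The class name, the dict-based memories, and the method names are illustrative assumptions, not the patented design.

```python
class NeuralCore:
    """Sketch of the described core: weight memory, activation memory,
    a vector-matrix multiplier, and a vector processor. (Assumed names.)"""
    def __init__(self):
        self.weight_memory = {}       # name -> matrix (list of rows)
        self.activation_memory = {}   # name -> vector (list of values)

    def vmm(self, w_name, a_name):
        """Vector-matrix multiplier: compute W @ a."""
        W = self.weight_memory[w_name]
        a = self.activation_memory[a_name]
        return [sum(w_ij * a_j for w_ij, a_j in zip(row, a)) for row in W]

    def vector_op(self, fn, *vec_names):
        """Vector processor: apply an elementwise function across one
        or more input vectors to yield an output vector."""
        vectors = [self.activation_memory[n] for n in vec_names]
        return [fn(*vals) for vals in zip(*vectors)]
```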
-
Publication No.: GB2586763A
Publication Date: 2021-03-03
Application No.: GB202018196
Application Date: 2019-03-28
Applicant: IBM
Inventor: ANDREW STEPHEN CASSIDY, MYRON DALE FLICKNER, PALLAB DATTA, HARTMUT PENNER, RATHINAKUMAR APPUSWAMY, JUN SAWADA, JOHN VERNON ARTHUR, DHARMENDRA SHANTILAL MODHA, STEVEN KYLE ESSER, BRIAN SEISHO TABA, JENNIFER KLAMO
IPC: G06N3/063
Abstract: Neural inference processors are provided. In various embodiments, a processor includes a plurality of cores. Each core includes a neural computation unit, an activation memory, and a local controller. The neural computation unit is adapted to apply a plurality of synaptic weights to a plurality of input activations to produce a plurality of output activations. The activation memory is adapted to store the input activations and the output activations. The local controller is adapted to load the input activations from the activation memory to the neural computation unit and to store the plurality of output activations from the neural computation unit to the activation memory. The processor includes a neural network model memory adapted to store network parameters, including the plurality of synaptic weights. The processor includes a global scheduler operatively coupled to the plurality of cores, adapted to provide the synaptic weights from the neural network model memory to each core.
-
Publication No.: GB2557780A
Publication Date: 2018-06-27
Application No.: GB201803975
Application Date: 2017-03-09
Applicant: IBM
Inventor: FILIPP AKOPYAN, RODRIGO ALVAREZ-ICAZA, JOHN VERNON ARTHUR, ANDREW STEPHEN CASSIDY, STEVEN KYLE ESSER, BRYAN LAWRENCE JACKSON, PAUL MEROLLA, DHARMENDRA SHANTILAL MODHA, JUN SAWADA
IPC: G06N3/063
Abstract: A multiplexed neural core circuit (100) comprises, for an integer multiplexing factor T that is greater than zero, T sets of electronic neurons, T sets of electronic axons, where each set of the T sets of electronic axons corresponds to one of the T sets of electronic neurons, and a synaptic crossbar or interconnection network (110b) comprising a plurality of electronic synapses that each interconnects a single electronic axon to a single electronic neuron, where the synaptic crossbar or interconnection network (110b) interconnects each set of the T sets of electronic axons to its corresponding set of electronic neurons.
-
Publication No.: IL295718B1
Publication Date: 2024-12-01
Application No.: IL29571822
Application Date: 2022-08-17
Applicant: IBM , JUN SAWADA , DHARMENDRA S MODHA , ANDREW STEPHEN CASSIDY , JOHN VERNON ARTHUR , TAPAN KUMAR NAYAK , CARLOS ORTEGA OTERO , BRIAN TABA , FILIPP A AKOPYAN , PALLAB DATTA
Inventor: JUN SAWADA , DHARMENDRA S MODHA , ANDREW STEPHEN CASSIDY , JOHN VERNON ARTHUR , TAPAN KUMAR NAYAK , CARLOS ORTEGA OTERO , BRIAN TABA , FILIPP A AKOPYAN , PALLAB DATTA
Abstract: Neural inference chips for computing neural activations are provided. In various embodiments, a neural inference chip comprises at least one neural core, a memory array, an instruction buffer, and an instruction memory. The instruction buffer has a position corresponding to each of a plurality of elements of the memory array. The instruction memory provides at least one instruction to the instruction buffer. The instruction buffer advances the at least one instruction between positions in the instruction buffer. The instruction buffer provides the at least one instruction to at least one of the plurality of elements of the memory array from its associated position in the instruction buffer when the memory of the at least one of the plurality of elements contains data associated with the at least one instruction. Each element of the memory array provides a data block from its memory to its horizontal buffer in response to the arrival of an associated instruction from the instruction buffer. The horizontal buffer of each element of the memory array provides a data block to the horizontal buffer of another of the elements of the memory array or to the at least one neural core.
-
Publication No.: GB2606596A
Publication Date: 2022-11-16
Application No.: GB202114616
Application Date: 2021-10-13
Applicant: IBM
Inventor: ARNON AMIR , ANDREW STEPHEN CASSIDY , NATHANIEL JOSEPH MCCLATCHEY , JUN SAWADA , DHARMENDRA S MODHA , RATHINAKUMAR APPUSWAMY
Abstract: Chips supporting constant time program control of nested loops are provided. In various embodiments, a chip comprises at least one arithmetic-logic computing unit and a controller operatively coupled to the at least one arithmetic-logic computing unit. The controller is configured according to a program configuration, the program configuration comprising at least one inner loop and at least one outer loop. The controller is configured to cause the at least one arithmetic computing unit to execute a plurality of operations according to the program configuration. The controller is configured to maintain at least a first loop counter and a second loop counter, the first loop counter configured to count a number of executed iterations of the at least one outer loop, and the second loop counter configured to count a number of executed iterations of the at least one inner loop. The controller is configured to provide a first indication of whether the first loop counter corresponds to a last iteration and a second indication of whether the second loop counter corresponds to a last iteration. The controller is configured to alternatively increment, reset, or maintain each of the first and second loop counters according to the first and second indications.
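The counter-update rule above (compute both last-iteration indications, then increment, reset, or maintain each counter) can be expressed as one constant-time step function. The function name and the convention that each call corresponds to one executed inner-loop iteration are assumptions for illustration.

```python
def step_counters(inner, outer, inner_iters, outer_iters):
    """One constant-time update of the nested-loop counters. The two
    last-iteration indications are computed first; each counter is then
    incremented, reset, or maintained based only on those indications."""
    inner_last = inner == inner_iters - 1   # second indication (inner loop)
    outer_last = outer == outer_iters - 1   # first indication (outer loop)
    if not inner_last:
        return inner + 1, outer, False      # increment inner, maintain outer
    if not outer_last:
        return 0, outer + 1, False          # reset inner, increment outer
    return 0, 0, True                       # both loops complete
```

Driving this step function until it signals completion executes exactly `inner_iters * outer_iters` iterations, with no per-iteration branching cost that depends on nesting depth.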
-