Stacked transistor bit-cell for magnetic random access memory

    Publication Number: US11411047B2

    Publication Date: 2022-08-09

    Application Number: US16128422

    Filing Date: 2018-09-11

    Abstract: An apparatus is provided which comprises: a magnetic junction (e.g., a magnetic tunneling junction or spin valve); a structure (e.g., an interconnect) comprising a spin orbit material, the structure adjacent to the magnetic junction; and first and second transistors. The first transistor is coupled to a bit-line and a first word-line and is adjacent to the magnetic junction. The second transistor is coupled to a first select-line and a second word-line and is adjacent to the structure; the interconnect itself is coupled to a second select-line; and the magnetic junction lies between the first and second transistors.
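The bit-cell described above has two distinct access paths: writes drive a charge current through the spin-orbit interconnect (the second transistor and the two select-lines), switching the junction via spin-orbit torque, while reads sense the junction's resistance through the first transistor and the bit-line. The sketch below is a minimal behavioral model of that two-path operation; the class name, resistance values, and polarity convention are assumptions for illustration, not details from the patent.

```python
# Illustrative behavioral model of the stacked two-transistor SOT-MRAM
# bit-cell. Resistance values and naming are assumed, not from the patent.

class SotMramBitCell:
    """An MTJ stacked on a spin-orbit (SO) interconnect, with one access
    transistor on each side of the stack."""

    R_PARALLEL = 5_000       # assumed MTJ resistance, parallel state (ohms)
    R_ANTIPARALLEL = 10_000  # assumed MTJ resistance, antiparallel state

    def __init__(self):
        self.state = 0  # 0 = parallel (low R), 1 = antiparallel (high R)

    def write(self, word_line_2: bool, value: int) -> None:
        # Write path: the second word-line turns on the second transistor,
        # letting charge current flow along the SO interconnect between the
        # first and second select-lines. Spin-orbit torque from that current
        # switches the MTJ free layer; current direction selects the state.
        if word_line_2:
            self.state = value & 1

    def read(self, word_line_1: bool):
        # Read path: the first word-line turns on the first transistor, so a
        # small sense current flows from the bit-line through the MTJ into
        # the SO interconnect; the sensed resistance encodes the stored bit.
        if not word_line_1:
            return None  # transistor off, cell not selected
        resistance = self.R_ANTIPARALLEL if self.state else self.R_PARALLEL
        return 1 if resistance > (self.R_PARALLEL + self.R_ANTIPARALLEL) / 2 else 0


cell = SotMramBitCell()
cell.write(word_line_2=True, value=1)    # SOT write through the interconnect
assert cell.read(word_line_1=True) == 1  # resistive read through the MTJ
```

Separating the write path (through the interconnect) from the read path (through the junction) is what lets the two transistors select the cell independently for each operation.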

    Programmable interface to in-memory cache processor

    Publication Number: US11151046B2

    Publication Date: 2021-10-19

    Application Number: US16921685

    Filing Date: 2020-07-06

    Abstract: The present disclosure is directed to systems and methods of implementing a neural network using in-memory mathematical operations performed by pipelined SRAM architecture (PISA) circuitry disposed in on-chip processor memory circuitry. A high-level compiler may be provided to compile data representative of a multi-layer neural network model and one or more neural network data inputs from a first high-level programming language to an intermediate domain-specific language (DSL). A low-level compiler may be provided to compile the representative data from the intermediate DSL to multiple instruction sets in accordance with an instruction set architecture (ISA), such that each of the multiple instruction sets corresponds to a single respective layer of the multi-layer neural network model. Each of the multiple instruction sets may be assigned to a respective SRAM array of the PISA circuitry for in-memory execution. Thus, the systems and methods described herein beneficially leverage the on-chip processor memory circuitry to perform a relatively large number of in-memory vector/tensor calculations in furtherance of neural network processing without burdening the processor circuitry.
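The abstract describes a two-stage flow: a high-level compiler lowers the network model to an intermediate DSL, a low-level compiler emits one ISA instruction set per layer, and each instruction set is assigned to an SRAM array of the PISA circuitry. The Python sketch below mirrors that flow; the DSL syntax, the LOAD/EXEC/STORE mnemonics, and the round-robin array assignment are invented for illustration and are not specified in the patent.

```python
# Minimal sketch of the two-stage compilation flow: model -> DSL -> per-layer
# ISA instruction sets -> SRAM array assignment. All concrete syntax here is
# assumed for illustration.

from dataclasses import dataclass

@dataclass
class Layer:
    name: str    # e.g. "fc1"
    op: str      # e.g. "matmul", "relu"
    shape: tuple # output shape

def high_level_compile(model):
    """High-level compiler: lower each model layer to one line of an
    intermediate domain-specific language (DSL)."""
    return [f"{layer.op} {layer.name} -> {layer.shape}" for layer in model]

def low_level_compile(dsl):
    """Low-level compiler: emit one ISA instruction set per DSL line,
    i.e. one instruction set per network layer."""
    isa_sets = []
    for line in dsl:
        op, name, _, shape = line.split(maxsplit=3)
        isa_sets.append([f"LOAD {name}.weights",
                         f"EXEC {op.upper()} {shape}",
                         f"STORE {name}.out"])
    return isa_sets

def assign_to_sram_arrays(isa_sets, n_arrays):
    """Map each per-layer instruction set onto an SRAM array of the PISA
    circuitry (simple wraparound here; a real scheduler would do more)."""
    return {i % n_arrays: inst for i, inst in enumerate(isa_sets)}

model = [Layer("fc1", "matmul", (1, 256)), Layer("act1", "relu", (1, 256))]
schedule = assign_to_sram_arrays(low_level_compile(high_level_compile(model)),
                                 n_arrays=4)
```

Running the sketch yields a per-array schedule in which each entry is the instruction set for exactly one layer, matching the one-set-per-layer mapping the abstract describes.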

    PROGRAMMABLE INTERFACE TO IN-MEMORY CACHE PROCESSOR

    Publication Number: US20200334161A1

    Publication Date: 2020-10-22

    Application Number: US16921685

    Filing Date: 2020-07-06

    Abstract: Identical to the abstract of granted patent US11151046B2 above; this record is the pre-grant publication of the same application (US16921685).
