Caching of logical-to-physical mapping information in a memory sub-system

    Publication Number: US12130748B2

    Publication Date: 2024-10-29

    Application Number: US18225958

    Application Date: 2023-07-25

    Inventor: Sanjay Subbarao

    CPC classification number: G06F12/1009 G06F12/0875 G06F2212/608

    Abstract: A request that specifies a logical address associated with a host-initiated operation directed at a first portion of a memory device is received. A logical-to-physical (L2P) table is accessed. The L2P table comprises a mapping between logical addresses and physical addresses in a second portion of the memory device. An entry in the L2P table that corresponds to the logical address is identified and is determined to point to an entry in a read cache table. Based on an entry number of the entry in the read cache table, a chunk address of a chunk from among multiple chunks of a read cache is calculated. A physical address that corresponds to the logical address specified by the request is identified by accessing the chunk of the read cache. The host-initiated operation is performed at a physical location within the first portion of the memory device corresponding to the physical address.
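    As a rough illustration of the chunk-address calculation described in this abstract, the sketch below models the L2P lookup in Python. The chunk size, base address, table layout, and function names are assumptions for readability, not the patented implementation.

```python
# A minimal, hypothetical sketch of the chunk-address calculation described above.
# Names and sizes (CHUNK_SIZE, READ_CACHE_BASE, the dict layout) are assumptions.

CHUNK_SIZE = 4096           # bytes per read-cache chunk (assumed)
READ_CACHE_BASE = 0x100000  # start of the read cache in the second memory portion (assumed)

# L2P table: logical address -> either a direct physical address or a pointer
# into the read cache table (modelled here as a read-cache entry number).
l2p_table = {
    0x10: {"read_cache_entry": 3},        # mapping held in read-cache chunk 3
    0x20: {"physical_address": 0x8F000},  # mapping held directly
}

def resolve(logical_addr):
    entry = l2p_table[logical_addr]
    if "read_cache_entry" in entry:
        # The chunk address is derived from the entry number of the read-cache entry.
        chunk_addr = READ_CACHE_BASE + entry["read_cache_entry"] * CHUNK_SIZE
        # A real controller would now read the chunk at chunk_addr and search it
        # for the physical address that corresponds to logical_addr.
        return ("lookup-in-chunk", chunk_addr)
    return ("direct", entry["physical_address"])

print(resolve(0x10))  # ('lookup-in-chunk', 1060864) -> chunk at 0x103000
print(resolve(0x20))  # ('direct', 585728)
```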

    BLOCK FAILURE PROTECTION FOR ZONE MEMORY SYSTEM

    Publication Number: US20240152424A1

    Publication Date: 2024-05-09

    Application Number: US18406894

    Application Date: 2024-01-08

    Inventor: Sanjay Subbarao

    CPC classification number: G06F11/1068 G06F11/0793 G06F11/1435

    Abstract: Various embodiments provide block failure protection for a memory sub-system that supports zones, such as a memory sub-system that uses a RAIN (redundant array of independent NAND-type flash memory devices) technique for data error-correction. For some embodiments, non-parity zones of a memory sub-system that are filling up at a similar rate are matched together, a parity is generated for stored data from across the matched zones, and the generated parity is stored in a parity zone of the memory device.
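    The sketch below illustrates, under assumed names and a simple XOR parity, how zones filling at a similar rate might be grouped and a parity built for storage in a parity zone. The grouping tolerance and the choice of XOR are illustrative assumptions only.

```python
# Hypothetical sketch of grouping non-parity zones by fill rate and building an
# XOR parity for a parity zone. Not the patented scheme; all values are made up.

from functools import reduce

def match_zones_by_fill_rate(zones, tolerance=0.1):
    """Group non-parity zones whose fill rates are within `tolerance` of each other."""
    ordered = sorted(zones, key=lambda z: z["fill_rate"])
    groups, current = [], [ordered[0]]
    for zone in ordered[1:]:
        if zone["fill_rate"] - current[-1]["fill_rate"] <= tolerance:
            current.append(zone)
        else:
            groups.append(current)
            current = [zone]
    groups.append(current)
    return groups

def build_parity(group):
    """XOR the stored data of the matched zones into a buffer for the parity zone."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
                  (z["data"] for z in group))

zones = [
    {"name": "zone0", "fill_rate": 0.42, "data": b"\x01\x02\x03\x04"},
    {"name": "zone1", "fill_rate": 0.45, "data": b"\x10\x20\x30\x40"},
    {"name": "zone7", "fill_rate": 0.90, "data": b"\xaa\xbb\xcc\xdd"},
]
for group in match_zones_by_fill_rate(zones):
    if len(group) > 1:
        parity = build_parity(group)   # would be written to a parity zone
        print([z["name"] for z in group], parity.hex())  # ['zone0', 'zone1'] 11223344
```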

    Logical-to-physical mapping of data groups with data locality

    Publication Number: US11640354B2

    Publication Date: 2023-05-02

    Application Number: US17572477

    Application Date: 2022-01-10

    Abstract: A system includes a plurality of integrated circuit (IC) dies having memory cells and a processing device that performs operations including generating a number of zone map entries for zones of a logical block address (LBA) space that are sequentially mapped to the physical address space of the plurality of IC dies, wherein each zone map entry corresponds to a respective data group that has been sequentially written to one or more of the IC dies; and generating a die identifier and a block identifier for each data block of multiple data blocks of the respective data group, wherein each data block corresponds to a media block of the plurality of IC dies.
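    A hypothetical data-structure sketch of the zone map described above follows. The class names, the round-robin die assignment, and the sizes are assumptions, not the claimed layout.

```python
# Illustrative sketch (assumed names): one zone map entry per zone of the LBA space,
# plus a die identifier and block identifier for each data block of the data group.

from dataclasses import dataclass, field
from typing import List

@dataclass
class BlockEntry:
    die_id: int     # which IC die holds the media block
    block_id: int   # which block on that die

@dataclass
class ZoneMapEntry:
    start_lba: int                 # first LBA of the zone
    zone_size: int                 # number of LBAs in the zone
    blocks: List[BlockEntry] = field(default_factory=list)  # blocks of the data group

def build_zone_map(num_zones, zone_size, blocks_per_zone, num_dies):
    """Sequentially map zones of the LBA space to dies/blocks (round-robin over dies)."""
    zone_map, next_block = [], 0
    for z in range(num_zones):
        entry = ZoneMapEntry(start_lba=z * zone_size, zone_size=zone_size)
        for _ in range(blocks_per_zone):
            entry.blocks.append(BlockEntry(die_id=next_block % num_dies,
                                           block_id=next_block // num_dies))
            next_block += 1
        zone_map.append(entry)
    return zone_map

zone_map = build_zone_map(num_zones=2, zone_size=1024, blocks_per_zone=4, num_dies=4)
print(zone_map[1].start_lba, [(b.die_id, b.block_id) for b in zone_map[1].blocks])
# 1024 [(0, 1), (1, 1), (2, 1), (3, 1)]
```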

    EXTENDING SIZE OF MEMORY UNIT

    Publication Number: US20220261173A1

    Publication Date: 2022-08-18

    Application Number: US17179059

    Application Date: 2021-02-18

    Inventor: Sanjay Subbarao

    Abstract: Various embodiments described herein provide for extending a size of a memory unit of a memory device, such as a codeword of a page of the memory device, where the memory device can be included in a memory system. In particular, some embodiments implement extending (e.g., increasing) the size of a memory unit (e.g., codeword) to store more data, such as more host data (e.g., user data) and protection data (e.g., parity data), within the memory unit while using a memory unit storage slot (e.g., codeword storage slot in a page) that is smaller in size than the extended memory unit.
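    A minimal sketch of the idea of an extended memory unit that exceeds its storage slot, assuming made-up slot and codeword sizes and a simple split/reassemble scheme; the described embodiments are not limited to this layout.

```python
# Hypothetical sketch, not the patented layout: an "extended" codeword larger than
# its storage slot is split between the slot and an assumed spill area elsewhere
# in the page. Slot and codeword sizes are made-up numbers for illustration.

SLOT_SIZE = 2208        # bytes available in a codeword storage slot (assumed)
EXTENDED_SIZE = 2304    # bytes in the extended codeword (assumed)

def split_extended_codeword(codeword: bytes):
    """Place the first SLOT_SIZE bytes in the slot; the remainder goes to a spill region."""
    assert len(codeword) == EXTENDED_SIZE
    return codeword[:SLOT_SIZE], codeword[SLOT_SIZE:]

def reassemble(slot_part: bytes, spill_part: bytes) -> bytes:
    """Recombine the two pieces before decoding the extended codeword."""
    return slot_part + spill_part

codeword = bytes(range(256)) * 9             # 2304 bytes of example data
slot, spill = split_extended_codeword(codeword)
assert reassemble(slot, spill) == codeword
print(len(slot), len(spill))                 # 2208 96
```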

    QOS TRAFFIC CLASS LATENCY MODEL FOR JUST-IN-TIME (JIT) SCHEDULERS

    Publication Number: US20220197563A1

    Publication Date: 2022-06-23

    Application Number: US17407396

    Application Date: 2021-08-20

    Abstract: The present disclosure describes a simulator that simulates a QoS latency model for a just-in-time (JIT) scheduler in a memory sub-system. In one embodiment, a system receives a workload profile specifying a sequence of memory operations, wherein each memory operation is associated with a type of the memory operation. The system identifies a traffic class associated with each memory operation of the sequence of memory operations. The system queues each memory operation of the sequence of memory operations, based on the traffic class associated with the memory operation, in a scheduling pool of a number of scheduling pools. The system selects, based on a quality of service (QoS) policy, from the scheduling pools, one or more memory operations to be serviced within a scheduling time frame. The system determines, based on a latency profile, latency periods for each memory operation of the one or more memory operations.
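    The steps in this abstract map naturally onto a small simulation loop. The sketch below queues operations into per-traffic-class pools, drains them under a simple priority-style QoS policy, and attaches latencies from a latency profile; the traffic classes, policy, and latency values are all assumptions, not the disclosed model.

```python
# Hypothetical simulator sketch following the steps in the abstract.

from collections import defaultdict, deque

LATENCY_PROFILE_US = {"read": 60, "write": 600, "erase": 3000}   # assumed latencies
TRAFFIC_CLASS = {"read": "low_latency", "write": "bulk", "erase": "background"}
QOS_ORDER = ["low_latency", "bulk", "background"]                # assumed QoS policy

def simulate(workload, ops_per_frame=2):
    # 1) queue each operation into the scheduling pool for its traffic class
    pools = defaultdict(deque)
    for op in workload:
        pools[TRAFFIC_CLASS[op]].append(op)
    # 2) per scheduling time frame, pick operations according to the QoS policy
    timeline = []
    while any(pools.values()):
        frame = []
        for tc in QOS_ORDER:
            while pools[tc] and len(frame) < ops_per_frame:
                frame.append(pools[tc].popleft())
        # 3) assign each selected operation a latency period from the latency profile
        timeline.append([(op, LATENCY_PROFILE_US[op]) for op in frame])
    return timeline

for i, frame in enumerate(simulate(["read", "write", "read", "erase", "write"])):
    print(f"frame {i}: {frame}")
```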

    Multi-Pass Data Programming in a Memory Sub-System having Multiple Dies and Planes

    Publication Number: US20220171574A1

    Publication Date: 2022-06-02

    Application Number: US17675888

    Application Date: 2022-02-18

    Abstract: A memory sub-system having memory cells formed on a plurality of integrated circuit dies. After receiving a command from a host system to store data, the memory sub-system queues the command to allocate pages of memory cells in a plurality of dies in the plurality of integrated circuit dies based on a determination that each of the plurality of dies is available to perform a data programming operation for the command. Based on the page allocation, the memory sub-system generates a portion of a media layout to at least map logical addresses of the data identified in the command to the allocated pages and receives the data from the host system. The memory sub-system stores the data into the pages using a multi-pass programming technique, where an atomic multi-pass programming operation can be configured to use at least two pages in separate planes in one or more dies in the plurality of integrated circuit dies to program at least a portion of the data.
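    A hypothetical sketch of the queue-then-allocate step: the command is serviced only when the dies it needs are available, after which a slice of the media layout maps its logical addresses to the allocated pages. The names, structures, and sizes below are illustrative assumptions.

```python
# Hypothetical sketch of page allocation and media-layout generation for a queued
# write command. Not the patented method; all structures are made up.

def try_allocate(command, dies, pages_per_die=1):
    """Allocate pages across dies only when enough dies can program concurrently."""
    needed = command["num_pages"]
    free_dies = [d for d in dies if d["busy"] is False]
    if len(free_dies) * pages_per_die < needed:
        return None                      # keep the command queued
    allocated = []
    for die in free_dies[:needed]:
        allocated.append({"die": die["id"], "page": die["next_page"]})
        die["next_page"] += 1
    return allocated

def build_media_layout(command, allocated_pages):
    """Map each logical address of the command to one of the allocated pages."""
    return dict(zip(command["lbas"], allocated_pages))

dies = [{"id": i, "busy": False, "next_page": 0} for i in range(4)]
cmd = {"num_pages": 2, "lbas": [100, 101]}
pages = try_allocate(cmd, dies)
if pages is not None:
    layout = build_media_layout(cmd, pages)
    print(layout)   # {100: {'die': 0, 'page': 0}, 101: {'die': 1, 'page': 0}}
```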

    Multi-pass data programming in a memory sub-system having multiple dies and planes

    Publication Number: US11269552B2

    Publication Date: 2022-03-08

    Application Number: US16866326

    Application Date: 2020-05-04

    Abstract: A memory sub-system having memory cells formed on a plurality of integrated circuit dies. After receiving a command from a host system to store data, the memory sub-system queues the command to allocate pages of memory cells in a plurality of dies in the plurality of integrated circuit dies based on a determination that each of the plurality of dies is available to perform a data programming operation for the command. Based on the page allocation, the memory sub-system generates a portion of a media layout to at least map logical addresses of the data identified in the command to the allocated pages and receives the data from the host system. The memory sub-system stores the data into the pages using a multi-pass programming technique, where an atomic multi-pass programming operation can be configured to use at least two pages in separate planes in one or more dies in the plurality of integrated circuit dies to program at least a portion of the data.
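    To complement the allocation sketch shown for the related application above, the following hypothetical sketch illustrates only the atomic multi-pass programming operation that uses at least two pages in separate planes; the two-plane split and the pass structure are assumptions for illustration.

```python
# Hypothetical sketch of an atomic multi-pass programming operation that targets
# pages in two separate planes of a die. Plane split and pass count are assumed.

def multi_pass_program(die_id, data, planes=(0, 1), passes=2):
    """Split `data` across pages in separate planes and program them over several passes.

    Returns the list of (pass, die, plane) steps that would be issued as one atomic operation.
    """
    half = len(data) // 2
    chunks = {planes[0]: data[:half], planes[1]: data[half:]}
    steps = []
    for p in range(1, passes + 1):
        for plane, chunk in chunks.items():
            # Each pass re-programs the same pages to refine the stored cell levels.
            steps.append({"pass": p, "die": die_id, "plane": plane, "bytes": len(chunk)})
    return steps

for step in multi_pass_program(die_id=2, data=bytes(32)):
    print(step)
```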

    Two-layer code with low parity cost for memory sub-systems

    Publication Number: US11164652B2

    Publication Date: 2021-11-02

    Application Number: US16883839

    Application Date: 2020-05-26

    Abstract: A memory sub-system configured to encode data using an error correcting code and an erasure code for storing data into memory cells and to decode data retrieved from the memory cells. For example, data units of a predetermined size are separately encoded using the error correcting code (e.g., a low-density parity-check (LDPC) code) to generate parity data of a first layer. Symbols within the data units are cross encoded using the erasure code. Parity symbols of a second layer are calculated according to the erasure code. A collection of parity symbols having a total size equal to the predetermined size can be further encoded using the error correcting code to generate parity data for the parity symbols.
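    A toy sketch of the two-layer structure follows, with a stand-in checksum in place of the LDPC code and XOR in place of the erasure code. It shows only how per-unit first-layer parity, cross-unit parity symbols, and first-layer protection of the parity-symbol unit relate; none of the specific codes are the patented ones.

```python
# Hypothetical two-layer sketch: per-unit "first layer" parity (stand-in for LDPC),
# an XOR "erasure" layer across symbols at the same offset in each unit, and
# first-layer protection of the resulting parity-symbol unit.

import zlib

UNIT_SIZE = 16   # bytes per data unit (assumed)

def first_layer_parity(unit: bytes) -> int:
    """Stand-in for per-unit LDPC encoding (CRC32 used only for illustration)."""
    return zlib.crc32(unit)

def second_layer_parity(units):
    """Erasure-code layer: XOR symbols at the same offset across all data units."""
    parity = bytearray(UNIT_SIZE)
    for unit in units:
        for i, b in enumerate(unit):
            parity[i] ^= b
    return bytes(parity)

units = [bytes([i] * UNIT_SIZE) for i in range(4)]        # four example data units
layer1 = [first_layer_parity(u) for u in units]           # parity data of the first layer
parity_unit = second_layer_parity(units)                  # parity symbols of the second layer
layer1_for_parity = first_layer_parity(parity_unit)       # parity unit also gets layer-1 parity
print(parity_unit.hex(), [hex(p) for p in layer1 + [layer1_for_parity]])
```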
