MACHINE LEARNING ARCHITECTURE SUPPORT FOR BLOCK SPARSITY

    Publication No.: US20220004597A1

    Publication Date: 2022-01-06

    Application No.: US17481064

    Application Date: 2021-09-21

    Inventor: Omid Azizi

    Abstract: This disclosure relates to matrix operation acceleration for different matrix sparsity patterns. A matrix operation accelerator may be designed to perform matrix operations more efficiently for a first matrix sparsity pattern than for a second matrix sparsity pattern. A matrix with the second sparsity pattern may be converted to a matrix with the first sparsity pattern and provided to the matrix operation accelerator. By rearranging the rows and/or columns of the matrix, the sparsity pattern of the matrix may be converted to a sparsity pattern that is suitable for computation with the matrix operation accelerator.
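    As an illustration of the row/column rearrangement described in the abstract, the following Python sketch permutes a matrix with scattered nonzeros into a block-clustered layout. This is a minimal sketch; the permutation choice, block size, and accelerator interface are assumptions made for illustration, not details taken from the patent.

    import numpy as np

    def permute_to_block_pattern(matrix, row_order, col_order):
        # Rearrange rows and columns so the nonzeros cluster into dense blocks
        # that a block-oriented matrix accelerator could consume directly.
        # row_order / col_order are permutations chosen offline, e.g. by
        # grouping rows and columns with similar nonzero patterns.
        return matrix[np.ix_(row_order, col_order)]

    # Toy matrix whose nonzeros are scattered (the second, "unsuitable" pattern).
    A = np.array([[1, 0, 2, 0],
                  [0, 3, 0, 4],
                  [5, 0, 6, 0],
                  [0, 7, 0, 8]])

    # Reordering rows and columns gathers the nonzeros into two dense 2x2 blocks
    # (the first, "suitable" pattern) before the matrix is handed to the accelerator.
    rows = [0, 2, 1, 3]
    cols = [0, 2, 1, 3]
    B = permute_to_block_pattern(A, rows, cols)
    print(B)
    # [[1 2 0 0]
    #  [5 6 0 0]
    #  [0 0 3 4]
    #  [0 0 7 8]]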

    Machine learning architecture support for block sparsity

    Publication No.: US11126690B2

    Publication Date: 2021-09-21

    Application No.: US16370094

    Application Date: 2019-03-29

    Inventor: Omid Azizi

    Abstract: This disclosure relates to matrix operation acceleration for different matrix sparsity patterns. A matrix operation accelerator may be designed to perform matrix operations more efficiently for a first matrix sparsity pattern than for a second matrix sparsity pattern. A matrix with the second sparsity pattern may be converted to a matrix with the first sparsity pattern and provided to the matrix operation accelerator. By rearranging the rows and/or columns of the matrix, the sparsity pattern of the matrix may be converted to a sparsity pattern that is suitable for computation with the matrix operation accelerator.

    Physical page tracking for handling overcommitted memory

    Publication No.: US10733108B2

    Publication Date: 2020-08-04

    Application No.: US15980523

    Application Date: 2018-05-15

    Abstract: A system for computer memory management that implements a memory pool table, the memory pool table including entries that describe a plurality of memory pools, each memory pool representing a group of memory pages related by common attributes; a per-page tracking table, each entry in the per-page tracking table used to relate a memory page to a memory pool of the memory pool table; and processing circuitry to: scan each entry in the per-page tracking table and, for each entry: determine an amount of memory released if the memory page related to the entry is swapped; aggregate the amount of memory for the respective memory pool related to the memory page of the entry in the per-page tracking table, to produce a per-pool memory aggregate; and output the per-pool memory aggregate for the memory pools related to the memory pages in the per-page tracking table.
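    The data structures named in this abstract can be sketched as follows. This is a hypothetical Python sketch assuming a fixed 4 KiB page size and illustrative names (MemoryPool, PageEntry, aggregate_releasable_memory); it only outlines the scan-and-aggregate step described above and is not the patented implementation.

    from dataclasses import dataclass
    from typing import Dict, List

    PAGE_SIZE = 4096  # assumed 4 KiB pages

    @dataclass
    class MemoryPool:
        pool_id: int
        attributes: str            # common attributes shared by the pool's pages
        releasable_bytes: int = 0  # per-pool aggregate produced by the scan

    @dataclass
    class PageEntry:
        page_number: int
        pool_id: int               # relates this page to a pool in the pool table
        swappable: bool            # whether swapping this page would free memory

    def aggregate_releasable_memory(pool_table: Dict[int, MemoryPool],
                                    tracking_table: List[PageEntry]) -> Dict[int, int]:
        # Scan each per-page tracking entry, determine how much memory would be
        # released if that page were swapped, and aggregate it per memory pool.
        for pool in pool_table.values():
            pool.releasable_bytes = 0
        for entry in tracking_table:
            released = PAGE_SIZE if entry.swappable else 0
            pool_table[entry.pool_id].releasable_bytes += released
        return {pid: pool.releasable_bytes for pid, pool in pool_table.items()}

    # Usage sketch: two pools, three tracked pages.
    pools = {0: MemoryPool(0, "anonymous"), 1: MemoryPool(1, "file-backed")}
    pages = [PageEntry(10, 0, True), PageEntry(11, 0, False), PageEntry(12, 1, True)]
    print(aggregate_releasable_memory(pools, pages))  # {0: 4096, 1: 4096}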
