BINARY VECTOR FACTORIZATION
    Invention Application

    Publication Number: US20180095935A1

    Publication Date: 2018-04-05

    Application Number: US15283373

    Application Date: 2016-10-01

    CPC classification number: G06F17/16

    Abstract: There is disclosed in an example, a processor, having: decode circuitry to decode instructions from an instruction stream; a data cache unit including circuitry to cache data for the processor; and a compute unit having an approximate matrix multiplication (AMM) circuit comprising: a data receptor to receive a weight vector w and an input vector x, both of size N, and a compression regulating parameter n; and a factorizer circuit to factorize w into w≅B·s, by computing a binary factorized matrix B of size N×n, and a dictionary vector s of size n. In an example, the factorization follows a dual minimization procedure, the time complexity of which is on average linear with N.
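    The factorization described in this abstract can be sketched in a few lines of Python/NumPy. The snippet below is an illustration only: it builds B and s with a simple greedy residual fit, not the dual minimization procedure claimed in the patent, and the function name binary_factorize is hypothetical.

        import numpy as np

        def binary_factorize(w, n):
            """Greedy sketch: approximate w (length N) as B @ s, with B a binary
            matrix of size N x n (entries +/-1) and s a dictionary vector of size n.
            This is a residual-fitting heuristic, not the patented dual minimization."""
            N = w.shape[0]
            B = np.empty((N, n))
            s = np.empty(n)
            r = w.astype(float)                          # residual still to be explained
            for i in range(n):
                B[:, i] = np.where(r >= 0, 1.0, -1.0)    # binary column from residual signs
                s[i] = np.abs(r).mean()                  # least-squares scale for a +/-1 column
                r -= s[i] * B[:, i]                      # shrink the residual
            return B, s

        # Larger n (the compression regulating parameter) gives a closer fit.
        w = np.random.default_rng(0).standard_normal(1024)
        B, s = binary_factorize(w, n=4)
        print(np.linalg.norm(w - B @ s) / np.linalg.norm(w))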

    Binary multiplier for binary vector factorization

    Publication Number: US10210137B2

    Publication Date: 2019-02-19

    Application Number: US15635716

    Application Date: 2017-06-28

    Abstract: A processor, including: decode circuitry to decode instructions; a data cache unit including circuitry to cache data for the processor; and an approximate matrix multiplication (AMM) circuit including: a data receptor circuit to receive a weight vector w and an input vector x, both of size N, and a compression regulating parameter n; a factorizer circuit to factorize w into w≅B·s, by computing a binary factorized matrix B of size N×n, and a dictionary vector s of size n; and a binary multiplier circuit to compute w^T x≅(B·s)^T x=s^T(B^T x), the binary multiplier circuit comprising a hardware accelerator circuit to compute an array product B^T x.
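    The evaluation path w^T x ≅ s^T(B^T x) is what the claimed binary multiplier accelerates: because B contains only +/-1 entries, the array product B^T x reduces to additions and subtractions. A minimal NumPy sketch of that data flow follows; the helper name approx_dot and the random test data are assumptions for illustration, not the patented hardware.

        import numpy as np

        def approx_dot(B, s, x):
            """Sketch of w^T x ~= s^T (B^T x). Since B holds only +/-1 entries,
            B^T x needs additions and subtractions only, which is what a hardware
            accelerator for that array product would exploit."""
            partial = B.T @ x          # n binary inner products (add/subtract only)
            return s @ partial         # small n-length dot with the dictionary vector

        # Hypothetical data just to exercise the function.
        rng = np.random.default_rng(0)
        N, n = 1024, 4
        B = rng.choice([-1.0, 1.0], size=(N, n))
        s = rng.random(n)
        x = rng.standard_normal(N)
        w_approx = B @ s                            # the weight vector this (B, s) pair represents
        print(approx_dot(B, s, x), w_approx @ x)    # equal up to floating-point rounding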

    Binary Multiplier for Binary Vector Factorization

    Publication Number: US20190004997A1

    Publication Date: 2019-01-03

    Application Number: US15635716

    Application Date: 2017-06-28

    Abstract: A processor, including: decode circuitry to decode instructions; a data cache unit including circuitry to cache data for the processor; and an approximate matrix multiplication (AMM) circuit including: a data receptor circuit to receive a weight vector w and an input vector x, both of size N, and a compression regulating parameter n; a factorizer circuit to factorize w into w≅B·s, by computing a binary factorized matrix B of size N×n, and a dictionary vector s of size n; and a binary multiplier circuit to compute w^T x≅(B·s)^T x=s^T(B^T x), the binary multiplier circuit comprising a hardware accelerator circuit to compute an array product B^T x.

    Binary vector factorization
    Invention Grant

    Publication Number: US10394930B2

    Publication Date: 2019-08-27

    Application Number: US15283373

    Application Date: 2016-10-01

    Abstract: There is disclosed in an example, a processor, having: decode circuitry to decode instructions from an instruction stream; a data cache unit including circuitry to cache data for the processor; and a compute unit having an approximate matrix multiplication (AMM) circuit comprising: a data receptor to receive a weight vector w and an input vector x, both of size N, and a compression regulating parameter n; and a factorizer circuit to factorize w into w≅B·s, by computing a binary factorized matrix B of size N×n, and a dictionary vector s of size n. In an example, the factorization follows a dual minimization procedure, the time complexity of which is on average linear with N.

    Image difference based segmentation using recursive neural networks

    Publication Number: US10148872B2

    Publication Date: 2018-12-04

    Application Number: US15426304

    Application Date: 2017-02-07

    Abstract: Techniques are provided for image segmentation based on image differencing, using recursive neural networks. A methodology implementing the techniques according to an embodiment includes quantizing pixels of a first image frame, performing a rigid translation of the quantized first image frame to generate a second image frame, and performing a differencing operation between the quantized first image frame and the second image frame to generate a sparse image frame. A neural network can then be applied to the sparse image frame to generate a segmented image. In still another embodiment, the methodology is applied to a sequence or set of image frames, for example from a video or still camera, and pixels from a first and second image frame of the sequence/set are quantized. The sparse image frame is generated from a difference between quantized image frames. The method further includes training the neural network on sparse training image frames.
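    The preprocessing steps named in this abstract (quantize, rigidly translate, difference) can be illustrated with a short NumPy sketch. The integer-pixel wrap-around shift standing in for the rigid translation and the function name sparse_difference_frame are simplifying assumptions, not the patented method; the recursive-neural-network stage is omitted.

        import numpy as np

        def sparse_difference_frame(frame, levels=16, shift=(1, 0)):
            """Quantize a frame, apply a rigid translation (here a plain integer
            pixel shift), and difference the two to obtain a sparse frame that a
            neural network could then segment."""
            q = np.floor(frame.astype(float) / 256.0 * levels)      # quantize pixel intensities
            translated = np.roll(q, shift=shift, axis=(0, 1))       # rigid translation (wrap-around)
            return q - translated                                   # mostly-zero difference frame

        # A smooth synthetic frame (identical rows) so the difference is visibly sparse.
        frame = np.tile(np.linspace(0, 255, 64), (64, 1))
        print(float((sparse_difference_frame(frame) == 0).mean()))  # fraction of zero entries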
