Container-first architecture
    Invention Grant

    Publication Number: US12032977B2

    Publication Date: 2024-07-09

    Application Number: US17440701

    Application Date: 2020-05-11

    Abstract: In one embodiment, a computing device comprises memory circuitry and processing circuitry. The memory circuitry is to store a plurality of container images, comprising: a first container image comprising a first set of applications; and a second container image comprising a virtual machine, a guest operating system, and a second set of applications. The processing circuitry is to: instantiate a plurality of containers on a host operating system, wherein the plurality of containers comprises a first container and a second container; execute the first set of applications in the first container, wherein the first set of applications is to be executed on the host operating system; and execute the virtual machine in the second container, wherein the guest operating system is to be executed on the virtual machine and the second set of applications is to be executed on the guest operating system.
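
    Below is a minimal, hypothetical sketch (Python; names are illustrative only, not the patented implementation) of the dispatch this abstract describes: a native container image whose applications run directly on the host OS, and a VM-backed image whose applications run on a guest OS inside a virtual machine.

```python
# Hypothetical sketch of the two container image types in the abstract:
# one runs its applications directly on the host OS, the other wraps a
# virtual machine with a guest OS. All names are illustrative.
from dataclasses import dataclass
from typing import List

@dataclass
class NativeImage:
    apps: List[str]            # first set of applications

@dataclass
class VmImage:
    guest_os: str              # guest operating system
    apps: List[str]            # second set of applications

def instantiate(images):
    """Instantiate one container per image on the host operating system."""
    containers = []
    for image in images:
        if isinstance(image, NativeImage):
            # Applications execute directly on the host operating system.
            containers.append({"kind": "native", "run": image.apps})
        else:
            # A virtual machine executes in the container; the guest OS runs
            # on the VM and the second set of applications runs on the guest.
            containers.append({"kind": "vm", "guest_os": image.guest_os,
                               "run": image.apps})
    return containers

print(instantiate([NativeImage(apps=["web"]),
                   VmImage(guest_os="linux-guest", apps=["db"])]))
```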

BIT MATRIX MULTIPLICATION
    Invention Publication

    Publication Number: US20230195835A1

    Publication Date: 2023-06-22

    Application Number: US18083012

    Application Date: 2022-12-16

    Abstract: Detailed herein are embodiments related to bit matrix multiplication in a processor. For example, in some embodiments a processor comprises: decode circuitry to decode an instruction having fields for an opcode, an identifier of a first source bit matrix, an identifier of a second source bit matrix, an identifier of a destination bit matrix, and an immediate; and execution circuitry to execute the decoded instruction to perform a multiplication of a matrix of S-bit elements of the identified first source bit matrix with S-bit elements of the identified second source bit matrix, wherein the multiplication and accumulation operations are selected by an operation selector, and to store a result of the matrix multiplication into the identified destination bit matrix, wherein S indicates a plural bit size.
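
    A hypothetical software model of the instruction's semantics (Python; treating the per-element multiply as a bitwise AND and offering XOR or OR accumulation via an operation selector are assumptions for illustration, not the claimed encoding):

```python
# Hypothetical model of a bit-matrix-multiply instruction: per-element
# "multiply" is a bitwise AND and the accumulation operation (XOR or OR
# here) is chosen by an operation selector. Matrices are lists of rows of
# 0/1 values; sizes and the selector encoding are illustrative.
def bit_matrix_multiply(a, b, op_selector):
    accumulate = (lambda x, y: x ^ y) if op_selector == 0 else (lambda x, y: x | y)
    rows, inner, cols = len(a), len(b), len(b[0])
    result = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            acc = 0
            for k in range(inner):
                acc = accumulate(acc, a[i][k] & b[k][j])   # multiply = AND
            result[i][j] = acc
    return result

a = [[1, 0], [1, 1]]
b = [[1, 1], [0, 1]]
print(bit_matrix_multiply(a, b, op_selector=0))   # XOR accumulation
print(bit_matrix_multiply(a, b, op_selector=1))   # OR accumulation
```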

    Efficient and secure sharing of large data repositories

    Publication Number: US11604889B2

    Publication Date: 2023-03-14

    Application Number: US15777721

    Application Date: 2015-12-22

    Abstract: Systems, apparatuses and methods may provide for a memory apparatus that includes a client-side address space dedicated to an accessor of obfuscated multi-tenant data, wherein an executable view generation library is stored to the client-side address space. In one example, the executable view generation library is to receive a request to access at least a portion of the obfuscated multi-tenant data, convert the obfuscated multi-tenant data to deobfuscated multi-tenant data based on metadata associated with the executable view generation library and generate a single-tenant view based on the deobfuscated multi-tenant data.
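
    A minimal sketch of the flow this abstract describes (Python; the XOR obfuscation, field names, and metadata layout are placeholders, not the claimed scheme): the client-side library receives an access request, deobfuscates the shared multi-tenant data using its metadata, and returns a single-tenant view.

```python
# Hypothetical sketch of the view-generation flow: deobfuscate the shared
# multi-tenant rows with metadata held by the client-side library, then
# expose only the requesting tenant's rows. XOR is a stand-in obfuscation.
def deobfuscate(record, key):
    return {k: bytes(b ^ key for b in v).decode() for k, v in record.items()}

def generate_view(obfuscated_rows, metadata, tenant_id):
    key = metadata["key"]
    clear_rows = [deobfuscate(row, key) for row in obfuscated_rows]
    # Single-tenant view: only the requesting tenant's rows are returned.
    return [row for row in clear_rows if row["tenant"] == tenant_id]

metadata = {"key": 0x5A}
rows = [{"tenant": bytes(b ^ 0x5A for b in b"acme"),
         "value": bytes(b ^ 0x5A for b in b"42")}]
print(generate_view(rows, metadata, tenant_id="acme"))
```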

    TECHNOLOGIES FOR DYNAMICALLY SHARING REMOTE RESOURCES ACROSS REMOTE COMPUTING NODES

    Publication Number: US20230047886A1

    Publication Date: 2023-02-16

    Application Number: US17978788

    Application Date: 2022-11-01

    Abstract: Technologies for dynamically sharing remote resources include a computing node that sends a resource request for remote resources to a remote computing node in response to a determination that additional resources are required by the computing node. The computing node configures a mapping of a local address space of the computing node to the remote resources of the remote computing node in response to sending the resource request. In response to generating an access to a local address, the computing node identifies the remote computing node based on the local address using the mapping of the local address space to the remote resources of the remote computing node, and performs a resource access operation with the remote computing node over a network fabric. The remote computing node may be identified with system address decoders of a caching agent and a host fabric interface. Other embodiments are described and claimed.
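
    A minimal sketch of the address-mapping step (Python; class and field names are hypothetical): a range of the local address space is mapped to remote resources, and each local access is routed to the owning remote node, which is the role the system address decoders play in hardware.

```python
# Hypothetical sketch: map local address ranges to remote resources and
# route each access either locally or to the remote node that owns the
# mapped range (the access itself would travel over the network fabric).
class ResourceMapper:
    def __init__(self):
        self.ranges = []   # (start, end, remote_node)

    def map_remote(self, start, size, remote_node):
        self.ranges.append((start, start + size, remote_node))

    def route(self, local_address):
        for start, end, node in self.ranges:
            if start <= local_address < end:
                # Resource access operation performed with the remote node.
                return f"access 0x{local_address:x} on node {node}"
        return f"access 0x{local_address:x} locally"

mapper = ResourceMapper()
mapper.map_remote(start=0x1000_0000, size=0x1000, remote_node="node-b")
print(mapper.route(0x1000_0040))   # routed to node-b over the fabric
print(mapper.route(0x2000))        # served locally
```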

    System decoder for training accelerators

    Publication Number: US11269801B2

    Publication Date: 2022-03-08

    Application Number: US17125439

    Application Date: 2020-12-17

    Abstract: There is disclosed an example of an artificial intelligence (AI) system, including: a first hardware platform; a fabric interface configured to communicatively couple the first hardware platform to a second hardware platform; a processor hosted on the first hardware platform and programmed to operate on an AI problem; and a first training accelerator, including: an accelerator hardware; a platform inter-chip link (ICL) configured to communicatively couple the first training accelerator to a second training accelerator on the first hardware platform without aid of the processor; a fabric ICL to communicatively couple the first training accelerator to a third training accelerator on the second hardware platform without aid of the processor; and a system decoder configured to operate the fabric ICL and platform ICL to share data of the accelerator hardware between the first training accelerator and the second and third training accelerators without aid of the processor.
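
    A minimal sketch of the system decoder's routing choice (Python; names are illustrative): data sharing uses the platform ICL when the peer accelerator is on the same hardware platform and the fabric ICL when the peer sits on another platform, in both cases without involving the host processor.

```python
# Hypothetical sketch of the link-selection decision made by the system
# decoder: same-platform peers are reached over the platform inter-chip
# link (ICL), peers on another platform over the fabric ICL.
class SystemDecoder:
    def __init__(self, local_platform):
        self.local_platform = local_platform

    def select_link(self, peer_platform):
        if peer_platform == self.local_platform:
            return "platform-ICL"    # peer on the same hardware platform
        return "fabric-ICL"          # peer on another hardware platform

    def share(self, data, peer_accelerator, peer_platform):
        link = self.select_link(peer_platform)
        return f"send {len(data)} bytes to {peer_accelerator} via {link}"

decoder = SystemDecoder(local_platform="platform-1")
print(decoder.share(b"gradients", "accel-2", "platform-1"))   # platform ICL
print(decoder.share(b"gradients", "accel-3", "platform-2"))   # fabric ICL
```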

    Efficient zero-based decompression
    Invention Grant

    Publication Number: US10540177B2

    Publication Date: 2020-01-21

    Application Number: US15438712

    Application Date: 2017-02-21

    Abstract: A processor core including a hardware decode unit to decode vector instructions for decompressing a run length encoded (RLE) set of source data elements, and an execution unit to execute the decoded instructions. The execution unit generates a first mask by comparing the set of source data elements with a set of zeros and then counts the trailing zeros in the mask. A second mask is generated based on the count of trailing zeros. The execution unit then copies the set of source data elements to a buffer using the second mask and reads the number of RLE zeros from the set of source data elements. The buffer is shifted and copied to a result, and the set of source data elements is shifted to the right. If more valid data elements remain in the set of source data elements, this process is repeated until all valid data is processed.
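
    A behavioral sketch of zero-based RLE decompression (Python; the stream encoding is an assumption, and the mask-based hardware loop above is not modeled): non-zero values are copied through unchanged, and each zero is followed by a count of zeros to expand.

```python
# Hypothetical zero-run-length decompression: the compressed stream stores
# non-zero values directly, and each zero value is followed by a count of
# how many zeros to expand. The encoding details are assumptions.
def decompress_zero_rle(src):
    out = []
    i = 0
    while i < len(src):
        value = src[i]
        if value == 0:
            run_length = src[i + 1]      # number of RLE zeros
            out.extend([0] * run_length)
            i += 2
        else:
            out.append(value)
            i += 1
    return out

# The "0 followed by 3" pair expands into three zeros.
print(decompress_zero_rle([7, 0, 3, 9]))   # [7, 0, 0, 0, 9]
```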
