Resource allocation by virtual channel management and bus multiplexing
    61.
    Invention grant
    Resource allocation by virtual channel management and bus multiplexing (In force)

    Publication number: US09471522B2

    Publication date: 2016-10-18

    Application number: US14096574

    Filing date: 2013-12-04

    Abstract: According to embodiments of the invention, methods, computer systems, and apparatuses for virtual channel management and bus multiplexing are disclosed. The method may include establishing a virtual channel from a first device to a second device via a bus having a first bus capacity and a second, larger bus capacity; determining whether a store command is issued for the first bus capacity; determining whether the first bus capacity is available; and, if the first bus capacity is unavailable, allocating the second bus capacity in response to the store command and marking it as unavailable.
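The fallback policy in the abstract can be sketched in a few lines: a store command first requests the smaller first bus capacity and, only if that is taken, falls back to the larger second capacity and marks it unavailable. The class and method names below are illustrative assumptions, not taken from the patent's implementation.

```python
# Hypothetical model of the claimed allocation policy. "first" is the smaller
# bus capacity, "second" the larger one; True means the capacity is free.

class VirtualChannelBus:
    def __init__(self):
        self.first_available = True
        self.second_available = True

    def handle_store(self):
        """Return which capacity was allocated for a store command."""
        if self.first_available:
            self.first_available = False
            return "first"
        if self.second_available:
            # First capacity unavailable: fall back to the larger capacity
            # and mark it unavailable, as in the claim.
            self.second_available = False
            return "second"
        return None  # no capacity free; caller must retry

bus = VirtualChannelBus()
print(bus.handle_store())  # "first"
print(bus.handle_store())  # "second"
print(bus.handle_store())  # None
```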

    Implicit I/O send on cache operations
    62.
    Invention grant
    Implicit I/O send on cache operations (In force)

    Publication number: US09367460B2

    Publication date: 2016-06-14

    Application number: US14310163

    Filing date: 2014-06-20

    Abstract: A computer system for implicit input-output send on cache operations of a central processing unit is provided. The computer system comprises an aggregation queue of the central processing unit, storing input-output data of the central processing unit, wherein the aggregation queue transmits the input-output data to an input-output adaptor, and wherein the input-output data is transmitted in parallel with operations of the central processing unit. The computer system further comprises a memory management unit of the central processing unit, interpreting address space descriptors for implicit input-output transmittal of the input-output data of the aggregation queue. The computer system further comprises a cache traffic monitor of the central processing unit, transmitting the input-output data in an implicit input-output transmittal range between the cache traffic monitor and the aggregation queue, wherein the cache traffic monitor transmits cache protocol traffic of the central processing unit to the memory management unit.
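The core idea, as the abstract describes it, is that stores falling inside a designated address range are queued for I/O transmission implicitly, with no explicit send instruction from the CPU. The following is a minimal behavioral sketch under stated assumptions; the class, the range bounds, and modeling the I/O adaptor as a list are all illustrative, not the patent's design.

```python
# Illustrative model: a cache traffic monitor forwards each store to the
# aggregation queue; stores inside the implicit transmittal range are
# buffered and later flushed to the I/O adaptor.

class AggregationQueue:
    def __init__(self, implicit_range):
        self.lo, self.hi = implicit_range  # implicit I/O transmittal range
        self.pending = []                  # buffered input-output data
        self.sent = []                     # data handed to the I/O adaptor

    def on_cache_write(self, address, data):
        """Called by the cache traffic monitor on each CPU store."""
        if self.lo <= address < self.hi:
            # Store falls in the implicit range: queue it for I/O send
            # without any explicit send instruction from the CPU.
            self.pending.append(data)

    def flush(self):
        """Transmit buffered data to the I/O adaptor (modeled as a list)."""
        self.sent.extend(self.pending)
        self.pending.clear()

q = AggregationQueue((0x1000, 0x2000))
q.on_cache_write(0x1004, b"hdr")
q.on_cache_write(0x3000, b"ignored")   # outside the implicit range
q.on_cache_write(0x1ff0, b"payload")
q.flush()
print(q.sent)  # [b'hdr', b'payload']
```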

    Technique to handle insufficient on-chip memory capacity in decompressors

    Publication number: US12254178B2

    Publication date: 2025-03-18

    Application number: US18217480

    Filing date: 2023-06-30

    Abstract: A method to handle insufficient on-chip memory capacity in decompressors is disclosed. In one embodiment, such a method includes executing, by a decompressor configured to decompress data, an instruction configured to copy data from a source position within a data stream to a destination position within the data stream. The method determines whether the source position currently resides within an on-chip buffer of the decompressor. In the event the source position does not currently reside within the on-chip buffer of the decompressor, the method writes arbitrary placeholder data to the destination position and adds the instruction to a patch buffer. At a later point in time, the method retrieves the instruction from the patch buffer and executes the instruction by retrieving the data from the source position and overwriting the arbitrary placeholder data at the destination position with the data. A corresponding system and computer program product are also disclosed.
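The deferred-copy mechanism can be sketched concretely in an LZ77-style setting: when a copy instruction's source has aged out of the small on-chip window, the decompressor emits placeholder bytes and records the instruction in a patch buffer, then replays the patch buffer once the full output is reachable. The window size, instruction encoding, and single end-of-stream patch pass below are simplifying assumptions for illustration.

```python
WINDOW = 8  # size of the modeled on-chip history buffer (an assumption)

def decompress(stream):
    """stream is a list of ("lit", bytes) or ("copy", src_pos, length) ops."""
    out = bytearray()
    patch_buffer = []                  # deferred copy instructions
    for op in stream:
        if op[0] == "lit":
            out.extend(op[1])
        else:
            _, src, length = op
            if src >= len(out) - WINDOW:
                # Source still resides in the on-chip buffer: copy directly
                # (byte by byte, so overlapping copies work).
                for i in range(length):
                    out.append(out[src + i])
            else:
                # Source no longer fits on-chip: write arbitrary placeholder
                # bytes and add the instruction to the patch buffer.
                dst = len(out)
                out.extend(b"\x00" * length)
                patch_buffer.append((src, length, dst))
    # Later pass: replay deferred copies, overwriting the placeholders.
    for src, length, dst in patch_buffer:
        for i in range(length):
            out[dst + i] = out[src + i]
    return bytes(out)

data = decompress([
    ("lit", b"abcdefghij"),   # bytes 0-1 then age out of the 8-byte window
    ("copy", 0, 2),           # source out of window: deferred via patch buffer
    ("copy", 8, 2),           # source in window: copied immediately
])
print(data)  # b'abcdefghijabij'
```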

    Record-based matching in data compression

    Publication number: US11188503B2

    Publication date: 2021-11-30

    Application number: US16793113

    Filing date: 2020-02-18

    Abstract: Compression of data is facilitated by locating matches within the data to be compressed. A first technique is used to determine whether there is at least one matching string in the data to be compressed, and a second technique, different from the first technique, is used to determine whether there is at least one matching record in the data to be compressed. Based on there being at least one matching string in the data to be compressed, at least one indication of the at least one matching string is provided to an encoder to facilitate compression of the data. Further, based on there being at least one matching record in the data to be compressed, at least one indication of the at least one matching record is provided to the encoder to facilitate compression of the data. It is transparent to the encoder whether the first technique or the second technique is used to provide one or more matches.
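A small sketch can show why the encoder stays agnostic to which technique found a match: both the string matcher and the record matcher emit the same (position, distance, length) indications. The fixed record length, the naive longest-match search, and the minimum match length of 3 are all illustrative assumptions, not the patented techniques themselves.

```python
RECORD = 4  # assumed fixed record length

def find_matches(data):
    """Yield (position, distance, length) match indications for the encoder.
    The encoder cannot tell which technique produced each indication."""
    matches = []
    i = 0
    while i < len(data):
        # Second technique: does the current record repeat the previous one?
        if i >= RECORD and data[i:i + RECORD] == data[i - RECORD:i]:
            matches.append((i, RECORD, RECORD))
            i += RECORD
            continue
        # First technique: longest earlier string match (naive O(n^2) scan).
        best_len, best_dist = 0, 0
        for j in range(i):
            k = 0
            while i + k < len(data) and data[j + k] == data[i + k]:
                k += 1
            if k > best_len:
                best_len, best_dist = k, i - j
        if best_len >= 3:                  # typical minimum match length
            matches.append((i, best_dist, best_len))
            i += best_len
        else:
            i += 1
    return matches

print(find_matches(b"abcdabcdabcd"))  # [(4, 4, 4), (8, 4, 4)]
print(find_matches(b"hello hello"))   # [(6, 6, 5)]
```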

    EFFICIENT GENERATION OF INSTRUMENTATION DATA FOR DIRECT MEMORY ACCESS OPERATIONS

    Publication number: US20210216430A1

    Publication date: 2021-07-15

    Application number: US16738311

    Filing date: 2020-01-09

    Abstract: Aspects of the invention include efficient generation of instrumentation data for direct memory access (DMA) operations. A non-limiting example apparatus includes an instrumentation component residing in a cache that is in communication with a plurality of processing units, an accelerator, and a plurality of input-output (I/O) interfaces. The cache includes a DMA monitor that receives events from the accelerator and its respective I/O interface and stores DMA state and latency for each event. The cache also includes a bucket, in communication with the DMA monitor, comprising a DMA counter and a latency counter, wherein the bucket stores in the DMA counter a count of DMAs coming from a source and stores in the latency counter the latency measured for each DMA coming from the source.
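The per-source bucket described above pairs a DMA count with an accumulated latency, driven by start and end events. The sketch below models that bookkeeping in software; the event tags, field names, and integer timestamps are assumptions for illustration only.

```python
# Illustrative model of the instrumentation bucket: for each DMA source,
# one counter counts completed DMAs and another accumulates their latencies.

class DMAMonitor:
    def __init__(self):
        self.inflight = {}   # tag -> (source, start_time): per-DMA state
        self.buckets = {}    # source -> [dma_count, total_latency]

    def on_start(self, tag, source, time):
        """Event from the accelerator or I/O interface: DMA issued."""
        self.inflight[tag] = (source, time)

    def on_end(self, tag, time):
        """Event: DMA completed; update the source's bucket."""
        source, start = self.inflight.pop(tag)
        bucket = self.buckets.setdefault(source, [0, 0])
        bucket[0] += 1               # DMA counter
        bucket[1] += time - start    # latency counter

mon = DMAMonitor()
mon.on_start(1, "accel0", time=100)
mon.on_start(2, "accel0", time=105)
mon.on_end(1, time=140)              # latency 40
mon.on_end(2, time=165)              # latency 60
print(mon.buckets["accel0"])  # [2, 100]
```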
