BANDWIDTH/RESOURCE MANAGEMENT FOR MULTITHREADED PROCESSORS
    2.
    Invention Application - Pending (Published)

    Publication No.: WO2016195851A1

    Publication Date: 2016-12-08

    Application No.: PCT/US2016/029530

    Filing Date: 2016-04-27

    Abstract: Systems and methods relate to managing shared resources in a multithreaded processor comprising two or more processing threads. Danger levels for the two or more threads are determined, wherein the danger level of a thread is based on a potential failure of the thread to meet a deadline due to unavailability of a shared resource. Priority levels associated with the two or more threads are also determined, wherein the priority level is higher for a thread whose failure to meet a deadline is unacceptable and the priority level is lower for a thread whose failure to meet a deadline is acceptable. The two or more threads are scheduled based at least on the determined danger levels for the two or more threads and priority levels associated with the two or more threads.

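The scheduling policy the abstract describes can be sketched as a priority-then-danger ordering over the runnable threads. This is a minimal illustration under stated assumptions, not the patented mechanism: the `Thread` fields, the combined sort key, and the example thread names are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Thread:
    name: str
    priority: int  # higher = missing the deadline is less acceptable
    danger: int    # higher = more at risk of missing its deadline for lack of a shared resource

def schedule(threads):
    """Grant shared-resource access to high-priority threads first,
    breaking ties by danger level (hypothetical combined key)."""
    return sorted(threads, key=lambda t: (t.priority, t.danger), reverse=True)

order = schedule([
    Thread("audio", priority=2, danger=1),  # hard real-time: failure unacceptable
    Thread("ui",    priority=1, danger=3),  # soft deadline, but currently in danger
    Thread("batch", priority=0, danger=0),  # deadline miss acceptable
])
```

A real scheduler would recompute danger levels continuously as resource availability changes; here the values are static for illustration.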

    PROCESS SCHEDULING TO IMPROVE VICTIM CACHE MODE
    3.
    Invention Application - Pending (Published)

    Publication No.: WO2016133598A1

    Publication Date: 2016-08-25

    Application No.: PCT/US2016/012023

    Filing Date: 2016-01-04

    Abstract: Aspects include computing devices, systems, and methods for implementing scheduling an execution process to an execution processor cluster to take advantage of reduced latency with a victim cache. The computing device may determine a first processor cluster with a first remote shared cache memory having an available shared cache memory space. To properly schedule the execution process, the computing device may determine a second processor cluster with a lower latency to the first remote shared cache memory than an execution processor cluster scheduled with the execution process. The second processor cluster may be scheduled the execution process, thus becoming the execution processor cluster, based on a size of the available shared cache memory space and the latency of the second processor cluster to the first remote shared cache memory. The available shared cache memory space may be used as the victim cache for the execution process.

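The cluster-selection step can be sketched as: given the cluster whose shared cache has free space (the victim-cache holder), pick the execution cluster with the lowest latency to that cache, subject to the space being large enough. The dictionary layout, field names, and latency numbers below are illustrative assumptions, not the patent's data model.

```python
def pick_execution_cluster(clusters, cache_cluster, needed_space):
    """Return the cluster with the lowest latency to cache_cluster's
    shared cache, provided that cache has enough free space to serve
    as a victim cache; otherwise return None."""
    if clusters[cache_cluster]["free_space"] < needed_space:
        return None
    candidates = [name for name in clusters if name != cache_cluster]
    return min(candidates, key=lambda n: clusters[n]["latency_to"][cache_cluster])

clusters = {
    "c0": {"free_space": 512, "latency_to": {}},          # holds the available shared cache
    "c1": {"free_space": 0,   "latency_to": {"c0": 40}},
    "c2": {"free_space": 0,   "latency_to": {"c0": 15}},  # closest to c0's cache
}
```

With these numbers, scheduling the process on `c2` lets it use `c0`'s free shared cache space as a low-latency victim cache.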

    CACHE BANK SPREADING FOR COMPRESSION ALGORITHMS
    4.
    Invention Application - Pending (Published)

    Publication No.: WO2016039866A1

    Publication Date: 2016-03-17

    Application No.: PCT/US2015/041781

    Filing Date: 2015-07-23

    Abstract: Aspects include computing devices, systems, and methods for implementing cache memory access requests for compressed data using cache bank spreading. In an aspect, cache bank spreading may include determining whether the compressed data of the cache memory access fits on a single cache bank. In response to determining that the compressed data fits on a single cache bank, a cache bank spreading value may be calculated to replace/reinstate bank selection bits of the physical address for a cache memory of the cache memory access request that may have been cleared during data compression. A cache bank spreading address in the physical space of the cache memory may include the physical address of the cache memory access request plus the reinstated bank selection bits. The cache bank spreading address may be used to read compressed data from, or write compressed data to, the cache memory device.

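The reinstatement of bank-selection bits can be sketched as below. The bit positions, bank count, and the hash used to derive the spreading value are all assumptions for illustration; the patent only requires that cleared bank bits be replaced so that compressed lines spread across banks.

```python
LINE_OFFSET_BITS = 6   # 64-byte cache lines (assumption)
BANK_BITS = 2          # four cache banks (assumption)
BANK_MASK = ((1 << BANK_BITS) - 1) << LINE_OFFSET_BITS

def spread_value(addr):
    # Hypothetical hash: derive a bank index from upper address bits so
    # consecutive compressed lines land on different banks.
    return (addr >> (LINE_OFFSET_BITS + BANK_BITS)) & ((1 << BANK_BITS) - 1)

def bank_spread_address(addr):
    """Reinstate the bank-selection bits that compression cleared:
    physical address plus the computed bank bits."""
    return (addr & ~BANK_MASK) | (spread_value(addr) << LINE_OFFSET_BITS)

# Four consecutive line addresses (bank bits cleared) spread over all four banks.
banks = {
    (bank_spread_address(line << (LINE_OFFSET_BITS + BANK_BITS)) & BANK_MASK)
    >> LINE_OFFSET_BITS
    for line in range(4)
}
```

Without the spreading step, every compressed line whose bank bits were cleared would map to bank 0 and serialize on one bank's port.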

    METHODS OF SELECTING AVAILABLE CACHE IN MULTIPLE CLUSTER SYSTEM
    6.
    Invention Application - Pending (Published)

    Publication No.: WO2016130204A1

    Publication Date: 2016-08-18

    Application No.: PCT/US2015/064895

    Filing Date: 2015-12-10

    Abstract: Aspects include computing devices, systems, and methods for selecting an available shared cache memory as a victim cache. The computing device may identify a remote shared cache memory with available shared cache memory space for use as the victim cache. To select the appropriate available shared cache memory, the computing device may retrieve data, relating to a metric such as performance speed, efficiency, or effective victim cache size, for the identified remote shared cache memory or for a processor cluster associated with it. Using the retrieved data, the computing device may determine and select the identified remote shared cache memory to use as the victim cache.

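The metric-driven selection can be sketched as a weighted score over the retrieved per-cache data. The metric names, weights, and the linear scoring function are assumptions; the abstract only says the choice is based on metrics such as performance speed, efficiency, or effective victim cache size.

```python
def select_victim_cache(candidates, weights):
    """Score each remote shared cache on its retrieved metrics and
    select the highest-scoring one (hypothetical linear score)."""
    def score(cache):
        return sum(weights[m] * cache["metrics"][m] for m in weights)
    return max(candidates, key=score)

caches = [
    {"id": "remote0", "metrics": {"speed": 0.9, "efficiency": 0.5, "size": 256}},
    {"id": "remote1", "metrics": {"speed": 0.7, "efficiency": 0.9, "size": 512}},
]
best = select_victim_cache(caches, {"speed": 1.0, "efficiency": 1.0, "size": 0.01})
```

Here the larger, more efficient `remote1` wins despite its lower raw speed; shifting the weights toward `speed` would flip the choice.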

    POWER AWARE PADDING
    8.
    Invention Application - Pending (Published)

    Publication No.: WO2016028440A1

    Publication Date: 2016-02-25

    Application No.: PCT/US2015/042095

    Filing Date: 2015-07-24

    Abstract: Aspects include computing devices, systems, and methods for implementing cache memory access requests for data smaller than a cache line and eliminating overfetching from main memory by combining the data with padding data sized to the difference between the cache line size and the data size. A processor may determine whether the data, uncompressed or compressed, is smaller than a cache line using the size of the data or the compression ratio of the data. The processor may generate the padding data using constant data values or a pattern of data values. The processor may send a write cache memory access request for the combined data to a cache memory controller, which may write the combined data to a cache memory. The cache memory controller may send a write memory access request to a memory controller, which may write the combined data to a memory.

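The padding step can be sketched as below: sub-line write data is combined with generated padding (a constant value or a repeating pattern) so that a full line is written and nothing has to be overfetched. The 64-byte line size and the zero-byte default pattern are assumptions.

```python
LINE_SIZE = 64  # bytes per cache line (assumption)

def pad_to_line(data: bytes, pattern: bytes = b"\x00") -> bytes:
    """Combine sub-line write data with padding data whose size is the
    difference between the cache line size and the data size, so the
    full line is written without overfetching from main memory."""
    gap = LINE_SIZE - len(data)
    padding = (pattern * gap)[:gap]  # repeat a constant value or pattern
    return data + padding

line = pad_to_line(b"\xaa" * 20)
```

Padding with a constant value keeps the written line's bit activity low, which is where the power benefit in the title comes from.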

    SUPPLEMENTAL WRITE CACHE COMMAND FOR BANDWIDTH COMPRESSION
    9.
    Invention Publication - Pending (Published)

    Publication No.: EP3183658A1

    Publication Date: 2017-06-28

    Application No.: EP15747334.9

    Filing Date: 2015-07-24

    Abstract: Aspects include computing devices, systems, and methods for implementing cache memory access requests for data smaller than a cache line and eliminating overfetching from main memory by writing supplemental data to the unfilled portions of the cache line. A cache memory controller may receive a cache memory access request with a supplemental write command for data smaller than a cache line. The cache memory controller may write supplemental data to the portions of the cache line not filled by the data, in response to a write cache memory access request or a cache miss during a read cache memory access request. In the event of a cache miss, the cache memory controller may retrieve the data from the main memory, excluding any overfetch data, and write the data and the supplemental data to the cache line. Eliminating overfetching reduces the bandwidth and power required to retrieve data from main memory.

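The supplemental-fill step can be sketched as building a full line from the sub-line data plus supplemental bytes, so only the data itself ever comes from main memory. The 64-byte line size, the zero supplemental value, and the byte-offset interface are assumptions for illustration.

```python
LINE_SIZE = 64  # bytes per cache line (assumption)

def fill_line_with_supplement(data: bytes, offset: int, supplement: int = 0) -> bytes:
    """Place sub-line data at its offset within a cache line and fill
    the remaining portions with supplemental bytes, instead of
    overfetching the remainder from main memory."""
    line = bytearray([supplement] * LINE_SIZE)
    line[offset:offset + len(data)] = data
    return bytes(line)

# A 16-byte payload landing 8 bytes into the line; the rest is supplemental.
line = fill_line_with_supplement(b"\xff" * 16, offset=8)
```

On a read miss the controller would fetch only those 16 bytes and then call the same fill step, which is how the overfetch bandwidth is avoided.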
