SNOOP FILTER FOR LARGE CACHE USING HASH TECHNIQUE WITH OPTIMAL REFRESH ALGORITHM

    Publication Number: US20240202124A1

    Publication Date: 2024-06-20

    Application Number: US18067779

    Application Date: 2022-12-19

    CPC classification number: G06F12/0831 G06F12/0864

    Abstract: Embodiments described herein may include apparatus, systems, techniques, and/or processes that are directed to computing systems implementing a very large cache for one or more processing engines in a shared memory system. According to various embodiments, a snoop filter tracks a hash value of the cached addresses instead of tracking the addresses themselves. Tracking hash values introduces inaccuracy and an inability to easily clean or refresh the snoop filter. A refresh algorithm maintains cache coherency without significant performance degradation. The refresh algorithm preserves the accuracy of the snoop filter, reducing the latency and power penalties of false snoops. Further, the use of hash values reduces the hardware cost relative to traditional snoop filters.
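
    As a rough illustration of the mechanism the abstract describes, the C sketch below models the filter as one presence bit per hash bucket; the bucket count, the hash function, and the refresh-by-rescanning step are assumptions made for illustration, not details taken from the patent.

```c
#include <stdint.h>
#include <stdbool.h>
#include <string.h>
#include <stddef.h>

#define SF_BUCKETS 4096u   /* hypothetical filter size */

/* One presence bit per hash bucket; the filter stores hashes, never full addresses. */
static bool filter[SF_BUCKETS];

/* Illustrative hash of a cache-line address. */
static uint32_t sf_hash(uint64_t addr)
{
    return (uint32_t)((addr ^ (addr >> 17) ^ (addr >> 31)) % SF_BUCKETS);
}

/* On a cache fill, remember only the hash of the filled address. */
void sf_insert(uint64_t addr)
{
    filter[sf_hash(addr)] = true;
}

/* Snoop lookup: a clear bit proves the line is absent; a set bit may be a
 * false positive, which costs an unnecessary ("false") snoop. */
bool sf_maybe_cached(uint64_t addr)
{
    return filter[sf_hash(addr)];
}

/* A bit cannot be cleared on eviction, because other cached addresses may map
 * to the same bucket; instead the filter is periodically rebuilt ("refreshed")
 * from the cache's actual contents so that stale bits stop causing false snoops. */
void sf_refresh(const uint64_t *cached_addrs, size_t n)
{
    memset(filter, 0, sizeof(filter));
    for (size_t i = 0; i < n; i++)
        sf_insert(cached_addrs[i]);
}
```

    Between refreshes the filter can only over-approximate the set of cached lines, which is the inexactness the refresh algorithm is meant to bound.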

    INTEGRATED CIRCUIT CHIP TO SELECTIVELY PROVIDE TAG ARRAY FUNCTIONALITY OR CACHE ARRAY FUNCTIONALITY

    Publication Number: US20240202120A1

    Publication Date: 2024-06-20

    Application Number: US18083389

    Application Date: 2022-12-16

    CPC classification number: G06F12/0806 G06F2212/1016

    Abstract: Techniques and mechanisms for selectively configuring an integrated circuit (IC) chip to provide tag array functionality and/or cache array functionality. In an embodiment, an IC chip comprises a first array of memory cells, a second array of memory cells, and a cache controller. Based on whether the IC chip is coupled to another IC chip, selector circuitry of the IC chip configures one of multiple possible modes of the cache controller. A first mode of the multiple modes is to provide tag array functionality with the first array, and cache array functionality with the second memory cell array. A second mode of the multiple modes is to provide tag array functionality with the second memory cell array, and cache array functionality with a remote array of memory cells. In another embodiment, the cache controller is reconfigured to another mode based on a change to a power consumption characteristic.
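
    A minimal C sketch of the selection idea, assuming a hypothetical cache_controller structure with two modes; the detection input, mode names, and power threshold are illustrative assumptions, not the patent's actual interface.

```c
#include <stdbool.h>

/* Hypothetical controller modes mirroring the abstract: the on-die arrays can
 * serve as tags plus data, or as tags only for a larger cache array that lives
 * on a second, attached IC chip. */
enum cc_mode {
    MODE_LOCAL_TAG_AND_DATA,  /* first array = tags, second array = cache data  */
    MODE_REMOTE_DATA          /* second array = tags, remote array = cache data */
};

struct cache_controller {
    enum cc_mode mode;
};

/* Illustrative selector logic: pick a mode based on whether a companion chip
 * is detected as attached. */
void cc_select_mode(struct cache_controller *cc, bool companion_chip_present)
{
    cc->mode = companion_chip_present ? MODE_REMOTE_DATA : MODE_LOCAL_TAG_AND_DATA;
}

/* Illustrative reconfiguration when a power consumption characteristic
 * changes, e.g. dropping back to the local-only mode under a tight budget;
 * the 500 mW threshold is purely an assumption. */
void cc_on_power_change(struct cache_controller *cc, int power_budget_mw)
{
    if (power_budget_mw < 500)
        cc->mode = MODE_LOCAL_TAG_AND_DATA;
}
```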

    Transmit byte enable information over a data bus

    Publication Number: US10558602B1

    Publication Date: 2020-02-11

    Application Number: US16130748

    Application Date: 2018-09-13

    Inventor: Israel Diamand

    Abstract: A transmitter comprising an input data buffer to store a plurality of bytes received on a first interconnect; multiplexer circuitry coupled to the input data buffer; and an output buffer coupled to the multiplexer circuitry, a second interconnect, and a third interconnect. The multiplexer circuitry is to: receive byte enable information in the input data buffer; determine that one or more of the plurality of bytes stored in the input data buffer are invalid; store an indicator in the output buffer; store valid bytes of the plurality of bytes in the output buffer to transmit on the third interconnect; and store the byte enable information in the output buffer to transmit on the third interconnect.
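
    The packing step the abstract describes might look roughly like the following C sketch, assuming an 8-byte word and a hypothetical packed_out structure; the field names and widths are illustrative only.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical packed output-buffer entry: an indicator that byte-enable
 * information follows, the byte-enable bitmap itself, and only the valid
 * bytes of the original word (invalid bytes are not transmitted). */
struct packed_out {
    uint8_t has_byte_enables;   /* indicator stored in the output buffer  */
    uint8_t byte_enables;       /* one bit per byte of an 8-byte word     */
    uint8_t data[8];            /* valid bytes, densely packed            */
    size_t  ndata;              /* number of valid bytes actually stored  */
};

/* Illustrative mux step: drop bytes whose enable bit is clear, keep the
 * bitmap so the receiver can re-expand the word. */
void pack_bytes(const uint8_t bytes[8], uint8_t byte_enables,
                struct packed_out *out)
{
    out->has_byte_enables = (byte_enables != 0xFF); /* any invalid bytes? */
    out->byte_enables = byte_enables;
    out->ndata = 0;
    for (int i = 0; i < 8; i++)
        if (byte_enables & (1u << i))
            out->data[out->ndata++] = bytes[i];
}
```

    The receiver can use the transmitted byte-enable bitmap to place the densely packed valid bytes back into their original positions.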

    Programmable Power Management Agent
    Invention application

    Publication Number: US20170285703A1

    Publication Date: 2017-10-05

    Application Number: US15623536

    Application Date: 2017-06-15

    Abstract: In an embodiment, a processor includes a first core and a power management agent (PMA), coupled to the first core, to include a static table that stores a list of operations, and a plurality of columns each to specify a corresponding flow that includes a corresponding subset of the operations. Execution of each flow is associated with a corresponding state of the first core. The PMA includes a control register (CR) that includes a plurality of storage elements to receive one of a first value and a second value. The processor includes execution logic, responsive to a command to place the first core into a first state, to execute an operation of a first flow when a corresponding storage element stores the first value and to refrain from execution of an operation of the first flow when the corresponding element stores the second value. Other embodiments are described and claimed.
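
    A simplified C sketch of the gating scheme, assuming a hypothetical static table of operation callbacks, flows encoded as bitmasks over that table, and a control register with one bit per operation; all names and the three example operations are assumptions made for illustration.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_OPS 8   /* hypothetical number of rows in the static table */

/* Static table: one row per operation (names are illustrative). */
typedef void (*pma_op_fn)(void);
void op_gate_clocks(void)  { puts("gate clocks");  }
void op_save_state(void)   { puts("save state");   }
void op_drop_voltage(void) { puts("drop voltage"); }

pma_op_fn static_table[NUM_OPS] = {
    op_gate_clocks, op_save_state, op_drop_voltage, 0, 0, 0, 0, 0,
};

/* Each "column" is a flow: a bitmask selecting the subset of operations that
 * belongs to one core state (e.g. a particular sleep state). */
uint8_t flow_for_state[4] = {
    [0] = 0x00,   /* active: nothing to do            */
    [1] = 0x01,   /* shallow state: gate clocks only  */
    [2] = 0x03,   /* deeper: gate clocks + save state */
    [3] = 0x07,   /* deepest: all three operations    */
};

/* Control register: one storage element per operation; a set bit (the "first
 * value") allows the operation, a clear bit (the "second value") skips it. */
uint8_t control_register = 0x07;

/* On a command to place the core into a state, walk the flow and execute only
 * operations whose control-register element holds the first value. */
void pma_enter_state(int state)
{
    uint8_t flow = flow_for_state[state];
    for (int i = 0; i < NUM_OPS; i++)
        if ((flow & (1u << i)) && (control_register & (1u << i)) && static_table[i])
            static_table[i]();
}
```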

    Asymmetric set combined cache
    Granted invention patent, in force

    Publication Number: US09582430B2

    Publication Date: 2017-02-28

    Application Number: US14671927

    Application Date: 2015-03-27

    Abstract: Embodiments are generally directed to an asymmetric set combined cache including a direct-mapped cache portion and a multi-way cache portion. A processor may include one or more processing cores for processing of data, and a cache memory to cache data from a main memory for the one or more processing cores, the cache memory including a first cache portion, the first cache portion including a direct-mapped cache, and a second cache portion, the second cache portion including a multi-way cache. The cache memory includes asymmetric sets in the first cache portion and the second cache portion, the first cache portion being larger than the second cache portion. A coordinated replacement policy for the cache memory provides for replacement of data in the first cache portion and the second cache portion.
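
    The C sketch below is one plausible reading of the lookup and coordinated-replacement behavior, assuming hypothetical sizes for the two portions and a deliberately naive victim choice in the multi-way portion; none of these specifics come from the patent itself.

```c
#include <stdint.h>
#include <stdbool.h>

#define DM_SETS 4096   /* larger direct-mapped portion (hypothetical size) */
#define MW_SETS  256   /* smaller multi-way portion (hypothetical size)    */
#define MW_WAYS    8

struct line { uint64_t tag; bool valid; };

struct combined_cache {
    struct line dm[DM_SETS];            /* first portion: direct-mapped */
    struct line mw[MW_SETS][MW_WAYS];   /* second portion: multi-way    */
};

/* Lookup: probe the direct-mapped portion first, then the multi-way portion. */
bool cc_lookup(struct combined_cache *c, uint64_t addr)
{
    uint64_t tag = addr >> 6;   /* 64-byte lines */
    struct line *d = &c->dm[tag % DM_SETS];
    if (d->valid && d->tag == tag)
        return true;
    struct line *set = c->mw[tag % MW_SETS];
    for (int w = 0; w < MW_WAYS; w++)
        if (set[w].valid && set[w].tag == tag)
            return true;
    return false;
}

/* Coordinated replacement (illustrative): a fill always lands in the
 * direct-mapped portion; the block it displaces is demoted into the
 * multi-way portion instead of being dropped outright. */
void cc_fill(struct combined_cache *c, uint64_t addr)
{
    uint64_t tag = addr >> 6;
    struct line *d = &c->dm[tag % DM_SETS];
    if (d->valid) {
        struct line *set = c->mw[d->tag % MW_SETS];
        set[d->tag % MW_WAYS] = *d;   /* naive victim-way choice */
    }
    d->tag = tag;
    d->valid = true;
}
```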

    Programmable Power Management Agent
    Invention application, in force

    Publication Number: US20160252952A1

    Publication Date: 2016-09-01

    Application Number: US14634777

    Application Date: 2015-02-28

    Abstract: In an embodiment, a processor includes a first core and a power management agent (PMA), coupled to the first core, to include a static table that stores a list of operations, and a plurality of columns each to specify a corresponding flow that includes a corresponding subset of the operations. Execution of each flow is associated with a corresponding state of the first core. The PMA includes a control register (CR) that includes a plurality of storage elements to receive one of a first value and a second value. The processor includes execution logic, responsive to a command to place the first core into a first state, to execute an operation of a first flow when a corresponding storage element stores the first value and to refrain from execution of an operation of the first flow when the corresponding element stores the second value. Other embodiments are described and claimed.

    Shared read—using a request tracker as a temporary read cache

    Publication Number: US11422939B2

    Publication Date: 2022-08-23

    Application Number: US16727657

    Application Date: 2019-12-26

    Abstract: Disclosed embodiments relate to a shared read request (SRR) using a common request tracker (CRT) as a temporary cache. In one example, a multi-core system includes a memory and a memory controller to receive an SRR from a core when a Leader core is not yet identified, allocate a CRT entry and store the SRR therein, mark it as a Leader, send a read request to a memory address indicated by the SRR, and when read data returns from the memory, store the read data in the CRT entry, send the read data to the Leader core, and await receipt, unless already received, of another SRR from a Follower core, the other SRR having the same address as the SRR, then send the read data to the Follower core, and deallocate the CRT entry.
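
    A behavioral C sketch of the Leader/Follower flow, assuming a hypothetical fixed-depth CRT and printf stand-ins for the actual data returns to the cores; the entry layout and field names are illustrative assumptions.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define CRT_ENTRIES 16   /* hypothetical tracker depth */

struct crt_entry {
    bool     valid;
    bool     is_leader;      /* marked as Leader when the entry is allocated */
    uint64_t addr;
    int      leader_core;
    int      follower_core;  /* -1 until a Follower SRR to the same address arrives */
    bool     data_returned;
    uint64_t data;
};

struct crt_entry crt[CRT_ENTRIES];

/* Handle a shared read request (SRR) from a core. The first SRR to an address
 * becomes the Leader and triggers the memory read; a later SRR to the same
 * address is a Follower and waits on the existing entry. */
void crt_handle_srr(int core, uint64_t addr)
{
    for (int i = 0; i < CRT_ENTRIES; i++) {
        if (crt[i].valid && crt[i].addr == addr) {   /* Follower path */
            crt[i].follower_core = core;
            if (crt[i].data_returned) {              /* data already here: forward and free */
                printf("data 0x%llx -> follower core %d\n",
                       (unsigned long long)crt[i].data, core);
                crt[i].valid = false;
            }
            return;
        }
    }
    for (int i = 0; i < CRT_ENTRIES; i++) {
        if (!crt[i].valid) {                         /* Leader path: allocate and mark */
            crt[i] = (struct crt_entry){ .valid = true, .is_leader = true,
                                         .addr = addr, .leader_core = core,
                                         .follower_core = -1 };
            printf("issue memory read for 0x%llx\n", (unsigned long long)addr);
            return;
        }
    }
}

/* Handle read data returning from memory: store it in the CRT entry, send it
 * to the Leader, and either forward it to an already-waiting Follower and
 * deallocate, or keep the entry as a temporary read cache until one arrives. */
void crt_handle_fill(uint64_t addr, uint64_t data)
{
    for (int i = 0; i < CRT_ENTRIES; i++) {
        if (crt[i].valid && crt[i].addr == addr) {
            crt[i].data = data;
            crt[i].data_returned = true;
            printf("data 0x%llx -> leader core %d\n",
                   (unsigned long long)data, crt[i].leader_core);
            if (crt[i].follower_core >= 0) {
                printf("data 0x%llx -> follower core %d\n",
                       (unsigned long long)data, crt[i].follower_core);
                crt[i].valid = false;
            }
            return;
        }
    }
}
```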
