MANAGING GLOBAL CACHE COHERENCY IN A DISTRIBUTED SHARED CACHING FOR CLUSTERED FILE SYSTEMS
    Invention application (in force)

    Publication number: US20140181162A1

    Publication date: 2014-06-26

    Application number: US14132996

    Application date: 2013-12-18

    Abstract: Systems, methods, and computer program products are provided for managing global cache coherency in a distributed shared caching for clustered file systems (CFS). The CFS manages access permissions to the entire space of data segments by using the DSM module. In response to receiving a request to access one of the data segments, a calculation operation is performed to obtain the most recent contents of that data segment. The calculation operation performs one of: providing the most recent contents via communication with a remote DSM module, which obtains the data segment from an associated external cache memory; instructing, by the DSM module, that the data segment be read from storage; or determining that any existing contents of the data segment in the local external cache are the most recent contents.

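    The three branches of the calculation operation can be sketched as follows. This is a minimal illustration, not the patented implementation: the class and method names (`DSMModule`, `acquire`) and the dictionary-based caches are hypothetical stand-ins for the DSM module, the remote node's external cache, and backing storage.

    ```python
    from enum import Enum, auto

    class Source(Enum):
        """Where the most recent contents of a data segment were obtained."""
        LOCAL_CACHE = auto()
        REMOTE_CACHE = auto()
        STORAGE = auto()

    class DSMModule:
        """Hypothetical sketch of a DSM module's calculation operation."""
        def __init__(self, local_cache, remote_segments, storage):
            self.local_cache = local_cache        # segment_id -> latest contents held locally
            self.remote_segments = remote_segments  # segment_id -> contents held by a remote node
            self.storage = storage                # segment_id -> contents on disk

        def acquire(self, segment_id):
            """Return (source, contents) holding the most recent contents."""
            # Case 1: the local external cache already holds the latest contents.
            if segment_id in self.local_cache:
                return Source.LOCAL_CACHE, self.local_cache[segment_id]
            # Case 2: a remote DSM module holds the latest contents in its
            # external cache; fetch them over the cluster interconnect.
            if segment_id in self.remote_segments:
                contents = self.remote_segments.pop(segment_id)
                self.local_cache[segment_id] = contents
                return Source.REMOTE_CACHE, contents
            # Case 3: no cached copy is newer; instruct a read from storage.
            contents = self.storage[segment_id]
            self.local_cache[segment_id] = contents
            return Source.STORAGE, contents

    dsm = DSMModule(local_cache={"a": b"v1"},
                    remote_segments={"b": b"v2"},
                    storage={"a": b"v0", "b": b"v0", "c": b"v0"})
    src_a, _ = dsm.acquire("a")  # Source.LOCAL_CACHE: already cached locally
    src_b, _ = dsm.acquire("b")  # Source.REMOTE_CACHE: fetched from the remote node
    src_c, _ = dsm.acquire("c")  # Source.STORAGE: read from backing storage
    ```

    After a remote fetch or a storage read, subsequent requests for the same segment hit the local external cache until the DSM module invalidates the copy.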

    DURABLE TRANSACTIONS WITH STORAGE-CLASS MEMORY
    Invention application (pending, published)

    Publication number: US20140075086A1

    Publication date: 2014-03-13

    Application number: US13614735

    Application date: 2012-09-13

    CPC classification number: G06F11/00 G06F9/467 G06F12/0828 G06F12/0891

    Abstract: A method for conducting memory transactions includes receiving a transaction. The steps of the received transaction are performed in a memory buffer. While the transaction is in progress, the state of the memory buffer cache lines is set to pending and unstored. After all steps have been successfully performed, the state of the memory buffer cache lines is changed to complete and unstored. When it is determined that the memory buffer cache lines are to be written to the non-volatile main memory, their contents are written to the non-volatile main memory, and the state of the memory buffer cache lines is then changed to complete and stored. While the memory buffer cache lines are in the complete and unstored state, access to modify their contents is restricted.

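    The cache-line state machine described above (pending/unstored → complete/unstored → complete/stored, with modification restricted in the middle state) can be sketched as follows. All names (`DurableTransaction`, `step`, `write_back`) are hypothetical; a dictionary stands in for storage-class non-volatile main memory.

    ```python
    from enum import Enum

    class LineState(Enum):
        PENDING_UNSTORED = "pending/unstored"
        COMPLETE_UNSTORED = "complete/unstored"
        COMPLETE_STORED = "complete/stored"

    class DurableTransaction:
        """Hypothetical sketch of the cache-line state machine."""
        def __init__(self, nv_main_memory):
            self.nv_main_memory = nv_main_memory  # models storage-class memory
            self.buffer = {}                      # addr -> value (memory buffer)
            self.state = {}                       # addr -> LineState

        def step(self, addr, value):
            # Transaction steps execute in the memory buffer; touched lines
            # stay pending and unstored while the transaction is in progress.
            self.buffer[addr] = value
            self.state[addr] = LineState.PENDING_UNSTORED

        def commit(self):
            # All steps succeeded: lines become complete but not yet written back.
            for addr in self.state:
                self.state[addr] = LineState.COMPLETE_UNSTORED

        def modify(self, addr, value):
            # Complete-and-unstored lines must not be modified before write-back.
            if self.state.get(addr) is LineState.COMPLETE_UNSTORED:
                raise PermissionError("line awaits write-back; modification restricted")
            self.step(addr, value)

        def write_back(self):
            # Contents are written to non-volatile main memory, then marked stored.
            for addr, value in self.buffer.items():
                if self.state[addr] is LineState.COMPLETE_UNSTORED:
                    self.nv_main_memory[addr] = value
                    self.state[addr] = LineState.COMPLETE_STORED

    nvmm = {}
    tx = DurableTransaction(nvmm)
    tx.step(0x10, b"A")
    tx.step(0x20, b"B")
    tx.commit()      # both lines now complete/unstored; modification restricted
    tx.write_back()  # both lines written to nvmm and marked complete/stored
    ```

    The restriction on complete-and-unstored lines is what preserves durability: once a transaction has committed, its buffered results cannot be overwritten before they reach non-volatile memory.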

    Snoop filter and non-inclusive shared cache memory

    Publication number: US20130042078A1

    Publication date: 2013-02-14

    Application number: US13137359

    Application date: 2011-08-08

    Abstract: A data processing apparatus 2 includes a plurality of transaction sources 8, 10, each including a local cache memory. A shared cache memory 16 stores cache lines of data together with shared cache tag values. Snoop filter circuitry 14 stores snoop filter tag values tracking which cache lines of data are stored within the local cache memories. When a transaction is received for a target cache line of data, the snoop filter circuitry 14 compares the target tag value with the snoop filter tag values and the shared cache circuitry 16 compares the target tag value with the shared cache tag values. The shared cache circuitry 16 operates in a default non-inclusive mode. The shared cache memory 16 and the snoop filter 14 accordingly behave non-inclusively with respect to data storage within the shared cache memory 16, but inclusively with respect to tag storage, given the combined action of the snoop filter tag values and the shared cache tag values. Tag maintenance operations moving tag values between the snoop filter circuitry 14 and the shared cache memory 16 are performed atomically. The compare operations of the snoop filter circuitry 14 and the shared cache memory 16 are performed using interlocked parallel pipelines.
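    The combination of non-inclusive data storage with inclusive tag coverage can be sketched as follows. This is an illustrative model, not the patented hardware: the names (`SnoopFilteredSharedCache`, `fill_local`, `evict_local`) are hypothetical, and the interlocked parallel pipelines of the apparatus are modeled here as two sequential dictionary probes.

    ```python
    class SnoopFilteredSharedCache:
        """Sketch: a cache line's data lives either in a local cache or in the
        shared cache (non-inclusive data), but its tag is always tracked by
        either the snoop filter or the shared cache tags (inclusive tags)."""
        def __init__(self):
            self.snoop_tags = {}  # tag -> set of transaction sources holding the line
            self.shared = {}      # tag -> data held in the shared cache

        def lookup(self, target_tag):
            # In hardware both tag comparisons run in interlocked parallel
            # pipelines; here they are two dictionary probes.
            holders = self.snoop_tags.get(target_tag, set())
            shared_data = self.shared.get(target_tag)
            return holders, shared_data

        def fill_local(self, target_tag, source_id, data=None):
            # A transaction source fills the line into its local cache.
            # Atomically move the tag to the snoop filter: the shared cache
            # drops its (non-inclusive) data copy, but the tag survives.
            if target_tag in self.shared:
                data = self.shared.pop(target_tag)
            self.snoop_tags.setdefault(target_tag, set()).add(source_id)
            return data

        def evict_local(self, target_tag, source_id, data):
            # Last local copy evicted: the tag (and data) return to the
            # shared cache so tag coverage remains inclusive.
            holders = self.snoop_tags.get(target_tag, set())
            holders.discard(source_id)
            if not holders:
                self.snoop_tags.pop(target_tag, None)
                self.shared[target_tag] = data

    llc = SnoopFilteredSharedCache()
    llc.shared[0xA0] = b"line"
    data = llc.fill_local(0xA0, source_id=0)       # tag moves to the snoop filter
    llc.evict_local(0xA0, source_id=0, data=data)  # tag and data return to the shared cache
    ```

    Because every line is tagged in exactly one of the two structures at any time, a miss in both tag arrays guarantees no cached copy exists anywhere, without duplicating line data between the local and shared caches.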
