-
Publication number: WO2005055040A3
Publication date: 2005-12-08
Application number: PCT/EP2004052800
Application date: 2004-11-04
Applicant: IBM , IBM FRANCE , CHEN JAMES C , HUYNH MINH-NGOC L , KALOS MATTHEW J , FUNG CHUNG M
Inventor: CHEN JAMES C , HUYNH MINH-NGOC L , KALOS MATTHEW J , FUNG CHUNG M
CPC classification number: G06F3/065 , G06F3/0611 , G06F3/067 , G06F3/0689 , H04L47/10
Abstract: Methods, system and computer program product are provided to improve the efficiency of data transfers in a PPRC environment. Any or all of three features may be implemented, each of which reduces the number of round trips required for the exchange of handshaking, data and control information. A first feature includes disabling the "transfer ready" acknowledgment which normally occurs between a primary storage controller and a secondary storage controller. A second feature includes pre-allocating payload and data buffers in the secondary storage controller. A third feature includes packaging write control information with a write command in an extended command descriptor block (CDB). Such a step eliminates the need for a separate transmission of the write control information. The CDB is transmitted along with a data block from the primary storage controller to the secondary storage controller and placed in the respective, pre-allocated buffers. Data may also be pipelined to the secondary. By decreasing the response time for data transfers, the distance separating the primary and secondary storage controllers may be increased.
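As a rough illustration of the third feature, the sketch below packs write control fields together with a write command into a single extended CDB that is sent in one trip with the data block. The opcode, the field layout (track id, sequence number) and the link.send interface are assumptions made for illustration, not the format defined in the patent.

# Minimal sketch: package write control information with the write command
# in one extended command descriptor block (CDB), so the secondary receives
# command, control data and payload without a separate control transmission.
import struct

WRITE_OPCODE = 0x8A  # placeholder opcode, for illustration only

def build_extended_cdb(lba, block_count, track_id, sequence_number):
    # Pack opcode, target LBA, block count and hypothetical PPRC-style
    # control fields (track id, sequence number) into a single descriptor.
    return struct.pack(">BQIHH", WRITE_OPCODE, lba, block_count,
                       track_id, sequence_number)

def send_write(link, cdb, data_block):
    # Transmit the CDB together with the data block; no separate
    # control-information exchange and no "transfer ready" round trip.
    link.send(cdb + data_block)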
-
Publication number: AT503219T
Publication date: 2011-04-15
Application number: AT04798160
Application date: 2004-11-04
Applicant: IBM
Inventor: CHEN JAMES C , HUYNH MINH-NGOC L , KALOS MATTHEW J , FUNG CHUNG M
Abstract: Methods, system and computer program product are provided to improve the efficiency of data transfers in a PPRC environment. Any or all of three features may be implemented, each of which reduces the number of round trips required for the exchange of handshaking, data and control information. A first feature includes disabling the "transfer ready" acknowledgment which normally occurs between a primary storage controller and a secondary storage controller. A second feature includes pre-allocating payload and data buffers in the secondary storage controller. A third feature includes packaging write control information with a write command in an extended command descriptor block (CDB). Such a step eliminates the need for a separate transmission of the write control information. The CDB is transmitted along with a data block from the primary storage controller to the secondary storage controller and placed in the respective, pre-allocated buffers. Data may also be pipelined to the secondary. By decreasing the response time for data transfers, the distance separating the primary and secondary storage controllers may be increased.
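The sketch below illustrates, under assumed names, how the first two features might look on the secondary side: payload and data buffers are pre-allocated when the link is established, and pipelined writes from the primary are placed into them directly, with no per-write "transfer ready" acknowledgment. The SecondaryController class and its methods are illustrative, not taken from the patent.

# Sketch of a secondary controller that pre-allocates buffers so pipelined
# writes can be accepted immediately, without a "transfer ready" handshake.
class SecondaryController:
    def __init__(self, buffer_count, buffer_size):
        # Pre-allocate payload (CDB) and data buffers at link setup time.
        self.cdb_buffers = [bytearray(64) for _ in range(buffer_count)]
        self.data_buffers = [bytearray(buffer_size) for _ in range(buffer_count)]
        self.next_slot = 0

    def receive_write(self, cdb, data_block):
        # Place the incoming CDB and data directly into pre-allocated
        # buffers instead of allocating on arrival and replying
        # "transfer ready" before accepting the data.
        slot = self.next_slot % len(self.data_buffers)
        self.cdb_buffers[slot][:len(cdb)] = cdb
        self.data_buffers[slot][:len(data_block)] = data_block
        self.next_slot += 1
        return slot  # final status is returned once the write completes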
-
Publication number: DE69504918D1
Publication date: 1998-10-29
Application number: DE69504918
Application date: 1995-06-16
Applicant: IBM
Inventor: CHEN JAMES C , GLIDER JOSEPH S , SHIPMAN LLOYD R , STAMNESS JESSE I
Abstract: The present system may be utilized to minimize access performance penalties in memory subsystems which utilize redundant arrays of disk memory devices. Redundant arrays of disk memory devices provide levels of reliability which are not available with single storage devices; however, the redundancy carries with it an access performance degradation due to the requirement that such systems write data segments and parity elements to the array each time an application updates data within the system. A large nonvolatile cache is therefore provided in association with a redundant array of disk memory devices. Each time a data segment is written or read, the data segment is staged from the array to the nonvolatile cache if the data segment is not already within the cache. Additionally, if the operation is an update, a parity element associated with the data segment to be updated is also staged to the cache with the existing data segment content. An updated parity element is then calculated based upon the updated data, the existing data and the existing parity element. Data segments and associated parity elements are then maintained in the cache for future reading and updates until the number of updated data segments within the cache exceeds a predetermined value. Thereafter, selected data segments and associated parity elements are destaged from the cache to the array based upon a "Least Recently Utilized" (LRU) or "minimum seek" algorithm.
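A minimal sketch of the parity update path described above, assuming a dict-based cache and an array object with read_data/read_parity methods (all names are illustrative): the existing data segment and its parity are staged into the nonvolatile cache if absent, and the updated parity is computed as old parity XOR old data XOR new data.

# Sketch: stage data and parity into the nonvolatile cache, then compute
# the updated parity from the new data, old data and old parity.
def xor_blocks(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def update_segment(cache, array, segment_id, new_data):
    if segment_id not in cache:
        # Stage the existing data segment and its parity from the array.
        cache[segment_id] = {
            "data": array.read_data(segment_id),
            "parity": array.read_parity(segment_id),
        }
    entry = cache[segment_id]
    # new_parity = old_parity XOR old_data XOR new_data
    entry["parity"] = xor_blocks(xor_blocks(entry["parity"], entry["data"]),
                                 new_data)
    entry["data"] = new_data  # kept in cache for later reads and updates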
-
Publication number: DE602004031972D1
Publication date: 2011-05-05
Application number: DE602004031972
Application date: 2004-11-04
Applicant: IBM
Inventor: CHEN JAMES C , HUYNH MINH-NGOC L , KALOS MATTHEW J , FUNG CHUNG M
-
Publication number: DE69504918T2
Publication date: 1999-05-27
Application number: DE69504918
Application date: 1995-06-16
Applicant: IBM
Inventor: CHEN JAMES C , GLIDER JOSEPH S , SHIPMAN LLOYD R , STAMNESS JESSE I
Abstract: The present system may be utilized to minimize access performance penalties in memory subsystems which utilize redundant arrays of disk memory devices. Redundant arrays of disk memory devices provide levels of reliability which are not available with single storage devices; however, the redundancy carries with it an access performance degradation due to the requirement that such systems write data segments and parity elements to the array each time an application updates data within the system. A large nonvolatile cache is therefore provided in association with a redundant array of disk memory devices. Each time a data segment is written or read, the data segment is staged from the array to the nonvolatile cache if the data segment is not already within the cache. Additionally, if the operation is an update, a parity element associated with the data segment to be updated is also staged to the cache with the existing data segment content. An updated parity element is then calculated based upon the updated data, the existing data and the existing parity element. Data segments and associated parity elements are then maintained in the cache for future reading and updates until the number of updated data segments within the cache exceeds a predetermined value. Thereafter, selected data segments and associated parity elements are destaged from the cache to the array based upon a "Least Recently Utilized" (LRU) or "minimum seek" algorithm.
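A minimal sketch of the threshold-driven destage policy described above, keeping updated segments in LRU order with an OrderedDict; the NvCache class, its method names and the array write interface are assumptions made for illustration.

# Sketch: updated segments stay in the nonvolatile cache until their count
# exceeds a limit, then the least recently used entries and their parity
# elements are destaged to the array.
from collections import OrderedDict

class NvCache:
    def __init__(self, array, destage_threshold):
        self.array = array
        self.threshold = destage_threshold
        self.dirty = OrderedDict()  # segment_id -> (data, parity), LRU order

    def mark_updated(self, segment_id, data, parity):
        # Re-inserting moves the entry to the most-recently-used position.
        self.dirty.pop(segment_id, None)
        self.dirty[segment_id] = (data, parity)
        while len(self.dirty) > self.threshold:
            self._destage_lru()

    def _destage_lru(self):
        # Destage the least recently used segment and its parity element.
        segment_id, (data, parity) = self.dirty.popitem(last=False)
        self.array.write_data(segment_id, data)
        self.array.write_parity(segment_id, parity)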