-
Publication No.: JP2004246898A
Publication Date: 2004-09-02
Application No.: JP2004034166
Application Date: 2004-02-10
Applicant: International Business Machines Corporation
Inventor: DAY MICHAEL NORMAN , JOHNS CHARLES RAY , KAHLE JAMES ALLAN , LIU PEICHUN PETER , SHIPPY DAVID J , TRUONG THUONG QUANG
CPC classification number: G06F12/126
Abstract: PROBLEM TO BE SOLVED: To provide a management system and method for streaming data in a cache.
SOLUTION: A computer system 100 comprises a processor 102, a cache 104, and a system memory 110. The processor 102 issues a data request for streaming data, which consists of one or more small data portions. The system memory 110 has a specific area for storing the streaming data. The cache 104 has a predefined area locked for the streaming data and is connected to a cache controller 106 that communicates with the processor 102 and the system memory 110. When at least one small data portion of the streaming data is not found in the predefined area of the cache, that portion is transferred from the specific area of the system memory 110 to the predefined area of the cache 104.
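The locked-region policy described above can be sketched in a few lines. This is a minimal illustrative model, not the patent's implementation: the class name, the FIFO eviction choice, and the dictionary-based "memory" are all assumptions made for the sketch.

```python
# Hypothetical sketch of the policy in the abstract above: the cache
# reserves (locks) a region for streaming data, and on a miss the
# requested portion is filled from a dedicated area of system memory.
# Names and the FIFO eviction are illustrative assumptions.

class StreamingCache:
    def __init__(self, locked_slots):
        self.locked_slots = locked_slots  # capacity reserved for streaming data
        self.region = {}                  # locked region: portion id -> data

    def read(self, portion_id, system_memory):
        if portion_id in self.region:     # hit in the locked region
            return self.region[portion_id]
        # Miss: transfer the portion from the streaming area of memory.
        if len(self.region) >= self.locked_slots:
            # Evict the oldest portion (FIFO, by dict insertion order).
            self.region.pop(next(iter(self.region)))
        data = system_memory[portion_id]
        self.region[portion_id] = data
        return data

memory = {0: "chunk-A", 1: "chunk-B", 2: "chunk-C"}
cache = StreamingCache(locked_slots=2)
cache.read(0, memory)   # miss: filled from memory
cache.read(0, memory)   # hit: served from the locked region
```

Because the region is size-limited, a long stream cycles through the locked slots without displacing the rest of the cache, which is the point of locking a predefined area for streaming data.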
COPYRIGHT: (C)2004,JPO&NCIPI
-
Publication No.: CA2142799A1
Publication Date: 1995-11-20
Application No.: CA2142799
Application Date: 1995-02-17
Applicant: IBM
Inventor: SHIPPY DAVID J , SHULER DAVID B
Abstract: A memory system in which data retrieval is initiated simultaneously in both an L2 cache and main memory, so that memory latency associated with arbitration, DRAM address translation, and the like is minimized when the data sought by the processor is not in the L2 cache (a miss). The invention allows any memory access to be interrupted in the storage control unit before any memory signals are activated. The L2 and memory access controls reside in a single component, the storage control unit (SCU). Both the L2 and the memory have a unique port into the CPU, which allows data to be transferred directly and eliminates the overhead of storing the data in an intermediate device such as a cache or memory controller.
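The latency argument above can be illustrated with a toy timing model. This is a sketch under stated assumptions, not the SCU design: the latency values and function names are invented for illustration.

```python
# Illustrative model of the parallel-start access described above: the
# storage control unit starts the L2 lookup and the main-memory access
# together, and interrupts the memory access on an L2 hit before any
# memory signals are activated. Latencies are assumed values.

L2_LATENCY = 2    # cycles (assumption for the sketch)
MEM_LATENCY = 10  # cycles (assumption for the sketch)

def access(addr, l2, memory):
    # Both accesses are conceptually started at cycle 0.
    if addr in l2:
        # L2 hit: the in-flight memory access is cancelled.
        return l2[addr], L2_LATENCY
    # L2 miss: the memory access is already in flight, so the miss
    # costs only the memory latency, not L2 latency + memory latency.
    data = memory[addr]
    l2[addr] = data  # fill the L2 for subsequent accesses
    return data, MEM_LATENCY

l2 = {}
memory = {0x100: 42}
data, cycles = access(0x100, l2, memory)    # miss: full memory latency
data2, cycles2 = access(0x100, l2, memory)  # hit: L2 latency only
```

The key property is that a miss costs `MEM_LATENCY` rather than `L2_LATENCY + MEM_LATENCY`, because the memory access was not serialized behind the L2 lookup.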
-
Publication No.: CA1318037C
Publication Date: 1993-05-18
Application No.: CA598607
Application Date: 1989-05-03
Applicant: IBM
Inventor: PECHANEK GERALD G , SHIPPY DAVID J , SNEDAKER MARK C , WOODWARD SANDRA S
Abstract: An input/output bus for a data processing system with extended addressing capabilities, a variable-length handshake that accommodates the different delays associated with various sets of logic, and a two-part address field that identifies both a bus unit and a channel. Units can disconnect from the bus during internal processing to free the bus for other activity. A unit removes its busy signal before dropping the data lines, allowing a bus arbitration sequence to occur without slowing down the bus.
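The two-part address field can be pictured as a simple bit-field split. The 8-bit/8-bit partition below is an assumption made for the sketch; the patent does not specify field widths here.

```python
# Illustrative decode of a two-part address field as described above:
# upper bits select a bus unit, lower bits select a channel within it.
# The 8/8 bit split is an assumption, not taken from the patent.

UNIT_BITS = 8
CHANNEL_BITS = 8

def encode(unit, channel):
    """Pack a bus-unit id and channel id into one address field."""
    return (unit << CHANNEL_BITS) | channel

def decode(addr):
    """Unpack an address field into (bus unit, channel)."""
    unit = (addr >> CHANNEL_BITS) & ((1 << UNIT_BITS) - 1)
    channel = addr & ((1 << CHANNEL_BITS) - 1)
    return unit, channel
```

With such a split, bus logic can route a transaction to a unit by inspecting only the upper bits, leaving channel selection to the unit itself.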
-
Publication No.: PL176554B1
Publication Date: 1999-06-30
Application No.: PL31699894
Application Date: 1994-12-27
Applicant: IBM
Inventor: SHIPPY DAVID J , SHULER DAVID B
IPC: G06F12/08
Abstract: A memory system in which data retrieval is initiated simultaneously in both an L2 cache and main memory, so that memory latency associated with arbitration, DRAM address translation, and the like is minimized when the data sought by the processor is not in the L2 cache (a miss). The invention allows any memory access to be interrupted in the storage control unit before any memory signals are activated. The L2 and memory access controls reside in a single component, the storage control unit (SCU). Both the L2 and the memory have a unique port into the CPU, which allows data to be transferred directly and eliminates the overhead of storing the data in an intermediate device such as a cache or memory controller.
-
Publication No.: CA2103767A1
Publication Date: 1994-05-10
Application No.: CA2103767
Application Date: 1993-08-10
Applicant: IBM
Inventor: ARIMILLI RAVI K , MAULE WARREN E , SHIPPY DAVID J , SIEGEL DAVID W
IPC: G06F13/12 , G06F12/08 , G06F12/0846 , G06F12/0862 , G06F12/0875 , G06F13/20
Abstract: CACHE ARCHITECTURE FOR HIGH SPEED MEMORY-TO-I/O DATA TRANSFERS. A computer architecture and method of control for accomplishing low-speed-memory to high-speed-I/O data transfers. An I/O cache is connected between the memory data bus and a system I/O data bus, and is responsive to a storage control unit that manages data transfers over the system I/O bus. The relatively lower speed of the system memory is offset by the larger width of the memory data bus compared to the system I/O data bus. The I/O cache is used to prefetch memory data during read cycles; the prefetch operates concurrently with the transfer of previously prefetched data from the I/O cache to I/O control units on the system I/O data bus. During writes from I/O to system memory, the I/O cache buffers memory access interferences initiated by the processor. The invention permits the use of a conventional, relatively slow main memory in conjunction with a high-speed processor and high-speed I/O system.
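The prefetch-while-draining overlap described above can be sketched as a double-buffer loop. This is a sequential model of behavior that would be concurrent in hardware; the function name, string-based "lines," and width parameters are assumptions for the sketch.

```python
# Sketch of the read-path overlap in the abstract above: while the
# previously prefetched wide memory line drains to the narrower I/O
# bus, the next wide line is prefetched. Modeled sequentially here;
# in hardware the two steps proceed concurrently.

def io_read(memory_lines, line_width, io_width):
    """Yield io_width-sized words from wide memory lines, draining
    each line only after the next one has been prefetched."""
    out = []
    buffer = None
    for line in memory_lines:
        prefetched = line  # wide fetch from the (slow) memory bus
        if buffer is not None:
            # Drain the previous line as narrow I/O-bus words; in
            # hardware this overlaps with the prefetch above.
            out.extend(buffer[i:i + io_width]
                       for i in range(0, line_width, io_width))
        buffer = prefetched
    if buffer is not None:  # drain the final prefetched line
        out.extend(buffer[i:i + io_width]
                   for i in range(0, line_width, io_width))
    return out

words = io_read(["AABB", "CCDD"], line_width=4, io_width=2)
```

Because each wide line supplies several narrow I/O words, the memory has several I/O-bus cycles in which to complete the next prefetch, which is how the wider memory bus offsets its lower speed.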
-