-
Publication No.: DE112013006339T5
Publication Date: 2015-09-17
Application No.: DE112013006339
Application Date: 2013-12-02
Applicant: IBM
IPC: H03M7/40
Abstract: A mechanism is provided in a data processing system for pipelined compression of multi-byte frames. The mechanism combines a current cycle of data in an input data stream with at least one next cycle of data in the input data stream to form a frame of data. The mechanism identifies a plurality of matches in a plurality of dictionary memories. The mechanism identifies a subset of matches, among the plurality of matches, that provides a best coverage of the current cycle of data. The mechanism encodes the frame of data into an encoded output data stream.
-
Publication No.: CA2505610A1
Publication Date: 2004-06-24
Application No.: CA2505610
Application Date: 2003-11-21
Applicant: IBM
Inventor: KAHLE JAMES ALLAN , TRUONG THUONG QUANG , JOHNS CHARLES RAY , SHIPPY DAVID , HOFSTEE HARM PETER , DAY MICHAEL NORMAN
Abstract: Memory management in a computer system is improved by preventing a subset of address translation information from being replaced with other types of address translation information in a cache memory reserved for storing such address translation information for faster access by a CPU. This way, the CPU can identify the subset of address translation information stored in the cache.
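The abstract describes reserving a protected subset of address translation entries so that other translations cannot displace them from the translation cache. Below is a minimal C sketch of one way such a protected-class replacement policy could look; the entry layout, the 64-entry size, and all names are illustrative assumptions, not taken from the patent.

/* Illustrative sketch: a TLB-like cache whose replacement policy never
 * evicts entries marked as a protected class of address translation
 * information.  Names and sizes are assumptions, not from the patent. */
#include <stdint.h>
#include <stdbool.h>

#define TLB_ENTRIES 64

typedef struct {
    uint64_t vpn;        /* virtual page number */
    uint64_t ppn;        /* physical page number */
    bool     valid;
    bool     pinned;     /* member of the protected subset: never evicted */
} tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];
static unsigned next_victim;

/* Choose a victim slot, skipping pinned (protected) entries. */
static int tlb_pick_victim(void)
{
    for (unsigned tried = 0; tried < TLB_ENTRIES; tried++) {
        unsigned slot = next_victim;
        next_victim = (next_victim + 1) % TLB_ENTRIES;
        if (!tlb[slot].valid || !tlb[slot].pinned)
            return (int)slot;
    }
    return -1;  /* every slot holds a protected translation */
}

/* Install a translation; 'pin' marks it as part of the protected subset. */
bool tlb_insert(uint64_t vpn, uint64_t ppn, bool pin)
{
    int slot = tlb_pick_victim();
    if (slot < 0)
        return false;
    tlb[slot] = (tlb_entry_t){ .vpn = vpn, .ppn = ppn,
                               .valid = true, .pinned = pin };
    return true;
}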
-
Publication No.: CA2505610C
Publication Date: 2009-06-23
Application No.: CA2505610
Application Date: 2003-11-21
Applicant: IBM
Inventor: DAY MICHAEL NORMAN , HOFSTEE HARM PETER , TRUONG THUONG QUANG , KAHLE JAMES ALLAN , JOHNS CHARLES RAY , SHIPPY DAVID
Abstract: Memory management in a computer system is improved by preventing a subset of address translation information from being replaced with other types of address translation information in a cache memory reserved for storing such address translation information for faster access by a CPU. This way, the CPU can identify the subset of address translation information stored in the cache.
-
Publication No.: DE602006003869D1
Publication Date: 2009-01-08
Application No.: DE602006003869
Application Date: 2006-01-25
Applicant: IBM
Inventor: TAKAHASHI OSAMU , HOFSTEE HARM PETER , FLACHS BRIAN KING , DHONG SANG HOO
IPC: G06F13/16
Abstract: A system for a processor with memory with combined line and word access is presented. The system performs narrow read/write memory accesses and wide read/write memory accesses to the same memory bank, using multiplexers and latches to direct data. The system processes 16-byte load/store requests using a narrow read/write memory access and also processes 128-byte DMA and instruction fetch requests using a wide read/write memory access. During DMA requests, the system writes/reads sixteen DMA operations to memory in one instruction cycle. As a result, the memory is available to process load/store or instruction fetch requests during the fifteen other instruction cycles.
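As a rough illustration of the combined line and word access described above, the following C model serves both 16-byte (narrow) and 128-byte (wide) requests from the same bank. Only the two access widths come from the abstract; the bank geometry and function names are assumptions.

/* Illustrative model of a memory bank serving both narrow (16-byte
 * load/store) and wide (128-byte DMA / instruction fetch) accesses. */
#include <stdint.h>
#include <string.h>

#define LINE_BYTES 128   /* wide access: one full line     */
#define WORD_BYTES  16   /* narrow access: one quadword    */
#define NUM_LINES  256

static uint8_t bank[NUM_LINES][LINE_BYTES];

/* Narrow read: selects one 16-byte word within a line (mux-like select). */
void read_word(unsigned line, unsigned word, uint8_t out[WORD_BYTES])
{
    memcpy(out, &bank[line][word * WORD_BYTES], WORD_BYTES);
}

/* Wide read: transfers the whole 128-byte line in one access, so a DMA
 * burst occupies the bank for a single cycle instead of eight. */
void read_line(unsigned line, uint8_t out[LINE_BYTES])
{
    memcpy(out, bank[line], LINE_BYTES);
}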
-
Publication No.: CA2367793A1
Publication Date: 2002-08-22
Application No.: CA2367793
Application Date: 2002-01-15
Applicant: IBM
Inventor: NAIR RAVI , HOFSTEE HARM PETER
Abstract: According to one embodiment, a multiprocessing system includes a first processor, a second processor, and compare logic. The first processor is operable to compute first results responsive to instructions, the second processor is operable to compute second results responsive to the instructions, and the compare logic is operable to check at checkpoints for matching of the results. Each of the processors has a first register for storing one of the processor's results, and the register has a stack of shadow registers. The processor is operable to shift a current one of the processor's results from the first register into the top shadow register, so that an earlier one of the processor's results can be restored from one of the shadow registers to the first register responsive to the compare logic determining that the first and second results mismatch. It is advantageous that the shadow register stack is closely coupled to its corresponding register, which provides for fast restoration of results. In a further aspect of an embodiment, each processor has a signature generator and a signature storage unit. The signature generator and storage units are operable to cooperatively compute a cumulative signature for a sequence of the processor's results, and the processor is operable to store the cumulative signature in the signature storage unit pending the match or mismatch determination by the compare logic. The checking for matching of the results includes the compare logic comparing the cumulative signatures of each respective processor. It is faster, and therefore advantageous, to check respective cumulative signatures at intervals rather than to check each individual result.
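A minimal C sketch of the checkpoint-and-rollback idea in this abstract follows: each committed result is pushed onto a shadow stack and folded into a cumulative signature, and the two processors' signatures are compared at checkpoints. The stack depth, the FNV-style signature function, and all names are assumptions for illustration.

/* Illustrative sketch: shadow-register stack plus cumulative signature
 * checking for a pair of redundant processors. */
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define SHADOW_DEPTH 4

typedef struct {
    uint64_t reg;                   /* architected result register     */
    uint64_t shadow[SHADOW_DEPTH];  /* stack of earlier results        */
    uint64_t signature;             /* cumulative signature of results */
} core_state_t;

/* Commit a new result: push the current value onto the shadow stack and
 * fold the result into the cumulative signature. */
void commit_result(core_state_t *c, uint64_t result)
{
    memmove(&c->shadow[1], &c->shadow[0],
            (SHADOW_DEPTH - 1) * sizeof(uint64_t));
    c->shadow[0] = c->reg;
    c->reg = result;
    c->signature = (c->signature * 1099511628211ULL) ^ result;  /* FNV-style */
}

/* Checkpoint: compare signatures instead of every individual result. */
bool checkpoint_matches(const core_state_t *a, const core_state_t *b)
{
    return a->signature == b->signature;
}

/* On mismatch, restore an earlier result from the shadow stack. */
void rollback(core_state_t *c, unsigned depth)
{
    if (depth < SHADOW_DEPTH)
        c->reg = c->shadow[depth];
}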
-
Publication No.: GB2521082A
Publication Date: 2015-06-10
Application No.: GB201506285
Application Date: 2013-12-02
Applicant: IBM
Abstract: A mechanism is provided in a data processing system for pipelined compression of multi- byte frames. The mechanism combines a current cycle of data in an input data stream with at least a portion of a next cycle of data in the input data stream to form a frame of data. The mechanism identifies a plurality of matches in a plurality of dictionary memories. Each match matches a portion of a given substring in the frame of data. The mechanism identifies a subset of matches from the plurality of matches that provides a best coverage of the current cycle of data. The mechanism encodes the frame of data into an encoded output data stream.
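The following C sketch illustrates the frame-and-coverage idea in the abstract: the current cycle of data is extended with lookahead bytes from the next cycle, substrings are looked up in a small dictionary, and the current-cycle bytes covered by matches are recorded. The cycle width, hash function, fixed match length, and greedy coverage are assumptions; the patented mechanism uses multiple dictionary memories and a more elaborate match selection.

/* Illustrative sketch of frame formation and match coverage for one cycle
 * of input data.  Sizes, the hash, and the policy are assumptions. */
#include <stdint.h>
#include <string.h>

#define CYCLE_BYTES   8   /* bytes delivered per cycle (assumed)       */
#define LOOKAHEAD     8   /* bytes borrowed from the next cycle        */
#define FRAME_BYTES   (CYCLE_BYTES + LOOKAHEAD)
#define DICT_ENTRIES  256
#define MATCH_LEN     4   /* fixed-length dictionary matches (assumed) */

static uint8_t dict[DICT_ENTRIES][MATCH_LEN];

static unsigned hash4(const uint8_t *p)
{
    return (p[0] * 33 + p[1] * 7 + p[2] * 3 + p[3]) % DICT_ENTRIES;
}

/* Mark which bytes of the current cycle are covered by dictionary matches
 * starting at each offset of the frame; uncovered bytes would be emitted
 * as literals by the encoder. */
unsigned cover_current_cycle(const uint8_t frame[FRAME_BYTES],
                             uint8_t covered[CYCLE_BYTES])
{
    unsigned matched = 0;
    memset(covered, 0, CYCLE_BYTES);
    for (unsigned off = 0; off < CYCLE_BYTES; off++) {
        unsigned h = hash4(&frame[off]);
        if (memcmp(dict[h], &frame[off], MATCH_LEN) == 0) {
            for (unsigned i = off; i < off + MATCH_LEN && i < CYCLE_BYTES; i++)
                if (!covered[i]) { covered[i] = 1; matched++; }
        }
        memcpy(dict[h], &frame[off], MATCH_LEN);  /* update dictionary */
    }
    return matched;  /* how many current-cycle bytes the matches cover */
}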
-
Publication No.: GB2489562A
Publication Date: 2012-10-03
Application No.: GB201204629
Application Date: 2012-03-16
Applicant: IBM
Inventor: LI JIAN , CRAIK CHRISTOPHER , HOFSTEE HARM PETER , JAMSEK DAMIR ANTHONY
Abstract: A method and computer program are provided for performing approximate run-ahead computations. A first group of compute engines (330) is selected to execute full computations (305) on a full set of input data (360). A second, preferably smaller, group of compute engines (340) is selected to execute approximate computations on a sampled subset of the input data. A third group of compute engines (350) is selected to compute a difference in computation results between first computation results generated by the first group of compute engines and second computation results generated by the second group of compute engines. The second group of compute engines is then reconfigured based on the difference generated by the third group of compute engines. Reconfiguration of the second group may be based on the accuracy of the approximate computations and a measure of confidence, using available compute engines (370). Selection of the compute engines may be based on performance capabilities, current workloads, or physical affinity. Compute engines may be functional units within a processor, processing devices, or special purpose accelerators.
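As a rough sketch of the three-group arrangement described above, the C code below runs a full computation, an approximate computation on a sampled subset, and a comparison step that adjusts the sampling stride. The mean-value workload, the error thresholds, and all function names are assumptions for illustration only.

/* Illustrative sketch: full vs. sampled computation with a feedback step
 * that reconfigures the sampling density. */
#include <stddef.h>
#include <math.h>

/* Group 1: full computation over all input data (here: a mean). */
static double full_compute(const double *data, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += data[i];
    return sum / (double)n;
}

/* Group 2: approximate computation over every 'stride'-th element. */
static double approx_compute(const double *data, size_t n, size_t stride)
{
    double sum = 0.0;
    size_t count = 0;
    for (size_t i = 0; i < n; i += stride, count++)
        sum += data[i];
    return count ? sum / (double)count : 0.0;
}

/* Group 3: compare results and reconfigure the sampling stride so the
 * approximation stays within an error bound. */
size_t reconfigure_stride(double full, double approx, size_t stride)
{
    double err = fabs(full - approx) / (fabs(full) + 1e-12);
    if (err > 0.05 && stride > 1)
        return stride / 2;   /* too inaccurate: sample more densely  */
    if (err < 0.01)
        return stride * 2;   /* plenty accurate: sample more sparsely */
    return stride;
}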
-
Publication No.: DE60320026T2
Publication Date: 2009-05-14
Application No.: DE60320026
Application Date: 2003-11-21
Applicant: IBM
Inventor: DAY MICHAEL NORMAN , HOFSTEE HARM PETER , JOHNS CHARLES RAY , KAHLE JAMES ALLAN , TRUONG THUONG QUANG , SHIPPY DAVID
Abstract: Memory management in a computer system is improved by preventing a subset of address translation information from being replaced with other types of address translation information in a cache memory reserved for storing such address translation information for faster access by a CPU. This way, the CPU can identify the subset of address translation information stored in the cache.
-
Publication No.: AT415664T
Publication Date: 2008-12-15
Application No.: AT06707835
Application Date: 2006-01-25
Applicant: IBM
Inventor: TAKAHASHI OSAMU , HOFSTEE HARM PETER , FLACHS BRIAN KING , DHONG SANG HOO
IPC: G06F13/16
Abstract: A system for a processor with memory with combined line and word access is presented. The system performs narrow read/write memory accesses and wide read/write memory accesses to the same memory bank, using multiplexers and latches to direct data. The system processes 16-byte load/store requests using a narrow read/write memory access and also processes 128-byte DMA and instruction fetch requests using a wide read/write memory access. During DMA requests, the system writes/reads sixteen DMA operations to memory in one instruction cycle. As a result, the memory is available to process load/store or instruction fetch requests during the fifteen other instruction cycles.
-
Publication No.: DE60320026D1
Publication Date: 2008-05-08
Application No.: DE60320026
Application Date: 2003-11-21
Applicant: IBM
Inventor: DAY MICHAEL NORMAN , HOFSTEE HARM PETER , JOHNS CHARLES RAY , KAHLE JAMES ALLAN , TRUONG THUONG QUANG , SHIPPY DAVID
Abstract: Memory management in a computer system is improved by preventing a subset of address translation information from being replaced with other types of address translation information in a cache memory reserved for storing such address translation information for faster access by a CPU. This way, the CPU can identify the subset of address translation information stored in the cache.