Multiprocessor system and method therefor
    1.
    Invention Patent
    Multiprocessor system and method therefor (Under Examination - Published)

    Publication No.: JPH11272637A

    Publication Date: 1999-10-08

    Application No.: JP3245199

    Filing Date: 1999-02-10

    CPC classification number: G06F9/524

    Abstract: PROBLEM TO BE SOLVED: To provide a means of advancing the handling of stalled stores by generating a signal indicating the stall condition when a store request stalls, and transmitting it to all processors so that every processor can postpone its loads from memory, that is, the transmission of read requests.
    SOLUTION: A circuit 107 monitors the store requests 109 within processors 101 to 103. When the pending store requests in any one of the processors 101 to 103 reach a design threshold, processing moves to a step in which a Store-Stalled signal is asserted and transmitted to all of the processors 101 to 103. In response to receiving this Store-Stalled signal, the delay-load-request circuits 108 in the respective processors 101 to 103 then cancel (postpone) the transmission of read requests 110.
    COPYRIGHT: (C)1999,JPO

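The monitoring-and-broadcast mechanism in the abstract above can be sketched as follows. This is an illustrative model only: names such as `STORE_THRESHOLD`, `Processor`, and `monitor_and_broadcast` are assumptions, not terms from the patent.

```python
STORE_THRESHOLD = 4  # assumed design threshold for pending store requests

class Processor:
    """Illustrative stand-in for processors 101-103."""
    def __init__(self, pid):
        self.pid = pid
        self.pending_stores = 0      # store requests (109) not yet completed
        self.store_stalled = False   # Store-Stalled signal received
        self.deferred_reads = []     # read requests (110) postponed while stalled

    def issue_store(self):
        self.pending_stores += 1

    def issue_read(self, request):
        # Delay-load-request logic (circuit 108): postpone read requests
        # while the Store-Stalled signal is asserted.
        if self.store_stalled:
            self.deferred_reads.append(request)
            return None
        return request  # transmitted immediately

def monitor_and_broadcast(processors):
    """Monitoring circuit (107): when any one processor's pending stores
    reach the threshold, assert Store-Stalled to every processor."""
    stalled = any(p.pending_stores >= STORE_THRESHOLD for p in processors)
    for p in processors:
        p.store_stalled = stalled
    return stalled
```

For example, once one processor accumulates enough pending stores, a read issued by any other processor is deferred rather than transmitted.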

    2.
    Invention Patent
    INSTRUCTION PREFETCH METHOD FOR CACHE CONTROL AND SYSTEM

    Publication No.: JP2003186741A

    Publication Date: 2003-07-04

    Application No.: JP2002346968

    Filing Date: 2002-11-29

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To provide a method that selectively prefetches line M+1 from either an L2 cache or main memory into an L1 instruction cache while line M is executing.
    SOLUTION: If an unresolved branch exists in the uncommitted line M, the speculative line M+1 is prefetched into the L1 instruction cache from the L2 cache only, not from main memory. The unresolved branch in line M must be resolved before line M+1 may be prefetched from main memory. If no unresolved branch exists, line M is committed and line M+1 is prefetched from main memory. Potentially useless prefetches are thus avoided, conserving main-memory bandwidth.
    COPYRIGHT: (C)2003,JPO
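The prefetch policy described in the abstract can be sketched as a small decision function. This is a hedged illustration of the policy, not the patented implementation; the function name and parameters are assumptions.

```python
def prefetch_next_line(line_m_has_unresolved_branch, l2_contains_next):
    """Decide where to prefetch line M+1 from while line M is executing.

    Returns the source to prefetch from ("L2" or "main_memory"), or None
    if the prefetch is suppressed.
    """
    if line_m_has_unresolved_branch:
        # Line M is uncommitted and speculative: fetch M+1 only if the L2
        # cache already holds it, never from main memory, so a mispredicted
        # path cannot waste main-memory bandwidth.
        return "L2" if l2_contains_next else None
    # No unresolved branch: line M is committed, so it is safe to go to
    # main memory for M+1 (the L2 cache is still preferred when it hits).
    return "L2" if l2_contains_next else "main_memory"
```

The key asymmetry is that an L2 hit is cheap either way, while a main-memory fetch is only allowed once line M is committed.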

    3.
    Invention Patent
    Unknown

    Publication No.: DE69616223T2

    Publication Date: 2002-06-27

    Application No.: DE69616223

    Filing Date: 1996-07-22

    Applicant: IBM

    Abstract: A system and method to use stream filters to defer deallocation of a stream based on the activity level of the stream, thereby preventing a stream thrashing situation from occurring. The least recently used ("LRU") stream is deallocated only after a number of potential new streams are detected. In a data processing system, a method for prefetching cache lines from a main memory to an L1 cache coupled to a processor coupled by a bus to the main memory, wherein the prefetching is augmented with the utilization of a stream buffer and a stream filter, wherein the stream buffer includes an address buffer and a data buffer, wherein the stream buffer holds one or more active streams, and wherein the stream filter contains one or more entries corresponding to one or more active streams, the method comprising the steps of monitoring a sequence of L1 cache misses; replacing entries in the stream filter in response to the L1 cache misses on an LRU basis; and maintaining one of the one or more active streams in the stream buffer until all of the one or more entries corresponding to the one of the one or more active streams have been replaced by the replacing step.
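The deferred-deallocation step in the claim above can be sketched as follows: a stream stays active until every filter entry referring to it has been evicted on an LRU basis. This is a minimal model under assumed names (`StreamFilter`, `allocate_stream`, `miss`); it is not the patent's circuit.

```python
from collections import OrderedDict

class StreamFilter:
    """Sketch of a stream filter whose LRU entry replacement drives
    deferred deallocation of streams held in the stream buffer."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # filter entries, oldest (LRU) first
        self.active_streams = set()   # streams held in the stream buffer

    def allocate_stream(self, stream_id, n_entries):
        """Install a stream with several filter entries referring to it."""
        self.active_streams.add(stream_id)
        for i in range(n_entries):
            self._insert_entry((stream_id, i), stream_id)

    def miss(self, addr):
        """Each monitored L1 miss inserts a potential-new-stream entry,
        replacing the LRU filter entry when the filter is full."""
        self._insert_entry(addr, stream=None)

    def _insert_entry(self, key, stream):
        if len(self.entries) >= self.capacity:
            _, evicted = self.entries.popitem(last=False)  # evict LRU entry
            if evicted is not None and evicted not in self.entries.values():
                # The last entry for this stream is gone: only now is the
                # stream deallocated, deferring deallocation past several
                # potential new streams and avoiding thrashing.
                self.active_streams.discard(evicted)
        self.entries[key] = stream
```

A stream with two filter entries thus survives two unrelated misses that each evict one of its entries; only the miss that replaces its last entry deallocates it.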

    4.
    Invention Patent
    Unknown

    Publication No.: DE69616465D1

    Publication Date: 2001-12-06

    Application No.: DE69616465

    Filing Date: 1996-07-24

    Applicant: IBM

    Abstract: Within a data processing system implementing L1 and L2 caches and stream filters and buffers, prefetching of cache lines is performed in a progressive manner. In one mode, data may not be prefetched. In a second mode, two cache lines are prefetched wherein one line is prefetched into the L1 cache and the next line is prefetched into a stream buffer. In a third mode, more than two cache lines are prefetched at a time.
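The three progressive modes described above can be sketched as a small policy table. The mode numbers follow the abstract; the prefetch targets and depths are illustrative assumptions, not claim language.

```python
def prefetch_plan(mode):
    """Return a list of (target, line_count) prefetch actions for the
    given progressive-prefetch mode. Targets/depths are illustrative."""
    if mode == 1:
        return []  # first mode: no data is prefetched
    if mode == 2:
        # Second mode: two lines total, one into the L1 cache and the
        # next into a stream buffer.
        return [("L1", 1), ("stream_buffer", 1)]
    # Third mode: more than two cache lines are prefetched at a time
    # (the extra depth of 2 here is an assumed value).
    return [("L1", 1), ("stream_buffer", 1), ("stream_buffer", 2)]
```

Escalating from mode 1 to mode 3 as a stream proves itself keeps bandwidth spent proportional to the confidence that the stream is real.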

    5.
    Invention Patent
    Unknown

    Publication No.: DE69616223D1

    Publication Date: 2001-11-29

    Application No.: DE69616223

    Filing Date: 1996-07-22

    Applicant: IBM

    Abstract: A system and method to use stream filters to defer deallocation of a stream based on the activity level of the stream, thereby preventing a stream thrashing situation from occurring. The least recently used ("LRU") stream is deallocated only after a number of potential new streams are detected. In a data processing system, a method for prefetching cache lines from a main memory to an L1 cache coupled to a processor coupled by a bus to the main memory, wherein the prefetching is augmented with the utilization of a stream buffer and a stream filter, wherein the stream buffer includes an address buffer and a data buffer, wherein the stream buffer holds one or more active streams, and wherein the stream filter contains one or more entries corresponding to one or more active streams, the method comprising the steps of monitoring a sequence of L1 cache misses; replacing entries in the stream filter in response to the L1 cache misses on an LRU basis; and maintaining one of the one or more active streams in the stream buffer until all of the one or more entries corresponding to the one of the one or more active streams have been replaced by the replacing step.

    6.
    Invention Patent
    Unknown

    Publication No.: DE69616465T2

    Publication Date: 2002-05-02

    Application No.: DE69616465

    Filing Date: 1996-07-24

    Applicant: IBM

    Abstract: Within a data processing system implementing L1 and L2 caches and stream filters and buffers, prefetching of cache lines is performed in a progressive manner. In one mode, data may not be prefetched. In a second mode, two cache lines are prefetched wherein one line is prefetched into the L1 cache and the next line is prefetched into a stream buffer. In a third mode, more than two cache lines are prefetched at a time.
