SPLIT TRANSACTIONS AND PIPELINED ARBITRATION OF MICROPROCESSORS IN MULTIPROCESSING COMPUTER SYSTEMS
    1.
    Type: Invention Application
    Status: Pending - Published

    Publication Number: WO1994008304A1

    Publication Date: 1994-04-14

    Application Number: PCT/US1993009369

    Application Date: 1993-09-29

    CPC classification number: G06F13/364

    Abstract: Three prioritization schemes for determining which of several CPUs receives priority to become bus master of a host bus in a multiprocessor system, and an arbitration scheme for transferring control from one bus master to another. Each prioritization scheme prioritizes n elements, where a total of (n/2)x(n-1) priority bits monitor the relative priority between each pair of elements. An element receives the highest priority when each of the n-1 priority bits associated with that element points to it. In the arbitration scheme, the current bus master of the host bus determines when transfer of control of the host bus occurs, as governed by one of the prioritization schemes. The arbitration scheme gives EISA bus masters, RAM refresh and DMA greater priority than CPUs acting as bus masters, and allows a temporary bus master to interrupt the current bus master to perform a write-back cache intervention cycle. The arbitration scheme also supports address pipelining, bursting, split transactions, and reservations for CPUs aborted when attempting a locked cycle. Address pipelining allows the next bus master to assert its address and status signals before the current bus master has completed its data transfer phase. Split transactions allow a CPU posting a read to the EISA bus to relinquish the host bus to another device without having to re-arbitrate for the host bus to retrieve the data. The returned data is asserted on the host bus during idle cycles, even if the host bus is being controlled by another device.
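
    The pairwise prioritization described above is essentially a matrix arbiter: one bit per pair of requesters records which of the two currently outranks the other, and a requester has highest priority only when every bit involving it points its way. The C sketch below illustrates that idea; it is an illustration only, not the patented circuit, and the names (pri, arbitrate, update_priority) and the least-recently-granted update rule are assumptions.

```c
#include <stdbool.h>
#include <stdio.h>

#define N 4                      /* number of requesters (e.g. CPUs) */

/* pri[i][j] (for i < j) records which of the pair currently outranks the
 * other: true means i outranks j. Only (N/2)*(N-1) bits are meaningful. */
static bool pri[N][N];

/* Grant to the requester whose priority bits all point to it among the
 * active requesters. Returns the winner's index, or -1 if nobody requests. */
static int arbitrate(const bool request[N])
{
    for (int i = 0; i < N; i++) {
        if (!request[i])
            continue;
        bool highest = true;
        for (int j = 0; j < N; j++) {
            if (j == i || !request[j])
                continue;
            bool i_over_j = (i < j) ? pri[i][j] : !pri[j][i];
            if (!i_over_j) {
                highest = false;
                break;
            }
        }
        if (highest)
            return i;
    }
    return -1;
}

/* Assumed update rule: after a grant, flip every bit involving the winner
 * so it drops to lowest priority (least-recently-granted behavior). */
static void update_priority(int winner)
{
    for (int j = 0; j < N; j++) {
        if (j == winner)
            continue;
        if (winner < j)
            pri[winner][j] = false;   /* j now outranks the winner */
        else
            pri[j][winner] = true;
    }
}

int main(void)
{
    /* Start with a total order: every pair bit favors the lower index. */
    for (int i = 0; i < N; i++)
        for (int j = i + 1; j < N; j++)
            pri[i][j] = true;

    bool request[N] = { true, true, false, true };
    int winner = arbitrate(request);          /* requester 0 wins first */
    printf("granted to requester %d\n", winner);
    if (winner >= 0)
        update_priority(winner);              /* 0 drops to lowest priority */
    printf("next grant goes to requester %d\n", arbitrate(request));
    return 0;
}
```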

    MULTIPLEXED COMMUNICATION PROTOCOL BETWEEN CENTRAL AND DISTRIBUTED PERIPHERALS IN MULTIPROCESSOR COMPUTER SYSTEMS
    2.
    Type: Invention Application
    Status: Pending - Published

    Publication Number: WO1994008307A1

    Publication Date: 1994-04-14

    Application Number: PCT/US1993009427

    Application Date: 1993-09-30

    CPC classification number: G06F13/4217 G06F13/32

    Abstract: A multiplexed communication protocol for broadcasting interrupt, DMA and other miscellaneous data across a bus from a central peripheral device to a plurality of distributed peripheral devices, one associated with each processor in a multiprocessor computer system. The multiplexed bus includes a data portion and a status portion, where the status portion indicates one of several different cycle types executed on the bus, and where each cycle type further indicates the data asserted on the data portion. The cycle types further include address and data read and write cycles to allow access to the registers in the distributed devices via the multiplexed bus. Thus, system interrupt, address, data, DMA, NMI and miscellaneous cycles are defined, where a system interrupt cycle is executed continually on consecutive cycles until interrupted by a request to execute another cycle type. The cycle sequence is implemented so that system interrupt cycles are inserted between the address and data cycles to prevent significant channel latency when system interrupts occur.
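
    As a rough software analogy to the hardware protocol above, the status portion can be thought of as a cycle-type tag that tells every distributed peripheral how to interpret the data portion of the same cycle, with system interrupt cycles filling the bus by default. The cycle names below come from the abstract; the encodings, the 16-bit data portion and the decode_cycle helper are assumptions made for the example.

```c
#include <stdint.h>
#include <stdio.h>

/* Cycle types named in the abstract; these particular encodings are assumed. */
enum cycle_type {
    CYC_SYS_INTERRUPT,       /* broadcast interrupt state; default filler cycle */
    CYC_ADDRESS,             /* address phase of a register access */
    CYC_DATA,                /* data phase of a register access */
    CYC_DMA,
    CYC_NMI,
    CYC_MISC
};

/* One bus cycle: the status portion names the cycle type, and the data
 * portion carries whatever that cycle type implies. */
struct bus_cycle {
    enum cycle_type status;
    uint16_t        data;    /* width of the data portion is an assumption */
};

/* Each distributed peripheral decodes the status tag to know how to
 * interpret the data portion of the same cycle. */
static void decode_cycle(const struct bus_cycle *c)
{
    switch (c->status) {
    case CYC_SYS_INTERRUPT:
        printf("interrupt state broadcast: 0x%04x\n", (unsigned)c->data);
        break;
    case CYC_ADDRESS:
        printf("register address: 0x%04x\n", (unsigned)c->data);
        break;
    case CYC_DATA:
        printf("register data: 0x%04x\n", (unsigned)c->data);
        break;
    default:
        printf("cycle type %d, data 0x%04x\n", (int)c->status, (unsigned)c->data);
        break;
    }
}

int main(void)
{
    /* System interrupt cycles run back to back until another cycle type is
     * requested, and are inserted between the address and data phases of a
     * register access so interrupt delivery is not held up by the access. */
    struct bus_cycle stream[] = {
        { CYC_SYS_INTERRUPT, 0x0001 },
        { CYC_ADDRESS,       0x0040 },
        { CYC_SYS_INTERRUPT, 0x0003 },   /* inserted between address and data */
        { CYC_DATA,          0x00ff },
        { CYC_SYS_INTERRUPT, 0x0003 },
    };
    for (size_t i = 0; i < sizeof stream / sizeof stream[0]; i++)
        decode_cycle(&stream[i]);
    return 0;
}
```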

    ARRANGEMENT OF DMA, INTERRUPT AND TIMER FUNCTIONS TO IMPLEMENT SYMMETRICAL PROCESSING IN A MULTIPROCESSOR COMPUTER SYSTEM
    3.
    Type: Invention Application
    Status: Pending - Published

    Publication Number: WO1994008313A1

    Publication Date: 1994-04-14

    Application Number: PCT/US1993009410

    Application Date: 1993-09-29

    CPC classification number: G06F15/8015 G06F13/24 G06F13/32

    Abstract: An arrangement of direct memory access (DMA), interrupt and timer functions in a multiprocessor computer system to allow symmetrical processing. Several functions which are considered common to all of the CPUs and those which are conveniently accessed through an expansion bus remain in a central system peripheral chip coupled to the expansion bus. These central functions include the primary portions of the DMA controller and arbitration circuitry to control access of the expansion bus. A distributed peripheral, including a programmable interrupt controller, multiprocessor interrupt logic, nonmaskable interrupt logic, local DMA logic and timer functions, is provided locally for each CPU. A bus is provided between the central and distributed peripherals to allow the central peripheral to broadcast information to the CPUs, and to provide local information from the distributed chip to the central peripheral when the local CPU is programming or accessing functions in the central peripheral.
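
    The split described above (bus-wide functions in one central peripheral, per-CPU functions replicated beside each processor) can be summarized in a couple of data structures. This is an organizational sketch only; the field names, the assumed CPU count and the single struct system layout are illustrative, not the chip's actual register map.

```c
#include <stdio.h>

#define NUM_CPUS 4   /* assumed CPU count for the sketch */

/* Functions common to all CPUs, or most conveniently reached through the
 * expansion bus, stay in a single central system peripheral. */
struct central_peripheral {
    void *dma_controller_core;     /* primary portion of the DMA controller */
    void *expansion_bus_arbiter;   /* arbitration circuitry for the expansion bus */
};

/* Functions each CPU needs close by are replicated per processor. */
struct distributed_peripheral {
    void *interrupt_controller;    /* programmable interrupt controller */
    void *mp_interrupt_logic;      /* multiprocessor interrupt logic */
    void *nmi_logic;               /* non-maskable interrupt logic */
    void *local_dma_logic;
    void *timers;
};

/* A dedicated bus links the central peripheral to every distributed one:
 * broadcasts flow outward to the CPUs, and programming accesses from a
 * local CPU flow back to the central peripheral. */
struct system {
    struct central_peripheral     central;
    struct distributed_peripheral per_cpu[NUM_CPUS];
    void                         *central_to_distributed_bus;
};

int main(void)
{
    struct system sys = { 0 };
    printf("one central peripheral plus %d distributed peripherals\n",
           (int)(sizeof sys.per_cpu / sizeof sys.per_cpu[0]));
    return 0;
}
```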

    RESERVATION OVERRIDING NORMAL PRIORITIZATION OF MICROPROCESSORS IN MULTIPROCESSING COMPUTER SYSTEMS
    4.
    Type: Invention Application
    Status: Pending - Published

    Publication Number: WO1994008302A1

    Publication Date: 1994-04-14

    Application Number: PCT/US1993009362

    Application Date: 1993-09-29

    CPC classification number: G06F13/36 G06F13/362

    Abstract: Three prioritization schemes for determining which of several CPUs receives priority to become bus master of a host bus in a multiprocessor system, and an arbitration scheme for transferring control from one bus master to another. Each prioritization scheme prioritizes n elements, where a total of (n/2)x(n-1) priority bits monitor the relative priority between each pair of elements. An element receives the highest priority when each of the n-1 priority bits associated with that element points to it. In the arbitration scheme, the current bus master of the host bus determines when transfer of control of the host bus occurs, as governed by one of the prioritization schemes. The arbitration scheme gives EISA bus masters, RAM refresh and DMA greater priority than CPUs acting as bus masters, and allows a temporary bus master to interrupt the current bus master to perform a write-back cache intervention cycle. The arbitration scheme also supports address pipelining, bursting, split transactions, and reservations for CPUs aborted when attempting a locked cycle. Address pipelining allows the next bus master to assert its address and status signals before the current bus master has completed its data transfer phase. Split transactions allow a CPU posting a read to the EISA bus to relinquish the host bus to another device without having to re-arbitrate for the host bus to retrieve the data. The returned data is asserted on the host bus during idle cycles, even if the host bus is being controlled by another device.
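
    This family member emphasizes the reservation mechanism: per the abstract, a CPU aborted while attempting a locked cycle receives a reservation, and that reservation overrides the normal prioritization at the next arbitration. The sketch below layers such an override onto a stand-in for the normal arbiter (see the matrix-arbiter sketch under the first entry); the function names, the per-CPU reservation flags and the grant-consumes-reservation rule are assumptions for illustration.

```c
#include <stdbool.h>
#include <stdio.h>

#define N 4                        /* number of CPUs, assumed for the sketch */

static bool reserved[N];           /* set when a CPU's locked cycle is aborted */

/* Stand-in for the normal prioritization (the pairwise scheme sketched under
 * the first entry above); here simply the lowest requesting index. */
static int arbitrate_normal(const bool request[N])
{
    for (int i = 0; i < N; i++)
        if (request[i])
            return i;
    return -1;
}

/* Called when a CPU's locked cycle is aborted, so that it is not starved
 * by the normal priority rotation when it retries. */
static void note_aborted_locked_cycle(int cpu)
{
    reserved[cpu] = true;
}

/* A reservation overrides normal prioritization: a requesting CPU holding a
 * reservation wins the next arbitration outright, then the reservation clears. */
static int arbitrate_with_reservation(const bool request[N])
{
    for (int i = 0; i < N; i++) {
        if (reserved[i] && request[i]) {
            reserved[i] = false;
            return i;
        }
    }
    return arbitrate_normal(request);
}

int main(void)
{
    bool request[N] = { true, true, true, false };

    note_aborted_locked_cycle(2); /* CPU 2 was aborted attempting a locked cycle */
    printf("winner: %d\n", arbitrate_with_reservation(request));  /* 2 (reserved) */
    printf("winner: %d\n", arbitrate_with_reservation(request));  /* 0 (normal)   */
    return 0;
}
```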

    APPARATUS FOR STRICTLY ORDERED INPUT/OUTPUT OPERATIONS FOR INTERRUPT SYSTEM INTEGRITY
    5.
    Type: Invention Application
    Status: Pending - Published

    Publication Number: WO1994008300A1

    Publication Date: 1994-04-14

    Application Number: PCT/US1993009411

    Application Date: 1993-09-30

    CPC classification number: G06F13/24

    Abstract: A method and apparatus that maintains strict ordering of processor cycles to guarantee that a processor write, such as an EOI instruction, is not executed at the interrupt controller before the interrupt request from the requesting device has been cleared at the interrupt controller, thus maintaining system integrity. Interrupt controller logic is included on each respective CPU board. The processor can access the interrupt controller over a local bus without having to access the host bus or the expansion bus, so an interrupt controller access could otherwise complete before a previously generated I/O cycle has completed. The apparatus therefore tracks expansion bus cycles and interrupt controller accesses and maintains strict ordering of these cycles to guarantee that an interrupt request is cleared at the interrupt controller prior to execution of a write operation to the interrupt controller.
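
    One way to read the mechanism is as bookkeeping that refuses to let a local write to the interrupt controller (such as an EOI) complete while an earlier expansion bus I/O cycle, which may be the one that clears the device's interrupt request, is still outstanding. The counter-based sketch below illustrates that ordering rule only; names such as pending_expansion_io and write_interrupt_controller are assumptions, and the real apparatus enforces the ordering in hardware.

```c
#include <stdbool.h>
#include <stdio.h>

/* Expansion bus I/O cycles issued but not yet completed. */
static int pending_expansion_io;

static void expansion_io_started(void)   { pending_expansion_io++; }
static void expansion_io_completed(void) { pending_expansion_io--; }

/* A write to the locally attached interrupt controller (e.g. an EOI) is held
 * off until every earlier expansion bus cycle has completed, so the device's
 * interrupt request is already cleared when the write takes effect. */
static bool write_interrupt_controller(unsigned reg, unsigned value)
{
    if (pending_expansion_io > 0) {
        printf("EOI to reg 0x%02x stalled: %d I/O cycle(s) outstanding\n",
               reg, pending_expansion_io);
        return false;            /* retried once the expansion bus drains */
    }
    printf("interrupt controller reg 0x%02x <- 0x%02x\n", reg, value);
    return true;
}

int main(void)
{
    expansion_io_started();                  /* OUT to the device clears its IRQ */
    write_interrupt_controller(0x20, 0x20);  /* EOI attempted too early: stalled */
    expansion_io_completed();                /* the expansion bus cycle finishes */
    write_interrupt_controller(0x20, 0x20);  /* now the EOI goes through */
    return 0;
}
```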

    DOUBLE BUFFERING OPERATIONS BETWEEN THE MEMORY BUS AND THE EXPANSION BUS OF A COMPUTER SYSTEM
    6.
    Type: Invention Application
    Status: Pending - Published

    Publication Number: WO1994008296A1

    Publication Date: 1994-04-14

    Application Number: PCT/US1993009366

    Application Date: 1993-09-29

    CPC classification number: G06F13/1673 G06F12/0215 G06F13/4018

    Abstract: Double buffering operations to reduce host bus hold times when an expansion bus master is accessing the main memory on a host bus of a computer system. A system data buffer coupled between the main memory and the expansion bus includes 256-bit double read and write buffers. A memory controller coupled to the double read and write buffers and to the expansion bus includes primary and secondary address latches corresponding to the double buffers. The memory controller detects accesses to the main memory, compares the expansion bus address with the primary and secondary addresses, and controls the double read and write buffers and the primary and secondary address latches accordingly. During write operations, data destined for the same line of memory is gathered in a first of the double write buffers until a write occurs to an address in a different line, at which point the buffered data is transferred to main memory. During read operations, a full line is loaded into a first of the double read buffers, and if a subsequent read hits in the first read buffer, the next full line is prefetched from main memory into the second read buffer.
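
    The write path described above behaves like a line-gathering buffer: consecutive writes that hit the latched line accumulate in one buffer, and a write to a different line pushes the gathered line toward main memory while the other buffer takes over. The sketch below shows only that write path; the 32-byte line matches the 256-bit buffers in the abstract, but the flush policy, function names and byte-wide writes are assumptions for illustration.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define LINE_BYTES 32u                        /* 256-bit buffer = 32 bytes */
#define LINE_ADDR(a) ((a) & ~(uint32_t)(LINE_BYTES - 1u))

struct write_buffer {
    uint32_t line_addr;                       /* latched (primary/secondary) address */
    uint8_t  data[LINE_BYTES];
    bool     valid;
};

static struct write_buffer wbuf[2];           /* the double write buffers */
static int active;                            /* buffer currently gathering writes */

/* Stand-in for transferring a gathered line to main memory. */
static void flush_to_memory(struct write_buffer *b)
{
    if (!b->valid)
        return;
    printf("flush line 0x%08x to main memory\n", (unsigned)b->line_addr);
    b->valid = false;
}

/* Expansion bus write: keep gathering while the address hits the latched
 * line; a write to a different line flushes the old line and switches to
 * the other buffer so the expansion bus master is not held up. */
static void expansion_write(uint32_t addr, uint8_t byte)
{
    struct write_buffer *b = &wbuf[active];

    if (b->valid && LINE_ADDR(addr) != b->line_addr) {
        flush_to_memory(b);
        active ^= 1;
        b = &wbuf[active];
    }
    if (!b->valid) {
        b->line_addr = LINE_ADDR(addr);
        b->valid = true;
        memset(b->data, 0, sizeof b->data);
    }
    b->data[addr - b->line_addr] = byte;
}

int main(void)
{
    expansion_write(0x1000, 0xaa);            /* starts gathering line 0x1000 */
    expansion_write(0x1001, 0xbb);            /* same line: no memory traffic yet */
    expansion_write(0x1020, 0xcc);            /* new line: line 0x1000 flushes */
    flush_to_memory(&wbuf[active]);           /* drain the last gathered line */
    return 0;
}
```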

    SPLIT TRANSACTIONS AND PIPELINED ARBITRATION OF MICROPROCESSORS IN MULTIPROCESSING COMPUTER SYSTEMS
    7.
    Type: Invention Publication
    Status: Lapsed

    Publication Number: EP0664032A1

    Publication Date: 1995-07-26

    Application Number: EP93924909.0

    Application Date: 1993-09-29

    CPC classification number: G06F13/364

    Abstract: Three prioritization schemes for determining which of several CPUs receives priority to become bus master of a host bus in a multiprocessor system, and an arbitration scheme for transferring control from one bus master to another. Each prioritization scheme prioritizes n elements, where a total of (n/2)x(n-1) priority bits monitor the relative priority between each pair of elements. An element receives the highest priority when each of the n-1 priority bits associated with that element points to it. In the arbitration scheme, the current bus master of the host bus determines when transfer of control of the host bus occurs, as governed by one of the prioritization schemes. The arbitration scheme gives EISA bus masters, RAM refresh and DMA greater priority than CPUs acting as bus masters, and allows a temporary bus master to interrupt the current bus master to perform a write-back cache intervention cycle. The arbitration scheme also supports address pipelining, bursting, split transactions, and reservations for CPUs aborted when attempting a locked cycle. Address pipelining allows the next bus master to assert its address and status signals before the current bus master has completed its data transfer phase. Split transactions allow a CPU posting a read to the EISA bus to relinquish the host bus to another device without having to re-arbitrate for the host bus to retrieve the data. The returned data is asserted on the host bus during idle cycles, even if the host bus is being controlled by another device.

    DOUBLE BUFFERING OPERATIONS BETWEEN THE MEMORY BUS AND THE EXPANSION BUS OF A COMPUTER SYSTEM
    10.
    Type: Invention Publication
    Status: Lapsed

    Publication Number: EP0664030A1

    Publication Date: 1995-07-26

    Application Number: EP93924286.0

    Application Date: 1993-09-29

    CPC classification number: G06F13/1673 G06F12/0215 G06F13/4018

    Abstract: Double buffering operations to reduce host bus hold times when an expansion bus master is accessing the main memory on a host bus of a computer system. A system data buffer coupled between the main memory and the expansion bus includes 256-bit double read and write buffers. A memory controller coupled to the double read and write buffers and to the expansion bus includes primary and secondary address latches corresponding to the double buffers. The memory controller detects accesses to the main memory, compares the expansion bus address with the primary and secondary addresses, and controls the double read and write buffers and the primary and secondary address latches accordingly. During write operations, data destined for the same line of memory is gathered in a first of the double write buffers until a write occurs to an address in a different line, at which point the buffered data is transferred to main memory. During read operations, a full line is loaded into a first of the double read buffers, and if a subsequent read hits in the first read buffer, the next full line is prefetched from main memory into the second read buffer.
