Layered speculative request unit with instruction optimized and storage hierarchy optimized partitions
    32.
    Invention grant (expired)

    Publication No.: US06496921B1

    Publication date: 2002-12-17

    Application No.: US09345643

    Filing date: 1999-06-30

    Abstract: A method of operating a processing unit of a computer system, by issuing an instruction having an explicit prefetch request directly from an instruction sequence unit to a prefetch unit of the processing unit. The invention applies to values that are either operand data or instructions. In a preferred embodiment, two prefetch units are used, the first prefetch unit being hardware independent and dynamically monitoring one or more active streams associated with operations carried out by a core of the processing unit, and the second prefetch unit being aware of the lower level storage subsystem and sending with the prefetch request an indication that a prefetch value is to be loaded into a lower level cache of the processing unit. The invention may advantageously associate each prefetch request with a stream ID of an associated processor stream, or a processor ID of the requesting processing unit (the latter feature is particularly useful for caches which are shared by a processing unit cluster).
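The two-layer prefetch scheme described above can be sketched as follows. This is a minimal illustration, not the patented implementation; all class and field names (`PrefetchRequest`, `StreamMonitor`, `StorageAwareUnit`) and the fixed-stride prediction are assumptions.

```python
from dataclasses import dataclass

@dataclass
class PrefetchRequest:
    address: int
    stream_id: int     # stream ID of the associated processor stream
    processor_id: int  # useful when a cache is shared by a cluster
    target_level: str  # cache level that should receive the value

class StreamMonitor:
    """First unit: hardware independent, tracks active access streams."""
    def __init__(self):
        self.streams = {}  # stream_id -> last address observed

    def observe(self, stream_id, address):
        self.streams[stream_id] = address

    def next_address(self, stream_id, stride=64):
        # Predict the next line in the stream (fixed stride assumed).
        return self.streams[stream_id] + stride

class StorageAwareUnit:
    """Second unit: aware of the lower-level storage subsystem."""
    def issue(self, monitor, stream_id, processor_id):
        addr = monitor.next_address(stream_id)
        # Attach the indication that the prefetch value is to be loaded
        # into the lower-level (L2) cache, as the abstract describes.
        return PrefetchRequest(addr, stream_id, processor_id, "L2")

monitor = StreamMonitor()
monitor.observe(stream_id=3, address=0x1000)
req = StorageAwareUnit().issue(monitor, stream_id=3, processor_id=0)
```

The request carries both the stream ID and the requesting processor's ID, so a shared lower-level cache can attribute it correctly.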


    Processor assigning data to hardware partition based on selectable hash of data address
    33.
    Invention grant (expired)

    Publication No.: US06470442B1

    Publication date: 2002-10-22

    Application No.: US09364286

    Filing date: 1999-07-30

    Abstract: A processor includes execution resources, data storage, and an instruction sequencing unit, coupled to the execution resources and the data storage, that supplies instructions within the data storage to the execution resources. At least one of the execution resources, the data storage, and the instruction sequencing unit is implemented with a plurality of hardware partitions of like function for processing data. The data processed by each hardware partition is assigned according to a selectable hash of addresses associated with the data. In a preferred embodiment, the selectable hash can be altered dynamically during the operation of the processor, for example, in response to detection of an error or a load imbalance between the hardware partitions.
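The selectable-hash assignment can be sketched as below. The two hash functions and the round-robin `rebalance` policy are illustrative assumptions; the patent only requires that the hash be selectable and dynamically alterable.

```python
HASHES = [
    lambda addr, n: (addr >> 6) % n,                    # line-index bits
    lambda addr, n: ((addr >> 6) ^ (addr >> 12)) % n,   # XOR-folded variant
]

class PartitionedResource:
    """A resource built from like-function hardware partitions."""
    def __init__(self, num_partitions, hash_select=0):
        self.n = num_partitions
        self.hash_select = hash_select   # selectable, changeable at run time

    def partition_for(self, address):
        return HASHES[self.hash_select](address, self.n)

    def rebalance(self):
        # On an error or load imbalance between partitions, switch to the
        # alternate hash, as the preferred embodiment allows.
        self.hash_select = (self.hash_select + 1) % len(HASHES)

res = PartitionedResource(num_partitions=4)
before = res.partition_for(0x12340)
res.rebalance()                  # dynamically alter the selectable hash
after = res.partition_for(0x12340)
```

The same address may map to a different partition after the hash is switched, which is how a persistent imbalance or a faulty partition can be routed around.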


    Method for upper level cache victim selection management by a lower level cache
    34.
    Invention grant (expired)

    Publication No.: US06446166B1

    Publication date: 2002-09-03

    Application No.: US09340073

    Filing date: 1999-06-25

    CPC classification number: G06F12/0811 G06F12/0897 G06F12/12

    Abstract: A method of improving memory access for a computer system, by sending load requests to a lower level storage subsystem along with associated information pertaining to intended use of the requested information by the requesting processor, without using a high level load queue. Returning the requested information to the processor along with the associated use information allows the information to be placed immediately without using reload buffers. A register load bus separate from the cache load bus (and having a smaller granularity) is used to return the information. An upper level (L1) cache may then be imprecisely reloaded (the upper level cache can also be imprecisely reloaded with store instructions). The lower level (L2) cache can monitor L1 and L2 cache activity, which can be used to select a victim cache block in the L1 cache (based on the additional L2 information), or to select a victim cache block in the L2 cache (based on the additional L1 information). L2 control of the L1 directory also allows certain snoop requests to be resolved without waiting for L1 acknowledgement. The invention can be applied to, e.g., instruction, operand data and translation caches.
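One way to read the L2-assisted victim selection is as an eviction score that blends L1 recency with reuse history only the L2 can see. The sketch below is an assumption about how such a policy could look; the scoring formula, weight, and names are not from the patent.

```python
class L2Controller:
    def __init__(self):
        self.l2_reuse = {}   # block -> times the L2 has re-supplied it

    def record_reload(self, block):
        # Called when the L2 sources a block to the L1 again.
        self.l2_reuse[block] = self.l2_reuse.get(block, 0) + 1

    def pick_l1_victim(self, l1_set, l1_last_use, reuse_weight=10):
        # Blend L1 recency with L2 reuse history: blocks touched recently
        # in the L1, or re-supplied often by the L2, score higher and are
        # kept; the lowest-scoring block becomes the victim.
        def score(block):
            return l1_last_use[block] + reuse_weight * self.l2_reuse.get(block, 0)
        return min(l1_set, key=score)

ctrl = L2Controller()
ctrl.record_reload("B")          # the L2 has had to re-supply B before
victim = ctrl.pick_l1_victim(
    ["A", "B", "C"],
    l1_last_use={"A": 5, "B": 1, "C": 9},
)
```

Although "B" is the least recently used block in the L1, its L2 reuse history protects it, so "A" is chosen instead; a purely L1-local LRU policy could not make that distinction.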


    Common bulkhead cryogenic propellant tank
    35.
    Invention grant (in force)

    Publication No.: US06422514B1

    Publication date: 2002-07-23

    Application No.: US09626251

    Filing date: 2000-07-26

    CPC classification number: B64G1/402 B64G1/14 Y10S220/901

    Abstract: The present invention discloses a novel fuel structure for housing and delivering disparate cryogenic fuels to combustion zones in an aerospace vehicle. The tank comprises a plurality of containers having volumes that are separated by common wall bulkheads and which are arranged substantially side-by-side in conformance with the interior of the aerospace vehicle. A tank support structure positioned within the vehicle interior includes lengthwise supports as well as cross-wise supports, with the latter including openings within which the rear ends of the containers are supported. Fuel from the containers is delivered to the vehicle's combustion system via appropriate fuel lines carried by dome shaped end caps at the rear ends of the containers.


    Cache allocation policy based on speculative request history
    36.
    Invention grant (in force)

    Publication No.: US06421762B1

    Publication date: 2002-07-16

    Application No.: US09345713

    Filing date: 1999-06-30

    CPC classification number: G06F12/0862 G06F12/0897 G06F12/126 G06F2212/6024

    Abstract: A method of operating a processing unit of a computer system, by issuing an instruction having an explicit prefetch request directly from an instruction sequence unit to a prefetch unit of the processing unit. The invention applies to values that are either operand data or instructions. In a preferred embodiment, two prefetch units are used, the first prefetch unit being hardware independent and dynamically monitoring one or more active streams associated with operations carried out by a core of the processing unit, and the second prefetch unit being aware of the lower level storage subsystem and sending with the prefetch request an indication that a prefetch value is to be loaded into a lower level cache of the processing unit. The invention may advantageously associate each prefetch request with a stream ID of an associated processor stream, or a processor ID of the requesting processing unit (the latter feature is particularly useful for caches which are shared by a processing unit cluster). If another prefetch value is requested from the memory hierarchy, and it is determined that a prefetch limit of cache usage has been met by the cache, then a cache line in the cache containing one of the earlier prefetch values is allocated for receiving the other prefetch value. The prefetch limit of cache usage may be established with a maximum number of sets in a congruence class usable by the requesting processing unit. A flag in a directory of the cache may be set to indicate that the prefetch value was retrieved as the result of a prefetch operation. In the implementation wherein the cache is a multi-level cache, a second flag in the cache directory may be set to indicate that the prefetch value has been sourced to an upstream cache. A cache line containing prefetch data can be automatically invalidated after a preset amount of time has passed since the prefetch value was requested.
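The allocation policy in the last part of the abstract — a per-congruence-class cap on prefetched lines, a flag marking prefetched entries, and timed invalidation — can be sketched as follows. The class name, way-selection rule, and use of wall-clock time are illustrative assumptions.

```python
import time

class CongruenceClass:
    def __init__(self, num_ways=8, prefetch_limit=2, ttl=0.5):
        self.lines = {}                # way -> (value, is_prefetch, load_time)
        self.num_ways = num_ways
        self.prefetch_limit = prefetch_limit  # max ways usable for prefetches
        self.ttl = ttl                 # preset lifetime for prefetched lines

    def prefetched_ways(self):
        # The per-line flag records that a value arrived via prefetch.
        return [w for w, (_, pf, _) in self.lines.items() if pf]

    def allocate_prefetch(self, value):
        pf = self.prefetched_ways()
        if len(pf) >= self.prefetch_limit:
            way = pf[0]                # limit met: replace an earlier prefetch
        else:
            way = len(self.lines)      # otherwise take a free way
        self.lines[way] = (value, True, time.monotonic())
        return way

    def expire(self):
        # Automatically invalidate prefetched lines older than the preset time.
        now = time.monotonic()
        for way in list(self.lines):
            _, pf, loaded = self.lines[way]
            if pf and now - loaded > self.ttl:
                del self.lines[way]

cc = CongruenceClass(prefetch_limit=2)
w0 = cc.allocate_prefetch("A")   # free way
w1 = cc.allocate_prefetch("B")   # free way
w2 = cc.allocate_prefetch("C")   # limit reached: reuses an earlier way
```

Because the third prefetch exceeds the limit of two ways, it displaces an earlier prefetched line rather than consuming another way that demand-fetched data could use.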


    Layered local cache mechanism with split register load bus and cache load bus
    38.
    Invention grant (in force)

    Publication No.: US06405285B1

    Publication date: 2002-06-11

    Application No.: US09340076

    Filing date: 1999-06-25

    CPC classification number: G06F9/30043 G06F9/3802 G06F9/3824 G06F12/0897

    Abstract: A method of improving memory access for a computer system, by sending load requests to a lower level storage subsystem along with associated information pertaining to intended use of the requested information by the requesting processor, without using a high level load queue. Returning the requested information to the processor along with the associated use information allows the information to be placed immediately without using reload buffers. A register load bus separate from the cache load bus (and having a smaller granularity) is used to return the information. An upper level (L1) cache may then be imprecisely reloaded (the upper level cache can also be imprecisely reloaded with store instructions). The lower level (L2) cache can monitor L1 and L2 cache activity, which can be used to select a victim cache block in the L1 cache (based on the additional L2 information), or to select a victim cache block in the L2 cache (based on the additional L1 information). L2 control of the L1 directory also allows certain snoop requests to be resolved without waiting for L1 acknowledgement. The invention can be applied to, e.g., instruction, operand data and translation caches.
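The split-bus return path this patent's title emphasizes can be sketched as below: the requested word returns to the register file on a narrow register load bus while the full line returns on the wider cache load bus. The bus widths and function names are illustrative assumptions.

```python
LINE_BYTES = 128   # cache load bus granularity (one full cache line)
WORD_BYTES = 8     # register load bus granularity (one register's worth)

def service_load(memory, address):
    # The full line returns on the cache load bus for the L1 reload...
    line_base = address - (address % LINE_BYTES)
    line = memory[line_base:line_base + LINE_BYTES]
    # ...while the requested word goes straight to the register file on
    # the separate, narrower register load bus, so no reload buffer is
    # needed to hold the line while the register waits for its word.
    offset = address - line_base
    word = line[offset:offset + WORD_BYTES]
    return word, (line_base, line)

memory = bytes(range(256))
word, (base, line) = service_load(memory, address=0x88)
```

Splitting the two return paths is what lets the register receive its data immediately and the L1 be reloaded imprecisely, independent of the register delivery.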


    High performance store instruction management via imprecise local cache update mechanism
    39.
    Invention grant (expired)

    Publication No.: US06397300B1

    Publication date: 2002-05-28

    Application No.: US09340078

    Filing date: 1999-06-25

    CPC classification number: G06F12/0897 G06F12/0875

    Abstract: A method of improving memory access for a computer system, by sending load requests to a lower level storage subsystem along with associated information pertaining to intended use of the requested information by the requesting processor, without using a high level load queue. Returning the requested information to the processor along with the associated use information allows the information to be placed immediately without using reload buffers. A register load bus separate from the cache load bus (and having a smaller granularity) is used to return the information. An upper level (L1) cache may then be imprecisely reloaded (the upper level cache can also be imprecisely reloaded with store instructions). The lower level (L2) cache can monitor L1 and L2 cache activity, which can be used to select a victim cache block in the L1 cache (based on the additional L2 information), or to select a victim cache block in the L2 cache (based on the additional L1 information). L2 control of the L1 directory also allows certain snoop requests to be resolved without waiting for L1 acknowledgement. The invention can be applied to, e.g., instruction, operand data and translation caches.


    High performance load instruction management via system bus with explicit register load and/or cache reload protocols
    40.
    Invention grant (in force)

    Publication No.: US06385694B1

    Publication date: 2002-05-07

    Application No.: US09340079

    Filing date: 1999-06-25

    CPC classification number: G06F12/0859

    Abstract: A method of improving memory access for a computer system, by sending load requests to a lower level storage subsystem along with associated information pertaining to intended use of the requested information by the requesting processor, without using a high level load queue. Returning the requested information to the processor along with the associated use information allows the information to be placed immediately without using reload buffers. A register load bus separate from the cache load bus (and having a smaller granularity) is used to return the information. An upper level (L1) cache may then be imprecisely reloaded (the upper level cache can also be imprecisely reloaded with store instructions). The lower level (L2) cache can monitor L1 and L2 cache activity, which can be used to select a victim cache block in the L1 cache (based on the additional L2 information), or to select a victim cache block in the L2 cache (based on the additional L1 information). L2 control of the L1 directory also allows certain snoop requests to be resolved without waiting for L1 acknowledgement. The invention can be applied to, e.g., instruction, operand data and translation caches.

