Scalable Infiniband Interconnect Performance and Diagnostic Tool
    101.
    Invention Application
    Scalable Infiniband Interconnect Performance and Diagnostic Tool (Granted)

    Publication No.: US20140269342A1

    Publication Date: 2014-09-18

    Application No.: US13843919

    Filing Date: 2013-03-15

    Inventor: John Baron

    CPC classification number: H04L43/50 H04L41/0645 H04L41/0677 H04L41/12

    Abstract: In accordance with some implementations, a method for evaluating large-scale computer systems based on performance is disclosed. A large-scale, distributed-memory computer system receives topology data, wherein the topology data describes the connections between a plurality of switches and lists the nodes associated with each switch. Based on the received topology data, the system performs a data transfer test for each pair of switches. The test includes transferring data between a plurality of nodes and determining an overall test result value reflecting the overall performance of the respective pair of switches across a plurality of component tests. The system determines whether the pair of switches meets minimum performance standards by comparing the overall test result value against an acceptable test value. If the overall test result value does not meet the minimum performance standards, the system reports the respective pair of switches as underperforming.

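    The pairwise testing loop described in this abstract might be sketched roughly as follows; the function names, the normalized scoring scheme, and the acceptance threshold are illustrative assumptions, not details taken from the patent.

    ```python
    # Hypothetical sketch of the pairwise switch test described in the
    # abstract. run_component_test, test_switch_pair, and ACCEPTABLE_VALUE
    # are assumed names for illustration only.
    from itertools import combinations

    ACCEPTABLE_VALUE = 0.90  # assumed minimum fraction of expected bandwidth


    def run_component_test(node_a, node_b):
        """Stand-in for one data transfer between two nodes; returns a
        normalized score (achieved / expected bandwidth)."""
        return 0.95  # placeholder result


    def test_switch_pair(nodes_a, nodes_b):
        """Run a component test for every cross-switch node pairing and
        combine the scores into one overall test result value."""
        scores = [run_component_test(a, b) for a in nodes_a for b in nodes_b]
        return sum(scores) / len(scores)


    def find_underperforming(topology):
        """topology maps switch name -> list of attached nodes."""
        underperforming = []
        for sw1, sw2 in combinations(topology, 2):
            overall = test_switch_pair(topology[sw1], topology[sw2])
            if overall < ACCEPTABLE_VALUE:  # fails the minimum standard
                underperforming.append((sw1, sw2))
        return underperforming


    topo = {"leaf0": ["n0", "n1"], "leaf1": ["n2", "n3"], "leaf2": ["n4"]}
    print(find_underperforming(topo))  # placeholder scores all pass -> []
    ```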

    SYSTEM FOR COOLING MULTIPLE IN-LINE CENTRAL PROCESSING UNITS IN A CONFINED ENCLOSURE
    102.
    Invention Application
    SYSTEM FOR COOLING MULTIPLE IN-LINE CENTRAL PROCESSING UNITS IN A CONFINED ENCLOSURE (Pending, Published)

    Publication No.: US20140268553A1

    Publication Date: 2014-09-18

    Application No.: US13931781

    Filing Date: 2013-06-28

    Abstract: A system for cooling multiple in-line CPUs in a confined enclosure is provided. In an embodiment, the system may include a front CPU and a front heat sink that may be coupled to the front CPU. The front heat sink may have a plurality of fins and a corresponding fin pitch. The system may further include a rear CPU disposed in line with the front CPU and a rear heat sink coupled to the rear CPU. The rear heat sink may have a plurality of fins and a corresponding fin pitch. The fin pitch of the rear heat sink may be higher than the fin pitch of the front heat sink. In another embodiment, the front and rear heat sinks may be coupled together by one or more heat pipes.


    ENCLOSURE HIGH PRESSURE PUSH-PULL AIRFLOW
    103.
    Invention Application
    ENCLOSURE HIGH PRESSURE PUSH-PULL AIRFLOW (Pending, Published)

    Publication No.: US20140268551A1

    Publication Date: 2014-09-18

    Application No.: US14038588

    Filing Date: 2013-09-26

    CPC classification number: G11B33/128 G11B33/142 H05K7/20727

    Abstract: High pressure fans are mounted in the middle of an enclosure to create a low pressure zone and a high pressure zone within the enclosure. The high pressure fans pull air through high density sets of hard disk drives in the back of an enclosure and push air through high density disk drives in the front of the enclosure. Being positioned in the middle of an enclosure allows the high pressure fans to mix hot air pulled through the low pressure zone with cool air existing on the other side of the fans. The fans then push the cool mixed air through the next set of hard drives, forming a high pressure zone and allowing the air to exit at the front of the enclosure.


    TRANSACTIONAL MEMORY PROXY
    104.
    Invention Application
    TRANSACTIONAL MEMORY PROXY (Granted)

    Publication No.: US20140068201A1

    Publication Date: 2014-03-06

    Application No.: US14012783

    Filing Date: 2013-08-28

    Inventor: Eric Fromm

    Abstract: Processors in a compute node offload transactional memory accesses that address shared memory to a transactional memory agent. The transactional memory agent typically resides near the processors in a particular compute node and acts as a proxy for those processors. A first benefit of the invention is decoupling the processor from the direct effects of remote system failures. Other benefits include freeing the processor from having to be aware of transactional memory semantics and allowing the processor to address a memory space larger than the processor's native hardware addressing capabilities. The invention also enables computer system transactional capabilities to scale well beyond those of computer systems found today.


    REAL-TIME STORAGE AREA NETWORK
    105.
    Invention Application
    REAL-TIME STORAGE AREA NETWORK (Granted)

    Publication No.: US20140032766A1

    Publication Date: 2014-01-30

    Application No.: US14042695

    Filing Date: 2013-09-30

    Abstract: A cluster of computing systems is provided with guaranteed real-time access to data storage in a storage area network. Processes issue requests for bandwidth reservation, which are initially handled by a daemon on the same node as the requesting processes. The local daemon determines whether bandwidth is available and, if so, reserves the bandwidth in common hardware on the local node, then forwards requests for shared resources to a master daemon for the cluster. The master daemon makes similar determinations and reservations for resources shared by the cluster, including data storage elements in the storage area network, and grants admission to requests that do not exceed the total available bandwidth.

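    The two-level admission control described in this abstract can be sketched as a local daemon that checks its own capacity before escalating to a cluster-wide master; the class names, attributes, and bandwidth figures below are illustrative assumptions.

    ```python
    # Illustrative sketch of local-then-master bandwidth admission control;
    # MasterDaemon / LocalDaemon and their fields are assumed names, not
    # taken from the patent.

    class MasterDaemon:
        """Cluster-wide daemon guarding resources shared by all nodes,
        such as data storage elements in the storage area network."""
        def __init__(self, storage_bandwidth):
            self.available = storage_bandwidth

        def request(self, amount):
            if amount > self.available:
                return False          # would exceed total available bandwidth
            self.available -= amount  # reserve shared SAN bandwidth
            return True


    class LocalDaemon:
        """Per-node daemon that handles requests from local processes first."""
        def __init__(self, node_bandwidth, master):
            self.available = node_bandwidth
            self.master = master

        def request(self, amount):
            if amount > self.available:
                return False          # local hardware cannot satisfy it
            # Locally feasible: forward the shared-resource request upward.
            if not self.master.request(amount):
                return False          # cluster-wide admission denied
            self.available -= amount  # reserve in local common hardware
            return True


    master = MasterDaemon(storage_bandwidth=1000)
    local = LocalDaemon(node_bandwidth=400, master=master)
    print(local.request(300))  # True: both reservations fit
    print(local.request(300))  # False: the local node has only 100 left
    ```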

    Shared-Credit Arbitration Circuit
    106.
    Invention Application

    Publication No.: US20180159789A1

    Publication Date: 2018-06-07

    Application No.: US15370529

    Filing Date: 2016-12-06

    CPC classification number: H04L47/527 H04L47/39 H04L49/70

    Abstract: This patent application relates generally to a shared-credit arbitration circuit for arbitrating access by a number of virtual channels to a shared resource managed by a destination (arbiter), based on credits allotted to each virtual channel. Only the destination is aware of the availability of a shared pool of resources. The destination selectively grants the virtual channels access to the shared pool and, when shared resources are used, returns credits to the source(s) associated with the virtual channels, so that the source(s) are unaware of, and unhindered by, the destination's use of the shared resources. Among other things, this can significantly reduce the complexity of the source(s) and the required handshaking between the source(s) and the destination.
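    The credit-return idea in this abstract can be sketched in a few lines: the destination alone tracks a shared pool, and when a shared slot absorbs an arriving flit it immediately returns a credit, so the source's bookkeeping never changes. All class and method names here are illustrative assumptions.

    ```python
    # Minimal sketch of destination-managed shared credits; Source and
    # Destination are assumed names for illustration only.

    class Source:
        def __init__(self, dedicated_credits):
            self.credits = dedicated_credits

        def can_send(self):
            return self.credits > 0

        def send(self, dest, vc, flit):
            self.credits -= 1          # spend one dedicated credit
            dest.receive(self, vc, flit)

        def credit_return(self):
            self.credits += 1          # credit comes back from the destination


    class Destination:
        def __init__(self, shared_slots):
            self.shared_free = shared_slots  # only the destination sees this
            self.buffer = []

        def receive(self, src, vc, flit):
            self.buffer.append((vc, flit))
            if self.shared_free > 0:
                self.shared_free -= 1  # absorb into the shared pool...
                src.credit_return()    # ...and hide it by returning the credit


    dest = Destination(shared_slots=2)
    src = Source(dedicated_credits=1)
    src.send(dest, vc=0, flit="A")     # shared slot used, credit returned
    print(src.credits)  # 1: the source never saw the shared pool shrink
    ```

    Because the source only ever sees its own credit counter rise and fall, no handshake about the shared pool is needed, which is the complexity reduction the abstract claims.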

    SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR REMOTE GRAPHICS PROCESSING

    Publication No.: US20170236245A1

    Publication Date: 2017-08-17

    Application No.: US15442445

    Filing Date: 2017-02-24

    Abstract: A system, method, and computer program product are provided for remote rendering of computer graphics. The system includes a graphics application program resident at a remote server. The graphics application is invoked by a user or process located at a client. The invoked graphics application proceeds to issue graphics instructions. The graphics instructions are received by a remote rendering control system. Given that the client and server differ with respect to graphics context and image processing capability, the remote rendering control system modifies the graphics instructions in order to accommodate these differences. The modified graphics instructions are sent to graphics rendering resources, which produce one or more rendered images. Data representing the rendered images is written to one or more frame buffers. The remote rendering control system then reads this image data from the frame buffers. The image data is transmitted to the client for display or processing. In an embodiment of the system, the image data is compressed before being transmitted to the client. In such an embodiment, the steps of rendering, compression, and transmission can be performed asynchronously in a pipelined manner.

    Temporal based collaborative mutual exclusion control of a shared resource

    Publication No.: US09686206B2

    Publication Date: 2017-06-20

    Application No.: US14265195

    Filing Date: 2014-04-29

    CPC classification number: H04L47/722 G06F9/526

    Abstract: The present invention relates to a temporal-based method of mutual exclusion control of a shared resource. The invention will usually be implemented by a plurality of host computers sharing a resource, where each host computer reads a reservation memory associated with the shared resource. Typically, a first host computer performs an initial read of the reservation memory and, when the reservation memory indicates that the shared resource is available, writes to the reservation memory. After a time delay, the host computer reads the reservation memory again to determine whether it has won access to the resource. The first host computer may determine that it has won access to the shared resource by checking that the data in the reservation memory includes an identifier corresponding to the first host computer.
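    The read / write / delay / re-read sequence described in this abstract can be sketched as follows, using a plain dict to stand in for the shared reservation memory; the function name, field name, and delay value are illustrative assumptions.

    ```python
    # Sketch of temporal-based mutual exclusion: write an identifier, wait,
    # then re-read to see whether it survived. reservation_memory and
    # try_acquire are assumed names for illustration only.
    import time

    reservation_memory = {"owner": None}  # stand-in for shared reservation area


    def try_acquire(host_id, delay=0.01):
        # Initial read: is the shared resource marked available?
        if reservation_memory["owner"] is not None:
            return False
        # Write this host's identifier into the reservation memory.
        reservation_memory["owner"] = host_id
        # Time delay, so any competing writers have finished their writes.
        time.sleep(delay)
        # Re-read: this host wins only if its identifier survived.
        return reservation_memory["owner"] == host_id


    print(try_acquire("host-A"))  # True: identifier survived the delay
    print(try_acquire("host-B"))  # False: initial read shows resource taken
    ```

    If two hosts write concurrently, only the identifier that remains after the delay wins, so at most one host proceeds; the loser simply retries later.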

    METHOD AND SYSTEM FOR SHARED DIRECT ACCESS STORAGE

    Publication No.: US20170139607A1

    Publication Date: 2017-05-18

    Application No.: US15353413

    Filing Date: 2016-11-16

    Abstract: In high performance computing, the potential compute power in a data center will scale to and beyond a billion-billion calculations per second ("Exascale" computing levels). Bottlenecks caused by hierarchical memory architectures, in which data is temporarily staged in slower or less available memories, will increasingly prevent high performance computing systems from approaching their maximum potential processing capabilities. Furthermore, the time spent and power consumed copying data into and out of a slower memory tier will increase the costs associated with high performance computing at an accelerating rate. New technologies will be required, such as the novel Zero Copy Architecture disclosed herein, in which each compute node writes locally for performance yet can quickly access data globally with low latency. The result is the ability to perform burst buffer operations and in situ analytics, visualization, and computational steering without the need for a data copy or movement.
