Abstract:
A server is implemented within a disk drive device or other drive device. The server-drive device may be used within a server tray holding many disk drive devices, with multiple such server trays mounted in a cabinet of trays. One or more disk drive devices may be implemented in a server tray. The server-drive device may also be used in other applications. By implementing the server within the disk drive, valuable space is saved in a computing device.
Abstract:
A method for computing the eigenvectors and eigenvalues of a square matrix on a high performance computer involves dynamically reallocating the computer's computing cores across the various phases of the computation.
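The idea of giving each solver phase its own share of the machine can be sketched as follows. This is a minimal illustration, not the patented method: the phase names follow the standard symmetric eigensolver pipeline (reduce to tridiagonal form, solve the tridiagonal problem, back-transform the eigenvectors), the core counts in `PHASE_CORES` are invented, and NumPy's `eigh` stands in for the whole pipeline.

```python
# Hypothetical sketch of phase-wise core reallocation for a symmetric
# eigensolver. Phase names and core counts are illustrative assumptions;
# np.linalg.eigh performs the actual mathematics.
import numpy as np

# Illustrative schedule: each phase is assigned a different number of
# cores, since the phases scale differently (numbers are made up).
PHASE_CORES = {
    "reduce_to_tridiagonal": 16,  # bandwidth-bound, parallelizes well
    "solve_tridiagonal": 4,       # largely sequential
    "back_transform": 16,         # matrix-matrix products, parallelizes well
}

def solve_symmetric_eigenproblem(a):
    """Return (eigenvalues, eigenvectors, schedule) for symmetric a,
    recording the core count assigned to each phase."""
    schedule = []
    for phase, cores in PHASE_CORES.items():
        # A real HPC runtime would resize its worker pool here
        # before executing the phase on `cores` cores.
        schedule.append((phase, cores))
    w, v = np.linalg.eigh(a)  # stand-in for the three phases above
    return w, v, schedule

rng = np.random.default_rng(0)
m = rng.standard_normal((6, 6))
a = (m + m.T) / 2  # symmetrize so eigh applies
w, v, schedule = solve_symmetric_eigenproblem(a)
```

The split matters because the tridiagonal solve typically cannot use as many cores as the two matrix-multiply-heavy phases, so holding all cores through every phase wastes capacity.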
Abstract:
The present system enables more efficient I/O processing by providing a mechanism for maintaining data locality of reference. One or more accelerator modules may be implemented within a solid state drive (SSD). The accelerator modules form a caching storage tier that can receive, store, and reproduce data. The one or more accelerator modules may place data in the SSD or in hard disk drives based on parameters associated with the data.
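The placement decision described above can be sketched as a small policy object. This is a toy model, not the claimed apparatus: the class name, the `access_count` parameter, and the threshold are all illustrative assumptions standing in for whatever "parameters associated with the data" the real accelerator modules use.

```python
class AcceleratorModule:
    """Toy caching tier: place objects on 'ssd' or 'hdd' based on a
    parameter associated with the data (here, an access-count hint).
    Names and threshold are illustrative assumptions."""

    def __init__(self, hot_threshold=3):
        self.hot_threshold = hot_threshold
        self.ssd = {}  # fast tier
        self.hdd = {}  # capacity tier

    def place(self, key, value, access_count):
        # Frequently accessed data stays in the fast SSD tier,
        # preserving locality of reference for subsequent reads.
        if access_count >= self.hot_threshold:
            self.ssd[key] = value
            return "ssd"
        self.hdd[key] = value
        return "hdd"

    def read(self, key):
        # Serve from the fast tier first, falling back to the HDD tier.
        return self.ssd.get(key, self.hdd.get(key))

acc = AcceleratorModule(hot_threshold=3)
hot_tier = acc.place("index", b"btree", access_count=10)
cold_tier = acc.place("backup", b"blob", access_count=1)
```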
Abstract:
This innovation provides a method for a networked, replicated database management system (DBMS) that uses only one-sided remote direct memory access (RDMA). Replicated databases retain some access to the stored data in the face of server failure. In the prior state of the art, after the DBMS software on one server acted on a client's request to update the database, it would contact the other replicas of the database and ensure that they had recorded the change before responding to the client that the transaction was complete. In the method described here, the database client interacts directly with each DBMS replica over the network, using only RDMA to modify the stored data while maintaining the properties of database atomicity and consistency. This method reduces transactional latency by removing any need for the server-side DBMS software to respond to or forward requests for service.
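The client-driven commit protocol can be illustrated with an in-memory simulation. This is a sketch under loose assumptions, not the patented protocol: the `Replica` class models each replica's RDMA-exposed memory region as a dict plus a lock word, and `cas_lock` stands in for a one-sided RDMA compare-and-swap; a real system would issue RDMA atomics and WRITE verbs over the network with no server-side code running.

```python
class Replica:
    """Toy stand-in for one replica's RDMA-exposed memory region.
    The replica's own DBMS software never runs in this model."""

    def __init__(self):
        self.lock = 0   # lock word the client manipulates remotely
        self.store = {}

    def cas_lock(self, expected, new):
        # Models a one-sided RDMA compare-and-swap on the lock word.
        if self.lock == expected:
            self.lock = new
            return True
        return False

def client_commit(replicas, key, value):
    """Client-driven commit: acquire every replica's lock, write the
    record to each replica, then release. Aborts (returns False) if
    any lock is held, preserving atomicity across replicas."""
    acquired = []
    for r in replicas:
        if not r.cas_lock(0, 1):
            for a in acquired:   # roll back: release what we took
                a.lock = 0
            return False
        acquired.append(r)
    for r in replicas:           # models a one-sided RDMA WRITE
        r.store[key] = value
    for r in replicas:
        r.lock = 0               # release all locks
    return True

replicas = [Replica() for _ in range(3)]
ok = client_commit(replicas, "balance", 100)
```

The latency win comes from the commit path containing only network round-trips initiated by the client, with no waiting on remote CPUs to schedule and run DBMS request handlers.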
Abstract:
A high performance computing system includes one or more blade enclosures, each having a fluid cooling manifold and configured to hold a plurality of computing blades, and a plurality of computing blades in each blade enclosure, with at least one computing blade including two computing boards. The system further includes two or more cooling plates, each cooling plate disposed between two corresponding computing boards within the computing blade, and a fluid connection coupled to the cooling plate(s) and in fluid communication with the fluid cooling manifold.
Abstract:
The present disclosure relates to an apparatus and a method for cooling electronic components. An apparatus of the presently claimed invention includes a connector and an electronic component that plugs into the connector. The electronic component contacts a heat sink, and the heat sink moves upward as the electronic component is plugged into the connector. Soft thermal pads located between the heat sink and liquid coolant tubes compress as the heat sink moves upward. When compressed, the thermal pads contact both the heat sink and the liquid coolant tubes. Heat is then transferred from the electronic component through the heat sink, through the thermal pads, through the coolant tubes, and into the liquid contained within them.
Abstract:
The present disclosure is directed to a configurable extension space for a computer server or node blade that can add data storage or other functionality to a computer system while minimizing disruption to computers in a data center when the functionality of a computer server or node blade is extended. Apparatus consistent with the present disclosure may include multiple electronic assemblies, where a first assembly resides deep within an enclosure and an expansion module may be attached to it in an accessible expansion space. Such apparatus may also include liquid cooling.
Abstract:
A high performance computing system with a plurality of computing blades has at least one computing blade that includes one or more computing boards and two side rails disposed on either side of the computing board. Each side rail has a board alignment element configured to hold the computing board within the computing blade, so that a top of the computing board is coupled to, and adjacent to, a portion of the board alignment element.
Abstract:
An apparatus and method detect trapped data at an intermediate node in a network path between a source node and a destination node, and re-route that data to a downstream intermediate node in the network path via an alternate network path. The apparatus and method may include a virtualized physical interface, and may redirect the trapped data through a system's packet-switched network or through its flit-switched network.
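The re-routing step can be sketched as a graph search. This is an illustrative model, not the patented mechanism: the primary path is a list of node names, `blocked` is the failed link that traps the data, and `extra_links` is an adjacency map standing in for whichever alternate network (packet-switched or flit-switched) carries the detour; the function splices a detour from the trapping node to any node downstream on the original path.

```python
from collections import deque

def reroute_trapped(path, blocked, extra_links):
    """Given data trapped at the upstream end of the failed link
    `blocked` on `path`, breadth-first search the alternate network
    `extra_links` for a route from the trapping node to any node
    downstream of it, and return the spliced end-to-end path
    (or None if no detour exists)."""
    i = path.index(blocked[0])
    assert path[i + 1] == blocked[1], "blocked link must lie on the path"
    downstream = set(path[i + 1:])
    seen = {path[i]}
    queue = deque([[path[i]]])
    while queue:
        route = queue.popleft()
        if route[-1] in downstream:
            # Rejoin the original path at the node the detour reached.
            rejoin = path.index(route[-1])
            return path[:i] + route + path[rejoin + 1:]
        for nxt in extra_links.get(route[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(route + [nxt])
    return None  # trapped data cannot be re-routed in this model

# Data trapped at A because link A->B failed; detour via X rejoins at B.
detour = reroute_trapped(["S", "A", "B", "D"], ("A", "B"),
                         {"A": ["X"], "X": ["B"]})
```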
Abstract:
A multiple channel data transfer system (10) includes a source (12) that generates data packets with sequence numbers for transfer over multiple request channels (14). Data packets are transferred over the multiple request channels (14) through a network (16) to a destination (18). The destination (18) re-orders the data packets received over the multiple request channels (14) into a proper sequence in response to the sequence numbers to facilitate data processing. The destination (18) provides appropriate reply packets to the source (12) over multiple response channels (20) to control the flow of data packets from the source (12).
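The destination-side reassembly described above can be sketched as a small buffer keyed by sequence number. This is a toy model, not the claimed system: packets from the multiple request channels arrive interleaved, the buffer releases them in sequence-number order, and the cumulative-ack tuples are an invented stand-in for the reply packets carried on the response channels.

```python
class Destination:
    """Toy reassembly buffer: packets arrive out of order across
    multiple channels; sequence numbers restore the proper order,
    and a reply (here, a cumulative ack) is generated per arrival."""

    def __init__(self):
        self.expected = 0   # next sequence number to deliver
        self.buffer = {}    # out-of-order packets held back
        self.delivered = [] # payloads released in proper sequence
        self.replies = []   # stand-in for response-channel packets

    def receive(self, seq, payload):
        self.buffer[seq] = payload
        # Release every packet that is now in sequence.
        while self.expected in self.buffer:
            self.delivered.append(self.buffer.pop(self.expected))
            self.expected += 1
        # Reply tells the source how far delivery has progressed,
        # which the source can use to pace further transmissions.
        self.replies.append(("ack", self.expected))

dest = Destination()
# Three packets arriving out of order over different channels.
for seq, payload in [(2, "c"), (0, "a"), (1, "b")]:
    dest.receive(seq, payload)
```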