Abstract:
The SAS expander PCBA is partitioned so that SAS lanes can be externalized, providing x4 wide-port external access into each of the two primary SAS expander modules and each of the two secondary SAS expander modules. This configuration allows a single host with an x8 external HBA connection to tunnel into the storage array by connecting to either the primary or secondary x4 wide ports. A second host may connect to the alternate connection if desired. This configuration may also allow up to four hosts to access the internal SAS topology of the Enclosure. It may further lend itself to connecting the primary and secondary SAS expander modules together in applications that require a SAS expander to see more storage device arrays than would normally be confined to a typical SAS expander module.
Abstract:
A midplane may include a printed circuit board (PCB) with a top surface and a bottom surface. A first plurality of midplane connectors may be disposed on one or more edges of the top surface. The first midplane connectors may have one or more pins that are longitudinally oriented parallel to the top surface of the PCB. The midplane may further include a second plurality of midplane connectors disposed on the top surface. The second midplane connectors may have one or more pins that are longitudinally oriented perpendicular to the top surface of the PCB.
Abstract:
In accordance with some implementations, a method for evaluating large scale computer systems based on performance is disclosed. A large scale, distributed memory computer system receives topology data, wherein the topology data describes the connections between a plurality of switches and lists the nodes associated with each switch. Based on the received topology data, the system performs a data transfer test for each pair of switches. The test includes transferring data between a plurality of nodes and determining, over a plurality of component tests, a respective overall test result value reflecting the overall performance of the respective pair of switches. The system determines whether the pair of switches meets minimum performance standards by comparing the overall test result value against an acceptable test value. If the overall test result value does not meet the minimum performance standards, the system reports the respective pair of switches as underperforming.
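A minimal sketch of the pairwise evaluation described above, in Python; the names transfer_test, component_tests, and ACCEPTABLE_TEST_VALUE are illustrative assumptions rather than terms from the disclosure:

    from itertools import combinations
    from statistics import mean

    ACCEPTABLE_TEST_VALUE = 0.85  # assumed minimum acceptable overall score

    def evaluate_switch_pairs(topology, transfer_test,
                              component_tests=("latency", "bandwidth")):
        """topology maps each switch id to the list of node ids attached to it."""
        underperforming = []
        for sw_a, sw_b in combinations(topology, 2):
            # Run each component test by transferring data between nodes attached
            # to the two switches, then combine into one overall result value.
            scores = [transfer_test(topology[sw_a], topology[sw_b], test)
                      for test in component_tests]
            overall = mean(scores)
            if overall < ACCEPTABLE_TEST_VALUE:
                underperforming.append((sw_a, sw_b, overall))  # report the pair
        return underperforming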
Abstract:
A system for cooling multiple in-line CPUs in a confined enclosure is provided. In an embodiment, the system may include a front CPU and a front heat sink that may be coupled to the front CPU. The front heat sink may have a plurality of fins and a corresponding fin pitch. The system may further include a rear CPU disposed in line with the front CPU and a rear heat sink coupled to the rear CPU. The rear heat sink may have a plurality of fins and a corresponding fin pitch. The fin pitch of the rear heat sink may be higher than the fin pitch of the front heat sink. In another embodiment, the front and rear heat sinks may be coupled together by one or more heat pipes.
Abstract:
High pressure fans are mounted in the middle of an enclosure to create a low pressure zone and a high pressure zone within the enclosure. The high pressure fans pull air through high density sets of hard disk drives in the back of the enclosure and push air through high density sets of hard disk drives in the front of the enclosure. Being positioned in the middle of the enclosure allows the high pressure fans to mix hot air pulled through the low pressure zone with cool air on the other side of the fans. The fans then push the cool mixed air through the next set of hard disk drives, forming a high pressure zone and allowing the air to exit at the front of the enclosure.
Abstract:
Processors in a compute node offload transactional memory accesses addressing shared memory to a transactional memory agent. The transactional memory agent typically resides near the processors in a particular compute node and acts as a proxy for those processors. A first benefit of the invention is decoupling the processor from the direct effects of remote system failures. Other benefits of the invention include freeing the processor from having to be aware of transactional memory semantics, and allowing the processor to address a memory space larger than the processor's native hardware addressing capabilities. The invention also enables computer system transactional capabilities to scale well beyond the transactional capabilities found in computer systems today.
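One way to picture the proxy arrangement is sketched below in Python; the class and method names are assumptions for illustration, not the patented implementation. The processor hands its access list to the agent, and the agent absorbs remote failures and the details of transactional semantics:

    class TransactionalMemoryAgent:
        """Hypothetical agent that commits a batch of accesses as one transaction."""
        def __init__(self, remote_store):
            self.remote_store = remote_store   # stands in for remote shared memory

        def execute(self, ops):
            reads, writes = {}, {}
            try:
                for op, addr, value in ops:
                    if op == "read":
                        reads[addr] = self.remote_store.get(addr)
                    else:  # "write"
                        writes[addr] = value
                self.remote_store.update(writes)   # commit the writes (sketch only)
                return True, reads
            except ConnectionError:
                return False, {}                   # remote failure stops at the agent

    class ProcessorStub:
        def __init__(self, agent):
            self.agent = agent

        def run_transaction(self, ops):
            # The processor is unaware of transactional semantics and of how large
            # the remote address space is; it only hands the access list to the agent.
            return self.agent.execute(ops)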
Abstract:
A cluster of computing systems is provided with guaranteed real-time access to data storage in a storage area network. Processes issue requests for bandwidth reservation, which are initially handled by a daemon on the same node as the requesting processes. The local daemon determines whether bandwidth is available and, if so, reserves the bandwidth in common hardware on the local node, then forwards requests for shared resources to a master daemon for the cluster. The master daemon makes similar determinations and reservations for resources shared by the cluster, including data storage elements in the storage area network, and grants admission to the requests that do not exceed the total available bandwidth.
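A condensed sketch of this two-level admission control, with assumed class and resource names; the actual reservations happen in node hardware and in the storage area network, which a few lines of Python can only gesture at:

    class MasterDaemon:
        """Tracks bandwidth for resources shared by the whole cluster."""
        def __init__(self, shared_capacity):
            self.available = dict(shared_capacity)   # e.g. {"san_element_0": 800}

        def reserve(self, resource, amount):
            if self.available.get(resource, 0) >= amount:
                self.available[resource] -= amount
                return True                           # admitted
            return False                              # would exceed available bandwidth

    class LocalDaemon:
        """Runs on the same node as the requesting processes."""
        def __init__(self, local_capacity, master):
            self.available = local_capacity           # bandwidth left in local hardware
            self.master = master

        def request(self, amount, shared_resource):
            if self.available < amount:
                return False                          # denied locally
            self.available -= amount                  # reserve in common hardware on this node
            if self.master.reserve(shared_resource, amount):
                return True
            self.available += amount                  # roll back the local reservation
            return False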
Abstract:
This patent application relates generally to a shared-credit arbitration circuit for arbitrating access by a number of virtual channels to a shared resource managed by a destination (arbiter), based on credits allotted to each virtual channel. Only the destination is aware of the availability of a shared pool of resources. The destination selectively provides the virtual channels access to the shared pool and returns credits to the source(s) associated with the virtual channels when shared resources are used, so that the source(s) are unaware of, and unhindered by, the destination's use of the shared resources. Among other things, this can significantly reduce the complexity of the source(s) and the required handshaking between the source(s) and the destination.
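The credit-return idea can be sketched as destination-side bookkeeping, roughly as below; Python is used only for illustration of a hardware circuit, and all names are assumptions. Sources track only their dedicated credits, and when the destination chooses a shared slot it returns the credit immediately, so the shared pool stays invisible to the sources:

    class SharedCreditArbiter:
        def __init__(self, dedicated_slots, shared_pool):
            self.dedicated_free = dict(dedicated_slots)  # e.g. {"vc0": 4, "vc1": 4}
            self.shared_free = shared_pool               # known only to the destination
            self.credit_returns = []                     # credits to send back to the sources

        def receive(self, vc):
            """Buffer an arriving request from virtual channel vc."""
            if self.shared_free > 0:
                self.shared_free -= 1
                self.credit_returns.append(vc)           # credit goes straight back;
                return "shared"                          # the source never sees the pool
            self.dedicated_free[vc] -= 1                 # otherwise use the dedicated slot
            return "dedicated"

        def drain(self, vc, slot_kind):
            """Free a slot once the shared resource has consumed the request."""
            if slot_kind == "shared":
                self.shared_free += 1
            else:
                self.dedicated_free[vc] += 1
                self.credit_returns.append(vc)           # only now does the credit return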
Abstract:
A system, method, and computer program product are provided for remote rendering of computer graphics. The system includes a graphics application program resident at a remote server. The graphics application is invoked by a user or process located at a client. The invoked graphics application proceeds to issue graphics instructions. The graphics instructions are received by a remote rendering control system. Given that the client and server differ with respect to graphics context and image processing capability, the remote rendering control system modifies the graphics instructions in order to accommodate these differences. The modified graphics instructions are sent to graphics rendering resources, which produce one or more rendered images. Data representing the rendered images is written to one or more frame buffers. The remote rendering control system then reads this image data from the frame buffers. The image data is transmitted to the client for display or processing. In an embodiment of the system, the image data is compressed before being transmitted to the client. In such an embodiment, the steps of rendering, compression, and transmission can be performed asynchronously in a pipelined manner.
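The asynchronous, pipelined variant mentioned at the end can be sketched with three worker stages connected by queues; render and transmit below are placeholder callables (render is assumed to return raw image bytes), not the remote rendering control system itself:

    import queue
    import threading
    import zlib

    def pipelined_remote_render(frames, render, transmit, depth=2):
        rendered = queue.Queue(maxsize=depth)     # image data read back from frame buffers
        compressed = queue.Queue(maxsize=depth)

        def render_stage():
            for frame in frames:
                rendered.put(render(frame))
            rendered.put(None)                    # end-of-stream marker

        def compress_stage():
            while (image := rendered.get()) is not None:
                compressed.put(zlib.compress(image))
            compressed.put(None)

        def transmit_stage():
            while (data := compressed.get()) is not None:
                transmit(data)                    # send to the client for display

        stages = [threading.Thread(target=s)
                  for s in (render_stage, compress_stage, transmit_stage)]
        for s in stages:
            s.start()
        for s in stages:
            s.join()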
Abstract:
A cluster of computer system nodes connected by a storage area network includes two classes of nodes. The first class of nodes can act as clients or servers, while the other nodes can only be clients. The client-only nodes require much less functionality and can be more easily supported by different operating systems. To minimize the amount of data transmitted during normal operation, the server responsible for maintaining the cluster configuration database repeatedly multicasts its IP address, its incarnation number, and the most recent database generation number. Each node stores this information and, when a change is detected, can request an update of the data needed by that node. A client-only node uses the IP address of the server to connect to the server, to download the information from the cluster database required by the client-only node, and to upload local disk connectivity information.
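A small sketch of the multicast-driven update, with assumed names; the download and upload callables stand in for the client-only node's cluster database fetch and its local disk connectivity report:

    class ConfigServer:
        """Server that owns the cluster configuration database."""
        def __init__(self, ip):
            self.ip = ip
            self.incarnation = 1        # bumped when the server restarts
            self.generation = 1         # bumped when the cluster database changes

        def heartbeat(self):
            # Repeatedly multicast to the cluster (transport omitted in this sketch).
            return (self.ip, self.incarnation, self.generation)

    class ClientOnlyNode:
        def __init__(self):
            self.last_seen = None

        def on_heartbeat(self, msg, download, upload_local_disks):
            if msg != self.last_seen:           # only act when something changed
                server_ip = msg[0]
                download(server_ip)             # fetch the data this node needs
                upload_local_disks(server_ip)   # report local disk connectivity
                self.last_seen = msg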