Abstract:
A multiple channel data transfer system (10) includes a source (12) that generates data packets with sequence numbers for transfer over multiple request channels (14). Data packets are transferred over the multiple request channels (14) through a network (16) to a destination (18). The destination (18) re-orders the data packets received over the multiple request channels (14) into a proper sequence in response to the sequence numbers to facilitate data processing. The destination (18) provides appropriate reply packets to the source (12) over multiple response channels (20) to control the flow of data packets from the source (12).
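By way of a hedged illustration only, the following C sketch shows one way a destination could re-order packets by sequence number before delivery; the window size, structure names, and delivery hook are assumptions for the example, not taken from the disclosure.

/* Sequence-number reordering at the destination, as a minimal sketch.
 * WINDOW bounds the packets in flight across all request channels. */
#include <stdbool.h>
#include <stdio.h>

#define WINDOW 8

struct packet {
    unsigned seq;           /* sequence number stamped by the source */
    char payload[32];
};

static struct packet slot[WINDOW];
static bool occupied[WINDOW];
static unsigned next_seq;   /* next sequence number to deliver in order */

/* Called as packets arrive, in any order, from any request channel. */
static void on_receive(const struct packet *p)
{
    slot[p->seq % WINDOW] = *p;
    occupied[p->seq % WINDOW] = true;

    /* Deliver the longest in-order run now available. */
    while (occupied[next_seq % WINDOW] &&
           slot[next_seq % WINDOW].seq == next_seq) {
        printf("deliver seq=%u payload=%s\n", next_seq,
               slot[next_seq % WINDOW].payload);
        occupied[next_seq % WINDOW] = false;
        next_seq++;         /* a real system would also send a reply
                               packet here to open the flow window */
    }
}

int main(void)
{
    /* Packets 1 and 2 arrive before packet 0 (different channels). */
    struct packet a = { 1, "B" }, b = { 2, "C" }, c = { 0, "A" };
    on_receive(&a);
    on_receive(&b);
    on_receive(&c);         /* triggers delivery of 0, 1, 2 in order */
    return 0;
}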
Abstract:
A system and method for interconnecting a plurality of processing element nodes within a scalable multiprocessor system are provided. Each processing element node includes at least one processor and memory. A scalable interconnect network includes physical communication links interconnecting the processing element nodes in a cluster. A first set of routers in the scalable interconnect network routes messages between the plurality of processing element nodes. One or more metarouters in the scalable interconnect network route messages between the first set of routers, so that each of the routers in a first cluster is connected to all other clusters through one or more metarouters.
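As a rough illustration of the two-level topology, the C sketch below models routers linked to metarouters that in turn link the clusters; the cluster, router, and metarouter counts are invented for the example.

/* Two-level interconnect sketch: first-level routers within a cluster
 * connect to metarouters, and metarouters connect the clusters. */
#include <stdio.h>

#define CLUSTERS    4
#define ROUTERS     4   /* first-level routers per cluster */
#define METAROUTERS 2

int main(void)
{
    /* link[c][r][m] != 0 means router r of cluster c has a physical
     * communication link to metarouter m. */
    int link[CLUSTERS][ROUTERS][METAROUTERS];

    for (int c = 0; c < CLUSTERS; c++)
        for (int r = 0; r < ROUTERS; r++)
            for (int m = 0; m < METAROUTERS; m++)
                link[c][r][m] = 1;   /* every router reaches the meta level */

    /* Each router therefore reaches every other cluster in one
     * metarouter hop: router -> metarouter -> remote router. */
    printf("router (c0,r0) -> metarouter 0 -> router (c3,r2): %s\n",
           (link[0][0][0] && link[3][2][0]) ? "connected" : "no path");
    return 0;
}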
Abstract:
Embodiments of the invention relate to a system and method for dynamically scheduling resources using policies to self-optimize resource workloads in a data center. The object of the invention is to allocate resources in the data center dynamically, in accordance with a set of policies configured by an administrator. Operational parametrics that correlate with the cost of ownership of the data center are monitored and compared against the set of policies configured by the administrator. When the operational parametrics approach or exceed levels that correspond to the set of policies, workloads in the data center are adjusted with the goal of minimizing the cost of ownership of the data center. Such parametrics include, but are not limited to, those relating to resiliency, power balancing, power consumption, power management, error rate, maintenance, and performance.
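A minimal sketch of the comparison step, assuming a simple threshold-style policy; the metric name, limit, and "approaching" margin below are illustrative assumptions, not values from the disclosure.

/* Policy-driven adjustment sketch: administrator-set thresholds are
 * compared against monitored parametrics, and workloads are adjusted
 * when a threshold is approached or exceeded. */
#include <stdio.h>

struct policy {
    const char *metric;     /* e.g. power consumption, error rate */
    double limit;           /* level set by the administrator */
    double warn_fraction;   /* "approaching" margin, e.g. 0.9 = 90% */
};

static void evaluate(const struct policy *p, double observed)
{
    if (observed >= p->limit)
        printf("%s at %.1f: limit %.1f exceeded, migrating workloads\n",
               p->metric, observed, p->limit);
    else if (observed >= p->limit * p->warn_fraction)
        printf("%s at %.1f: approaching limit %.1f, throttling\n",
               p->metric, observed, p->limit);
    else
        printf("%s at %.1f: within policy\n", p->metric, observed);
}

int main(void)
{
    struct policy power = { "rack power (kW)", 40.0, 0.9 };
    evaluate(&power, 32.5);   /* within policy */
    evaluate(&power, 37.0);   /* approaching: 37 >= 36 */
    evaluate(&power, 41.2);   /* exceeded: rebalance */
    return 0;
}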
Abstract:
A high performance computing (HPC) system includes computing blades having a first region that includes processors for performing a computation, and a second region that includes non-volatile memory for use in performing the computation and a secondary processor for performing data movement and storage. Because data movement and storage are offloaded to the secondary processor, the processors performing the computation are not interrupted to perform these tasks. A method for use in the HPC system receives instructions in the computing processors and first data in the memory. The method includes receiving second data into the memory while continuing to execute the instructions in the computing processors, without interruption. A computer program product implementing the method is also disclosed.
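The overlap the abstract describes can be sketched in C with a POSIX thread standing in for the secondary processor; the buffer, workload loop, and names are assumptions made purely for illustration.

/* Offload sketch: computation proceeds uninterrupted while "second
 * data" is staged into memory in the background. */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

static char staging[1 << 20];           /* non-volatile region stand-in */

/* Runs on the "data movement" processor: fills the staging buffer. */
static void *mover(void *arg)
{
    (void)arg;
    memset(staging, 0xAB, sizeof staging);  /* receive second data */
    return NULL;
}

int main(void)
{
    pthread_t dm;
    pthread_create(&dm, NULL, mover, NULL);

    /* The compute processor keeps executing its instructions; it is
     * never interrupted to service the transfer. */
    double sum = 0.0;
    for (long i = 1; i <= 10000000L; i++)
        sum += 1.0 / (double)i;

    pthread_join(dm, NULL);             /* second data is now resident */
    printf("partial harmonic sum = %f, staged %zu bytes\n",
           sum, sizeof staging);
    return 0;
}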
Abstract:
A system, method, and computer program product are provided for remote rendering of computer graphics. The system includes a graphics application program resident at a remote server. The graphics application is invoked by a user or process located at a client. The invoked graphics application proceeds to issue graphics instructions. The graphics instructions are received by a remote rendering control system. Given that the client and server differ with respect to graphics context and image processing capability, the remote rendering control system modifies the graphics instructions in order to accommodate these differences. The modified graphics instructions are sent to graphics rendering resources, which produce one or more rendered images. Data representing the rendered images is written to one or more frame buffers. The remote rendering control system then reads this image data from the frame buffers. The image data is transmitted to the client for display or processing. In an embodiment of the system, the image data is compressed before being transmitted to the client. In such an embodiment, the steps of rendering, compression, and transmission can be performed asynchronously in a pipelined manner.
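As a hedged sketch of the pipelined embodiment, the schedule below shows how rendering, compression, and transmission can overlap across frames; in a real system the stages would run concurrently on separate resources, and the stage functions here are placeholders, not the patent's implementation.

/* Software-pipelining sketch: on each step the three stages operate on
 * three consecutive frames, so no stage waits for the others. While
 * frame N is transmitted, frame N+1 is compressed and N+2 rendered. */
#include <stdio.h>

static void render(int frame)   { printf("render   frame %d\n", frame); }
static void compress(int frame) { printf("compress frame %d\n", frame); }
static void transmit(int frame) { printf("transmit frame %d\n", frame); }

int main(void)
{
    enum { FRAMES = 5 };
    for (int step = 0; step < FRAMES + 2; step++) {
        if (step - 2 >= 0 && step - 2 < FRAMES) transmit(step - 2);
        if (step - 1 >= 0 && step - 1 < FRAMES) compress(step - 1);
        if (step     <  FRAMES)                 render(step);
    }
    return 0;
}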
Abstract:
A high performance computing system includes one or more blade enclosures having a fluid cooling manifold and configured to hold a plurality of computing blades, and a plurality of computing blades in each blade enclosure, with at least one computing blade including two computing boards. The system further includes two or more cooling plates, each disposed between two corresponding computing boards within the computing blade, and a fluid connection coupled to the cooling plate(s) and in fluid communication with the fluid cooling manifold.
Abstract:
In an embodiment, a micro Ethernet connector includes an outer housing that has a recessed front end and a back end. The micro Ethernet connector further includes an inner housing that is disposed within the recessed front end of the outer housing. The inner housing has an exposed end, and the exposed end includes a recessed channel. The volume of the recessed channel is substantially equal to the volume of a correspondingly shaped protruding printed circuit board of a male micro Ethernet connector. A plurality of spring-biased connectors are disposed within the recessed channel of the inner housing.
Abstract:
A computer system with read/write access to storage devices creates a snapshot of a data volume at a point in time while continuing to accept access requests to the mirrored data volume, by copying original data before changes are made to the base data volume. Multiple snapshots may be made of the same data volume at different points in time. Only data that are not stored in a previous snapshot volume or in the base data volume are stored in the most recent snapshot volume.
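A minimal copy-on-write sketch in C, assuming a tiny in-memory "volume"; the block count and names are illustrative, and a real implementation would chain multiple snapshots as the abstract describes.

/* Copy-on-write snapshot sketch: before a block of the base volume is
 * overwritten, its original contents are copied into the snapshot;
 * snapshot reads fall through to the base for blocks never copied. */
#include <stdbool.h>
#include <stdio.h>

#define BLOCKS 4

static int  base[BLOCKS]      = { 10, 20, 30, 40 };
static int  snap[BLOCKS];
static bool snap_has[BLOCKS]; /* true once the original block is saved */

/* Write path: copy-before-write preserves the point-in-time image. */
static void write_block(int i, int value)
{
    if (!snap_has[i]) {         /* first change since the snapshot */
        snap[i] = base[i];
        snap_has[i] = true;
    }
    base[i] = value;
}

/* Snapshot read: use the saved copy if one exists, else the base. */
static int read_snapshot(int i)
{
    return snap_has[i] ? snap[i] : base[i];
}

int main(void)
{
    write_block(1, 99);                       /* base becomes 10,99,30,40 */
    printf("base[1]=%d snapshot[1]=%d\n", base[1], read_snapshot(1));
    printf("base[2]=%d snapshot[2]=%d\n", base[2], read_snapshot(2));
    return 0;
}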
Abstract:
Embodiments of the present invention perform a method for reading data from, writing data to, powering on, or configuring a block device without the kernel translating a file system operation into a block device operation. This is implemented by using a core module to couple applications running in user space to a character device through a character device driver; the core module configures the character device to communicate with the block device through a block device driver, without the kernel translating a file system command into a block device command.
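From user space the approach looks roughly like the sketch below; the device node path /dev/blkchar0 and the behavior attributed to the core module are hypothetical, and only standard POSIX calls are used.

/* User-space sketch: the application opens a character device node and
 * reads it directly, so no file system operation is translated into a
 * block device operation by the kernel on this path. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Character device exported by the (hypothetical) core module. */
    int fd = open("/dev/blkchar0", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    char buf[512];
    /* Offsets address device blocks directly; there is no file system
     * in the path, hence no translation step in the kernel. */
    if (pread(fd, buf, sizeof buf, 0) == (ssize_t)sizeof buf)
        printf("read block 0 straight from the character device\n");

    close(fd);
    return 0;
}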
Abstract:
A high performance computing system is provided with an ASIC that communicates with another device in the system according to a protocol defined by the other device. The ASIC is coupled to a reconfigurable protocol table, in the form of a high speed content-addressable memory (“CAM”). The CAM includes instructions to control the execution of the protocol by the ASIC. The CAM may include instructions to control the ASIC in the event that unanticipated signals or other errors are encountered while executing the protocol. Internal ASIC state data may be routed to the CAM to permit the ASIC to generate a reasonable response to errors either in the design or fabrication of the ASIC or the device with which it is communicating.
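A software model of the CAM lookup, offered only as a sketch: entries match a (state, signal) key, and a trailing wildcard entry supplies the response to unanticipated signals. The encodings, states, and action names are invented for illustration.

/* CAM-style protocol table sketch: the CAM is modeled as an array
 * searched by (state, signal), first match wins, and a wildcard entry
 * catches unanticipated signals so a sensible response still results. */
#include <stdio.h>

#define ANY 0xFF   /* wildcard field, like a masked CAM bit pattern */

struct cam_entry {
    unsigned char state, signal;       /* match key */
    const char   *action;              /* control output to the ASIC */
    unsigned char next_state;
};

static const struct cam_entry table[] = {
    { 0,   0x01, "send-ack",      1 },
    { 1,   0x02, "transfer-data", 1 },
    { 1,   0x03, "close-channel", 0 },
    { ANY, ANY,  "raise-error",   0 },   /* unanticipated signal */
};

static const struct cam_entry *lookup(unsigned char st, unsigned char sig)
{
    for (unsigned i = 0; i < sizeof table / sizeof table[0]; i++)
        if ((table[i].state == ANY || table[i].state == st) &&
            (table[i].signal == ANY || table[i].signal == sig))
            return &table[i];          /* first match wins, as in a CAM */
    return NULL;                       /* unreachable: wildcard matches */
}

int main(void)
{
    printf("%s\n", lookup(0, 0x01)->action);  /* send-ack */
    printf("%s\n", lookup(1, 0x7E)->action);  /* raise-error */
    return 0;
}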