Abstract:
A two-part process is used to modify records that are written to and retrieved from tape devices. A record is appended with a cyclic redundancy check and a string of zeros. Submitting the entire record to a tape drive that is logical block protection (LBP) enabled results in no change. For drives that are not LBP enabled, the string of zeros at the end of the record is removed. In addition to determining whether a drive is LBP compliant, a determination may be made as to whether the drive is a linear tape open drive from a particular manufacturer. Linear tape open drives may behave similarly to drives that are not enabled with logical block protection.
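As an illustration only, the following Python sketch shows the general two-part flow described above, assuming a CRC-32 check and a hypothetical four-byte zero pad; the abstract does not specify the CRC polynomial, the pad length, or the record layout.

import zlib

PAD_LEN = 4  # hypothetical pad length; the abstract only says "a string of zeros"

def prepare_record(record: bytes) -> bytes:
    """Append a CRC and a run of zero bytes to the record before submission."""
    crc = zlib.crc32(record).to_bytes(4, "big")
    return record + crc + b"\x00" * PAD_LEN

def submit(record: bytes, drive_is_lbp_enabled: bool) -> bytes:
    """LBP-enabled drives take the prepared record unchanged; for other drives the trailing zeros are stripped."""
    prepared = prepare_record(record)
    if drive_is_lbp_enabled:
        return prepared
    return prepared[:-PAD_LEN]  # remove the string of zeros for non-LBP drives
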
Abstract:
A high performance computing (HPC) system includes computing blades having a first region that includes computing circuit boards having processors for performing a computation, and a second region that includes non-volatile memory for use in performing the computation. The regions are connected by a plurality of power connectors that convey power from the computing circuit boards to the memory, and a plurality of data connectors that convey data between the first and second regions. The power and data connectors are configured redundantly so that failure of a computing circuit board, a power connector, or a data connector does not interrupt the computation. A method of performing such a computation, and a computer program product implementing the method, are also disclosed.
Abstract:
Virtual storage pool creation is simplified by allowing a user to specify which devices to include in a virtual storage pool by physical location. The virtual storage pool may be automatically generated based on the simplified user specification. The user may specify the virtual pool configuration in a configuration file. A configuration application generates the virtual storage pool based on the configuration file, using the physical locations of the block devices contained in that file. As a result, virtual pool configuration and creation is automated, more efficient, and less error prone than previous methods that involve manually linking physical device locations to computer-generated names.
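A minimal Python sketch of the idea, assuming a hypothetical JSON configuration keyed by enclosure and slot and a hypothetical /dev/disk/by-slot path scheme; the actual configuration format and the mechanism for resolving physical locations to device nodes are not specified in the abstract.

import json

# Hypothetical configuration keyed by physical location (enclosure/slot),
# not by kernel-assigned device names.
CONFIG = """
{ "pool": "tank",
  "devices": [ {"enclosure": 0, "slot": 1}, {"enclosure": 0, "slot": 2} ] }
"""

def locate(enclosure: int, slot: int) -> str:
    # Placeholder lookup: a real implementation would query the enclosure
    # services to map a physical slot to its block device node.
    return f"/dev/disk/by-slot/enc{enclosure}-slot{slot}"  # hypothetical path scheme

def build_pool(config_text: str) -> list:
    cfg = json.loads(config_text)
    return [locate(d["enclosure"], d["slot"]) for d in cfg["devices"]]

print(build_pool(CONFIG))  # device paths that would be used to create the pool
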
Abstract:
An algorithm for mapping memory and a method for using a high performance computing (“HPC”) system are disclosed. The algorithm takes into account the number of physical nodes in the HPC system, and the amount of memory in each node. Some of the nodes in the HPC system also include input/output (“I/O”) devices like graphics cards and non-volatile storage interfaces that have on-board memory; the algorithm also accounts for the number of such nodes and the amount of I/O memory they each contain. The algorithm maximizes certain parameters in priority order, including the number of mapped nodes, the number of mapped I/O nodes, the amount of mapped I/O memory, and the total amount of mapped memory.
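The priority-ordered maximization can be illustrated as a lexicographic comparison over candidate mappings; the Python sketch below assumes a hypothetical Mapping summary type and is an illustration of the general idea, not the patented algorithm itself.

from dataclasses import dataclass

@dataclass
class Mapping:
    # Hypothetical summary of one candidate memory map.
    mapped_nodes: int
    mapped_io_nodes: int
    mapped_io_memory: int     # bytes of mapped I/O (device) memory
    total_mapped_memory: int  # bytes of all mapped memory

def best_mapping(candidates):
    """Pick the candidate that maximizes the parameters in priority order."""
    return max(candidates, key=lambda m: (m.mapped_nodes,
                                          m.mapped_io_nodes,
                                          m.mapped_io_memory,
                                          m.total_mapped_memory))
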
Abstract:
A multi-tiered data management system utilizes vertical storage tiers, each with one or more horizontal data storage elements, to provide a dynamic and configurable system for managing the storing, archiving, and retrieval of data. The system can automatically copy data in parallel to multiple types of storage systems, horizontally within a tier and vertically between tiers, transparently from the host system or user perspective. Users may decide how many backend systems are utilized and managed, and provide information that defines rules or policies for the movement of data into, among, and out of the backend systems and tiers of storage devices. These policies determine how long data stays on each medium, when it is migrated between media, and how it is otherwise managed. When a user retrieves data, the system determines which data storage source best suits the user's request.
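A rough Python sketch of policy-driven tier placement, assuming hypothetical tier names, retention periods, and a single next-tier chain per policy; real policies in such a system could be considerably richer.

from dataclasses import dataclass
from typing import Optional

@dataclass
class TierPolicy:
    # Hypothetical policy: how long data stays on a tier before moving on.
    tier: str
    max_age_days: int          # 0 means keep indefinitely
    next_tier: Optional[str]

POLICIES = [
    TierPolicy("disk", 30, "tape"),      # illustrative values only
    TierPolicy("tape", 365, "archive"),
    TierPolicy("archive", 0, None),
]

def target_tier(current_tier: str, age_days: int) -> str:
    """Return the tier a data object should occupy, given its age."""
    policy = next(p for p in POLICIES if p.tier == current_tier)
    if policy.next_tier and policy.max_age_days and age_days > policy.max_age_days:
        return target_tier(policy.next_tier, age_days)
    return current_tier
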
Abstract:
Embodiments of the present invention perform a method for reading data from, writing data to, powering on, or configuring a block device without the kernel translating a file system operation into a block device operation. This is implemented by using a core module to couple applications running in user space to a character device through a character device driver; the core module configures the character device to communicate with the block device through a block device driver without the kernel translating a file system command into a block device command.
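From the user-space side, the effect is that ordinary reads and writes on the character device node reach the block device without a file system translation step. The Python sketch below assumes a hypothetical device node name exposed by the core module.

import os

DEV_PATH = "/dev/myblock_char0"  # hypothetical character device node provided by the core module

def read_region(offset: int, length: int) -> bytes:
    # Plain open()/read() on the character node reach the block device through
    # the character device driver, with no file system in the path.
    fd = os.open(DEV_PATH, os.O_RDWR)
    try:
        os.lseek(fd, offset, os.SEEK_SET)
        return os.read(fd, length)
    finally:
        os.close(fd)
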
Abstract:
In accordance with one embodiment of the invention, a method of providing performance data for nodes in a high performance computing system receives a request for performance data for a node in the high performance computing system. According to the method, a driver in kernel space causes the performance data for the node to be stored in kernel memory. The kernel memory is accessible in user space via a first system file.
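On the user-space side, obtaining the data reduces to reading a file; the sketch below assumes a hypothetical sysfs-style path, since the abstract only states that the kernel memory is exposed through a system file.

# Hypothetical path; the abstract does not name the system file.
PERF_FILE = "/sys/devices/system/node/node0/hpc_perf"

def read_node_performance(path: str = PERF_FILE) -> str:
    # A user-space reader simply opens and reads the file; the in-kernel
    # driver is responsible for populating the backing kernel memory.
    with open(path, "r") as f:
        return f.read()
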
Abstract:
A server provides for improved cooling using one or more baffles. The baffles allow for increased cooling efficiencies by directing heat in such a manner as to reduce heat exposure for temperature sensitive hardware and data center employees. The baffle may be disposed within a server and direct hot air through the server away from temperature sensitive devices. The baffle may include an inlet that receives hot air and an outlet through which hot air may exit. One or more fans may be used to direct air through the baffle. For example, the baffle may direct heat from the baffle inlet to the baffle outlet, directing heat away from temperature sensitive devices within the server.
Abstract:
A server includes a tray that has a front portion and a back portion. A motherboard is disposed in the front portion of the tray and the motherboard is coupled to a heat sink. A fan is disposed in the back portion of the tray. A hard drive is disposed between the motherboard and the fan and the hard drive is operatively connected to the motherboard. The server also includes a heat pipe that has a body longitudinally bounded by an inlet and an outlet. The inlet is coupled to the heat sink, while the outlet is coupled to the fan. The body of the heat pipe extends past the hard drive. A power supply is also disposed in the tray and is operatively connected to the motherboard, the fan, and the hard drive.
Abstract:
A toolless hot-swappable storage module system includes a base plate for mounting within a computer enclosure and a toolless hot-swappable storage module. The storage module includes a sled that is removably coupled to the base plate. The storage module further includes a printed circuit board (PCB) that is disposed on the sled. The PCB includes a plurality of storage media connectors as well as PCB signal and power connectors. The storage module also includes a support frame disposed on the PCB. The support frame includes a plurality of support members that are disposed perpendicular to the PCB. Each support member has a first edge and a second edge and includes a plurality of dividers disposed in parallel rows. The support frame also includes a sidewall that is disposed across the first edge of the support members.