Abstract:
A computer system with read/write access to storage devices creates a snapshot of a data volume at a point in time while continuing to accept access requests to the mirrored data volume, by copying data before changes are made to the base data volume. Multiple snapshots may be made of the same data volume at different points in time. Only data that is not already stored in a previous snapshot volume or in the base data volume is stored in the most recent snapshot volume.
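The copy-on-write behavior described above can be pictured with the following sketch. It is an illustrative model only, not the claimed implementation: the block-number keys, dictionary-based volumes, and read-back rule are assumptions made for the example.

    class SnapshottedVolume:
        """Copy-on-write sketch: the base volume maps block numbers to data;
        each snapshot stores only blocks overwritten after it was taken."""

        def __init__(self, base_blocks):
            self.base = dict(base_blocks)   # live, writable volume
            self.snapshots = []             # oldest first

        def take_snapshot(self):
            self.snapshots.append({})       # starts empty; filled lazily

        def write(self, block, data):
            # Preserve the old contents in the newest snapshot, but only if
            # that snapshot does not already hold this block.
            if self.snapshots and block in self.base:
                newest = self.snapshots[-1]
                if block not in newest:
                    newest[block] = self.base[block]
            self.base[block] = data

        def read_snapshot(self, index, block):
            # The point-in-time data lives in this snapshot, in a newer
            # snapshot (if the block was overwritten later), or in the base.
            for snap in self.snapshots[index:]:
                if block in snap:
                    return snap[block]
            return self.base.get(block)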
Abstract:
Embodiments of the invention relate to a system and method for dynamically scheduling resources using policies to self-optimize resource workloads in a data center. The object of the invention is to allocate resources in the data center dynamically, in accordance with a set of policies configured by an administrator. Operational parametrics that correlate to the cost of ownership of the data center are monitored and compared to the set of policies configured by the administrator. When the operational parametrics approach or exceed levels that correspond to the set of policies, workloads in the data center are adjusted with the goal of minimizing the cost of ownership of the data center. Such parametrics include, but are not limited to, those that relate to resiliency, power balancing, power consumption, power management, error rate, maintenance, and performance.
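A minimal sketch of the policy comparison is given below. The policy names, thresholds, and suggested actions are invented for illustration; the actual parametrics and adjustment logic would be configured by the administrator as described above.

    POLICIES = {
        "power_watts":    {"limit": 12000, "approach": 0.90},  # power budget
        "inlet_temp_c":   {"limit": 32,    "approach": 0.95},  # cooling headroom
        "error_rate_pct": {"limit": 1.0,   "approach": 0.80},  # maintenance trigger
    }

    def evaluate(parametrics):
        """Compare monitored readings against the configured policies and
        return the workload adjustments they imply."""
        actions = []
        for name, reading in parametrics.items():
            policy = POLICIES.get(name)
            if policy is None:
                continue
            if reading >= policy["limit"]:
                actions.append(f"{name}: limit exceeded -> migrate or throttle workloads")
            elif reading >= policy["limit"] * policy["approach"]:
                actions.append(f"{name}: approaching limit -> rebalance workloads")
        return actions

    print(evaluate({"power_watts": 11500, "inlet_temp_c": 28, "error_rate_pct": 1.2}))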
Abstract:
A method for maintaining data coherency in a shared-memory computer system having a plurality of nodes divides the local memory of a given node into one or more blocks and stores a data record for each block indicating a plurality of node groups and a selection of the node groups. Each node group represents a number of nodes, and each selected node group represents at least one node that has requested access to the block. In response to receiving an access request from a requesting node that may or may not be in a selected node group, the data record is updated to indicate the correct selection. If the requesting node is not in any node group, the data record is adjusted to have new node groups, one of which represents the requesting node.
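One way to picture the per-block data record is as a coarse-grained sharer vector, as in the sketch below. The fixed number of group bits and the doubling-based regrouping policy are assumptions made for the example, not the patented method.

    NUM_GROUP_BITS = 8                      # space reserved in each data record

    class DirectoryEntry:
        """Per-block data record: which node groups contain at least one
        node that has requested access to the block."""

        def __init__(self):
            self.group_size = 1             # nodes represented by each group
            self.selected = set()           # indices of selected groups

        def record_access(self, node_id):
            group = node_id // self.group_size
            while group >= NUM_GROUP_BITS:
                # Requesting node falls outside the representable groups:
                # rebuild the record with larger groups, one of which will
                # cover the requester, and remap the existing selection.
                self.selected = {g // 2 for g in self.selected}
                self.group_size *= 2
                group = node_id // self.group_size
            self.selected.add(group)

        def possible_sharers(self):
            # Coherence traffic (e.g., invalidations) must reach every node
            # in every selected group.
            nodes = set()
            for g in self.selected:
                nodes.update(range(g * self.group_size, (g + 1) * self.group_size))
            return nodes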
Abstract:
A system and method for conveying data include the capability to determine whether a transaction request credit has been received at a computer module, the transaction request credit indicating that at least a portion of a transaction request message may be sent. The system and method also include the capability to determine, if a transaction request message is to be sent, whether at least a portion of the transaction request message may be sent, and to send that portion of the transaction request message if it may be sent.
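The credit check can be illustrated with the sketch below. The queueing structure and the one-credit-per-portion rule are assumptions for the example; the disclosure itself only requires that a portion be sent when a received credit indicates it may be.

    from collections import deque

    class RequestSender:
        """Send portions of a transaction request message only while credits
        received from the other module indicate they may be sent."""

        def __init__(self):
            self.credits = 0          # credits received from the far module
            self.pending = deque()    # message portions waiting to be sent
            self.link = []            # stand-in for the physical link

        def on_credit_received(self, count=1):
            self.credits += count

        def queue_request(self, portions):
            self.pending.extend(portions)

        def try_send(self):
            # One credit is consumed for each portion placed on the link.
            while self.pending and self.credits > 0:
                self.link.append(self.pending.popleft())
                self.credits -= 1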
Abstract:
The present disclosure relates to an apparatus and a method for cooling electronic components. An apparatus of the presently claimed invention includes a connector and an electronic component that plugs into the connector. The electronic component contacts a heat sink, where the heat sink moves in an upward direction as the electronic component is plugged into the connector. Soft thermal pads located between the heat sink and liquid cooling tubes/pipes compress as the heat sink moves upward. When compressed, the thermal pads contact the heat sink and the liquid cooling tubes/pipes. Heat is then transferred from the electronic component through the heat sink, through the thermal pads, through the coolant tubes, and into liquid contained within the liquid coolant tubes.
Abstract:
The present disclosure is directed to a configurable extension space for a computer server or node blade that can add data storage or other functionality to a computer system while minimizing any disruption to computers in a data center when the functionality of a computer server or a node blade is extended. Apparatus consistent with the present disclosure may include multiple electronic assemblies, where a first assembly resides deep within an enclosure to which an expansion module may be attached in an accessible expansion space.
Abstract:
A system for deploying big data software in a multi-instance node. The optimal CPU core and memory configuration for a single-instance database is determined. After determining an optimal core-memory ratio for a single-instance execution, the software is deployed in multi-instance mode on a single machine by applying the optimal core-memory ratio to each of the instances. The multi-instance database may then be deployed and data may be loaded in parallel for the instances.
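As a worked example of applying the ratio, assume a 256-core machine with 4 TB of memory and a profiled optimum of 16 cores and 256 GB per instance; the figures are illustrative only and do not come from the disclosure.

    total_cores = 256
    total_memory_gb = 4096

    # Assumed result of profiling a single-instance run:
    cores_per_instance = 16
    memory_per_instance_gb = 256

    # The machine supports as many instances as both resources allow.
    instances = min(total_cores // cores_per_instance,
                    total_memory_gb // memory_per_instance_gb)

    print(f"Deploy {instances} instances of "
          f"{cores_per_instance} cores / {memory_per_instance_gb} GB each, "
          f"then load data into all {instances} instances in parallel.")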
Abstract:
A cluster of computer system nodes share direct read/write access to storage devices via a storage area network using a cluster filesystem. At least one trusted metadata server assigns a mandatory access control label as an extended attribute of each filesystem object regardless of whether required by a client node accessing the filesystem object. The mandatory access control label indicates the sensitivity and integrity of the filesystem object and is used by the trusted metadata server(s) to control access to the filesystem object by all client nodes.
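For illustration, a mandatory access control label stored as an extended attribute of a filesystem object might look like the sketch below. The attribute name, label format, and path are hypothetical, and in the described system the label is assigned by the trusted metadata server rather than by client code; the example requires Linux and a filesystem with extended-attribute support.

    import os

    MAC_XATTR = "user.mac_label"      # hypothetical attribute name

    def set_mac_label(path, sensitivity, integrity):
        # In the described system, the trusted metadata server performs this.
        os.setxattr(path, MAC_XATTR, f"{sensitivity}:{integrity}".encode())

    def get_mac_label(path):
        sensitivity, integrity = os.getxattr(path, MAC_XATTR).decode().split(":")
        return sensitivity, integrity

    set_mac_label("/mnt/clusterfs/data/report.dat", "confidential", "high")
    print(get_mac_label("/mnt/clusterfs/data/report.dat"))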
Abstract:
A system deploys visualization tools, business analytics software, and big data software in a multi-instance mode on a large, coherent shared-memory many-core computing system. The single-machine solution provides high performance and scalability and may be implemented remotely as a large-capacity server (i.e., in the cloud) or locally to a user. Most big data software running in a single-instance mode has limitations in scalability when running on a many-core, large coherent shared-memory system. A configuration and deployment technique using a multi-instance approach, which also includes visualization tools and business analytics software, maximizes system performance and resource utilization, reduces latency, and provides scalability as needed for end-user applications in the cloud.
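One concrete way such a deployment might be arranged is to pin each instance to its own portion of the machine, as sketched below. The node count, port scheme, and launcher command are placeholders, and printing numactl-style command lines (rather than invoking a specific tool) is an assumption for the example.

    NUMA_NODES = 8                                     # assumed machine topology
    INSTANCE_CMD = "start_db_instance --port {port}"   # hypothetical launcher

    for node in range(NUMA_NODES):
        # Bind each instance's CPUs and memory to one NUMA node so the
        # shared-memory machine behaves like several smaller servers.
        pin = f"numactl --cpunodebind={node} --membind={node}"
        print(f"{pin} {INSTANCE_CMD.format(port=5000 + node)}")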