Abstract:
A computer system with read/write access to storage devices creates a snapshot of a data volume at a point in time while continuing to accept access requests to the mirrored data volume, by copying data before changes are made to the base data volume. Multiple snapshots may be made of the same data volume at different points in time. Only data that is not stored in a previous snapshot volume or in the base data volume is stored in the most recent snapshot volume.
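A minimal sketch of the copy-before-write behavior described above, assuming in-memory dictionaries in place of real storage devices; the class and method names are illustrative, not the patented implementation. Old block contents are copied into the most recent snapshot only when that snapshot does not already hold the block, and a snapshot read falls through newer snapshots to the base volume.

    class SnapshotVolume:
        """Sparse snapshot: holds only blocks copied out of the base volume."""
        def __init__(self):
            self.blocks = {}                     # block number -> saved contents

    class BaseVolume:
        def __init__(self, nblocks):
            self.blocks = [b"\x00"] * nblocks    # current (base) contents
            self.snapshots = []                  # oldest snapshot first

        def create_snapshot(self):
            snap = SnapshotVolume()
            self.snapshots.append(snap)
            return snap

        def write(self, blkno, data):
            # Copy-before-write: preserve the old contents in the most recent
            # snapshot unless that snapshot already stores this block.
            if self.snapshots and blkno not in self.snapshots[-1].blocks:
                self.snapshots[-1].blocks[blkno] = self.blocks[blkno]
            self.blocks[blkno] = data

        def read_snapshot(self, snap, blkno):
            # A snapshot read falls through newer snapshots, then the base.
            start = self.snapshots.index(snap)
            for s in self.snapshots[start:]:
                if blkno in s.blocks:
                    return s.blocks[blkno]
            return self.blocks[blkno]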
Abstract:
A method for maintaining data coherency in a shared-memory computer system having a plurality of nodes divides the local memory of a given node into one or more blocks and stores a data record for each block indicating a plurality of node groups and a selection of the node groups. Each node group represents a number of nodes, and each selected node group includes at least one node that has requested access to the block. In response to receiving an access request from a requesting node that may or may not be in a selected node group, the method updates the data record so that the selection reflects the requesting node's group. If the requesting node is not in any node group, the data record is adjusted to have new node groups, one of which represents the requesting node.
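The following is a hedged sketch of such a per-block record, assuming a fixed number of group slots, a power-of-two group width, and a coarsening policy when the requester is not covered; all of these are illustrative choices, not the patented encoding.

    class BlockDirectoryEntry:
        """Illustrative sharing record for one block of a node's local memory.

        The record holds up to SLOTS node groups plus a selection bit per
        group; a set bit means at least one node in that group has requested
        the block.
        """
        SLOTS = 4

        def __init__(self):
            self.group_size = 1      # nodes covered by each group
            self.bases = []          # lowest node id of each group
            self.selected = []       # selection bit per group

        def record_access(self, node_id):
            base = node_id - (node_id % self.group_size)
            if base in self.bases:
                # Requester already covered: mark its group as selected.
                self.selected[self.bases.index(base)] = True
                return
            if len(self.bases) < self.SLOTS:
                # Free slot: add a group that represents the requester.
                self.bases.append(base)
                self.selected.append(True)
                return
            # Requester not in any group and no free slot: widen the groups
            # and rebuild the record so one new group covers the requester.
            members = [b for b, s in zip(self.bases, self.selected) if s]
            members.append(node_id)
            while True:
                self.group_size *= 2
                new_bases = {m - (m % self.group_size) for m in members}
                if len(new_bases) <= self.SLOTS:
                    break
            self.bases = sorted(new_bases)
            self.selected = [True] * len(self.bases)

        def clear_selections(self):
            # Groups are retained but deselected, e.g. after invalidations,
            # until a node in them requests the block again.
            self.selected = [False] * len(self.bases)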
Abstract:
A system for establishing a primary master node in a computer system includes a plurality of nodes, each configured with an update interval, and a hierarchy of master nodes selected from the plurality of nodes. Each master node is configured to synchronize the plurality of nodes by sending out its clock value when its update interval has expired, and each node resets its update interval when it receives the clock value. A primary master node and at least one backup master node are selected from the hierarchy of master nodes based on their update intervals, and the backup master node is configured to become the primary master node when the plurality of nodes do not receive the clock value after a predetermined period of time has elapsed.
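A sketch of the interval-based takeover idea, under the assumption that the primary master has the shortest update interval, so a backup master's timer only expires when the primary's broadcasts stop arriving; class names and timing details are illustrative.

    import time

    class ClusterNode:
        def __init__(self, name, update_interval, is_master, network):
            self.name = name
            self.update_interval = update_interval
            self.is_master = is_master
            self.network = network
            self.clock = 0.0
            self.deadline = time.monotonic() + update_interval

        def on_clock_value(self, clock_value):
            # Every node resets its update interval when a clock value arrives.
            self.clock = clock_value
            self.deadline = time.monotonic() + self.update_interval

        def tick(self):
            # Called periodically; acts only once the update interval expires.
            if time.monotonic() < self.deadline:
                return
            self.deadline = time.monotonic() + self.update_interval
            if self.is_master:
                # No clock value arrived in time: this master (the primary, or
                # a backup taking over for it) broadcasts its own clock value.
                self.clock = time.monotonic()
                self.network.broadcast(self, self.clock)

    class Network:
        def __init__(self, nodes=None):
            self.nodes = nodes or []

        def broadcast(self, sender, clock_value):
            for node in self.nodes:
                if node is not sender:
                    node.on_clock_value(clock_value)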
Abstract:
In a computing system, cache coherency is performed by selecting one of a plurality of coherency protocols for a first memory transaction. Each of the plurality of coherency protocols has a unique set of cache states that may be applied to cached data for the first memory transaction. Cache coherency is performed on appropriate caches in the computing system by applying the set of cache states of the selected one of the plurality of coherency protocols.
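The idea of a per-transaction protocol choice, each protocol carrying its own set of cache states, can be sketched as follows; the selection rule and the state sets shown (textbook MSI and MESI) are assumptions for illustration only.

    from enum import Enum

    class Protocol(Enum):
        MSI = "MSI"
        MESI = "MESI"

    # Each protocol carries its own set of cache states.
    CACHE_STATES = {
        Protocol.MSI:  {"MODIFIED", "SHARED", "INVALID"},
        Protocol.MESI: {"MODIFIED", "EXCLUSIVE", "SHARED", "INVALID"},
    }

    def select_protocol(transaction):
        # Hypothetical selection rule: pick a protocol for this transaction.
        return Protocol.MESI if transaction.get("wants_exclusive") else Protocol.MSI

    def apply_coherency(caches, address, new_state, transaction):
        """Apply the selected protocol's state to every cache holding the line."""
        protocol = select_protocol(transaction)
        if new_state not in CACHE_STATES[protocol]:
            raise ValueError(f"{new_state} is not a {protocol.value} state")
        for cache in caches:          # each cache: dict of address -> state
            if address in cache:
                cache[address] = new_state

    caches = [{0x100: "SHARED"}, {0x200: "INVALID"}]
    apply_coherency(caches, 0x100, "INVALID", {"wants_exclusive": True})
    print(caches)                     # [{256: 'INVALID'}, {512: 'INVALID'}]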
Abstract:
Processing transaction requests in a shared memory multi-processor computer network is described. A transaction request is received at a servicing agent from a requesting agent. The transaction request includes a request priority associated with a transaction urgency generated by the requesting agent. The servicing agent provides an assigned priority to the transaction request based on the request priority, and then compares the assigned priority to an existing service level at the servicing agent to determine whether to complete or reject the transaction request. A reply message from the servicing agent to the requesting agent is generated to indicate whether the transaction request was completed or rejected, and to provide reply fairness state data for rejected transaction requests.
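A minimal sketch of the comparison at the servicing agent, assuming a simple clamp from request priority to assigned priority, a numeric service level, and a rejection counter standing in for the reply fairness state; none of these specifics come from the abstract.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Reply:
        completed: bool
        fairness_state: Optional[int] = None   # returned only for rejections

    class ServicingAgent:
        MAX_PRIORITY = 7

        def __init__(self, service_level):
            self.service_level = service_level
            self.rejections = 0

        def handle(self, request_priority):
            # Assign a priority based on the urgency the requester reported.
            assigned = min(request_priority, self.MAX_PRIORITY)
            if assigned >= self.service_level:
                return Reply(completed=True)
            # Below the current service level: reject, but hand back fairness
            # state so retries are not starved indefinitely.
            self.rejections += 1
            return Reply(completed=False, fairness_state=self.rejections)

    agent = ServicingAgent(service_level=3)
    print(agent.handle(request_priority=5))   # completed
    print(agent.handle(request_priority=1))   # rejected, carries fairness state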
Abstract:
A rack-mounted computer system. In one variation, the computer rack is configured for side-by-side placement of computers. In another variation, the computer rack includes flanges for supporting the placement of computer units within the rack. In another variation, the computer rack is configured with retaining clips. In yet another variation, the computer rack is configured to receive computers with chassis that are adapted for side-by-side placement.
Abstract:
A system and method for conveying data include the capability to determine whether a transaction request credit has been received at a computer module, the transaction request credit indicating that at least a portion of a transaction request message may be sent. The system and method also include the capability to determine, if a transaction request message is to be sent, whether at least a portion of the transaction request message may be sent, and to send the at least a portion of the transaction request message if it may be sent.
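A sketch of credit-gated sending between two modules, assuming one credit covers one portion of a transaction request message; the credit granularity and the queueing policy are illustrative assumptions.

    class CreditLink:
        def __init__(self, initial_credits):
            self.credits = initial_credits
            self.pending = []                    # portions waiting for credits

        def credit_received(self):
            # The receiver returned a credit: another portion may be sent.
            self.credits += 1
            self._send_what_we_can()

        def send_request(self, portions):
            # A transaction request is to be sent: queue its portions and send
            # as many as the available credits allow.
            self.pending.extend(portions)
            self._send_what_we_can()

        def _send_what_we_can(self):
            while self.pending and self.credits > 0:
                portion = self.pending.pop(0)
                self.credits -= 1
                self._put_on_link(portion)

        def _put_on_link(self, portion):
            print("sent:", portion)              # placeholder for the real link

    link = CreditLink(initial_credits=1)
    link.send_request(["header", "payload"])     # only "header" goes out now
    link.credit_received()                       # credit arrives: "payload" is sent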
Abstract:
A system and method for interconnecting a plurality of processing element nodes within a scalable multiprocessor system is provided. Each processing element node includes at least one processor and memory. A scalable interconnect network includes physical communication links interconnecting the processing element nodes in a cluster. A first set of routers in the scalable interconnect network routes messages between the plurality of processing element nodes. One or more metarouters in the scalable interconnect network route messages between the first set of routers so that each one of the routers in a first cluster is connected to all other clusters through one or more metarouters.
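A hypothetical path trace through such a two-level interconnect; the maps and a single stand-in metarouter are assumptions made only to show where the metarouter hop occurs for inter-cluster traffic.

    def route(src, dst, node_router, router_cluster, metarouter):
        """Trace a path: node -> router -> (metarouter) -> router -> node.

        node_router maps a node to its first-level router, router_cluster
        maps a router to its cluster, and metarouter stands in for the set
        of metarouters connecting the clusters.
        """
        path = [src, node_router[src]]
        if router_cluster[node_router[src]] != router_cluster[node_router[dst]]:
            path.append(metarouter)              # inter-cluster hop
        if node_router[dst] != path[-1]:
            path.append(node_router[dst])
        path.append(dst)
        return path

    # Example: two clusters of two processing element nodes each.
    node_router = {"n0": "r0", "n1": "r0", "n2": "r1", "n3": "r1"}
    router_cluster = {"r0": 0, "r1": 1}
    print(route("n0", "n3", node_router, router_cluster, "meta0"))
    # ['n0', 'r0', 'meta0', 'r1', 'n3']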
Abstract:
A system deploys visualization tools, business analytics software, and big data software in a multi-instance mode on a large, coherent shared-memory many-core computing system. The single-machine solution provides for high performance and scalability and may be implemented remotely as a large-capacity server (i.e., in the cloud) or locally to a user. Most big data software running in a single-instance mode has limitations in scalability when running on a many-core, large coherent shared-memory system. A configuration and deployment technique using a multi-instance approach, which also includes visualization tools and business analytics software, maximizes system performance and resource utilization, reduces latency, and provides scalability as needed for end-user applications in the cloud.
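One way to picture the multi-instance approach is a simple partitioning of the machine's cores and memory across instances; the even split and the returned fields below are assumptions, since the abstract does not specify a sizing policy.

    def plan_instances(total_cores, total_mem_gb, n_instances):
        """Divide a large shared-memory machine into equally sized instances."""
        cores_per = total_cores // n_instances
        mem_per = total_mem_gb // n_instances
        return [{
            "instance": i,
            "cores": list(range(i * cores_per, (i + 1) * cores_per)),
            "memory_gb": mem_per,
        } for i in range(n_instances)]

    # Example: a 256-core, 4 TB coherent shared-memory system split 8 ways.
    for inst in plan_instances(256, 4096, 8):
        print(f"instance {inst['instance']}: cores {inst['cores'][0]}-"
              f"{inst['cores'][-1]}, {inst['memory_gb']} GB")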
Abstract:
Embodiments of the invention relate to a system and method for dynamically scheduling resources using policies to self-optimize resource workloads in a data center. The object of the invention is to allocate resources in the data center dynamically, in accordance with a set of policies configured by an administrator. Operational parametrics that correlate to the cost of ownership of the data center are monitored and compared to the set of policies configured by the administrator. When the operational parametrics approach or exceed levels that correspond to the set of policies, workloads in the data center are adjusted with the goal of minimizing the cost of ownership of the data center. Such parametrics include, but are not limited to, those that relate to resiliency, power balancing, power consumption, power management, error rate, maintenance, and performance.
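A sketch of the policy check described above; the metric names, thresholds, and the throttle action are illustrative placeholders, since the abstract leaves the concrete workload adjustment to the scheduler.

    def enforce_policies(metrics, policies, workloads):
        """Compare monitored parametrics against administrator-set policies."""
        adjustments = []
        for name, limit in policies.items():
            value = metrics.get(name)
            if value is not None and value >= limit:
                # A parametric has reached its policy level: adjust workloads
                # with the goal of lowering the data center's cost of ownership.
                for wl in workloads:
                    if wl.get("movable") and not wl.get("throttled"):
                        wl["throttled"] = True
                        adjustments.append((name, wl["id"]))
        return adjustments

    policies = {"power_kw": 500, "error_rate": 0.01}       # set by an administrator
    metrics = {"power_kw": 530, "error_rate": 0.002}       # monitored parametrics
    workloads = [{"id": "batch-7", "movable": True}]
    print(enforce_policies(metrics, policies, workloads))  # [('power_kw', 'batch-7')]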