Abstract:
A mechanism for detection and measurement of hardware-based processor latency is disclosed. A method of the invention includes issuing an instruction to stop all running instructions on one or more processors of a multi-core computing device and starting a latency measurement code loop on each of the one or more processors. For each of the one or more processors, the latency measurement code loop operates to sample a time stamp counter (TSC) for a first time reading, sample the TSC for a second time reading after a predetermined period of time, and determine whether a difference between the first and second time readings represents a discontinuous time interval during which an operating system (OS) of the computing device does not control the one or more processors.
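For illustration, a minimal Python sketch of the measurement loop's logic, using time.perf_counter_ns() as a stand-in for reading the TSC; the sampling period and discontinuity threshold are hypothetical values chosen for the sketch, not figures from the disclosure:

```python
import time

# Hypothetical parameters, chosen only for illustration.
PERIOD_NS = 10_000          # predetermined period between the two readings
DISCONTINUITY_NS = 50_000   # extra gap treated as a discontinuous interval

def latency_measurement_loop(iterations=100_000):
    """Sample a monotonic counter twice per iteration (stand-in for the TSC)
    and collect gaps that exceed the expected period, which would indicate
    time during which the OS did not control the processor."""
    discontinuities = []
    for _ in range(iterations):
        first = time.perf_counter_ns()                    # first time reading
        while time.perf_counter_ns() - first < PERIOD_NS:
            pass                                          # busy-wait the period
        second = time.perf_counter_ns()                   # second time reading
        if (second - first) - PERIOD_NS > DISCONTINUITY_NS:
            discontinuities.append(second - first)
    return discontinuities

if __name__ == "__main__":
    gaps = latency_measurement_loop()
    print(f"{len(gaps)} discontinuous intervals observed")
```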
Abstract:
Analyzing an application executing on a target device. An application may be executed on a target device. Low-cost measurement data may be gathered regarding the application executing on the target device. In response to a trigger, high-cost measurement data may be gathered regarding the application executing on the target device. The high-cost measurement data may include graphics commands provided by the application. The graphics commands and related information may be stored and provided to a host. The host may modify the graphics commands to perform experiments to determine performance issues of the application executing on the target device. The host may determine whether performance is limited by the CPU or the GPU and may determine specific operations that are causing performance issues. The host may provide suggestions for overcoming the performance issues.
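A rough Python sketch of the two-tier measurement idea: cheap samples are gathered every frame, and an expensive capture of the frame's graphics commands is taken only when a trigger fires. The frame-time trigger, function names, and simulated data are assumptions for illustration, not the disclosed implementation:

```python
import random

FRAME_TIME_BUDGET_MS = 16.7   # hypothetical trigger threshold (60 fps budget)

def low_cost_sample(app_state):
    """Cheap, always-on measurement: just the last frame time."""
    return {"frame_ms": app_state["last_frame_ms"]}

def high_cost_capture(app_state):
    """Expensive capture taken only when triggered: a copy of the frame's
    graphics commands plus related information, to be sent to the host."""
    return {"commands": list(app_state["graphics_commands"]),
            "frame_ms": app_state["last_frame_ms"]}

def monitor(frames=300):
    app_state, captures = {}, []
    for _ in range(frames):
        # Simulated frame; a real harness would hook the application's frame loop.
        app_state["last_frame_ms"] = random.uniform(8.0, 25.0)
        app_state["graphics_commands"] = [f"draw_call_{i}" for i in range(3)]
        sample = low_cost_sample(app_state)
        if sample["frame_ms"] > FRAME_TIME_BUDGET_MS:     # trigger fires
            captures.append(high_cost_capture(app_state))
    return captures                                       # provided to the host

print(len(monitor()), "triggered captures")
```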
Abstract:
In-band commands may be associated with a particular consistency interval and may indicate requested actions to be performed for that consistency interval. An application may desire to perform actions, such as additional backups, snapshots, etc., on stored data when that data is in a consistent state from the application's point of view. To ensure that the data is in a consistent state, a consistency interval may be created on demand. A node may request a consistency interval by sending a consistency request message to a consistency interval coordinator, which, in turn, establishes the consistency interval with all nodes in the distributed environment. After sending all write requests for the consistency interval, the node may then send the command message. Command messages may be stored in consistency logs along with write requests, and a replication target, or other device, may read both the write requests and the command message.
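A minimal sketch of the message flow in Python, assuming simplified in-memory nodes, a coordinator that hands out interval identifiers, and a per-node consistency log; all class and field names are illustrative, not taken from the disclosure:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogEntry:
    interval_id: int
    kind: str          # "write" or "command"
    payload: str

@dataclass
class Node:
    name: str
    current_interval: int = 0
    log: List[LogEntry] = field(default_factory=list)

class ConsistencyCoordinator:
    """Hands out consistency interval identifiers on demand and establishes
    each new interval with every node in the (simulated) environment."""
    def __init__(self, nodes):
        self.nodes = nodes
        self.next_interval = 0

    def request_interval(self):
        self.next_interval += 1
        for node in self.nodes:
            node.current_interval = self.next_interval
        return self.next_interval

nodes = [Node("node-a"), Node("node-b")]
coordinator = ConsistencyCoordinator(nodes)
writer = nodes[0]
coordinator.request_interval()                 # node requests a consistency interval
interval = writer.current_interval
writer.log.append(LogEntry(interval, "write", "put k1=v1"))
writer.log.append(LogEntry(interval, "write", "put k2=v2"))
# After all writes for the interval, the in-band command message is sent/logged.
writer.log.append(LogEntry(interval, "command", "snapshot"))
# A replication target (or other device) reads both writes and the command.
for entry in writer.log:
    print(entry.interval_id, entry.kind, entry.payload)
```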
Abstract:
Example methods, apparatus and articles of manufacture to benchmark hardware and software are disclosed. A disclosed example method includes initiating a first thread to execute a set of instructions on a processor, initiating a second thread to execute the set of instructions on the processor, determining a first duration for the execution of the first thread, determining a second duration for the execution of the second thread, and determining a thread fairness value for the computer system based on the first duration and the second duration.
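A small Python sketch of the benchmarking idea: two threads run the same workload, each duration is measured, and a fairness value is derived. The specific fairness formula (ratio of the shorter to the longer duration) is one possible choice, assumed here for illustration:

```python
import threading
import time

def workload(n=200_000):
    """Fixed set of instructions executed identically by both threads."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(durations, index):
    start = time.perf_counter()
    workload()
    durations[index] = time.perf_counter() - start

def thread_fairness():
    durations = [0.0, 0.0]
    threads = [threading.Thread(target=timed, args=(durations, i)) for i in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    first, second = durations
    # Assumed metric: ratio of the shorter to the longer duration,
    # so 1.0 means the two threads were treated equally.
    return first, second, min(first, second) / max(first, second)

print(thread_fairness())
```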
Abstract:
A method and system are disclosed for achieving highly available, fault-tolerant execution of components in a distributed computing system without requiring the writer of those components to explicitly write code (such as entity beans or database transactions) to make component state persistent. This is achieved by converting the intrinsically non-deterministic behavior of the distributed system into deterministic behavior, enabling state recovery through efficient checkpoint-replay techniques. The method comprises: adapting the execution environment to enable message communication among the components; automatically associating a deterministic timestamp with each message communicated from a sender component to a receiver component during program execution, the timestamp representing the estimated time of arrival of the message at the receiver component; and, at each component, tracking the state of that component during program execution and periodically checkpointing the state in a local storage device. Upon failure of a component, the component state is restored by recovering a recent stored checkpoint and re-executing the events that have occurred since that checkpoint. Determinism is preserved because the receiving component re-executes by processing the messages in the same order as their associated timestamps.
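A compact Python sketch of the checkpoint-replay idea under the stated determinism assumption: a receiver processes messages strictly in deterministic-timestamp order, periodically checkpoints its state, and on recovery restores the last checkpoint and re-applies the logged messages in the same order. Class names, the state transition, and the checkpoint interval are illustrative assumptions:

```python
import heapq
import pickle

class Component:
    """Receiver that processes messages in deterministic-timestamp order,
    periodically checkpoints its state, and recovers by checkpoint-replay."""
    def __init__(self, checkpoint_every=10):
        self.state = 0
        self.inbox = []          # min-heap keyed by deterministic timestamp
        self.replay_log = []     # messages applied since the last checkpoint
        self.checkpoint = pickle.dumps(self.state)
        self.checkpoint_every = checkpoint_every

    def deliver(self, timestamp, payload):
        heapq.heappush(self.inbox, (timestamp, payload))

    def step(self):
        timestamp, payload = heapq.heappop(self.inbox)   # lowest timestamp first
        self.state += payload                            # deterministic transition
        self.replay_log.append((timestamp, payload))
        if len(self.replay_log) >= self.checkpoint_every:
            self.checkpoint = pickle.dumps(self.state)   # periodic checkpoint
            self.replay_log = []

    def recover(self):
        """Restore the most recent checkpoint, then re-execute logged messages
        in the same timestamp order to reproduce the pre-failure state."""
        self.state = pickle.loads(self.checkpoint)
        for _, payload in sorted(self.replay_log):
            self.state += payload

comp = Component()
for ts, msg in [(5, 1), (2, 10), (9, 100)]:
    comp.deliver(ts, msg)
for _ in range(3):
    comp.step()
state_before_failure = comp.state
comp.recover()                               # simulate failure and recovery
assert comp.state == state_before_failure    # replay reproduces the same state
```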
Abstract:
Methods and apparatuses for backing up data to a database are provided. A specified data set to be backed up is broken down into a plurality of data blocks, each data block is associated with a data block digest, and the data blocks and associated data block digests are stored in the database. When one or more data blocks are subsequently changed, an update to the backup may be performed by adding to the backup data only the data blocks that have changed since the initial backup. Methods and apparatuses for restoring backup data from a database are also provided. Timestamp information associated with the data blocks in the database is used to select the data blocks to be restored.
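A minimal Python sketch of the block-and-digest scheme, using an in-memory dictionary in place of the database and SHA-256 as the digest; the timestamp bookkeeping used to select blocks for restore is omitted, and the block size is an assumed value:

```python
import hashlib

BLOCK_SIZE = 4096   # assumed block size for the sketch

def split_blocks(data: bytes):
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]

def digest(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

def initial_backup(data: bytes, db: dict):
    """Store every block in the 'database' keyed by its digest and return
    the ordered list of digests describing the data set."""
    block_map = []
    for block in split_blocks(data):
        d = digest(block)
        db[d] = block
        block_map.append(d)
    return block_map

def incremental_backup(new_data: bytes, db: dict):
    """Add only blocks whose digests are not already stored."""
    block_map = []
    for block in split_blocks(new_data):
        d = digest(block)
        if d not in db:          # block changed (or is new) since the backup
            db[d] = block
        block_map.append(d)
    return block_map

def restore(block_map, db: dict) -> bytes:
    return b"".join(db[d] for d in block_map)

db = {}
original = b"a" * 10_000
initial_backup(original, db)
changed = b"a" * 4096 + b"b" * 4096 + b"a" * 1808     # only the middle block differs
block_map = incremental_backup(changed, db)
assert restore(block_map, db) == changed
```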
Abstract:
Methods and systems are disclosed for measuring performance event rates at a computer and reporting the performance event rates using timelines. A particular method tracks, for a time period, the occurrences of a particular event at a computer. Event rates corresponding to different time segments within the time period are calculated, and the time segments are assigned colors based on their associated event rates. The event rates are used to display a colored timeline for the time period, including displaying a colored timeline portion for each time segment in its associated color.
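A short Python sketch of the rate-to-color mapping: events are counted per time segment, converted to rates, and each segment is assigned a color for timeline display. The thresholds and color names are assumptions made for the sketch:

```python
def color_for_rate(rate, low=1.5, high=2.5):
    """Map an event rate to a display color; thresholds are illustrative."""
    if rate < low:
        return "green"
    if rate < high:
        return "yellow"
    return "red"

def colored_timeline(event_times, period_start, period_end, segments=10):
    """Count tracked events per time segment, convert counts to rates,
    and assign each segment a color for the timeline display."""
    seg_len = (period_end - period_start) / segments
    counts = [0] * segments
    for t in event_times:
        if period_start <= t < period_end:
            counts[min(int((t - period_start) / seg_len), segments - 1)] += 1
    return [(period_start + i * seg_len,
             counts[i] / seg_len,                      # events per unit time
             color_for_rate(counts[i] / seg_len))
            for i in range(segments)]

for start, rate, color in colored_timeline([0.5, 0.6, 3.2, 7.7, 7.8, 7.9], 0.0, 10.0):
    print(f"{start:4.1f}s  rate={rate:4.1f}/s  {color}")
```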
Abstract:
Based on time series data from multiple components, a systems administrator or other managing entity may desire to find the temporal dependencies between the different time series data over time. For example, based on actions indicated in time series data from two or more servers in a server network, a dependency structure may be determined which indicates a parent/child or dependent relationship between the two or more servers. In some cases, it may also be beneficial to predict the state of a child component, and/or to predict the average time to a state change or event of a child component, based on the parent time series data. These determinations and predictions may reflect the logical connections between actions of components. The relationships and/or predictions may be expressed graphically and/or in terms of a probability distribution.
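One very simplified way to estimate such a dependency from two event-time series, sketched in Python; this is not the disclosed method, only an illustration of estimating how often a child event follows a parent event and the average delay when it does:

```python
def dependency_stats(parent_times, child_times, window=5.0):
    """Estimate how often a child event follows a parent event within a
    window, and the average delay when it does (a stand-in heuristic)."""
    delays = []
    for p in parent_times:
        following = [c - p for c in child_times if 0 < c - p <= window]
        if following:
            delays.append(min(following))
    prob = len(delays) / len(parent_times) if parent_times else 0.0
    avg_delay = sum(delays) / len(delays) if delays else None
    return prob, avg_delay

# e.g., restart actions on a parent server vs. error events on a child server
parent_events = [10.0, 50.0, 90.0]
child_events = [12.5, 53.0, 91.0, 130.0]
prob, avg_delay = dependency_stats(parent_events, child_events)
print(f"P(child event within 5s of parent) = {prob:.2f}, avg delay = {avg_delay:.2f}s")
```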
Abstract:
A method, system, and computer program product to preserve data integrity in a mirror and copy environment is disclosed herein. In one embodiment, a method may include receiving a write command and data from a host device. The method may further include writing the data to a primary storage device and attaching a primary sequence number associated with the primary storage device to the write command, thereby providing a numbered write command with a command sequence number. The numbered write command may then be transmitted to a secondary storage device. The method may further include comparing the command sequence number to a secondary sequence number associated with the secondary storage device. If the command sequence number matches the secondary sequence number, then the command may be executed. Otherwise, it may be ignored.
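A minimal Python sketch of the sequence-number check: the primary attaches a sequence number to each write, and the secondary executes a numbered write only when the number matches its own expected sequence, ignoring it otherwise. Class names and the in-memory stores are illustrative assumptions:

```python
class PrimaryStorage:
    """Attaches a primary sequence number to each accepted write command."""
    def __init__(self):
        self.sequence = 0
        self.data = {}

    def write(self, key, value):
        self.data[key] = value
        self.sequence += 1
        # Numbered write command forwarded to the secondary storage device.
        return {"seq": self.sequence, "key": key, "value": value}

class SecondaryStorage:
    """Executes a numbered write only when its command sequence number
    matches the next expected secondary sequence number."""
    def __init__(self):
        self.expected = 1
        self.data = {}

    def apply(self, command):
        if command["seq"] == self.expected:
            self.data[command["key"]] = command["value"]
            self.expected += 1
            return "executed"
        return "ignored"           # mismatched sequence number

primary, secondary = PrimaryStorage(), SecondaryStorage()
first = primary.write("k1", "v1")
second = primary.write("k2", "v2")
print(secondary.apply(second))     # arrives out of order -> ignored
print(secondary.apply(first))      # matches the expected sequence -> executed
```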
Abstract:
Architecture that reduces data loss resulting from failover in an asynchronous log shipping deployment by leveraging mid-tier and frontend servers to fill in lost data. In an asynchronous log shipping operation, a replication component asynchronously replicates messaging data to a backend server in accordance with one or more replication operations, which can be updates to databases on the backend server. These databases can include messaging data, such as email address books, mailboxes, etc. A history component maintains a history of replication operations on a frontend server. In the event of a lossy failover, a replay component is used to replay the replication operations from the history to the backend server.
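A rough Python sketch of the replay idea, assuming the frontend keeps a bounded history of replication operations and the replay component re-applies whatever the backend has not yet seen; names and data shapes are illustrative, not the disclosed design:

```python
from collections import deque

class Frontend:
    """History component: keeps a bounded history of replication operations."""
    def __init__(self, history_size=1000):
        self.history = deque(maxlen=history_size)

    def record(self, op):
        self.history.append(op)

class Backend:
    """Target of asynchronous log shipping (e.g., a mailbox database)."""
    def __init__(self):
        self.database = {}
        self.applied = set()

    def replicate(self, op):
        self.database[op["key"]] = op["value"]
        self.applied.add(op["id"])

def replay_after_failover(frontend, target):
    """Replay component: re-apply operations the backend never saw,
    filling in data lost by the asynchronous replication."""
    for op in frontend.history:
        if op["id"] not in target.applied:
            target.replicate(op)

frontend, backend = Frontend(), Backend()
for op in [{"id": i, "key": f"mail-{i}", "value": f"msg {i}"} for i in range(5)]:
    frontend.record(op)
    if op["id"] < 3:                      # the last two ops never reach the backend
        backend.replicate(op)
replay_after_failover(frontend, backend)  # lossy failover: fill in the missing ops
print(sorted(backend.database))           # all five entries are now present
```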