Abstract:
A method of analysing streams of metric data from a plurality of data processing sources (2) in a parallel processing system (1), using a computer (6). Each stream includes time-stamped data associated with the respective data processing source in respect of a given metric which is sampled at intervals. For each stream of data, a start time and an end time are identified. A normalized start time and a normalized end time are determined across all streams. Sampling points are specified between the normalized start time and the normalized end time. For each stream of data, the data is re-sampled at the specified sampling points. For each sampling point, the re-sampled data across all the streams of data is processed to determine a statistical derivative of the data. A report is displayed which represents the values of the statistical derivative as a function of time. The process may be applied to various metrics, and the results displayed simultaneously on a common time axis.
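A minimal sketch of the re-sampling and aggregation steps described above, in Python. The linear interpolation, the choice of the latest start and earliest end as the normalized window, and the mean as the statistical derivative are all illustrative assumptions; the abstract leaves these open.

from statistics import mean

def resample(stream, points):
    # stream: time-sorted list of (timestamp, value) pairs from one data processing source
    out = []
    for p in points:
        prev = max((s for s in stream if s[0] <= p), key=lambda s: s[0])
        nxt = min((s for s in stream if s[0] >= p), key=lambda s: s[0])
        if prev[0] == nxt[0]:
            out.append(prev[1])
        else:
            frac = (p - prev[0]) / (nxt[0] - prev[0])
            out.append(prev[1] + frac * (nxt[1] - prev[1]))
    return out

def aggregate(streams, n_points=20):
    # normalized start/end across all streams: latest start, earliest end (one plausible rule)
    start = max(min(t for t, _ in s) for s in streams)
    end = min(max(t for t, _ in s) for s in streams)
    step = (end - start) / (n_points - 1)
    points = [min(start + i * step, end) for i in range(n_points)]
    resampled = [resample(s, points) for s in streams]
    # statistical derivative across streams at each sampling point (here: the mean)
    return [(p, mean(vals)) for p, vals in zip(points, zip(*resampled))]

The returned list of (sampling point, derivative) pairs is what the report would plot as a function of time; running the same pipeline for several metrics yields series that can share a common time axis.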
Abstract:
Techniques of debugging a computing system are described herein. The techniques may include generating debug data at agents in the computing system. The techniques may include recording the debug data at a storage element, wherein the storage element is disposed in a non-core portion of a circuit interconnect accessible to the agents.
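A rough sketch of the generate-and-record flow in Python. The Agent and TraceStorage classes are hypothetical stand-ins for the hardware agents and the storage element in the non-core portion of the interconnect.

import time

class TraceStorage:
    # stands in for the storage element accessible to the agents
    def __init__(self):
        self.records = []

    def record(self, agent_id, debug_data):
        self.records.append((time.monotonic(), agent_id, debug_data))

class Agent:
    def __init__(self, agent_id, storage):
        self.agent_id = agent_id
        self.storage = storage

    def emit_debug(self, payload):
        # an agent generates debug data and records it at the shared storage element
        self.storage.record(self.agent_id, payload)

storage = TraceStorage()
for i in range(3):
    Agent(f"agent-{i}", storage).emit_debug({"event": "cache_miss", "count": i})
print(storage.records)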
Abstract:
A first computing device is provided for rolling back a computing environment. The computing device includes processors configured to acquire a stream containing entries including snapshot entries, memory entries, and input/output entries wherein each entry includes information and is associated with a timestamp. The processors are further configured to receive a snapshot entry associated with a first timestamp, revert to a memory state using information provided in at least one memory entry associated with a timestamp after the first timestamp, and re-execute a previously executed process, wherein the re-execution of the process is started using the first timestamp, information from the received snapshot entry, and information for input/output operations corresponding to the input/output entries associated with timestamps after the first timestamp.
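A simplified sketch of the rollback-and-replay flow in Python, assuming dict-shaped entries with illustrative "kind", "ts", and payload fields, and assuming memory entries carry the pre-modification contents so that applying them undoes later writes; none of these details are fixed by the abstract.

def rollback_and_replay(stream, snapshot_ts):
    # locate the snapshot entry associated with the chosen (first) timestamp
    snapshot = next(e for e in stream
                    if e["kind"] == "snapshot" and e["ts"] == snapshot_ts)
    # revert to a memory state using memory entries with later timestamps
    memory = dict(snapshot["memory"])
    for e in stream:
        if e["kind"] == "memory" and e["ts"] > snapshot_ts:
            memory[e["address"]] = e["old_value"]
    # collect I/O entries with later timestamps so the re-execution can replay
    # recorded results instead of touching real devices
    io_log = [e for e in stream if e["kind"] == "io" and e["ts"] > snapshot_ts]
    return re_execute(snapshot["registers"], memory, io_log)

def re_execute(registers, memory, io_log):
    # placeholder: a real implementation would restart the process from this state
    return {"registers": registers, "memory": memory, "replayed_io": len(io_log)}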
Abstract:
A primary storage controller receives an input/output (I/O) command from a host, wherein a host timestamp is associated with the I/O command. During a mirroring of storage volumes to a secondary storage controller, the primary storage controller communicates the host timestamp associated with the I/O command to the secondary storage controller, wherein mirrored copies of the storage volumes are timestamped based on at least the host timestamp and an elapsed time since a last host I/O command. A recovery is made from a failure of one or more of the storage volumes in the primary storage controller, by using the timestamped mirrored copies of the storage volumes.
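A toy Python sketch of how the secondary side could stamp mirrored copies from the forwarded host timestamp plus the elapsed time since the last host I/O command; the class and field names are illustrative, and it assumes at least one host-timestamped command has been seen before an untimestamped write arrives.

import time

class SecondaryController:
    def __init__(self):
        self.last_host_ts = None
        self.last_host_wallclock = None

    def on_mirrored_write(self, volume_copy, host_ts=None):
        now = time.time()
        if host_ts is not None:
            # the primary forwarded the host timestamp with this I/O command
            self.last_host_ts, self.last_host_wallclock = host_ts, now
            volume_copy["timestamp"] = host_ts
        else:
            # no new host I/O: last host timestamp plus elapsed time since it
            volume_copy["timestamp"] = self.last_host_ts + (now - self.last_host_wallclock)
        return volume_copy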
Abstract:
A system, method, and apparatus are provided for ensuring consistency of derived data, relative to primary data, in a distributed data storage system. Primary data and derived data are stored on and/or managed by separate components of the data storage system, such as different storage engines. Primary data are written and updated as specified in write requests, which may be queries directed at the primary storage engine. Results of primary data writes are delivered directly to the derived storage engine. If an update to derived data fails, a record is made; if the update succeeds, any recorded failed writes to the same data are cleared. The derived storage engine also receives write results via a change capture stream of events affecting the primary data, and can use these copies of write results to fix failed updates and to clear failures from the failed write records.
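A condensed Python sketch of the failed-write bookkeeping on the derived storage engine side. The dict-backed stores, the derive() helper, and the keying scheme are illustrative assumptions.

class DerivedEngine:
    def __init__(self):
        self.derived = {}          # derived data, keyed like the primary data
        self.failed_writes = {}    # key -> primary write result that failed to apply

    def apply_write_result(self, key, value, fail=False):
        # a primary write result delivered directly by the primary storage engine
        if fail:
            self.failed_writes[key] = value      # record the failed update
            return
        self.derived[key] = derive(value)
        self.failed_writes.pop(key, None)        # a success clears earlier recorded failures

    def on_change_capture(self, key, value):
        # a copy of the same write result arriving via the change capture stream
        if key in self.failed_writes:
            self.derived[key] = derive(value)    # use the copy to fix the failed update
            self.failed_writes.pop(key, None)    # and clear it from the failed write records

def derive(value):
    # placeholder derivation, e.g. an index entry computed from the primary value
    return {"indexed": value}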
Abstract:
Embodiments profile usage of memory and other resources. Stack traces have lifespans, resource impacts, and constituent call chains. Aggregation unifies shared call chains and sums resource impacts after assigning traces to snapshot sets based on trace lifespans and user-defined snapshot request timestamps. Traces are assigned using either active aggregation or precursor aggregation. Traces spanning a snapshot request may be split. A sampled resource trace lifespan begins when the resource is sampled and ends at the next snapshot request. An allocated resource trace lifespan begins when a portion of the resource is allocated and ends when the allocated portion is freed. Resource portions not yet freed are implicitly freed when program execution ends. Call chain interval resource impact aggregation performed with multiple snapshot requests and stack trace sets creates snapshot aggregations. Two aggregations are differenced by subtracting the summed call chain resource impacts of one aggregation from those of another aggregation.
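A bare-bones Python sketch of summing call chain resource impacts over a snapshot interval and differencing two aggregations. Traces are modelled as (call_chain, start, end, impact) tuples; splitting of traces that span a snapshot request and the active/precursor distinction are omitted for brevity.

from collections import defaultdict

def aggregate(traces, interval_start, interval_end):
    # unify shared call chains by keying on the chain, and sum resource impacts
    # of traces whose lifespan overlaps the snapshot interval
    totals = defaultdict(int)
    for chain, start, end, impact in traces:
        if start < interval_end and end > interval_start:
            totals[tuple(chain)] += impact
    return dict(totals)

def difference(agg_a, agg_b):
    # subtract the summed call chain resource impacts of one aggregation from another's
    keys = set(agg_a) | set(agg_b)
    return {k: agg_a.get(k, 0) - agg_b.get(k, 0) for k in keys}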
Abstract:
A data management system and method of managing call data for at least one radio network element within a cellular communication network. The method comprises receiving call data for at least one call from the at least one radio network element within the cellular communication network, arranging the received call data into call data records, assembling the call data records into at least one data block, and writing the at least one data block to at least one data storage device. The method further comprises, upon receipt of a call data query, retrieving call data records from the at least one data storage device on a per data block basis.
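An illustrative Python sketch of assembling call data records into data blocks and answering a query on a per-data-block basis; the block size and record shape are arbitrary choices for the example.

BLOCK_SIZE = 4   # call data records per data block; an arbitrary illustrative value

def assemble_blocks(call_records):
    # arrange received call data records into data blocks for writing to storage
    return [call_records[i:i + BLOCK_SIZE]
            for i in range(0, len(call_records), BLOCK_SIZE)]

def query(blocks, predicate):
    # retrieve call data records from storage one whole data block at a time
    hits = []
    for block in blocks:
        hits.extend(r for r in block if predicate(r))
    return hits

blocks = assemble_blocks([{"call_id": i, "cell": i % 3} for i in range(10)])
print(query(blocks, lambda r: r["cell"] == 1))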
Abstract:
Technologies are described herein for collecting client-side performance metrics and latencies. A web page received by a web browser application executing on a user computing device includes markup or scripting code that instructs the browser to collect performance measures during the rendering of the content of the web page. The performance measures may include operation timings that measure the time it takes for a particular operation to complete during the rendering of the content and/or event counters that count the number of times that a specific event occurs during the rendering of the content. The web browser application sends an event report containing the collected performance measures to a reporting module executing on a server computer. The reporting module receives the event report, validates the content of the event report, and adds the event report to a database or other data storage system.
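A minimal Python sketch of the server-side reporting module's receive, validate, and store path; the field names and the validation rules are assumptions, not taken from the source.

REQUIRED_FIELDS = {"page_url", "operation_timings", "event_counters"}

def handle_event_report(report, database):
    # validate the content of the event report before adding it to the data store
    if not REQUIRED_FIELDS <= report.keys():
        return False
    if any(t < 0 for t in report["operation_timings"].values()):
        return False
    if any(c < 0 for c in report["event_counters"].values()):
        return False
    database.append(report)   # a list stands in for the database or data storage system
    return True

db = []
handle_event_report({"page_url": "/home",
                     "operation_timings": {"render_header_ms": 12.5},
                     "event_counters": {"image_load": 3}}, db)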
Abstract:
An integrated circuit includes a trace subsystem that provides timestamps for events occurring in a trace source that does not natively support timestamping trace data. A timestamp inserter is coupled to such a trace source. The timestamp inserter generates a modified trace data stream by arranging a reference or references with the trace information from the trace source on a trace bus. A trace destination receives the modified trace data stream including the reference(s). In some embodiments, a timestamp inserter receives a timestamp request and stores a reference in a buffer. Upon later receipt of trace information associated with the request, the timestamp inserter inserts the reference, a current reference, and the received trace information into the trace data stream.
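A behavioural Python sketch of the buffered-reference flow in the last two sentences, with a plain list standing in for the trace bus and incrementing integers standing in for timestamp references; the hardware details are abstracted away.

class TimestampInserter:
    def __init__(self):
        self.buffered = {}    # request id -> reference stored when the request arrived
        self.counter = 0

    def _new_reference(self):
        self.counter += 1
        return self.counter

    def on_timestamp_request(self, request_id):
        # on a timestamp request, store a reference in the buffer
        self.buffered[request_id] = self._new_reference()

    def on_trace_data(self, request_id, trace_info, trace_stream):
        # on later receipt of the associated trace information, insert the buffered
        # reference, a current reference, and the trace information into the stream
        trace_stream.append(("ref", self.buffered.pop(request_id)))
        trace_stream.append(("ref", self._new_reference()))
        trace_stream.append(("trace", trace_info))

stream = []
inserter = TimestampInserter()
inserter.on_timestamp_request("req-1")
inserter.on_trace_data("req-1", b"\x01\x02", stream)
print(stream)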