Abstract:
An application server may be instrumented to provide a resource measurement framework that collects resource usage data regarding request processing by the application server and the applications executing on it. The resource measurement framework may collect hardware and software resource usage data regarding request processing at interception points located at interfaces between application components and services or other components of the application server by instrumenting those interfaces. The framework may also collect resource usage by instrumenting standard interfaces and/or methods of various specifications, such as those implemented by containers or other components of the application server. Thus, the resource measurement framework may collect resource usage for applications or application components that do not themselves include any resource measuring capabilities. The collected resource usage data may be parsed and combined to create an overall characterization of resource usage corresponding to the application server's request processing.
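As a minimal sketch of interface instrumentation of this kind, the following wraps a component's interface method so that calls are intercepted and CPU and wall-clock time are recorded per component; the decorator, the accumulator structure, and the metric choice are illustrative assumptions, not the framework described by the abstract.

```python
import time
import functools
from collections import defaultdict

# Hypothetical accumulator for per-component resource usage.
usage_by_component = defaultdict(lambda: {"calls": 0, "cpu_s": 0.0, "wall_s": 0.0})

def instrument(component_name):
    """Wrap an interface method so each call is intercepted and its resource
    usage is recorded, without the component containing any measurement code."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            cpu0, wall0 = time.process_time(), time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                stats = usage_by_component[component_name]
                stats["calls"] += 1
                stats["cpu_s"] += time.process_time() - cpu0
                stats["wall_s"] += time.perf_counter() - wall0
        return wrapper
    return decorator

@instrument("persistence-service")
def load_record(key):
    # Stand-in for a container-provided service invoked during request processing.
    return {"key": key}

if __name__ == "__main__":
    for k in range(3):
        load_record(k)
    # Combine the per-interface measurements into an overall characterization.
    print(dict(usage_by_component))
```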
Abstract:
Methods and apparatus relating to a replacement policy for hot code detection are described. In some embodiments, it may be determined which entry amongst a plurality of entries stored in a storage unit is to be replaced next. The entries may correspond to hot code and may store age and execution frequency information corresponding to the hot code. Other embodiments are also described and claimed.
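The following sketch shows one way a victim entry could be chosen from age and execution frequency information; the weighting (lowest frequency first, oldest entry as tiebreaker) is an assumption, since the abstract only states that both values are stored and consulted.

```python
from dataclasses import dataclass

@dataclass
class HotCodeEntry:
    address: int
    age: int        # time since the entry was last hit (illustrative units)
    frequency: int  # observed execution count

def select_victim(entries):
    """Pick the entry to replace next: prefer entries that are executed
    infrequently, breaking ties in favor of the oldest entry."""
    return min(entries, key=lambda e: (e.frequency, -e.age))

if __name__ == "__main__":
    table = [
        HotCodeEntry(0x4000, age=10, frequency=500),
        HotCodeEntry(0x4100, age=90, frequency=12),
        HotCodeEntry(0x4200, age=40, frequency=12),
    ]
    victim = select_victim(table)
    print(f"replace entry at {victim.address:#x}")   # expect 0x4100
```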
Abstract:
An embodiment of a method of controlling access to a computing resource within a shared computing environment begins with a first step of measuring performance parameters for workloads accessing the computing resource to determine a performance parameter vector for the workloads. The method continues with a second step of estimating a controller function for the computing resource by analysis of recent performance parameters and recent throughputs. The controller function comprises a mathematical operation which takes an input and provides an output. In a third step, the method compares the performance parameter vector to a reference performance parameter vector to determine an error parameter. In a fourth step, the method applies the controller function to the error parameter to determine a target throughput for each of the workloads. The method concludes with a fifth step of adjusting access to the computing resource for each workload having a throughput limit different from about the target throughput by reducing or increasing the throughput limit for the workload to about the target throughput.
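One iteration of steps three through five might look like the sketch below, assuming a simple proportional controller as the estimated controller function; the abstract leaves the operation unspecified beyond taking an input and providing an output, and the metric names, units, and gain are illustrative.

```python
def control_step(measured, reference, gain, limits):
    """One loop iteration: compare measured performance to the reference,
    apply the (assumed proportional) controller to the error, and return the
    adjusted throughput limits."""
    # Step 3: compare the performance parameter vector to the reference vector.
    errors = [r - m for r, m in zip(reference, measured)]
    # Step 4: apply the controller function to the error parameter.
    targets = [limit + gain * e for limit, e in zip(limits, errors)]
    # Step 5: move each workload's throughput limit to (about) its target.
    return [max(t, 0.0) for t in targets]

if __name__ == "__main__":
    measured_latency = [12.0, 30.0]     # ms per request for two workloads
    reference_latency = [20.0, 20.0]    # reference performance parameter vector
    limits = [100.0, 100.0]             # current throughput limits (IO/s)
    print(control_step(measured_latency, reference_latency, 2.0, limits))
```

A workload running slower than the reference (30 ms versus 20 ms) has its limit reduced, while a workload running faster has its limit raised.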
Abstract:
When a plurality of disk control apparatuses function as one disk control apparatus over a mutual connecting network, a processor is used as an independent resource. In addition, the states of resource use are monitored, and the processing from distribution of the resources to allocation of control tasks is optimized promptly so as to be compatible with a user request. By promptly making system performance compatible with the user request according to the present invention, the state in which the user request and the system performance remain divergent for a long time is eliminated.
Abstract:
Embodiments of the invention provide methods, systems, software and data structures for monitoring, analyzing, storing and/or collecting events on a monitored computer. In a set of embodiments, a monitoring process monitors one or more applications for events occurring in those applications. The monitoring process, in some cases, runs in a common thread of execution with one or more of the applications. If the monitoring process detects an event, it might notify an event capture process, which might capture the event. In some embodiments, an analysis process might determine whether the event should be collected, and if so, maintain a representation of the event (perhaps in a specialized data structure). In other embodiments, a data management process is configured to store information about one or more events in an event cache, which might comprise a plurality of file streams and/or metafile streams, enabling efficient storage of information about events.
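A minimal sketch of such a monitor/capture/analysis/storage pipeline appears below; the in-process hook, the severity-based collection rule, and the list standing in for the file-stream-backed event cache are all assumptions made for illustration.

```python
import queue
import threading
import time

captured = queue.Queue()   # hand-off from the monitor to the capture process
event_cache = []           # stand-in for the file-stream / metafile-stream cache

def should_collect(event):
    # Analysis step: decide whether this event is worth keeping (assumed rule).
    return event.get("severity", 0) >= 2

def capture_worker():
    while True:
        event = captured.get()
        if event is None:
            break
        if should_collect(event):
            # Data management step: persist a representation of the event.
            event_cache.append(dict(event, captured_at=time.time()))

def monitored_application(emit):
    # The monitor shares the application's thread of execution: the hook is
    # simply invoked inline when something noteworthy happens.
    for i in range(5):
        emit({"name": f"op-{i}", "severity": i % 3})

if __name__ == "__main__":
    worker = threading.Thread(target=capture_worker)
    worker.start()
    monitored_application(captured.put)   # monitor notifies the capture process
    captured.put(None)
    worker.join()
    print(event_cache)
```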
Abstract:
A method for minimizing latency of data transfer between the redundant storage controllers in a network-based storage controller system that utilizes adaptive data throttling. Each redundant storage controller monitors latency for round trip communications between the redundant controllers by calculating the time required to mirror a write to the other controller and receive a write acknowledge. An average latency for round trip communications between the redundant controllers during a fixed monitoring period is calculated, and at the end of each fixed monitoring period, the average latency is compared to a fixed latency to assess whether the average time latency for mirroring writes is good, acceptable, or unacceptable. If the average time latency is good, the one controller reduces or disables throttling for data transfers between the one controller and the server, and between the one controller and back-end storage, increasing the number of such data transfers that can be executed in parallel. If the average time latency is acceptable, the one controller does not adjust throttling for data transfers between the one controller and the server, and between the one controller and back-end storage. If the average time latency is unacceptable, the one controller increases data throttling for data transfers between the one controller and the server, and between the one controller and back-end storage, decreasing the number of such data transfers that can be executed in parallel.
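The decision applied at the end of each monitoring period can be sketched as follows; the thresholds, step size, and parallelism bounds are illustrative assumptions, while the good/acceptable/unacceptable classification and the direction of each adjustment follow the description above.

```python
def adjust_parallelism(avg_latency_ms, current_parallel,
                       good_ms=2.0, acceptable_ms=8.0,
                       step=4, max_parallel=64, min_parallel=4):
    """Return the new number of host/back-end transfers allowed in parallel,
    based on the average mirroring latency for the last monitoring period."""
    if avg_latency_ms <= good_ms:
        return min(current_parallel + step, max_parallel)   # relax throttling
    if avg_latency_ms <= acceptable_ms:
        return current_parallel                              # leave throttling unchanged
    return max(current_parallel - step, min_parallel)        # tighten throttling

if __name__ == "__main__":
    parallel = 16
    for period_avg in (1.5, 6.0, 12.0):   # good, acceptable, unacceptable periods
        parallel = adjust_parallelism(period_avg, parallel)
        print(period_avg, "->", parallel)
```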
Abstract:
A storage system includes a maintenance terminal, at least one disk drive, and a plurality of volumes that are provided by the at least one disk drive and each store data written by a plurality of host devices. In this storage system, the maintenance terminal sets information for use in measuring the performance of the storage device, and the storage device acquires the set information, measures its performance with respect to the data stored in the plurality of volumes based on that information, and transmits, to the maintenance terminal, performance information representing the measurement result. The storage system as such can collect information about the performance of a storage device that is not measurable from the side of the host devices, and a method for collecting such performance information can also be provided.
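The set-acquire-measure-transmit flow could be modeled roughly as below; the class names, the settings contents, and the reported metric are hypothetical stand-ins rather than the storage system's actual interfaces.

```python
import random

class StorageDevice:
    """Device side: acquires the measurement settings set by the terminal,
    measures the named volumes, and returns the results (metric is invented)."""
    def __init__(self, volumes):
        self.volumes = volumes
        self.settings = None

    def acquire_settings(self, settings):
        self.settings = settings

    def measure(self):
        return {vol: {"iops": random.randint(100, 1000)}
                for vol in self.settings["volumes"] if vol in self.volumes}

class MaintenanceTerminal:
    def collect(self, device):
        # Terminal sets what to measure; the device measures and transmits back.
        device.acquire_settings({"volumes": ["vol0", "vol2"], "interval_s": 60})
        return device.measure()

if __name__ == "__main__":
    device = StorageDevice(volumes={"vol0", "vol1", "vol2"})
    print(MaintenanceTerminal().collect(device))
```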
Abstract:
The trace logic is clocked separately from the clocks that operate the system logic. This allows the chip to be placed in a special mode in which the functional logic is issued one clock at a time. One frame of trace data is generated for each functional clock issued. A valid signal may be implemented that changes state when new information is generated. The trace logic, whose clock is free running, detects the change in state of the valid signal. It then processes the trace information presented to it, exporting this information to a trace recorder. When transmission of this information has created sufficient space to accept a new frame of trace information, the empty signal is generated. This causes the clock generation logic to issue another clock to the system logic.
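A simple software model of this handshake is sketched below; the signal names (valid, empty) follow the description, but the single-frame buffer and the scheduling loop are illustrative assumptions about hardware behavior.

```python
class TraceCapture:
    def __init__(self):
        self.valid = 0
        self.frame = None
        self.recorder = []

    def functional_clock(self, frame):
        # One issued functional clock generates one frame of trace data and
        # toggles the valid signal to announce it.
        self.frame = frame
        self.valid ^= 1

    def trace_clock(self, last_seen_valid):
        # Free-running trace clock: on a change of the valid signal, export the
        # frame to the recorder; returning True models the empty signal that
        # lets the clock generation logic issue the next functional clock.
        if self.valid != last_seen_valid and self.frame is not None:
            self.recorder.append(self.frame)
            self.frame = None
            return True
        return False

if __name__ == "__main__":
    tc = TraceCapture()
    empty, seen_valid = True, tc.valid
    for n in range(3):
        if empty:
            tc.functional_clock(f"frame-{n}")
        empty = tc.trace_clock(seen_valid)
        seen_valid = tc.valid
    print(tc.recorder)   # ['frame-0', 'frame-1', 'frame-2']
```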
Abstract:
A method, system, and computer program product are provided for verifying out of order instruction address (IA) stride prefetch performance in a processor design having more than one level of cache hierarchy. Multiple instruction streams are generated, and the instructions loop back to corresponding instruction addresses. The multiple instruction streams are dispatched to a processor and a simulation application for processing. When a particular instruction is being dispatched, the particular instruction's instruction address and operand address are recorded in a queue. The processor is monitored to determine if the processor executes fetch and prefetch commands in accordance with the simulation application. A check is made to determine whether prefetch commands are issued for instructions having three or more strides.
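The stride check could be implemented roughly as below: from the recorded (instruction address, operand address) pairs, find the instructions whose operand addresses advance by a constant stride at least three times, which are the instructions for which prefetch commands are expected. The bookkeeping here is an assumption for illustration, not the patented test bench.

```python
from collections import defaultdict

def expected_prefetches(accesses, min_strides=3):
    """Return the instruction addresses whose recorded operand addresses show
    at least `min_strides` consecutive, equal, nonzero strides."""
    history = defaultdict(list)
    for ia, operand in accesses:
        history[ia].append(operand)
    expected = set()
    for ia, addrs in history.items():
        strides = [b - a for a, b in zip(addrs, addrs[1:])]
        if len(strides) >= min_strides and len(set(strides)) == 1 and strides[0] != 0:
            expected.add(ia)
    return expected

if __name__ == "__main__":
    recorded_queue = [(0x100, 0x8000), (0x100, 0x8040), (0x100, 0x8080), (0x100, 0x80C0),
                      (0x200, 0x9000), (0x200, 0x9123)]
    print({hex(ia) for ia in expected_prefetches(recorded_queue)})   # expect {'0x100'}
```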
Abstract:
A method, apparatus, and computer-usable program code in a computer system for identifying a subset of a workload, which includes a total set of dynamic instructions, to use as a trace. Processor unit hardware executes the entire workload in real-time using a particular dataset. The processor unit hardware includes at least one microprocessor and at least one cache. The real-time execution of the workload is monitored to obtain information about how the processor unit hardware executes the workload when the workload is executed using the particular dataset to form actual performance information. Multiple different subsets of the workload are generated. The execution of each one of the subsets by the processor unit hardware is compared with the actual performance information. A result of the comparison is used to select the one of the multiple different subsets that most closely represents the execution of the entire workload using the particular dataset to use as a trace.
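The final selection step might look like the sketch below: each candidate subset is scored by how far its measured metrics fall from the full-workload measurements, and the closest one is kept as the trace. The metric names and the distance function are illustrative assumptions.

```python
def closest_subset(actual, candidates):
    """Return the name of the candidate subset whose metrics most closely
    match the actual performance information for the full workload."""
    def distance(metrics):
        # Sum of relative deviations across the compared metrics (assumed score).
        return sum(abs(metrics[k] - actual[k]) / actual[k] for k in actual)
    return min(candidates, key=lambda item: distance(item[1]))[0]

if __name__ == "__main__":
    actual = {"ipc": 1.40, "l2_miss_rate": 0.035}   # full-workload measurements
    candidates = [
        ("subset-A", {"ipc": 1.10, "l2_miss_rate": 0.050}),
        ("subset-B", {"ipc": 1.38, "l2_miss_rate": 0.037}),
        ("subset-C", {"ipc": 1.55, "l2_miss_rate": 0.020}),
    ]
    print(closest_subset(actual, candidates))   # expect subset-B
```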