Abstract:
A method is provided for collaborative caching between a server cache (104) of a server computer (102) and an array cache (112) of a storage array (110) coupled to the server computer. The method includes collecting instrumentation data on the server cache and the array cache of the storage array and, based on the instrumentation data, adjusting the operation of at least one of the server cache and the array cache.
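A minimal sketch of the collaborative loop this abstract describes: collect instrumentation from the server cache and the array cache, then adjust at least one of them based on what was observed. The statistics fields and the adjustment policy below are illustrative assumptions, not the patent's specific mechanism.

def hit_rate(stats):
    total = stats["hits"] + stats["misses"]
    return stats["hits"] / total if total else 0.0

def plan_adjustment(server_stats, array_stats):
    """Return a simple directive for the array cache based on both caches."""
    if hit_rate(server_stats) > 0.9:
        # Server cache absorbs most hits: stop duplicating its contents.
        return {"array_cache_mode": "exclusive"}
    if hit_rate(array_stats) < 0.2:
        # Array cache is ineffective: give it more room for cold data.
        return {"array_cache_grow_bytes": 256 * 1024 * 1024}
    return {}

server_stats = {"hits": 950, "misses": 50}   # collected on the server cache
array_stats = {"hits": 120, "misses": 880}   # collected on the array cache
print(plan_adjustment(server_stats, array_stats))  # {'array_cache_mode': 'exclusive'}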
Abstract:
Embodiments of the present invention provide an adaptive cache system and method for a hybrid storage system. Specifically, in a typical embodiment, an input/output (I/O) traffic analysis component is provided for monitoring data traffic and providing a traffic analysis based thereon. An adaptive cache algorithm component is coupled to the I/O traffic analysis component for applying a set of algorithms to determine a storage schema for handling the data traffic. Further, an adaptive cache policy component is coupled to the adaptive cache algorithm component. The adaptive cache policy component applies a set of caching policies and makes storage determinations based on the traffic analysis and the storage schema. Based on the storage determinations, data traffic can be stored (e.g., cached) among a set of storage devices coupled to the adaptive cache policy component. Such storage devices can include one or more of the following: a low-high cache, a low-mid-high cache, a low speed storage component, a middle speed storage component and/or a high speed storage component.
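A hedged sketch of the flow this abstract describes: a traffic analysis feeds an adaptive cache policy, which picks a storage tier (low, middle, or high speed) for each block. The thresholds and the frequency-based classification rule are illustrative assumptions.

from collections import Counter

def analyze_traffic(accesses):
    """Traffic analysis component: count accesses per block."""
    return Counter(accesses)

def choose_tier(block, analysis, hot=10, warm=3):
    """Adaptive cache policy component: map access frequency to a tier."""
    freq = analysis[block]
    if freq >= hot:
        return "high_speed"       # e.g. DRAM or SSD cache
    if freq >= warm:
        return "middle_speed"
    return "low_speed"            # e.g. capacity HDD tier

accesses = ["a"] * 12 + ["b"] * 4 + ["c"]
analysis = analyze_traffic(accesses)
for block in "abc":
    print(block, choose_tier(block, analysis))
# a high_speed / b middle_speed / c low_speed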
Abstract:
A method and apparatus for providing a memory model for hardware attributes to support transactional execution is herein described. Upon encountering a load of a hardware attribute, such as a test monitor operation to load a read monitor, write monitor, or buffering attribute, a fault is issued in response to a loss field indicating the hardware attribute has been lost. Furthermore, dependency actions, such as blocking and forwarding, are provided for the attribute access operations based on address dependency and access type dependency. As a result, different scenarios for attribute loss and testing thereof are allowed and restricted in a memory model.
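A minimal Python model, not the hardware itself, of the faulting behavior this abstract describes: testing a read monitor, write monitor, or buffering attribute raises a fault when the associated loss field is set. Names such as MonitorLossFault and test_monitor are illustrative.

class MonitorLossFault(Exception):
    """Raised when a hardware attribute is tested after being lost."""

class AttributeEntry:
    def __init__(self):
        self.read_monitor = False
        self.write_monitor = False
        self.buffered = False
        self.lost = False           # the "loss field" of the abstract

class AttributeTable:
    def __init__(self):
        self.entries = {}           # address -> AttributeEntry

    def set_attr(self, addr, name):
        self.entries.setdefault(addr, AttributeEntry())
        setattr(self.entries[addr], name, True)

    def evict(self, addr):
        # Losing the monitored line (e.g. on eviction) sets the loss field.
        if addr in self.entries:
            self.entries[addr].lost = True

    def test_monitor(self, addr, name):
        entry = self.entries.get(addr, AttributeEntry())
        if entry.lost:
            raise MonitorLossFault(f"attribute at {addr:#x} was lost")
        return getattr(entry, name)

table = AttributeTable()
table.set_attr(0x1000, "read_monitor")
table.evict(0x1000)                 # simulate loss of the monitored line
try:
    table.test_monitor(0x1000, "read_monitor")
except MonitorLossFault as e:
    print("fault:", e)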
Abstract:
An apparatus for real-time performance management of a virtualized storage system operable in a network having managed physical storage and virtual storage presented by an in-band virtualization controller comprises: a monitoring component operable in communication with the network for acquiring performance data from the managed physical storage and the virtual storage; and a cache controller component responsive to the monitoring component for adjusting cache parameters for the virtual storage. The apparatus may further comprise a queue controller component responsive to the monitoring component for adjusting queue parameters for the managed physical storage. The monitoring component, the cache controller component and the queue controller component may be configured to operate periodically during operation of the virtualized storage system.
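A minimal sketch, with assumed metric names and tuning knobs, of the periodic loop this abstract describes: sample performance data from the physical and virtual storage, then adjust cache parameters and queue parameters in response.

import time

def monitor(get_metrics, set_cache_params, set_queue_params,
            interval_s=5.0, cycles=3):
    for _ in range(cycles):                      # runs periodically during operation
        m = get_metrics()                        # monitoring component
        if m["virtual_read_latency_ms"] > 10:    # cache controller component
            set_cache_params(read_ahead_kb=1024)
        if m["physical_queue_depth"] > 64:       # queue controller component
            set_queue_params(max_queue_depth=32)
        time.sleep(interval_s)

# Example wiring with stand-in callables:
metrics = {"virtual_read_latency_ms": 14, "physical_queue_depth": 80}
monitor(lambda: metrics,
        lambda **kw: print("cache:", kw),
        lambda **kw: print("queue:", kw),
        interval_s=0.0, cycles=1)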
Abstract:
Embodiments of apparatuses, methods, and systems for virtualizing performance counters are disclosed. In one embodiment, an apparatus includes a counter, a counter enable storage location, counter enable logic, and virtual machine control logic. The counter enable storage location is to store a counter enable indicator. The counter enable logic is to enable the counter based on the counter enable indicator. The virtual machine control logic is to transfer control of the apparatus to a guest. The virtual machine control logic includes guest state load logic to cause a guest value from a virtual machine control structure to be loaded into the counter enable storage location in connection with a transfer of control of the apparatus to a guest.
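A minimal Python model, not the described hardware, of the guest-state-load step in this abstract: on a transfer of control to a guest, a guest value from a virtual machine control structure is copied into the counter enable storage location. Class and field names are illustrative.

class Counter:
    def __init__(self):
        self.value = 0
        self.enabled = False        # counter enable storage location

    def tick(self):
        if self.enabled:            # counter enable logic
            self.value += 1

class VMCS:
    def __init__(self, guest_counter_enable):
        self.guest_counter_enable = guest_counter_enable

def vm_entry(counter, vmcs):
    """Guest state load: copy the guest enable bit into the counter."""
    counter.enabled = vmcs.guest_counter_enable

counter = Counter()
vm_entry(counter, VMCS(guest_counter_enable=True))
for _ in range(3):
    counter.tick()
print(counter.value)                # 3 while the guest has the counter enabled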
Abstract:
Reuse distance is the number of distinct data elements accessed between two consecutive accesses of the same datum. The computation of reuse distance uses a search tree and is carried out through approximate analysis, pattern recognition, or distance-based sampling. The reuse distance can be used to detect reference affinity, that is, to detect which data are accessed together.
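A small sketch of one way to compute reuse distances for an access trace. A Fenwick tree stands in for the search tree mentioned in the abstract; this is an illustrative implementation of the general technique, not the patent's specific algorithm.

def reuse_distances(trace):
    """Reuse distance of each access: the number of distinct data elements
    referenced since the previous access to the same datum (None for a
    first access)."""
    n = len(trace)
    bit = [0] * (n + 1)              # Fenwick tree over access positions

    def update(i, delta):
        while i <= n:
            bit[i] += delta
            i += i & -i

    def prefix_sum(i):
        s = 0
        while i > 0:
            s += bit[i]
            i -= i & -i
        return s

    last_pos = {}                    # datum -> 1-based position of its last access
    distances = []
    for pos, datum in enumerate(trace, start=1):
        prev = last_pos.get(datum)
        if prev is None:
            distances.append(None)                    # cold (first) access
        else:
            # Distinct data touched strictly between prev and pos:
            # each datum keeps at most one marked position.
            distances.append(prefix_sum(pos - 1) - prefix_sum(prev))
            update(prev, -1)                          # unmark the old position
        last_pos[datum] = pos
        update(pos, +1)                               # mark the newest position
    return distances

print(reuse_distances(list("abcab")))  # [None, None, None, 2, 2]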
Abstract:
A method and system for gathering enriched web server activity data in a global communications network in which requested information files are cached at a plurality of network devices. With the prevalence of web caching on the Internet, origin web servers no longer serve the majority of requests for web site content. A single-pixel clear Graphics Interchange Format (GIF) request is added to the HyperText Markup Language (HTML) source file for a web page. Appended to the GIF request is a Common Gateway Interface (CGI) string of data that contains enhanced web activity information, including the number of images ("hits") that have to be retrieved by a client browser to build the web page, and the referring identifier that resulted in access to the web page. The single-pixel clear GIF request is not cacheable, so the request is transmitted to the origin web server whenever the client browser interprets the HTML file. The enriched data is stored in log files at the origin web server to accumulate an accurate count of hits on the web page.
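A sketch of how such a non-cacheable single-pixel GIF request could be built: a CGI query string carrying the page's image ("hit") count and the referring identifier is appended to the GIF URL so that the request always reaches the origin server. The parameter names and URL are illustrative assumptions.

from urllib.parse import urlencode

def tracking_pixel_tag(origin, page_hits, referrer):
    qs = urlencode({
        "hits": page_hits,       # images the browser must fetch to build the page
        "ref": referrer,         # referring identifier that led to the page
    })
    src = f"{origin}/clear.gif?{qs}"
    return f'<img src="{src}" width="1" height="1" alt="">'

print(tracking_pixel_tag("https://origin.example.com", 17, "newsletter-42"))
# <img src="https://origin.example.com/clear.gif?hits=17&ref=newsletter-42" ...>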
Abstract:
A system for monitoring and evaluating the performance of a network accessible application comprises one or more load servers (170), each of which is capable of simulating the load imposed upon the application server (150) by one or more clients (130). The load servers (170) are configured to execute a particular sequence of server requests in order to evaluate the operation of the server (110) under the specified load. Various performance metrics associated with the operation of the network and the application server (150) are measured during the testing of the server (110), and these metrics are stored for later access by an analysis module (190). The analysis module (190) identifies those portions of the test data which are statistically significant and groups these significant parameters to suggest possible relationships between the conditions of the load test and the observed performance results.
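A small sketch, under an assumed request sequence and stand-in server calls, of the load-driver side of the system described: several simulated clients replay a fixed sequence of requests and per-request latencies are recorded for a later analysis pass.

import time
import statistics
from concurrent.futures import ThreadPoolExecutor

REQUEST_SEQUENCE = ["/login", "/search?q=x", "/checkout"]   # assumed scenario

def fake_server_call(path):
    """Stand-in for issuing an HTTP request to the application server."""
    time.sleep(0.01)

def simulated_client(client_id):
    samples = []
    for path in REQUEST_SEQUENCE:
        start = time.perf_counter()
        fake_server_call(path)
        samples.append((path, time.perf_counter() - start))
    return samples

with ThreadPoolExecutor(max_workers=4) as pool:             # four simulated clients
    results = [s for batch in pool.map(simulated_client, range(4)) for s in batch]

latencies = [latency for _, latency in results]
print(f"n={len(latencies)} mean={statistics.mean(latencies)*1000:.1f} ms")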