Abstract:
The present invention is a method for automating database bufferpool tuning for optimized performance, employing heuristic algorithms to achieve its goals. Over a period of time, memory (bufferpool) performance is measured and accumulated in a repository. The repository becomes a knowledge base that the algorithms access to learn and implement the ideal memory (bufferpool) configurations that optimize database performance. The sampling of performance continues at regular intervals, and the knowledge base continues to grow. As knowledge accumulates, the algorithms are prevented from becoming complacent: the ideal bufferpool memory configurations are regularly reevaluated to ensure they remain optimal given potential changes in the database's use or access patterns.
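The feedback loop described above can be sketched minimally in Python. The class name, the averaged hit-ratio score, and the recency window used for reevaluation are illustrative assumptions, not the patented algorithm itself.

```python
class BufferpoolTuner:
    """Minimal sketch of the knowledge-base approach described in the
    abstract. The scoring heuristic and recency window are assumptions."""

    def __init__(self):
        self.knowledge_base = []  # accumulated (config, score) samples

    def record_sample(self, config, hit_ratios):
        # A sample taken at a regular interval: bufferpool page counts
        # (config) and the per-pool hit ratios observed under them.
        score = sum(hit_ratios.values()) / len(hit_ratios)
        self.knowledge_base.append((dict(config), score))

    def best_config(self):
        # Learn the configuration with the best observed overall score.
        if not self.knowledge_base:
            return None
        return max(self.knowledge_base, key=lambda entry: entry[1])[0]

    def reevaluate(self, recent=10):
        # Guard against complacency: rank only the most recent samples,
        # so changed access patterns can displace an old optimum.
        window = self.knowledge_base[-recent:]
        return max(window, key=lambda entry: entry[1])[0] if window else None
```

A tuner built this way keeps every sample, so the knowledge base grows with each interval while `reevaluate` considers only recent behavior.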
Abstract:
A method and system for gathering enriched web server activity data in a global communications network in which requested information files are cached at a plurality of network devices. With the prevalence of web caching on the Internet, origin web servers do not serve the majority of requests for web site content. A single-pixel clear Graphics Interchange Format (GIF) request is added to the HyperText Markup Language (HTML) source file for a web page. Appended to the GIF request is a Common Gateway Interface (CGI) string of data that contains enhanced web activity data, including the number of images ("hits") that must be retrieved by a client browser to build the web page, and the referring identifier that resulted in access to the web page. The single-pixel clear GIF request is not cacheable, so the request is transmitted to the origin web server when the client browser interprets the HTML file. The enriched data is stored in log files at the origin web server to accumulate an accurate count of hits on the web page.
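The mechanism above amounts to emitting an image tag whose query string carries the enriched data. A minimal sketch in Python follows; the parameter names (`hits`, `ref`) and the file name `clear.gif` are illustrative assumptions, not the patented format.

```python
from urllib.parse import urlencode

def tracking_pixel_tag(base_url, page_hits, referrer):
    """Sketch of the enriched single-pixel clear GIF request described
    in the abstract. Parameter names are assumptions for illustration."""
    # CGI-style query string carrying the enriched activity data.
    query = urlencode({"hits": page_hits, "ref": referrer})
    # Because the query string varies per page view, caches treat the
    # request as unique and forward it to the origin web server, whose
    # access log then records one entry per actual page view.
    return ('<img src="%s/clear.gif?%s" width="1" height="1" alt="">'
            % (base_url, query))
```

Embedding the returned tag in the page's HTML causes each client browser that renders the page to issue the request to the origin server.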
Abstract:
A method for determining a process, as shown in figure 1, to use for converting instructions in a target instruction set to instructions in a host instruction set, including the steps of: executing code morphing software, including an interpreter and a translator, to generate host instructions from target instructions; detecting at intervals whether the interpreter or the translator is executing; increasing a count if the interpreter is executing and decreasing the count if the translator is executing; and changing from interpreting to translating a sequence of target instructions when the count reaches a selected maximum.
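The counter heuristic in the steps above can be sketched as follows. The function name, the clamp-at-zero behavior, and the sample labels are assumptions for illustration; only the increase/decrease/threshold logic comes from the abstract.

```python
def choose_mode(samples, threshold):
    """Sketch of the sampling heuristic: `samples` is the sequence of
    observations taken at intervals, each either "interpreter" or
    "translator". Clamping the count at zero is an assumption."""
    count = 0
    for mode in samples:
        if mode == "interpreter":
            count += 1                  # interpreting: evidence the code is hot
        elif mode == "translator":
            count = max(0, count - 1)   # translation is already under way
        if count >= threshold:
            return "translate"          # switch to translating the sequence
    return "interpret"
```

Repeated interpreter sightings drive the count toward the selected maximum, while translator sightings pull it back, so only persistently interpreted code triggers translation.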
Abstract:
Load balancing of activities on physical disk storage devices is accomplished by monitoring reading and writing operations to blocks of contiguous storage locations on the physical disk storage devices. A list of exchangeable pairs of blocks is developed based on size and function. Statistics accumulated over an interval are then used to obtain access activity values for each block and each physical disk drive. A statistical analysis leads to a selection of one block pair. After testing to determine any adverse effect of making that change, the exchange is made to more evenly distribute the loading on individual physical disk storage devices.
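A minimal sketch of the selection step follows. The pairing rule (equal size, different disks) stands in for the abstract's "size and function" criterion, and the max-minus-min imbalance score is an illustrative stand-in for the statistical analysis; neither is the patented method.

```python
def select_exchange(blocks, activity):
    """Pick one exchangeable block pair whose swap best reduces the
    load imbalance across physical disks. `blocks` maps block name ->
    (disk, size); `activity` maps block name -> accumulated accesses."""

    def imbalance(assignment):
        # Per-disk load from the accumulated access statistics.
        loads = {}
        for name, (disk, _size) in assignment.items():
            loads[disk] = loads.get(disk, 0) + activity[name]
        return max(loads.values()) - min(loads.values())

    best_pair, best_score = None, imbalance(blocks)
    names = sorted(blocks)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            da, sa = blocks[a]
            db, sb = blocks[b]
            # Exchangeable only if sizes match and disks differ.
            if sa != sb or da == db:
                continue
            trial = dict(blocks)
            trial[a], trial[b] = (db, sa), (da, sb)
            score = imbalance(trial)
            # Test for adverse effect: accept only a strict improvement.
            if score < best_score:
                best_pair, best_score = (a, b), score
    return best_pair
```

Returning `None` when no swap improves the imbalance models the abstract's test that an exchange must not adversely affect the distribution.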
Abstract:
The AutoPilot (54) performance optimization module is a part of the Performance Assistant family (52) which is designed to dynamically optimize and balance the performance of multiprocessor computer systems. AutoPilot (54) utilizes proactive hardware monitoring capabilities supplied through the Performance Assistant architecture to monitor a computer system's workload and make performance adjustments in real time.
Abstract:
Method and structure for collecting statistics that quantify locality of data, selecting elements to be cached accordingly, and then calculating the overall cache hit rate as a function of the cached elements. LRU stack distance has a straightforward probabilistic interpretation and is part of the statistics used to quantify locality of data for each element considered for caching. Request rates for additional slots in the LRU are a function of file request rate and LRU size. Cache hit rate is a function of locality of data and the relative request rates for data sets. Specific locality parameters for each data set and the arrival rate of requests for data sets are used to produce an analytical model for calculating cache hit rate for combinations of data sets and LRU sizes. This invention provides algorithms that can be directly implemented in software for constructing a precise model that predicts cache hit rates for a cache, using statistics accumulated for each element independently. Instead of considering the cache as a whole, the average arrival rates and re-reference statistics for each element are estimated, and then used to evaluate various combinations of elements and cache sizes in predicting the cache hit rate. Cache hit rate is calculated directly from the to-be-cached files' arrival rates and re-reference statistics, and is used to rank the elements to find the set that produces the optimal cache hit rate.
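The per-element ranking idea above can be sketched as follows. The greedy ranking and the hit-rate formula (hits come from re-references to cached elements) are simplified assumptions, not the invention's full analytical model of LRU stack distances.

```python
def predicted_hit_rate(stats, cache_slots):
    """Sketch of ranking elements by independently gathered statistics.
    `stats` maps element -> (arrival_rate, re_reference_prob); both
    the ranking key and the hit-rate formula are assumptions."""
    total_rate = sum(rate for rate, _ in stats.values())
    # Rank each element by the request rate it would satisfy from cache.
    ranked = sorted(stats, key=lambda e: stats[e][0] * stats[e][1],
                    reverse=True)
    cached = ranked[:cache_slots]
    # Hits come only from re-references to the cached elements.
    hit_rate = sum(stats[e][0] * stats[e][1] for e in cached) / total_rate
    return cached, hit_rate
```

Because each element's statistics are accumulated independently, the same `stats` table can be reused to compare predicted hit rates across many candidate cache sizes without re-simulating the cache as a whole.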