Abstract:
A proactive, resilient and self-tuning memory management system and method that result in actual and perceived performance improvements in memory management, by loading data that is likely to be needed into memory, and maintaining it there, before the data is actually needed. The system includes mechanisms directed towards historical memory usage monitoring, memory usage analysis, refreshing memory with highly-valued (e.g., highly utilized) pages, I/O pre-fetching efficiency, and aggressive disk management. Based on the memory usage information, pages are prioritized with relative values, and mechanisms work to pre-fetch and/or maintain the more valuable pages in memory. Pages are pre-fetched and maintained in a prioritized standby page set comprising a number of subsets, by which more valuable pages remain in memory in preference to less valuable pages. Valuable data that is paged out may be automatically brought back into memory, in a resilient manner. Benefits include significantly reducing or even eliminating disk I/O due to memory page faults.
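For illustration only, the following Python sketch models one way such a prioritized standby page set could behave; the number of subsets, the value scores, and the class and function names are assumptions made for this example, not details of the described system.

# Illustrative sketch (not the described implementation): a standby page set
# split into priority subsets, so that when a frame must be repurposed, the
# least valuable pages are evicted first. Value scores are hypothetical
# numbers in [0, 1) derived from observed usage.

from collections import OrderedDict

NUM_SUBSETS = 4  # assumption: four priority bands, highest band = 3

class PrioritizedStandbySet:
    def __init__(self):
        # One insertion-ordered dict per priority band.
        self.subsets = [OrderedDict() for _ in range(NUM_SUBSETS)]

    def insert(self, page_id, value_score):
        """Place a page in the subset matching its relative value."""
        band = min(NUM_SUBSETS - 1, int(value_score * NUM_SUBSETS))
        self.subsets[band][page_id] = value_score

    def repurpose(self):
        """Reclaim a frame, evicting from the least valuable subset first."""
        for band in range(NUM_SUBSETS):
            if self.subsets[band]:
                page_id, _ = self.subsets[band].popitem(last=False)
                return page_id
        return None  # no standby pages available

if __name__ == "__main__":
    standby = PrioritizedStandbySet()
    standby.insert("pagefile.sys:0x10", value_score=0.1)  # rarely used
    standby.insert("explorer.exe:0x40", value_score=0.9)  # frequently used
    print(standby.repurpose())  # evicts the low-value page first

In this sketch, repurposing always drains the lowest-value subset first, which is the property the abstract describes: more valuable pages survive in memory longer than less valuable ones.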
Abstract:
A method and an apparatus for characterizing performance of a device based on user-perceivable latency. To characterize device performance, a value of a metric may be computed from latencies of operations performed by the device. In computing the value of the metric, latencies may be treated differently, such that some latencies perceivable by a user of the device may have a greater impact on the value of the metric than other latencies that either are not perceivable or are perceived by the user to a lesser degree. Such a performance metric based on user-perceivable latency facilitates identification of computing devices that provide a desirable user experience.
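As an illustration of such a metric, the sketch below computes a penalty score in which latencies under an assumed perception threshold are ignored and longer latencies are weighted progressively; the threshold value and the quadratic weighting are assumptions for the example, not values taken from the described method.

# Illustrative sketch: only latencies long enough to be perceived by a user
# contribute to the metric, and longer perceived latencies count more.

PERCEPTION_THRESHOLD_MS = 100.0  # assumption: latencies below this are not felt

def user_perceived_score(latencies_ms):
    """Return a penalty score; higher means a worse perceived experience."""
    score = 0.0
    for latency in latencies_ms:
        if latency <= PERCEPTION_THRESHOLD_MS:
            continue  # imperceptible operations do not affect the metric
        # Weight grows with how far past the threshold the operation ran.
        score += (latency - PERCEPTION_THRESHOLD_MS) ** 2
    return score

if __name__ == "__main__":
    fast_device = [20, 35, 90, 110]  # mostly imperceptible latencies
    slow_device = [20, 35, 90, 900]  # one very noticeable stall
    print(user_perceived_score(fast_device))  # small penalty
    print(user_perceived_score(slow_device))  # dominated by the 900 ms stall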
Abstract:
Indexing documents is performed using low priority I/O requests. This aspect can be implemented in systems having an operating system that supports at least two priority levels for I/O requests to its file system. Low priority I/O requests can be used for accessing documents to be indexed. Low priority I/O requests can also be used for writing information into the index. Higher priority requests can be used for I/O requests to access the index in response to queries from a user. I/O request priority can be set on a per-thread basis rather than on a per-process basis, which is useful because a single process may generate two or more threads for which it may be desirable to assign different priorities.
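The sketch below is a simplified model, not an operating-system API: it shows a two-level I/O queue in which each thread carries its own priority tag, so indexing reads and index writes issued by a low-priority thread are served only after normal-priority requests made on behalf of user queries. All names here are hypothetical.

# Illustrative sketch of per-thread I/O priority with two priority levels.

import heapq
import itertools
import threading

PRIORITY_NORMAL = 0  # served first (e.g., answering user queries)
PRIORITY_LOW = 1     # background indexing

_thread_priority = threading.local()
_counter = itertools.count()

def set_thread_io_priority(priority):
    """Per-thread setting, so one process can mix indexer and query threads."""
    _thread_priority.value = priority

class IoQueue:
    def __init__(self):
        self._heap = []

    def submit(self, request):
        prio = getattr(_thread_priority, "value", PRIORITY_NORMAL)
        # FIFO within a priority level via the monotonically increasing counter.
        heapq.heappush(self._heap, (prio, next(_counter), request))

    def dispatch(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

if __name__ == "__main__":
    queue = IoQueue()
    set_thread_io_priority(PRIORITY_LOW)
    queue.submit("read document for indexing")
    set_thread_io_priority(PRIORITY_NORMAL)
    queue.submit("read index to answer user query")
    print(queue.dispatch())  # the query's read is served before the indexer's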
Abstract:
A computer system, method and computer readable medium for memory management with intelligent trimming of pages of working sets are disclosed. The computer system has memory space allocatable in chunks, known as pages, to specific application programs or processes. The pages allocated to a specific application program or process define a working set of pages for the program or process. Occasionally, a system runs short of free memory space and needs to reduce the size of working sets using a process called trimming. A trimming method is disclosed that estimates numbers of trimmable pages for working sets based upon a measure of how much time has elapsed since the memory pages were last accessed by the corresponding application program. This estimation is performed prior to the need to trim working sets, and the trimming method uses these estimates to facilitate faster and more accurate trimming and thus faster recovery from shortages of free memory.
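As an illustrative sketch only, the following Python code estimates trimmable pages per working set by counting pages whose last access is older than an assumed idle threshold; the threshold, data layout, and function names are assumptions for the example.

# Illustrative sketch: ahead of any memory shortage, estimate how many pages
# of each working set could be trimmed, counting pages not accessed recently.

import time

AGE_THRESHOLD_S = 60.0  # assumption: pages idle longer than this are trimmable

def estimate_trimmable(working_set, now=None):
    """working_set maps page_id -> last_access_timestamp (seconds)."""
    now = time.time() if now is None else now
    return sum(1 for last in working_set.values()
               if now - last > AGE_THRESHOLD_S)

if __name__ == "__main__":
    now = time.time()
    editor_ws = {"p1": now - 5, "p2": now - 300, "p3": now - 3600}
    service_ws = {"q1": now - 2, "q2": now - 10}
    print(estimate_trimmable(editor_ws, now))   # 2 pages look trimmable
    print(estimate_trimmable(service_ws, now))  # 0 pages look trimmable

Because such counts can be refreshed in the background, a trimmer that consults them when free memory runs short can immediately target the working sets with the most idle pages, which corresponds to the faster and more accurate trimming the abstract describes.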