Abstract:
A processor monitors, directly or indirectly, the amount of time it takes for the memory controller to respond to one or more memory access requests. When this memory access latency indicates that a memory latency tolerance of a program thread has been exceeded, the processor can apportion additional power to the memory controller, thereby increasing the speed with which the memory controller can process memory access requests.
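For illustration, below is a minimal Python sketch of the control loop this abstract describes: sample the observed memory access latency and, when a thread's latency tolerance is exceeded, apportion additional power to the memory controller. All names, thresholds, and units (PowerManager, LATENCY_TOLERANCE_NS, POWER_STEP_MW) are assumptions for illustration, not taken from the patent.

    # Hypothetical constants: a per-thread latency tolerance and the power increment.
    LATENCY_TOLERANCE_NS = 120
    POWER_STEP_MW = 50

    class PowerManager:
        def __init__(self, total_budget_mw):
            self.total_budget_mw = total_budget_mw
            self.memctrl_share_mw = 0

        def boost_memory_controller(self, step_mw):
            # Apportion additional power to the memory controller, capped by the budget.
            self.memctrl_share_mw = min(self.total_budget_mw,
                                        self.memctrl_share_mw + step_mw)

    def on_latency_sample(observed_latency_ns, power_mgr):
        # Latency measured directly or indirectly for one or more memory access requests.
        if observed_latency_ns > LATENCY_TOLERANCE_NS:
            power_mgr.boost_memory_controller(POWER_STEP_MW)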
Abstract:
Systems and methods for backing up storage volumes are provided. One system includes a primary side, a secondary side, and a network coupling the primary and secondary sides. The secondary side includes first and second virtual tape servers (VTS), each including a cache and storage tape. The first VTS is configured to store a first portion of a group of storage volumes in its cache and migrate the remaining portion to its storage tape. The second VTS is configured to store the remaining portion of the storage volumes in its cache and migrate the first portion to its storage tape. One method includes receiving multiple storage volumes from a primary side, storing the storage volumes in the cache of the first and second VTS, and migrating a portion of the storage volumes from the cache to storage tape in the first VTS.
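A rough Python sketch of the complementary split on the secondary side, assuming each VTS exposes simple cache and tape stores; the class and function names are illustrative, not from the patent:

    class VTS:
        def __init__(self, name):
            self.name = name
            self.cache = {}   # volume_id -> data kept resident in cache
            self.tape = {}    # volume_id -> data migrated to storage tape

        def store(self, keep_in_cache, migrate_to_tape):
            self.cache.update(keep_in_cache)
            self.tape.update(migrate_to_tape)

    def replicate_to_secondary(volumes, vts_a, vts_b):
        # Split the incoming group of storage volumes into two complementary portions.
        items = sorted(volumes.items())
        first = dict(items[:len(items) // 2])
        rest = dict(items[len(items) // 2:])
        vts_a.store(keep_in_cache=first, migrate_to_tape=rest)   # first VTS
        vts_b.store(keep_in_cache=rest, migrate_to_tape=first)   # second VTS

Each volume therefore stays cache-resident on exactly one of the two servers, while the other server holds it on tape.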
Abstract:
Certain example embodiments relate to using Complex Event Processing (CEP) techniques for statistical analysis of cache behavior and parameters, e.g., in connection with large, potentially heterogeneous data sets (e.g., “Big Data”). A dedicated stream mining operator registers a listener to a cache and receives notifications on cache operations. For selected element attributes, a first model estimates the probability density functions of the attribute values, delivering well-defined estimates of the attribute value distributions. A second model analyzes the time that elements stay in the cache (“validity”). Validity is combined with the attribute value distribution. A meaningful analysis model (the Cache Element Model) can be derived by combining additional summary statistics for the validity with the attribute value distribution, describing how long elements stay in the cache for attribute values of a specific region and how the values are distributed. It may be used to inform administrative tasks such as optimization of cache parameters.
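The following Python sketch is one possible reading of the listener-based mining operator: it observes cache put/remove notifications, estimates the attribute value distribution with a simple histogram (standing in for a probability density estimate), and tracks residence time ("validity") per value region. The cache interface and all names are hypothetical.

    import time
    from collections import defaultdict

    class CacheElementModel:
        def __init__(self, bucket_width=10.0):
            self.bucket_width = bucket_width
            self.value_counts = defaultdict(int)     # histogram over attribute values
            self.validity_sums = defaultdict(float)  # total residence time per value bucket
            self.entry_times = {}                    # key -> (insert time, attribute value)

        def on_put(self, key, attribute_value):      # listener callback for cache inserts
            self.entry_times[key] = (time.monotonic(), attribute_value)
            self.value_counts[self._bucket(attribute_value)] += 1

        def on_remove(self, key):                    # listener callback for cache removals
            inserted, value = self.entry_times.pop(key)
            self.validity_sums[self._bucket(value)] += time.monotonic() - inserted

        def mean_validity(self, attribute_value):
            # How long elements with values in this region tend to stay in the cache.
            b = self._bucket(attribute_value)
            return self.validity_sums[b] / max(1, self.value_counts[b])

        def _bucket(self, value):
            return int(value // self.bucket_width)

An administrator could, for instance, compare mean_validity across value regions to decide whether cache parameters (size, time-to-live) are worth tuning.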
Abstract:
Embodiments of the present invention provide an adaptive cache system for a hybrid storage system. Specifically, in a typical embodiment, an input/output (I/O) traffic analysis component is provided for monitoring data traffic and providing a traffic analysis based thereon. An adaptive cache algorithm component is coupled to the I/O traffic analysis component for applying a set of algorithms to determine a storage schema for handling the data traffic. Further, an adaptive cache policy component is coupled to the adaptive cache algorithm component.
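A hedged Python sketch of the three coupled components (analyzer, algorithm, policy); the decision rule, thresholds, and schema names are assumptions for illustration only:

    class IOTrafficAnalyzer:
        def analyze(self, requests):
            reads = sum(1 for r in requests if r["op"] == "read")
            return {"read_ratio": reads / max(1, len(requests)),
                    "total": len(requests)}

    class AdaptiveCacheAlgorithm:
        def choose_schema(self, analysis):
            # Toy rule: read-heavy traffic favors caching on the faster tier (e.g., SSD).
            return "ssd_cache" if analysis["read_ratio"] > 0.7 else "hdd_writeback"

    class AdaptiveCachePolicy:
        def apply(self, schema):
            print(f"applying storage schema: {schema}")

    def handle_traffic(requests):
        analysis = IOTrafficAnalyzer().analyze(requests)        # traffic analysis component
        schema = AdaptiveCacheAlgorithm().choose_schema(analysis)  # algorithm component
        AdaptiveCachePolicy().apply(schema)                     # policy component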
Abstract:
A service assigns session identifiers to usage sessions of a program on a computing device, and maintains records in a log of received page requests and associated session identifiers, as well as received cached data detection requests and associated session identifiers. This log can be used to determine how many usage sessions existed over a particular amount of time, and how many of the usage sessions used data from a local cache rather than from the service. The service also returns, in response to a received cached data detection request, a response including an indication that the response is from the service. The program can determine that the response was received from the service if the indication is included in the response, and that the response was received from a local cache of the computing device if the indication is not included in the response.
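A minimal sketch of the logging and detection scheme, assuming a simple dictionary-based response format; function names and the marker field are hypothetical, not the service's actual API:

    import uuid

    LOG = []  # (session_id, request_type) records kept by the service

    def new_session():
        return str(uuid.uuid4())

    def handle_page_request(session_id):
        LOG.append((session_id, "page"))

    def handle_cache_detection_request(session_id):
        LOG.append((session_id, "cache_detection"))
        # The marker below is the indication that the response comes from the service.
        return {"from_service": True}

    def response_came_from_local_cache(response):
        # If the marker is absent, the program concludes the response was
        # satisfied from the device's local cache rather than the service.
        return not response.get("from_service", False)

Counting distinct session identifiers in LOG over a time window gives the number of usage sessions; sessions with page requests but no logged cache detection request (or whose responses lack the marker) indicate use of locally cached data.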
Abstract:
A content delivery network includes a plurality of cache servers. Each cache server is configured to receive a request for content from a client system and receive content and security data from a content server. Each cache server is further configured to provide the content to the client system and provide the security data to a monitoring system.
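A short Python sketch, with hypothetical interfaces, of the cache server behavior described here: content flows back to the requesting client while the associated security data is forwarded to a separate monitoring system.

    class CacheServer:
        def __init__(self, content_server, monitoring_system):
            self.content_server = content_server
            self.monitoring = monitoring_system
            self.cache = {}

        def handle_request(self, client, url):
            if url not in self.cache:
                # The content server returns both the content and its security data.
                content, security_data = self.content_server.fetch(url)
                self.cache[url] = content
                self.monitoring.report(url, security_data)   # security data to monitoring
            client.send(self.cache[url])                      # content to the client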
Abstract:
According to one aspect of the present disclosure a method and technique for monitoring memory access is disclosed. The method includes monitoring access to a memory unit, updating an activity cache associated with an incrementor with access data corresponding to accesses to the memory unit, monitoring a rate of access to the memory unit, adjusting a sample rate of the access data for storage in the memory unit based on the rate of access, and scaling a value of the access data based on the sample rate.
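One way to picture the sampling scheme is the Python sketch below: as the access rate rises, the sample rate is reduced (record one of every N accesses), and each stored value is scaled by N so the counts still approximate the true access totals. The threshold and interval values are illustrative assumptions.

    class ActivityCache:
        def __init__(self):
            self.counts = {}            # address -> scaled access count
            self.sample_interval = 1    # record every Nth access
            self.seen = 0

        def adjust_sample_rate(self, accesses_per_sec):
            # Back off sampling when the memory unit is accessed very frequently.
            self.sample_interval = 8 if accesses_per_sec > 1_000_000 else 1

        def record_access(self, address):
            self.seen += 1
            if self.seen % self.sample_interval == 0:
                # Scale the recorded value by the sample interval to compensate
                # for the accesses that were not sampled.
                self.counts[address] = self.counts.get(address, 0) + self.sample_interval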
Abstract:
Methods, systems, and products for determining performance of a software entity running on a data processing system. The method comprises allowing extended execution of the software entity without monitoring code. The method also comprises intermittently sampling behavior data for the software entity. Intermittently sampling behavior data may be carried out by injecting monitoring code into the software entity to instrument the software entity, collecting behavior data by utilizing the monitoring code, and removing the monitoring code. The method also comprises repeatedly performing iterations of the allowing and sampling steps until collected behavior data is sufficient for diagnosing performance of the software entity. The method may further comprise analyzing the collected behavior data to diagnose performance of the software entity.
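The sketch below mimics the intermittent-sampling loop in Python by temporarily wrapping a target function; in the described method the "monitoring code" would be injected instrumentation rather than a wrapper, and all names, durations, and the sufficiency threshold are assumptions.

    import time

    def sample_behavior(module, func_name, duration_s=0.1):
        original = getattr(module, func_name)
        samples = []

        def instrumented(*args, **kwargs):           # stand-in for injected monitoring code
            start = time.perf_counter()
            result = original(*args, **kwargs)
            samples.append(time.perf_counter() - start)
            return result

        setattr(module, func_name, instrumented)     # inject monitoring code
        time.sleep(duration_s)                       # collect behavior data while the program runs
        setattr(module, func_name, original)         # remove the monitoring code
        return samples

    def profile_until_sufficient(module, func_name, needed=1000):
        collected = []
        while len(collected) < needed:               # repeat allow/sample iterations
            time.sleep(1.0)                          # extended execution without monitoring code
            collected.extend(sample_behavior(module, func_name))
        return collected                             # analyze afterwards to diagnose performance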
Abstract:
An indication that an event occurred is received from a processor by a dual outcome event monitoring unit. It is determined whether the event is associated with an increment event or a decrement event. In response to determining that the event is associated with the increment event, an event counter is incremented. The event counter is part of the dual outcome event monitoring unit. In response to determining that the event is associated with the decrement event, the event counter is decremented.
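A minimal Python sketch of such a dual outcome event counter; the specific event names used here are hypothetical examples, not taken from the abstract:

    INCREMENT_EVENTS = {"branch_predicted"}      # assumed increment-outcome events
    DECREMENT_EVENTS = {"branch_mispredicted"}   # assumed decrement-outcome events

    class DualOutcomeEventMonitor:
        def __init__(self):
            self.counter = 0

        def on_event(self, event):
            # Indication from the processor that an event occurred.
            if event in INCREMENT_EVENTS:
                self.counter += 1
            elif event in DECREMENT_EVENTS:
                self.counter -= 1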
Abstract:
Systems and methods for backing up storage volumes are provided. One system includes a primary side, a secondary side, and a network coupling the primary and secondary sides. The secondary side includes first and second virtual tape servers (VTS), each including a cache and storage tape. The first VTS is configured to store a first portion of a group of storage volumes in its cache and migrate the remaining portion to its storage tape. The second VTS is configured to store the remaining portion of the storage volumes in its cache and migrate the first portion to its storage tape. One method includes receiving multiple storage volumes from a primary side, storing the storage volumes in the cache of the first and second VTS, migrating a portion of the storage volumes from the cache to storage tape in the first VTS, and migrating a remaining portion of the storage volumes from the cache to storage tape in the second VTS.
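In contrast to the system sketch shown earlier, the method steps in this variant can be pictured as: store all received volumes in both caches first, then migrate complementary portions out of each cache to tape. The Python below is a self-contained sketch with hypothetical names.

    class SecondaryVTS:
        def __init__(self):
            self.cache = {}
            self.tape = {}

    def backup_to_secondary(received_volumes, first_vts, second_vts):
        # Store the received storage volumes in the cache of the first and second VTS.
        for vts in (first_vts, second_vts):
            vts.cache.update(received_volumes)

        volume_ids = sorted(received_volumes)
        first_half = volume_ids[:len(volume_ids) // 2]
        remaining = volume_ids[len(volume_ids) // 2:]

        # Migrate a portion from cache to storage tape in the first VTS ...
        for vol in remaining:
            first_vts.tape[vol] = first_vts.cache.pop(vol)
        # ... and the remaining (complementary) portion in the second VTS.
        for vol in first_half:
            second_vts.tape[vol] = second_vts.cache.pop(vol)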