Abstract:
A computer-implemented method, system, and computer-readable media are disclosed herein. In embodiments, the computer-implemented method may entail receiving, by a data service, live data associated with an entity. The entity may be, for example, a customer of the data service. The method may further include determining that a dual-queue node assigned to the entity is uninstantiated on the data service. As a result, a dual-queue node associated with the entity may be instantiated on the data service. The dual-queue node may be instantiated by initializing a live data queue, in which to place the live data for processing, and a stale data queue, in which to store a persistent backup of the live data. The method may then route the live data to the dual-queue node, which may then process the live data. Additional embodiments are described and/or claimed.
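As a rough illustration of the routing described above, the following Python sketch (names such as DualQueueNode and DataService are illustrative, not from the disclosure) lazily instantiates a per-entity node whose live queue feeds processing while the stale queue retains a backup copy:

```python
from collections import deque

class DualQueueNode:
    """Hypothetical dual-queue node: one queue feeds live processing,
    the other retains a backup copy of the same data."""
    def __init__(self, entity_id):
        self.entity_id = entity_id
        self.live_queue = deque()    # live data awaiting processing
        self.stale_queue = deque()   # persistent backup of the live data

    def enqueue(self, event):
        self.live_queue.append(event)
        self.stale_queue.append(event)   # backup written alongside

    def process(self):
        while self.live_queue:
            event = self.live_queue.popleft()
            print(f"[{self.entity_id}] processed {event}")   # stand-in work

class DataService:
    """Routes live data to a per-entity node, instantiating it on demand."""
    def __init__(self):
        self.nodes = {}

    def route(self, entity_id, event):
        if entity_id not in self.nodes:   # node uninstantiated for entity?
            self.nodes[entity_id] = DualQueueNode(entity_id)
        self.nodes[entity_id].enqueue(event)

service = DataService()
service.route("customer-42", {"metric": "cpu", "value": 0.93})
service.nodes["customer-42"].process()
```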
Abstract:
Systems and methods are presented for reducing database load time in a database backup process. In some embodiments, a computer-implemented method may include marking a checkpoint in a log of the database; generating a backup of the database for data up to the checkpoint; recording first changes made to the database while generating the backup; adding to the backup an additional backup of the recorded first changes; recording second changes made to the database while adding the additional backup; determining whether the number of second changes satisfies a criterion; and, if it does, adding to the backup a backup of the recorded second changes. Recording these changes enables the database dump to contain more recent page images, so that the amount of recovery needed at load time is reduced.
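A minimal runnable sketch of the checkpointed dump, assuming a toy database whose log is a plain list and an illustrative change-count criterion (mark_checkpoint and the other names are assumptions, not the claimed interface):

```python
class Database:
    """Toy database whose log is a list of change records."""
    def __init__(self):
        self.log = []

    def write(self, change):
        self.log.append(change)

    def mark_checkpoint(self):
        return len(self.log)   # checkpoint = current position in the log

def dump(db, criterion=3):
    cp = db.mark_checkpoint()
    backup = list(db.log[:cp])        # full backup up to the checkpoint

    db.write("c1"); db.write("c2")    # first changes arrive during the backup
    first_end = db.mark_checkpoint()
    backup += db.log[cp:first_end]    # additional backup of the first changes

    db.write("c3")                    # second changes arrive meanwhile
    second = db.log[first_end:]
    if len(second) >= criterion:      # criterion: enough second changes?
        backup += second              # add a backup of the second changes
    return backup

db = Database()
db.write("a"); db.write("b")
print(dump(db))   # ['a', 'b', 'c1', 'c2'] -- second changes fall below criterion
```

Folding the incremental passes into the dump this way means fewer log records have to be replayed when the dump is later loaded.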
Abstract:
Performance thresholds are defined for operators in a flow graph for a streaming application. A streams manager deploys the flow graph to one or more virtual machines (VMs). The performance of each portion of the flow graph on each VM is monitored. A VM is selected. When the performance of the portion of the flow graph in the selected VM does not satisfy the defined performance threshold(s), a determination is made regarding whether the portion of the flow graph is underperforming or overperforming. When the portion of the flow graph is underperforming, the portion of the flow graph is split into multiple portions that are implemented on multiple VMs. When the portion of the flow graph is overperforming, a determination is made of whether a neighbor VM is also overperforming. When a neighbor VM is also overperforming, the two VMs may be coalesced into a single VM.
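The split/coalesce decision might be sketched as follows, with assumed utilization thresholds standing in for the defined performance thresholds (a saturated VM's portion of the flow graph is split across more VMs; two adjacent underutilized VMs are coalesced):

```python
def rebalance(vms, busy=0.9, idle=0.3):
    """Sketch: `vms` is an ordered list of (name, utilization) pairs for
    the VMs hosting portions of the flow graph."""
    actions = []
    i = 0
    while i < len(vms):
        name, util = vms[i]
        if util > busy:                      # underperforming: cannot keep up
            actions.append(("split", name))
        elif util < idle and i + 1 < len(vms) and vms[i + 1][1] < idle:
            actions.append(("coalesce", name, vms[i + 1][0]))
            i += 1                           # neighbor consumed by the merge
        i += 1
    return actions

print(rebalance([("vm1", 0.95), ("vm2", 0.20), ("vm3", 0.10)]))
# [('split', 'vm1'), ('coalesce', 'vm2', 'vm3')]
```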
Abstract:
Analysis is performed on a collection of data recorded for a storage system during a first time frame. The recorded collection of data includes a plurality of performance parameters determined from, for example, diagnostic tools that continually operate on the storage system. A set of baseline values is determined for each of the plurality of performance parameters by analyzing the recorded data from an older portion of the time frame. For each parameter, a set of performance parameter values obtained from a recent portion of the time frame is compared to the corresponding baseline value of that parameter. From this comparison, one or more anomalies indicative of a particular problem on the storage system are determined for one or more of the plurality of performance parameters.
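A minimal sketch of the baseline comparison, assuming per-parameter sample lists and a simple k-sigma rule as the anomaly test (the 80/20 window split and k are illustrative choices, not from the source):

```python
import statistics

def detect_anomalies(series, split=0.8, k=3.0):
    """`series` maps parameter name -> samples ordered by time. Baselines
    come from the older portion of the window; recent samples more than
    k standard deviations from the baseline mean are flagged."""
    anomalies = {}
    for param, samples in series.items():
        cut = int(len(samples) * split)
        baseline, recent = samples[:cut], samples[cut:]
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline) or 1e-9   # guard zero spread
        outliers = [x for x in recent if abs(x - mean) > k * stdev]
        if outliers:
            anomalies[param] = outliers
    return anomalies

data = {"read_latency_ms": [5, 6, 5, 7, 6, 5, 6, 5, 40, 6]}
print(detect_anomalies(data))   # {'read_latency_ms': [40]}
```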
Abstract:
An adaptive mechanism is provided that learns the response time characteristics of a workload by measuring the response times of end user transactions, classifies the response times into buckets, and dynamically adjusts the response time distribution as the response time characteristics of the workload change. The adaptive mechanism maintains the actual distribution across changes and thus helps the end user understand changes in workload behavior that take place over a longer period of time. The mechanism is stable enough to suppress spikes and returns a consistent view of workload behavior, which is required for long-term performance analysis and capacity planning. The mechanism distinguishes between an initial learning phase, in which the distribution is established, and one or multiple reaction periods. The reaction periods can include, for example, a fast reaction period for strong fluctuations of the workload behavior and a slow reaction period for small deviations.
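The learning phase, bucket classification, and two-speed adjustment might look like the following sketch (quartile boundaries, the drift rule, and the trigger for the fast reaction period are all illustrative assumptions):

```python
class AdaptiveDistribution:
    """Learn bucket boundaries from an initial sample, then move them
    slowly for small deviations or quickly for strong fluctuations."""
    def __init__(self, learn_count=100, slow=0.01, fast=0.2, fast_trigger=2.0):
        self.learn_count = learn_count
        self.slow, self.fast, self.fast_trigger = slow, fast, fast_trigger
        self.samples, self.boundaries, self.counts = [], None, None

    def observe(self, rt):
        if self.boundaries is None:              # initial learning phase
            self.samples.append(rt)
            if len(self.samples) >= self.learn_count:
                self.samples.sort()
                n = len(self.samples)
                # quartile boundaries -> four response-time buckets
                self.boundaries = [self.samples[n // 4], self.samples[n // 2],
                                   self.samples[3 * n // 4]]
                self.counts = [0, 0, 0, 0]
            return
        for i, b in enumerate(self.boundaries):  # classify into a bucket
            if rt <= b:
                self.counts[i] += 1
                break
        else:
            self.counts[-1] += 1
        # reaction period: fast for strong fluctuations, slow otherwise
        rate = self.fast if rt > self.fast_trigger * self.boundaries[-1] else self.slow
        self.boundaries = [b + rate * (rt - b) for b in self.boundaries]

dist = AdaptiveDistribution(learn_count=4)
for rt in (0.1, 0.2, 0.3, 0.4, 0.25, 5.0):
    dist.observe(rt)
print(dist.counts, [round(b, 3) for b in dist.boundaries])
```

The small drift rate suppresses spikes, while the fast rate lets the distribution track a genuine shift in workload behavior.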
Abstract:
An incompatible software level of an information technology infrastructure component is determined by comparing collected inventory information to a minimum recommended software level. If a knowledge base search finds that the incompatible software level is associated with a prior infrastructure outage event, an outage count score is determined for the incompatible software level by applying an outage rule to a historic count of outages caused by a similar incompatible software level. The outage count score is combined with an average outage severity score, assigned to the incompatible software level based on the severity of actual historic failures of the component within the context of the infrastructure, to generate a normalized historical affinity risk score. The normalized historical affinity risk score is provided for prioritizing correction of the incompatible software level in the context of the normalized historical risk scores of other determined incompatible software levels.
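A toy sketch of the score combination (the count cap, the 1-to-5 severity scale, and the equal weighting are assumptions; the abstract only names the two factors and the normalization):

```python
def affinity_risk_score(outage_count, severity_scores, weight=0.5, max_count=10):
    """Combine an outage count score (capped-count rule) with an average
    outage severity score, normalized to [0, 1] for cross-component ranking."""
    count_score = min(outage_count, max_count) / max_count        # outage rule
    severity = sum(severity_scores) / len(severity_scores) / 5.0  # 1..5 scale
    return weight * count_score + (1 - weight) * severity

# e.g. 4 prior outages with historic severities on a 1 (minor) .. 5 (critical) scale
print(round(affinity_risk_score(4, [3, 5, 4, 2]), 2))   # 0.55
```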
Abstract:
In an embodiment, a processor includes a vector execution unit having a plurality of lanes to execute operations on vector operands, a performance monitor coupled to the vector execution unit to maintain information regarding an activity level of the lanes, and a control logic coupled to the performance monitor to control power consumption of the vector execution unit based at least in part on the activity level of at least some of the lanes. Other embodiments are described and claimed.
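In software terms, the control decision might resemble this sketch, where per-lane activity levels stand in for the performance monitor's counters (the idle threshold and minimum-lanes rule are assumptions):

```python
def lanes_to_power_gate(lane_activity, idle_threshold=0.05, keep_min=1):
    """Given per-lane activity (fraction of cycles each lane was busy),
    pick lanes whose activity falls below the threshold to power-gate."""
    gated = [i for i, a in enumerate(lane_activity) if a < idle_threshold]
    # always keep at least `keep_min` lanes powered
    while len(lane_activity) - len(gated) < keep_min and gated:
        gated.pop()
    return gated

print(lanes_to_power_gate([0.90, 0.02, 0.00, 0.40]))   # [1, 2]
```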
Abstract:
A computing device receives a plurality of writes; each write comprises chunks of data. The computing device records metrics associated with the deduplication of the chunks of data from the plurality of writes. The computing device generates groups by associating each group with a portion of the range of the metrics, such that each chunk of data is associated with one of the groups and a similar number of chunks is associated with each group. The computing device determines a deduplication affinity for each of the groups based on the chunks of data that are duplicates and at least one metric. The computing device sets a threshold for the deduplication affinity and, in response to any of the groups exceeding the threshold, excludes the chunks of data associated with that group from deduplication.
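A runnable sketch of the grouping and exclusion steps, assuming each chunk is summarized as a (metric value, is-duplicate) pair and affinity is the fraction of duplicates in a group:

```python
def partition_for_dedup(chunks, n_groups=4, threshold=0.9):
    """Order chunks by the recorded metric, split them into similar-sized
    groups over the metric's range, compute each group's deduplication
    affinity, and exclude groups exceeding the threshold from deduplication."""
    ordered = sorted(chunks, key=lambda c: c[0])
    size = len(ordered) // n_groups or 1
    groups = [ordered[i:i + size] for i in range(0, len(ordered), size)]
    keep, excluded = [], []
    for group in groups:
        affinity = sum(1 for _, dup in group if dup) / len(group)
        (excluded if affinity > threshold else keep).extend(group)
    return keep, excluded

chunks = [(m, False) for m in range(5)] + [(m, True) for m in range(5, 10)]
keep, excluded = partition_for_dedup(chunks, n_groups=2, threshold=0.9)
print(len(keep), len(excluded))   # 5 5 -- the all-duplicate group is excluded
```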
Abstract:
Disclosed are a system and method of integrating an on-demand compute environment into a local compute environment. The method includes receiving a request from an administrator to integrate an on-demand compute environment into a local compute environment and, in response to the request, automatically integrating local compute environment information with on-demand compute environment information to make available resources from the on-demand compute environment to requestors of resources in the local compute environment.
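At its simplest, the integration step could be sketched as merging the on-demand inventory into the local environment's resource view so local requestors can be matched against it (all field names here are illustrative):

```python
def integrate(local, on_demand):
    """Merge on-demand resources into the local scheduler's view,
    tagging them so the scheduler knows they are provisioned remotely."""
    combined = dict(local["resources"])
    for name, node in on_demand["resources"].items():
        combined[f"ondemand:{name}"] = {**node, "provisioned_on_demand": True}
    return {**local, "resources": combined}

local = {"scheduler": "local", "resources": {"n1": {"cpus": 8}}}
cloud = {"resources": {"c1": {"cpus": 64}}}
print(sorted(integrate(local, cloud)["resources"]))   # ['n1', 'ondemand:c1']
```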
Abstract:
A method, system, and computer program include receiving a request string and mapping the received request string to a distinguishable request string and a collapsible request string. The received request string may be in the form of a JSP request, a servlet request, or a remote Enterprise JavaBeans call. A user may be prompted to create rules for the mapping of a received request string to a distinguishable request string and a collapsible request string.
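A small sketch of rule-driven mapping, where user-created rules split a received request string into a distinguishable part (kept for aggregation) and a collapsible part (variable detail such as IDs or query strings); the example patterns are illustrative:

```python
import re

RULES = [
    # user-created rules: pattern -> (distinguishable part, collapsible part)
    (re.compile(r"^(/app/orders)/(\d+)$"), lambda m: (m.group(1), m.group(2))),
    (re.compile(r"^(/app/\w+\.jsp)\?(.*)$"), lambda m: (m.group(1), m.group(2))),
]

def map_request(request):
    """Apply the first matching rule; unmatched requests collapse nothing."""
    for pattern, split in RULES:
        m = pattern.match(request)
        if m:
            return split(m)
    return request, ""

print(map_request("/app/orders/8731"))          # ('/app/orders', '8731')
print(map_request("/app/report.jsp?user=ann"))  # ('/app/report.jsp', 'user=ann')
```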