Abstract:
A load generator services marketplace may configure and deploy load generators in conjunction with executing an application. The load generators may be selected based on a solution definition, which may include the types of loads and the conditions under which loads may be generated. One or more load generators may be configured to operate with a monitoring service, and a connection manager may cause the load generators, application, and monitoring service to execute simultaneously so that the monitoring service may capture performance metrics while the application experiences the load. The marketplace may offer load generators from multiple providers and with multiple configurations, as well as a clearinghouse for clearing financial transactions as the load generators are used.
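As a minimal, non-authoritative sketch of the coordination described above, the Python fragment below runs hypothetical load generators, an application under test, and a monitoring service at the same time so that metrics are captured while the load is applied; the names SolutionDefinition, MonitoringService, and connection_manager are illustrative assumptions rather than terms drawn from the abstract.

    import threading
    import time
    from dataclasses import dataclass, field

    @dataclass
    class SolutionDefinition:
        load_types: list          # e.g. ["http_burst", "steady_state"]
        duration_seconds: float   # condition: how long each load is applied

    @dataclass
    class MonitoringService:
        metrics: list = field(default_factory=list)

        def sample(self, label, value):
            self.metrics.append((time.time(), label, value))

    def run_load_generator(load_type, duration, monitor):
        # Stand-in for a marketplace-provided load generator.
        end = time.time() + duration
        while time.time() < end:
            monitor.sample(load_type, 1)   # one unit of load applied
            time.sleep(0.01)

    def connection_manager(solution, app_under_test, monitor):
        # Run the application, the load generators, and monitoring simultaneously.
        workers = [threading.Thread(target=run_load_generator,
                                    args=(lt, solution.duration_seconds, monitor))
                   for lt in solution.load_types]
        app = threading.Thread(target=app_under_test)
        for t in workers + [app]:
            t.start()
        for t in workers + [app]:
            t.join()
        return monitor.metrics

    def app_under_test():
        time.sleep(0.05)          # stand-in for the application experiencing the load

    metrics = connection_manager(SolutionDefinition(["http_burst"], 0.05),
                                 app_under_test, MonitoringService())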
Abstract:
A tracing system may divide trace objectives across multiple instances of an application, then deploy the objectives to be traced. The results of the various objectives may be aggregated into a detailed tracing representation of the application. The trace objectives may define specific functions, processes, memory objects, events, input parameters, or other subsets of tracing data that may be collected. The objectives may be deployed on separate instances of an application that may be running on different devices. In some cases, the objectives may be deployed at different time intervals. The trace objectives may be lightweight, relatively non-intrusive tracing workloads that, when results are aggregated, may provide a holistic view of an application's performance.
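A minimal sketch of dividing trace objectives across application instances and aggregating the partial results might look like the following; the objective strings, instance names, and helper functions are hypothetical rather than taken from the abstract.

    from collections import defaultdict
    from itertools import cycle

    def divide_objectives(objectives, instances):
        # Round-robin assignment of lightweight trace objectives to instances.
        assignment = defaultdict(list)
        for objective, instance in zip(objectives, cycle(instances)):
            assignment[instance].append(objective)
        return assignment

    def aggregate(partial_results):
        # Merge per-instance results into one holistic view of the application.
        combined = defaultdict(list)
        for result in partial_results:          # each result: {objective: samples}
            for objective, samples in result.items():
                combined[objective].extend(samples)
        return dict(combined)

    # Example: three objectives spread over two running instances,
    # then two partial result sets merged into one representation.
    assignment = divide_objectives(["trace:parse", "trace:render", "trace:gc"],
                                   ["instance-a", "instance-b"])
    holistic = aggregate([{"trace:parse": [1.2, 1.3]}, {"trace:render": [4.0]}])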
Abstract:
A distributed tracing system may use independent trace objectives for which a profile model may be created. The profile models may be deployed as monitoring agents on non-instrumented devices, where the models may be evaluated. As a profile model produces statistically significant results, its sampling frequency may be adjusted. The profile models may be deployed as a verification mechanism for testing models created in a more highly instrumented environment, and may gather performance-related results that may not have been as accurate using the instrumented environment. In some cases, the profile models may be distributed over large numbers of devices to verify models based on data collected from a single instrumented device or a small number of instrumented devices.
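One way the sampling-frequency adjustment could be sketched, assuming a hypothetical agent class and an arbitrary significance test based on the standard error of the mean, is shown below; the class name, sample counts, and thresholds are illustrative assumptions.

    import math
    import random

    class ProfileModelAgent:
        # Hypothetical lightweight agent: compares a profile model's predicted
        # cost against measurements and backs off its sampling rate once the
        # error estimate is statistically tight.
        def __init__(self, predicted_cost, sample_rate=1.0):
            self.predicted_cost = predicted_cost
            self.sample_rate = sample_rate
            self.errors = []

        def observe(self, measured_cost):
            if random.random() > self.sample_rate:
                return                                   # skipped by sampling
            self.errors.append(measured_cost - self.predicted_cost)
            n = len(self.errors)
            if n >= 30:                                  # enough samples to judge
                mean = sum(self.errors) / n
                var = sum((e - mean) ** 2 for e in self.errors) / (n - 1)
                stderr = math.sqrt(var / n)
                if stderr < 0.05 * abs(self.predicted_cost):
                    self.sample_rate = max(0.01, self.sample_rate * 0.5)

    agent = ProfileModelAgent(predicted_cost=10.0)
    for _ in range(200):
        agent.observe(10.0 + random.gauss(0, 0.2))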
Abstract:
An offline optimization for computer software may involve creating optimized parameters or components for a software product, and charging customers for the optimization service. The software product may be distributed under one licensing regime and the optimization components may be distributed under a second licensing regime. In some embodiments, a low-cost or no-cost monitoring system may be provided, which may interface with a remote service that optimizes the software product for its current workload. A user may pay for the remote optimization service through a subscription, pay-per-use, pay-for-performance, or other payment model.
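A rough sketch of the monitoring-to-remote-optimization interface might look like the following, where the remote service is represented by a plain local function rather than a real network API, and all names and parameter choices are hypothetical.

    def collect_workload_profile(request_log):
        # Cheap, always-on monitoring: summarize the current workload.
        sizes = [r["payload_bytes"] for r in request_log]
        return {"requests": len(request_log),
                "mean_payload": sum(sizes) / len(sizes) if sizes else 0}

    def remote_optimization_service(profile):
        # Stand-in for the paid remote service (subscription, pay-per-use, etc.):
        # derive optimized parameters for the observed workload.
        buffer_kb = 64 if profile["mean_payload"] < 4096 else 512
        workers = min(32, max(2, profile["requests"] // 100))
        return {"io_buffer_kb": buffer_kb, "worker_threads": workers}

    def apply_parameters(product_config, optimized):
        # Deliver the optimization as a separate parameter set or component.
        merged = dict(product_config)
        merged.update(optimized)
        return merged

    config = apply_parameters(
        {"worker_threads": 4},
        remote_optimization_service(
            collect_workload_profile([{"payload_bytes": 1024}] * 300)))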
Abstract:
A configurable memory allocation and management system may generate a configuration file with memory settings that may be deployed prior to runtime. A compiler or other pre-execution system may detect a memory allocation boundary and decorate the code. During execution, the decorated code may be used to look up memory allocation and management settings from a database or to deploy optimized settings that may be embedded in the decorations.
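As an illustrative sketch only, a Python decorator can stand in for the compiler's decoration at a memory allocation boundary, looking up pre-generated settings by function name at runtime; the configuration keys and the build_cache example are assumptions, not terms from the abstract.

    import functools
    import json

    # Hypothetical configuration "file" generated before runtime, keyed by
    # the decorated allocation boundary (here, a function name).
    MEMORY_CONFIG = json.loads('{"build_cache": {"initial_capacity": 4096}}')

    def allocation_boundary(func):
        # Stand-in for pre-execution decoration: look up memory allocation and
        # management settings and hand them to the decorated code.
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            settings = MEMORY_CONFIG.get(func.__name__, {})
            return func(*args, settings=settings, **kwargs)
        return wrapper

    @allocation_boundary
    def build_cache(settings=None):
        # Use the optimized setting when present, otherwise fall back to a default.
        capacity = (settings or {}).get("initial_capacity", 256)
        return [None] * capacity

    cache = build_cache()   # allocated with the pre-computed capacity of 4096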
Abstract:
N-grams of input streams or functions executed by an application may be analyzed to identify security breaches or other anomalous behavior. A histogram of n-grams representing sequences of executed functions or input streams may be generated through baseline testing or production use. An alerting system may compare real-time n-gram observations to the histogram of n-grams to identify security breaches or other changes in application behavior that may be anomalous. An alert may be generated that identifies the anomalous behavior. The alerting system may be trained using known good datasets, identifying deviations as bad behavior, or using known bad datasets, identifying matching behavior as bad behavior.
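A minimal sketch of the n-gram histogram comparison might look like the following, with hypothetical function-call sequences and an arbitrary alerting threshold; it illustrates only the known-good training case.

    from collections import Counter

    def ngrams(sequence, n=3):
        return [tuple(sequence[i:i + n]) for i in range(len(sequence) - n + 1)]

    def build_baseline(call_sequences, n=3):
        # Histogram of n-grams observed during baseline testing or production use.
        histogram = Counter()
        for seq in call_sequences:
            histogram.update(ngrams(seq, n))
        return histogram

    def check_for_anomaly(baseline, live_sequence, n=3, threshold=0.2):
        # Alert when too many real-time n-grams were never seen in the baseline.
        observed = ngrams(live_sequence, n)
        if not observed:
            return False
        unseen = sum(1 for g in observed if g not in baseline)
        return unseen / len(observed) > threshold

    baseline = build_baseline([["login", "query", "render", "logout"]])
    alert = check_for_anomaly(baseline, ["login", "query", "exec_shell", "exfiltrate"])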
Abstract:
Different versions of an application may be compared using a behavior model of the application. A behavior model may be derived from n-gram analysis of observations of the application in production. The behavior model may include sequences of inputs received by the application or functions performed by the application, where each sequence is an n-gram observed in tracer data. Each n-gram may be coupled with a resource consumption measurement to give a behavior model with performance data. A regression analysis may apply a behavior model derived from a first version of an application to the performance observations of a new version to create an expected performance metric for the new version. A similarly calculated metric from a previous version may be compared to the metric from the new version to determine an improvement or degradation of performance.
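A compact sketch of the expected-metric calculation might weight each n-gram's observed cost on a given version by that n-gram's production frequency from the prior version's behavior model; the n-grams, frequencies, and costs below are invented for illustration.

    def expected_metric(behavior_model, observed_cost):
        # Weight each n-gram's measured cost by how often that n-gram occurred
        # in production according to the behavior model of a prior version.
        total_weight = sum(behavior_model.values())
        return sum(freq * observed_cost.get(gram, 0.0)
                   for gram, freq in behavior_model.items()) / total_weight

    # Behavior model from version 1: n-gram -> observed frequency in production.
    model_v1 = {("load", "parse", "render"): 80, ("load", "cache_hit"): 20}

    # Per-n-gram resource consumption (e.g. milliseconds) measured on each version.
    cost_v1 = {("load", "parse", "render"): 12.0, ("load", "cache_hit"): 2.0}
    cost_v2 = {("load", "parse", "render"): 9.5, ("load", "cache_hit"): 2.5}

    improvement = expected_metric(model_v1, cost_v1) - expected_metric(model_v1, cost_v2)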
Abstract:
Input sequence information may be analyzed and quantified using n-gram analysis of inputs received by an application. The sequences of inputs may be represented by n-grams, and the frequency of the various n-grams may indicate the ‘real world’ uses of the application in production, which may be compared to a test suite whose coverage may be quantified using a similar n-gram analysis. A coverage factor may compare the inputs observed in production to the test suite for the application. The n-grams may be further quantified or prioritized by resource utilization, and several visualizations may be generated from the data.
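A coverage factor of this kind might be sketched as the frequency-weighted fraction of production input n-grams that the test suite also exercises; the input sequences below are hypothetical.

    from collections import Counter

    def ngrams(sequence, n=2):
        return [tuple(sequence[i:i + n]) for i in range(len(sequence) - n + 1)]

    def coverage_factor(production_inputs, test_inputs, n=2):
        # Fraction of production input n-grams (weighted by how often they occur)
        # that also appear in the test suite's input n-grams.
        production = Counter()
        for seq in production_inputs:
            production.update(ngrams(seq, n))
        tested = set()
        for seq in test_inputs:
            tested.update(ngrams(seq, n))
        total = sum(production.values())
        covered = sum(count for gram, count in production.items() if gram in tested)
        return covered / total if total else 0.0

    factor = coverage_factor([["open", "edit", "save"], ["open", "edit", "undo"]],
                             [["open", "edit", "save"]])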
Abstract:
A tracing system may be updated to include, exclude, or modify tracing configurations for functions based on how a user consumes tracing results. The user's interactions with graphical representations, inspections of data, and other interactions may indicate which functions may be interesting and which may not be. The interactions may be classified by use, such as during debugging, performance testing, and ongoing monitoring, and multiple users' interactions with the same function, library, module, source code file, or other group of functions may be combined to predict a user's interest in a function.
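A rough sketch of turning interaction data into tracing configuration might score functions by weighted interactions across users and then include or exclude them by threshold; the interaction types, contexts, weights, and threshold are all assumptions introduced for illustration.

    from collections import defaultdict

    # Hypothetical weights: how strongly an interaction signals interest,
    # varying with the context in which the user is working.
    INTERACTION_WEIGHTS = {
        ("inspect_data", "debugging"): 3.0,
        ("expand_node", "performance"): 2.0,
        ("expand_node", "monitoring"): 1.0,
    }

    def score_functions(interaction_log):
        # Combine many users' interactions into a per-function interest score.
        scores = defaultdict(float)
        for user, function, action, context in interaction_log:
            scores[function] += INTERACTION_WEIGHTS.get((action, context), 0.5)
        return scores

    def update_tracing_config(scores, include_threshold=2.0):
        # Include functions users keep returning to; exclude the rest.
        return {fn: ("trace" if s >= include_threshold else "skip")
                for fn, s in scores.items()}

    log = [("alice", "parse_request", "inspect_data", "debugging"),
           ("bob", "parse_request", "expand_node", "performance"),
           ("bob", "render_footer", "expand_node", "monitoring")]
    config = update_tracing_config(score_functions(log))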
Abstract:
A database of module performance may be generated by adding tracing components to applications, as well as by adding tracing components to the modules themselves. Modules may be reusable code made available for use across multiple applications. When tracing is performed at the application level, the data collected from each module may be summarized in module-specific databases. The module-specific databases may be public databases that may assist application developers in selecting modules for various tasks. The module-specific databases may include usage and performance data, as well as stability and robustness metrics, error logs, and analyses of similar modules. The databases may be accessed through links in module description pages and repositories, as well as through a website or other repository.
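A small sketch of rolling application-level trace records up into module-specific summaries might look like the following, with hypothetical record fields and module names.

    from collections import defaultdict

    def summarize_by_module(trace_records):
        # Roll application-level trace records up into per-module summaries
        # suitable for a module-specific performance database.
        summary = defaultdict(lambda: {"calls": 0, "total_ms": 0.0, "errors": 0})
        for record in trace_records:   # e.g. {"module": ..., "duration_ms": ..., "error": ...}
            entry = summary[record["module"]]
            entry["calls"] += 1
            entry["total_ms"] += record["duration_ms"]
            entry["errors"] += 1 if record.get("error") else 0
        for entry in summary.values():
            entry["mean_ms"] = entry["total_ms"] / entry["calls"]
        return dict(summary)

    records = [{"module": "json-parser", "duration_ms": 1.2},
               {"module": "json-parser", "duration_ms": 1.6, "error": True},
               {"module": "http-client", "duration_ms": 40.0}]
    database_rows = summarize_by_module(records)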