Abstract:
An analysis system may perform network analysis on data gathered from an executing application. The analysis system may identify relationships between code elements and use tracer data to quantify and classify various code elements. In some cases, the analysis system may operate with only data gathered while tracing an application, while other cases may combine static analysis data with tracing data. The network analysis may identify groups of related code elements through cluster analysis, as well as identify bottlenecks in one-to-many and many-to-one relationships. The analysis system may generate visualizations showing the interconnections or relationships within the executing code, along with highlighted elements that may be limiting performance.
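A minimal sketch of the bottleneck portion of such an analysis, assuming tracer data arrives as (caller, callee) pairs; the fan-out/fan-in threshold is illustrative:

```python
from collections import defaultdict

def find_bottlenecks(call_pairs, threshold=5):
    """Flag code elements in one-to-many or many-to-one call relationships."""
    fan_out, fan_in = defaultdict(set), defaultdict(set)
    for caller, callee in call_pairs:
        fan_out[caller].add(callee)   # callees reached from this element
        fan_in[callee].add(caller)    # callers converging on this element
    one_to_many = {f for f, callees in fan_out.items() if len(callees) >= threshold}
    many_to_one = {f for f, callers in fan_in.items() if len(callers) >= threshold}
    return one_to_many, many_to_one
```

Elements exceeding the threshold in either direction are candidates for highlighting in the generated visualizations.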
Abstract:
A module-specific tracing mechanism may trace the usage of a module on behalf of the module developer. The module may be used by multiple application developers, and the tracing system may collect and summarize data for the module in each of the different applications. The data may include usage data as well as performance data. Usage data may include anonymized records of each invocation of the module, and performance data may include processing time, memory consumption, and other metrics. The module-specific tracing may be enabled or disabled by an application developer.
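One way such a mechanism might be embedded, sketched as a hypothetical `ModuleTracer` decorator that records per-function call counts and timings and honors the application developer's enable/disable switch:

```python
import time
from collections import defaultdict

class ModuleTracer:
    """Collects anonymized call counts and timings for one module's functions."""

    def __init__(self, module_name, enabled=True):
        self.module_name = module_name
        self.enabled = enabled          # application developer's on/off switch
        self.call_counts = defaultdict(int)
        self.total_seconds = defaultdict(float)

    def trace(self, func):
        def wrapper(*args, **kwargs):
            if not self.enabled:
                return func(*args, **kwargs)
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                self.call_counts[func.__name__] += 1
                self.total_seconds[func.__name__] += time.perf_counter() - start
        return wrapper

# A module developer would wrap the module's public functions:
tracer = ModuleTracer("json_lib")

@tracer.trace
def parse(text):
    return text.strip()
```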
Abstract:
Execution sequence information may be analyzed and quantified using n-gram analysis of functions executed by an application. The sequences of functions may be represented by n-grams, and the frequency of the various n-grams may indicate the behavior of the application in production, which may be compared to a test suite whose coverage may be quantified using a similar n-gram analysis. A coverage factor may compare the observed behavior of the application in production to the test suite for the application. The n-grams may be further quantified or prioritized by resource utilization, and several visualizations may be generated from the data.
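The n-gram extraction itself is straightforward; a sketch, assuming the tracer yields an ordered list of executed function names (the traces shown are illustrative):

```python
from collections import Counter

def function_ngrams(call_sequence, n=3):
    """Count each n-gram (run of n consecutive function calls) in a trace."""
    return Counter(tuple(call_sequence[i:i + n])
                   for i in range(len(call_sequence) - n + 1))

# Hypothetical production trace; real sequences would come from tracer data.
production = ["open", "read", "parse", "read", "parse", "close"]
prod_grams = function_ngrams(production, n=2)  # e.g. ("read", "parse") -> 2
```

The same extraction applied to a test-suite trace yields the histogram against which production behavior is compared.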
Abstract:
A tracing system may trace applications and their modules, and may make module-specific data available through various interfaces. The tracing system may collect tracer data while an application executes, and may preprocess the data into application-specific and module-specific databases. An analysis engine may further analyze and process these databases to create application-specific views and module-specific views into the data. The application-specific views may be intended for a developer of the application, while the module-specific views may have a public version accessible to everybody and a module developer version that may contain additional details that may be useful to the module developer.
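A rough sketch of the preprocessing step, assuming each tracer record carries application, module, and function fields (the record schema is hypothetical):

```python
from collections import defaultdict

def preprocess(trace_records):
    """Split raw tracer records into per-application and per-module stores."""
    by_app, by_module = defaultdict(list), defaultdict(list)
    for rec in trace_records:  # e.g. {"app": "shop", "module": "json_lib", ...}
        by_app[rec["app"]].append(rec)
        by_module[rec["module"]].append(rec)
    return by_app, by_module

def public_module_view(by_module, module):
    """Aggregate, anonymized view suitable for the public version."""
    return {"module": module, "call_count": len(by_module.get(module, []))}
```

The module developer version could expose the underlying records, while the public view returns only aggregates.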
Abstract:
A behavior model for a software application may identify a set of execution sequences that begin from a set of origins. The sequences may be further defined by a set of exits. In some cases, the sequences may be decomposed into subsequences or n-grams. The execution sequences and their frequencies may define a usage or behavior model for the application. The sequences may be defined by semantic-level operations of an application, which may be defined by functions, callbacks, API calls, or other blocks of code execution. The behavior model may be used for determining code coverage, comparing versions of applications, and other uses.
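A sketch of how such sequences might be collected, assuming the tracer emits a flat stream of semantic-level operations and the origin and exit sets are known in advance:

```python
from collections import Counter

def behavior_model(events, origins, exits):
    """Count execution sequences that start at an origin and end at an exit."""
    sequences = Counter()
    current = None
    for op in events:
        if op in origins:
            current = [op]          # a new sequence begins at an origin
        elif current is not None:
            current.append(op)
            if op in exits:         # the sequence completes at an exit
                sequences[tuple(current)] += 1
                current = None
    return sequences
```

The resulting frequency table is the usage model; each sequence may be further decomposed into n-grams as described above.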
Abstract:
Regression testing of an application may gather performance measurements for multiple functions within an application and determine when performance changes from one version of the application to another. The analysis may be further broken down by input sequences that may be processed by various functions. A detailed regression analysis may be presented as a heat map or other visualization. A regression testing system may be invoked during a build process, automatically launching a set of performance tests against an application. In many cases, the application may be executed in a system with known or consistent performance capabilities. The application may be executed and tested in a new version and at least one prior version on the same hardware and software execution environment, so that results may be normalized from one execution run to another. A regression testing system may be deployed as a paid-for service that may integrate into a source code repository.
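The version-to-version comparison reduces to a per-function ratio table, whose cells form the heat map; a sketch, with hypothetical timing data:

```python
def regression_matrix(old_timings, new_timings):
    """Per-function performance ratios between two versions.

    Timings are mean durations measured on the same hardware and software
    environment so runs are comparable; a ratio above 1.0 is a slowdown.
    """
    return {func: new_timings[func] / old_ms
            for func, old_ms in old_timings.items()
            if func in new_timings and old_ms > 0}

# Example heat map cells: {"parse": 1.35, "render": 0.98}
deltas = regression_matrix({"parse": 2.0, "render": 5.0},
                           {"parse": 2.7, "render": 4.9})
```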
Abstract:
N-grams of input streams or functions executed by an application may be analyzed to identify security breaches or other anomalous behavior. A histogram of n-grams representing sequences of executed functions or input streams may be generated through baseline testing or production use. An alerting system may compare real-time n-gram observations to the histogram of n-grams to identify security breaches or other changes in application behavior that may be anomalous. An alert may be generated that identifies the anomalous behavior. The alerting system may be trained using known-good datasets, identifying deviations as bad behavior, or trained using known-bad datasets, identifying matching behavior as bad behavior.
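A sketch of the known-good variant, in which any n-gram absent from the baseline histogram raises an alert; the window size is illustrative:

```python
from collections import Counter

def alert_on_anomalies(live_sequence, baseline, n=3):
    """Flag n-grams in a live trace that never appeared in the baseline.

    `baseline` is a Counter of n-grams from known-good runs; the known-bad
    variant would instead alert when an n-gram matches a bad histogram.
    """
    alerts = []
    for i in range(len(live_sequence) - n + 1):
        gram = tuple(live_sequence[i:i + n])
        if gram not in baseline:
            alerts.append(gram)
    return alerts
```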
Abstract:
Different versions of an application may be compared using a behavior model of the application. A behavior model may be derived from n-gram analysis of observations of the application in production. The behavior model may include sequences of inputs received by the application or functions performed by the application, where each sequence is an n-gram observed in tracer data. Each n-gram may be coupled with a resource consumption to give a behavior model with performance data. A regression analysis may apply a behavior model derived from a first version of an application to the performance observations of a new version to create an expected performance metric for the new version. A similarly calculated metric from a previous version may be compared to the metric from the new version to determine an improvement or degradation of performance.
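The expected performance metric might be a frequency-weighted average: the first version's behavior model supplies the weights, and the new version's observations supply the costs. A sketch under that assumption:

```python
def expected_performance(model_freqs, observed_costs):
    """Weight each n-gram's measured cost by its behavior-model frequency.

    `model_freqs`: n-gram -> frequency from the first version's model.
    `observed_costs`: n-gram -> resource consumption on the new version.
    """
    total = sum(model_freqs.values())
    if not total:
        return 0.0
    return sum(freq * observed_costs.get(gram, 0.0)
               for gram, freq in model_freqs.items()) / total
```

Computing the same weighted metric from the previous version's observations yields the baseline for the comparison.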
Abstract:
Input sequence information may be analyzed and quantified using n-gram analysis of inputs received by an application. The sequences of inputs may be represented by n-grams, and the frequency of the various n-grams may indicate the ‘real world’ uses of the application in production, which may be compared to a test suite whose coverage may be quantified using a similar n-gram analysis. A coverage factor may compare the observed inputs to the application in production to the test suite for the application. The n-grams may be further quantified or prioritized by resource utilization, and several visualizations may be generated from the data.
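The coverage factor might be computed as the production-frequency-weighted fraction of input n-grams that the test suite also exercises; a sketch, assuming both arguments are n-gram-to-frequency maps:

```python
def coverage_factor(production_grams, test_grams):
    """Fraction of production input behavior exercised by the test suite,
    weighted so frequent 'real world' sequences count more than rare ones."""
    total = sum(production_grams.values())
    if not total:
        return 0.0
    covered = sum(freq for gram, freq in production_grams.items()
                  if gram in test_grams)
    return covered / total
```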
Abstract:
A tracing system may be updated to include, exclude, or modify tracing configurations for functions based on how a user consumes tracing results. The user's interactions with graphical representations, inspections of data, and other interactions may indicate which functions may be interesting and which may not be. The user's interactions may be classified by use, such as during debugging, performance testing, and ongoing monitoring, and multiple users' interactions with the same function, library, module, source code file, or other group of functions may be combined to predict a user's interest in a function.
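A sketch of how interactions might be folded into a tracing configuration; the interaction types, weights, and thresholds are all hypothetical:

```python
# Hypothetical weights for kinds of interaction with tracing results.
INTERACTION_WEIGHTS = {"inspect": 3, "expand": 2, "view": 1, "hide": -2}

def interest_scores(interaction_log):
    """Combine multiple users' interactions into per-function scores."""
    scores = {}
    for _user, function, action in interaction_log:
        scores[function] = scores.get(function, 0) + INTERACTION_WEIGHTS.get(action, 0)
    return scores

def update_tracing_config(config, scores, include_at=2, exclude_at=-2):
    """Include or exclude functions from tracing based on predicted interest."""
    for function, score in scores.items():
        if score >= include_at:
            config[function] = "trace"
        elif score <= exclude_at:
            config[function] = "skip"
    return config
```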