Abstract:
A tracing system may use different configurations to trace different functions in different manners. A configuration may be a group of settings that defines which data elements to collect, as well as the manner in which the data are summarized, stored, and in some cases displayed. Example configurations may include a debugging configuration, a performance optimization configuration, a long-term monitoring configuration, and others. The tracing system may trace one group of functions with one configuration while tracing another group of functions in the same application with a different configuration.
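As a rough illustration of grouping tracing settings into named configurations, the following Python sketch uses hypothetical configuration names, fields, and function-group patterns; none of these identifiers come from the abstract itself.

```python
# Hypothetical tracing configurations: each names the data elements to collect
# and how results are summarized and stored. Names and fields are illustrative.
TRACING_CONFIGS = {
    "debugging": {
        "collect": ["call_stack", "arguments", "return_values"],
        "summarize": "per_call",
        "store": "local_log",
    },
    "performance_optimization": {
        "collect": ["entry_exit_timestamps", "cpu_samples"],
        "summarize": "per_function_histogram",
        "store": "in_memory",
    },
    "long_term_monitoring": {
        "collect": ["call_counts", "error_counts"],
        "summarize": "hourly_rollup",
        "store": "remote_database",
    },
}

# Different groups of functions in the same application can be traced under
# different configurations at the same time (patterns are illustrative).
FUNCTION_GROUPS = {
    "checkout_*": "debugging",
    "render_*": "performance_optimization",
    "*": "long_term_monitoring",
}
```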
Abstract:
A performance state machine is controlled in part by identifying notifications from an execution trace of an application program, through rapid automatic comparison of trace events to notification events for notification categories. Some notification categories include application startup, page outline load, page data load start, page data load finish, page to page transition, application input, window size change, media query, binding update, page background task start, page background task finish, developer-defined scenario start, and developer-defined scenario finish. Notifications may reflect heuristics such as the time from startup to first frame submission. A state is placed in the performance state machine for each identified notification, with aggregate application performance data for each transition between identified notifications. Some performance data categories include network activity, disk activity, memory usage, parse time, frame time, dropped frames, component or overall frame rates, and thread utilization. Timelines and other visual representations aid application performance optimization.
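A minimal sketch of the matching step might look like the following Python code, assuming a hypothetical trace-event format of (timestamp, name, perf_sample) tuples and showing only a few of the listed notification categories.

```python
from collections import defaultdict

# A subset of the notification categories named in the abstract.
NOTIFICATION_EVENTS = {
    "app_startup", "page_data_load_start", "page_data_load_finish",
    "page_to_page_transition", "window_size_change",
}

def build_state_machine(trace_events):
    """trace_events: iterable of (timestamp, name, perf_sample) tuples, where
    perf_sample is a dict such as {"disk_io": 3, "frame_time": 16.7}.
    Returns one state per identified notification, each carrying the
    performance data aggregated since the previous notification."""
    states = []
    pending = defaultdict(float)
    for timestamp, name, perf_sample in trace_events:
        for key, value in perf_sample.items():
            pending[key] += value
        if name in NOTIFICATION_EVENTS:  # trace event matches a notification category
            states.append({"notification": name,
                           "time": timestamp,
                           "since_previous": dict(pending)})
            pending.clear()
    return states
```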
Abstract:
A method and a device for determining a program performance interference model are described. The method includes: selecting programs from a determined sample program set to form multiple subsets; acquiring a value of performance interference imposed on each program in each subset and a total occupancy rate of a shared resource occupied by all the programs in each subset; dividing all the subsets into multiple analytical units; performing a regression analysis on the value of performance interference for each sample program included in each analytical unit and the total occupancy rate corresponding to the subset in which the sample program is loaded, and acquiring a target function model; and acquiring a performance interference model corresponding to a target program according to the target function model. The performance interference model may be used to prevent a program whose mutual interference with the target program is relatively strong from running together with the target program.
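One way to picture the regression step is the sketch below, which assumes per-analytical-unit data given as (total occupancy, interference) pairs and uses a plain linear fit (numpy.polyfit) as a stand-in for the target function model.

```python
import numpy as np

def fit_interference_models(analytical_units):
    """analytical_units: list of analytical units, each a list of
    (total_occupancy, interference) pairs. A simple linear regression
    stands in for the target function model described in the abstract."""
    models = []
    for unit in analytical_units:
        occupancy = np.array([o for o, _ in unit])
        interference = np.array([i for _, i in unit])
        slope, intercept = np.polyfit(occupancy, interference, 1)
        models.append((slope, intercept))
    return models

def predict_interference(model, total_occupancy):
    """Predicted interference on a target program at a given total occupancy."""
    slope, intercept = model
    return slope * total_occupancy + intercept
```

A scheduler could then consult such a model to avoid co-locating the target program with programs whose predicted mutual interference is high.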
Abstract:
A method, system, and computer-readable medium for managing applications on an application execution system are disclosed. On an application server, the number of instances of a first application type that are in a busy state is determined. This determination is performed at each respective time interval in a plurality of time intervals. Then, a first running average for the busy state of the first application type is computed based upon the number of instances of the first application type that are in a busy state, at the application server, at each respective time interval. A removal request is sent when the first running average for the busy state meets a first removal criterion. The removal request is a request to remove the application server from a data structure that specifies which of a plurality of application servers accept service requests for the first application type.
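The per-interval bookkeeping could look roughly like the following Python sketch; the window size, instance capacity, and removal threshold are illustrative assumptions rather than values from the abstract.

```python
class BusyStateMonitor:
    """Tracks a running average of busy instances of one application type on
    one application server; all numbers and names are illustrative."""

    def __init__(self, window_size=10, removal_threshold=0.8, capacity=16):
        self.window_size = window_size
        self.removal_threshold = removal_threshold
        self.capacity = capacity  # instances of this type the server can hold
        self.samples = []

    def record_interval(self, busy_instances):
        """Called once per time interval with the number of busy instances."""
        self.samples.append(busy_instances)
        if len(self.samples) > self.window_size:
            self.samples.pop(0)

    def should_request_removal(self):
        """True when the running average of the busy fraction meets the
        removal criterion, i.e. the server should be removed from the data
        structure listing servers that accept requests for this type."""
        if not self.samples:
            return False
        running_average = sum(self.samples) / len(self.samples)
        return running_average / self.capacity >= self.removal_threshold
```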
Abstract:
Methods and systems for tracking user interactions with a computer application are described. As a computer application is used, it keeps track of user interactions, for example, for use by an analytics server. An interaction tracking configuration may specify which events are tracked, under what conditions they are tracked, and what information is recorded. This configuration may be separated from the application. For example, the configuration may be stored in a configuration file at a location that is specified within, and used by, the application. The configuration may then be changed without changing the deployed application. Certain embodiments provide a tracking configuration tool to facilitate the creation of such a configuration. The tool may use a running application to identify events for tracking. Identifying events in this way can simplify the task of configuring interaction tracking by reducing the need to understand or access the application's actual code.
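A hypothetical configuration of this kind, kept separate from the deployed application, might resemble the sketch below; the event names, condition format, and destination URL are invented for illustration.

```python
import json

# Hypothetical interaction-tracking configuration, stored separately from the
# deployed application (for example, in a file whose location the application
# knows); events, conditions, and the destination URL are illustrative.
TRACKING_CONFIG = json.loads("""
{
  "destination": "https://analytics.example.com/collect",
  "events": [
    {"name": "page_view",    "when": {},                   "record": ["page", "referrer"]},
    {"name": "button_click", "when": {"page": "checkout"}, "record": ["button_id", "page"]}
  ]
}
""")

def fields_to_record(event_name, context, config=TRACKING_CONFIG):
    """Returns the fields to record for an event, or None if the event is not
    tracked under the current conditions."""
    for rule in config["events"]:
        if rule["name"] != event_name:
            continue
        if all(context.get(key) == value for key, value in rule["when"].items()):
            return rule["record"]
    return None

print(fields_to_record("button_click", {"page": "checkout", "button_id": "pay"}))
```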
Abstract:
A method and a system for providing elapsed time indications for source code in a development environment are described. The method includes: defining blocks of source code to be timed during source code execution; monitoring the defined blocks of source code during execution to determine an elapsed time for the execution of each defined block of source code; recording the elapsed time for the defined block of source code; and providing an elapsed time indication for the defined block of source code.
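In a language with context managers, a defined block could be timed roughly as in the Python sketch below; the block name, the print-based indication, and the in-memory record are assumptions, not part of the described system.

```python
import time
from contextlib import contextmanager

# Elapsed times recorded per defined block, keyed by block name.
ELAPSED_TIMES = {}

@contextmanager
def timed_block(name):
    """Marks a defined block of source code to be timed during execution and
    records its elapsed time; a print statement stands in for the development
    environment's elapsed time indication."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        ELAPSED_TIMES.setdefault(name, []).append(elapsed)
        print(f"[elapsed] {name}: {elapsed:.6f} s")

# Usage: time a defined block of source code.
with timed_block("load_customer_records"):
    records = [i * i for i in range(100_000)]
```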
Abstract:
A processor, a method, and a computer-readable medium for recording branch addresses are provided. The processor comprises hardware registers and first and second circuitry. The first circuitry is configured to store, in the hardware registers, a first address associated with a branch instruction. The first circuitry is further configured to store, in the hardware registers, a second address that indicates where processor execution is redirected as a result of the branch instruction. The second circuitry is configured to, in response to a second instruction, retrieve a value of at least one of the hardware registers. The second instruction can be a user-level instruction.
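As a purely illustrative software model (not actual hardware or ISA behavior), the two stored addresses and the user-level read can be pictured as follows; all names and addresses are hypothetical.

```python
class BranchRecordRegisters:
    """Software model of the hardware registers described in the abstract:
    one holds the address of the branch instruction, the other the address
    execution was redirected to. Purely illustrative."""

    def __init__(self):
        self.branch_address = None   # address of the branch instruction
        self.target_address = None   # where execution was redirected to

    def record_branch(self, branch_address, target_address):
        # Role of the first circuitry: store both addresses when a branch executes.
        self.branch_address = branch_address
        self.target_address = target_address

    def read(self):
        # Role of the second circuitry: a user-level instruction reads the values.
        return self.branch_address, self.target_address

regs = BranchRecordRegisters()
regs.record_branch(0x401_2A0, 0x401_5F0)   # hypothetical addresses
print([hex(address) for address in regs.read()])
```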
Abstract:
Restoring from a legacy OS environment to a Unified Extensible Firmware Interface (UEFI) pre-boot environment, including: storing, under the UEFI pre-boot environment, the context of the UEFI pre-boot environment that needs to be preserved, where that context includes CPU execution context; restoring a first portion of the CPU execution context in response to the UEFI pre-boot environment failing to load the legacy OS; making a CPU associated with the UEFI pre-boot environment enter System Management Mode, and restoring a second portion of the CPU execution context under System Management Mode; and exiting from CPU System Management Mode, thereby returning to the UEFI pre-boot environment.
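The sequence of steps can be sketched as the control flow below, where every helper is a hypothetical stand-in for a firmware operation; this is only an outline of the ordering described in the abstract, not an implementation.

```python
# Illustrative control-flow sketch; the callables passed in are hypothetical
# stand-ins for firmware operations.
def attempt_legacy_boot_with_fallback(save_context, try_boot_legacy_os,
                                      restore_first_portion, enter_smm,
                                      restore_second_portion, exit_smm):
    context = save_context()           # preserve UEFI pre-boot context, incl. CPU execution context
    if try_boot_legacy_os():
        return "legacy OS running"
    restore_first_portion(context)     # partial CPU context restore on boot failure
    enter_smm()                        # enter System Management Mode
    restore_second_portion(context)    # finish restoring under SMM
    exit_smm()                         # leave SMM
    return "returned to UEFI pre-boot environment"
```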
Abstract:
One embodiment provides an apparatus. The apparatus includes a processor, a chipset, a memory to store a process, and logic. The processor includes one or more cores and is to execute the process. The logic is to acquire performance monitoring data in response to a platform processor utilization parameter (PUP) being greater than a detection utilization threshold (UT), identify a spin loop based, at least in part, on a detected hot function and/or a detected hot loop, modify the identified spin loop using binary translation to create a modified process portion, and implement redirection from the identified spin loop to the modified process portion.
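The detection side might be pictured with the following sketch, in which the thresholds, sample format, and hot-function criterion are assumptions; the binary-translation and redirection steps are only noted in a comment.

```python
# Illustrative thresholds; not values from the abstract.
DETECTION_UTILIZATION_THRESHOLD = 0.90   # UT
HOT_FUNCTION_SHARE = 0.30                # fraction of samples in one function

def find_spin_loop_candidate(platform_utilization, samples):
    """samples: list of function names taken from performance monitoring data.
    Returns the hottest function if the utilization and hotness thresholds
    are both met, otherwise None."""
    if platform_utilization <= DETECTION_UTILIZATION_THRESHOLD or not samples:
        return None
    counts = {}
    for name in samples:
        counts[name] = counts.get(name, 0) + 1
    hottest, hits = max(counts.items(), key=lambda kv: kv[1])
    if hits / len(samples) >= HOT_FUNCTION_SHARE:
        return hottest  # candidate spin loop to rewrite and redirect to a modified portion
    return None
```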
Abstract:
Cost-based optimization of configuration parameters and cluster sizing for distributed data processing systems is disclosed. According to an aspect, a method includes receiving at least one job profile of a MapReduce job. The method also includes using the at least one job profile to predict execution of the MapReduce job under a plurality of different predetermined settings of a distributed data processing system. Further, the method includes determining one of the predetermined settings that optimizes performance of the MapReduce job. The method may also include automatically adjusting the distributed data processing system to the determined predetermined setting.
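A toy version of the selection step is sketched below; the cost model, setting fields, and example numbers are invented stand-ins for the profile-based prediction described in the abstract.

```python
def choose_best_setting(job_profile, candidate_settings, predict_runtime):
    """candidate_settings: list of dicts of configuration parameters and
    cluster sizes; predict_runtime(job_profile, setting) is a cost model
    standing in for the profile-based execution prediction.
    Returns the setting with the lowest predicted runtime."""
    return min(candidate_settings,
               key=lambda setting: predict_runtime(job_profile, setting))

# Toy cost model: runtime shrinks with more nodes and more reduce tasks.
def toy_cost_model(profile, setting):
    return profile["input_gb"] / (setting["nodes"] * setting["reduce_tasks"] ** 0.5)

profile = {"input_gb": 500}
settings = [{"nodes": 10, "reduce_tasks": 20},
            {"nodes": 20, "reduce_tasks": 10},
            {"nodes": 20, "reduce_tasks": 40}]
print(choose_best_setting(profile, settings, toy_cost_model))
```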