Abstract:
A scheduler manages execution of a plurality of data-collection jobs, assigns individual jobs to specific forwarders in a set of forwarders, and generates and transmits tokens (e.g., pairs of data-collection tasks and target sources) to the assigned forwarders. A forwarder uses the tokens, along with stored information applicable across jobs, to collect data from the target source and forward it to an indexer for processing. For example, the indexer can then break a data stream into discrete events, extract a timestamp from each event, and index (e.g., store) the event based on the timestamp. The scheduler can monitor each forwarder's job performance and use the observed performance to influence subsequent job assignments. Thus, data-collection jobs can be efficiently assigned to and executed by a group of forwarders, where the group can potentially be diverse and dynamic in size.
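The flow described above can be pictured with a minimal Python sketch. The class names (Scheduler, Forwarder, Indexer, Token), the event format, and the latency-based assignment heuristic are illustrative assumptions, not the implementation disclosed in the abstract.

```python
import time
from dataclasses import dataclass, field

# Hypothetical token pairing a data-collection task with a target source.
@dataclass
class Token:
    task: str           # e.g., "poll_rest_api"
    target_source: str  # e.g., "https://host-01/metrics"

class Indexer:
    def __init__(self):
        self.index: dict[float, str] = {}

    def ingest(self, stream: str) -> None:
        # Break the stream into discrete events, extract a timestamp from
        # each event, and store the event keyed by that timestamp.
        for event in stream.splitlines():
            timestamp = float(event.split()[0])
            self.index[timestamp] = event

@dataclass
class Forwarder:
    name: str
    # Stored information applicable across jobs (e.g., credentials, proxies).
    shared_config: dict = field(default_factory=dict)

    def run(self, token: Token, indexer: Indexer) -> float:
        """Collect data from the target source and forward it to the indexer."""
        start = time.time()
        raw = f"{time.time()} event from {token.target_source} via {token.task}"
        indexer.ingest(raw)
        return time.time() - start  # duration reported back to the scheduler

class Scheduler:
    def __init__(self, forwarders: list[Forwarder]):
        self.forwarders = forwarders
        self.latency: dict[str, float] = {f.name: 0.0 for f in forwarders}

    def assign(self, token: Token, indexer: Indexer) -> None:
        # Prefer the forwarder with the lowest observed latency so that
        # measured performance influences subsequent assignments.
        forwarder = min(self.forwarders, key=lambda f: self.latency[f.name])
        self.latency[forwarder.name] = forwarder.run(token, indexer)

if __name__ == "__main__":
    indexer = Indexer()
    scheduler = Scheduler([Forwarder("fwd-a"), Forwarder("fwd-b")])
    for host in ("host-01", "host-02", "host-03"):
        scheduler.assign(Token("poll_rest_api", f"https://{host}/metrics"), indexer)
    print(len(indexer.index), "events indexed")
```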
Abstract:
Techniques promote monitoring of hypervisor systems by presenting dynamic representations of hypervisor architectures that include performance indicators. A reviewer can interact with the representation to progressively view select lower-level performance indicators. Higher-level performance indicators can be determined based on lower-level state assessments. A reviewer can also view historical performance metrics and indicators, which can aid in understanding which configuration changes or system usages may have led to sub-optimal performance.
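The roll-up of lower-level state assessments into higher-level indicators could be sketched as below; the state names and the worst-state aggregation rule are assumptions for illustration, not the disclosed method.

```python
# Hypothetical roll-up of lower-level state assessments into a higher-level
# indicator for a hypervisor architecture (names are illustrative only).
STATE_SEVERITY = {"normal": 0, "warning": 1, "critical": 2}

def roll_up(child_states: list[str]) -> str:
    """Report the most severe state among the children."""
    return max(child_states, key=lambda s: STATE_SEVERITY[s])

# Example hierarchy: a hypervisor aggregates the states of its virtual
# machines, and a cluster aggregates the states of its hypervisors.
vm_states = {"vm-1": "normal", "vm-2": "warning", "vm-3": "normal"}
hypervisor_state = roll_up(list(vm_states.values()))   # -> "warning"
cluster_state = roll_up([hypervisor_state, "normal"])  # -> "warning"
print(hypervisor_state, cluster_state)
```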
Abstract:
One or more processing devices create one or more entity definitions that each associate an entity with machine data pertaining to that entity and create a service definition for a service provided by one or more entities. The service definition includes an entity definition for each of the one or more entities. The one or more processing devices create one or more key performance indicators (KPIs). Each KPI is defined by a search query that produces a value derived from the machine data identified in one or more of the entity definitions included in the service definition. Each value is indicative of how the service is performing at a point in time or during a period of time.
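A minimal sketch of the described data model follows; the field names (machine_data_filter, search_query) and the illustrative query string are assumptions, not the patented schema.

```python
from dataclasses import dataclass, field

# Hypothetical data model for entities, services, and KPIs.
@dataclass
class EntityDefinition:
    name: str
    machine_data_filter: str  # identifies machine data pertaining to the entity

@dataclass
class KPI:
    name: str
    search_query: str  # produces a value derived from the entities' machine data

@dataclass
class ServiceDefinition:
    name: str
    entities: list[EntityDefinition] = field(default_factory=list)
    kpis: list[KPI] = field(default_factory=list)

web_service = ServiceDefinition(
    name="web-store",
    entities=[EntityDefinition("web-01", 'host="web-01"'),
              EntityDefinition("web-02", 'host="web-02"')],
    kpis=[KPI("avg_response_time",
              'search host="web-01" OR host="web-02" | stats avg(response_ms)')],
)
```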
Abstract:
The disclosed system and method acquire and store performance measurements relating to performance of a component in an information technology (IT) environment and log data produced by the IT environment, in association with corresponding time stamps. The disclosed system and method correlate at least one of the performance measurements with at least one portion of the log data.
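One way to picture the correlation step is a time-window match over the stored time stamps. The data layout and the fixed 30-second window below are illustrative assumptions only.

```python
from datetime import datetime, timedelta

# Hypothetical stores of time-stamped performance measurements and log portions.
measurements = [
    {"time": datetime(2023, 1, 1, 12, 0, 5), "component": "db-01", "cpu_pct": 97.0},
]
log_portions = [
    {"time": datetime(2023, 1, 1, 12, 0, 7), "text": "ERROR query timeout on db-01"},
    {"time": datetime(2023, 1, 1, 13, 0, 0), "text": "INFO nightly backup started"},
]

def correlate(measurement: dict, logs: list[dict], window: timedelta) -> list[dict]:
    """Return log portions whose time stamps fall within `window` of the measurement."""
    return [log for log in logs
            if abs(log["time"] - measurement["time"]) <= window]

related = correlate(measurements[0], log_portions, timedelta(seconds=30))
print(related)  # the timeout error is correlated with the CPU spike
```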
Abstract:
A GUI displays information about a service, such as a service in an IT environment, and the specification for a related key performance indicator (KPI). The service can be provided by a number of entities (e.g., servers) for which machine data is generated and collected. The KPI is defined by a search query that produces a KPI value from the relevant machine data. Thresholds can aid in translating KPI values into a simpler, and possibly common, set of state values (e.g., Normal, Warning, Critical). A user can supply, via a GUI, one or more thresholds that are applied on a per-entity basis in the process of determining a state value for the KPI.
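A minimal sketch of per-entity thresholding is shown below; the threshold values and the worst-state roll-up across entities are assumptions not specified in the abstract.

```python
# Hypothetical per-entity thresholds translating raw KPI values into the
# common state values Normal / Warning / Critical (values are illustrative).
PER_ENTITY_THRESHOLDS = {
    # entity: (warning_at, critical_at) for a response-time KPI in ms
    "web-01": (200.0, 500.0),
    "web-02": (300.0, 700.0),  # slower hardware tolerates higher latency
}

def entity_state(entity: str, kpi_value: float) -> str:
    warning_at, critical_at = PER_ENTITY_THRESHOLDS[entity]
    if kpi_value >= critical_at:
        return "Critical"
    if kpi_value >= warning_at:
        return "Warning"
    return "Normal"

def kpi_state(per_entity_values: dict[str, float]) -> str:
    """Roll individual entity states up to a single state for the KPI."""
    states = [entity_state(e, v) for e, v in per_entity_values.items()]
    for level in ("Critical", "Warning"):
        if level in states:
            return level
    return "Normal"

print(kpi_state({"web-01": 250.0, "web-02": 100.0}))  # -> "Warning"
```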