Abstract:
Methods and systems for dependency tracking include identifying a hot process that generates bursts of events with interleaved dependencies. Events related to the hot process are aggregated according to a process-centric dependency approximation that ignores dependencies between the events related to the hot process. Causality is then tracked in a reduced event stream that comprises the aggregated events.
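As a rough sketch (not the patented implementation), the process-centric approximation above could be modeled as follows; the event tuples, the `hot_threshold` parameter, and the choice to keep only distinct (operation, target) pairs are all illustrative assumptions:

```python
from collections import defaultdict

# Hypothetical event records: (process_id, operation, target).
EVENTS = [
    ("proc_hot", "write", "f1"), ("proc_hot", "write", "f2"),
    ("proc_hot", "write", "f1"), ("proc_a", "read", "f3"),
]

def aggregate_hot_process(events, hot_threshold=3):
    """Collapse each hot process's burst of events into one aggregated
    record, ignoring dependencies between its individual events."""
    by_proc = defaultdict(list)
    for pid, op, target in events:
        by_proc[pid].append((op, target))
    reduced = []
    for pid, evs in by_proc.items():
        if len(evs) >= hot_threshold:  # "hot" process: burst of events
            # Process-centric approximation: keep only the distinct
            # (operation, target) pairs, dropping intra-burst ordering.
            reduced.append((pid, sorted(set(evs))))
        else:
            reduced.extend((pid, [e]) for e in evs)
    return reduced
```

Causality tracking would then operate on the shorter `reduced` stream instead of the raw event list.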
Abstract:
Systems and methods for detection and prevention of Return-Oriented Programming (ROP) attacks in one or more applications, including an attack detection device and a stack inspection device for performing stack inspection to detect ROP gadgets in a stack. The stack inspection includes stack walking from a stack frame at the top of the stack toward the bottom of the stack to detect one or more failure conditions; determining whether a valid stack frame and return code address is present; and determining a failure condition type if no valid stack frame and return code address is present, with Type III failure conditions indicating an ROP attack. The ROP attack is contained using a containment device, and the ROP gadgets detected in the stack during the ROP attack are analyzed using an attack analysis device.
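A minimal sketch of the stack-walk classification, under the simplifying assumptions that each frame is a dict with a saved frame pointer and return address, and that `CODE_RETURN_SITES` stands in for the set of legitimate call-return addresses; the Type I/II/III labels follow the abstract, but the exact failure-condition definitions are hypothetical:

```python
# Hypothetical set of addresses that are valid call-return sites.
CODE_RETURN_SITES = {0x401000, 0x401050, 0x4010A0}

def inspect_stack(frames):
    """Walk from the top frame toward the bottom and classify the first
    failure condition found. Type III (a return address outside known
    call-return sites, i.e. a gadget address) indicates an ROP attack."""
    for frame in frames:
        if frame.get("frame_pointer") is None:
            return "TYPE_I"    # no valid stack frame
        if frame.get("return_addr") is None:
            return "TYPE_II"   # frame present but no return code address
        if frame["return_addr"] not in CODE_RETURN_SITES:
            return "TYPE_III"  # gadget address: likely ROP
    return "OK"
```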
Abstract:
A method for implementing automated threat alert triage via data provenance includes receiving a set of alert events and security provenance data, separating true alert events within the set of alert events corresponding to malicious activity from false alert events within the set of alert events corresponding to benign activity based on an alert anomaly score assigned to each alert event, and automatically generating a set of triaged alert events based on the separation.
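A simple sketch of the separation step, assuming each alert carries a precomputed `anomaly_score` in [0, 1] and using a hypothetical `threshold` to split true from false alerts (how the score is derived from provenance data is outside this sketch):

```python
def triage_alerts(alerts, threshold=0.8):
    """Split alerts into likely-true (malicious) and likely-false
    (benign) by their per-alert anomaly score, returning the triaged
    set ordered most-anomalous first."""
    true_alerts = [a for a in alerts if a["anomaly_score"] >= threshold]
    false_alerts = [a for a in alerts if a["anomaly_score"] < threshold]
    triaged = sorted(true_alerts, key=lambda a: -a["anomaly_score"])
    return triaged, false_alerts
```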
Abstract:
A computer-implemented method for implementing a knowledge transfer based model for accelerating invariant network learning is presented. The computer-implemented method includes generating an invariant network from data streams, the invariant network representing an enterprise information network including a plurality of nodes representing entities, employing a multi-relational based entity estimation model for transferring the entities from a source domain graph to a target domain graph by filtering irrelevant entities from the source domain graph, employing a reference construction model for determining differences between the source and target domain graphs, and constructing unbiased dependencies between the entities to generate a target invariant network, and outputting the generated target invariant network on a user interface of a computing device.
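As an illustrative stand-in for the entity-filtering portion of the transfer step (not the patented multi-relational estimation model), one could keep only source-domain entities whose relation types overlap those of the target domain; the entity/relation representation and `min_overlap` parameter are assumptions:

```python
def filter_transferable(source_entities, target_relation_types, min_overlap=1):
    """Filter irrelevant entities from the source domain graph: keep
    only entities whose relation types overlap the target domain's
    relation types by at least min_overlap."""
    return {
        name: rels for name, rels in source_entities.items()
        if len(rels & target_relation_types) >= min_overlap
    }
```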
Abstract:
Systems and methods for determining a risk level of a host in a network include modeling (402) a target host's behavior based on historical events recorded at the target host. One or more original peer hosts having behavior similar to the target host's behavior are determined (404). An anomaly score for the target host is determined (406) based on how the target host's behavior changes relative to behavior of the one or more original peer hosts over time. A security management action is performed based on the anomaly score.
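One plausible sketch of the anomaly-score step, assuming host behavior is summarized as an event-type frequency vector and that peers have already been selected from historical similarity; the cosine-based score is an illustrative choice, not the claimed method:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse frequency vectors (dicts)."""
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in keys)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def anomaly_score(target_now, peers_now):
    """Anomaly = 1 - mean similarity between the target host's current
    behavior vector and the current behavior of its peer hosts; the
    score grows as the target drifts away from its peer group."""
    sims = [cosine(target_now, p) for p in peers_now]
    return 1.0 - sum(sims) / len(sims)
```

A security management action (e.g. raising an alert) would trigger when this score crosses a policy threshold.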
Abstract:
A method and system are provided for causality analysis of Operating System-level (OS-level) events in heterogeneous enterprise hosts. The method includes storing (720F), by the processor, the OS-level events in a priority queue in a prioritized order based on priority scores determined from event rareness scores and event fanout scores for the OS-level events. The method includes processing (720G), by the processor, the OS-level events stored in the priority queue in the prioritized order to provide a set of potentially anomalous ones of the OS-level events within a set amount of time. The method includes generating (720G), by the processor, a dependency graph showing causal dependencies of at least the set of potentially anomalous ones of the OS-level events, based on results of the causality analysis. The method includes initiating (730), by the processor, an action to improve a functioning of the hosts responsive to the dependency graph or information derived therefrom.
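A sketch of the priority-queue step, assuming each event carries precomputed `rareness` and `fanout` scores; the combining formula (rareness discounted by fanout) is a hypothetical choice to make the example concrete, since the abstract only states that both scores feed the priority:

```python
import heapq

def prioritize(events):
    """Yield events in prioritized order: here, rarer events with
    smaller fanout come first (score = rareness / (1 + fanout))."""
    heap = []
    for i, ev in enumerate(events):
        score = ev["rareness"] / (1 + ev["fanout"])
        heapq.heappush(heap, (-score, i, ev))  # max-heap via negation
    while heap:
        _, _, ev = heapq.heappop(heap)
        yield ev
```

Processing the queue in this order lets the analysis surface potentially anomalous events within a bounded time budget.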
Abstract:
Methods and systems for detecting security intrusions include detecting alerts in monitored system data. Temporal dependencies are determined (306) between the alerts based on a prefix tree formed from the detected alerts. Content dependencies between the alerts are determined (308) based on a distance between alerts in a graph representation of the detected alerts. The alerts are ranked (310) based on an optimization problem that includes the temporal dependencies and the content dependencies. A security management action (614) is performed based on the ranked alerts.
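As a simplified stand-in for the ranking optimization (not the patented formulation), one could score each alert by a weighted combination of its temporal and content dependency scores; the field names and the weight `alpha` are illustrative assumptions:

```python
def rank_alerts(alerts, alpha=0.5):
    """Rank alerts by a weighted sum of their temporal-dependency and
    content-dependency scores, highest first."""
    def score(a):
        return alpha * a["temporal_dep"] + (1 - alpha) * a["content_dep"]
    return sorted(alerts, key=score, reverse=True)
```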
Abstract:
A method and system for constructing behavior queries in temporal graphs using discriminative sub-trace mining. The method (100) includes generating system data logs to provide temporal graphs (102), wherein the temporal graphs include a first temporal graph corresponding to a target behavior and a second temporal graph corresponding to a set of background behaviors (102), generating temporal graph patterns for each of the first and second temporal graphs to determine whether a pattern exists between a first temporal graph pattern and a second temporal graph pattern, wherein the pattern between the temporal graph patterns is a non-repetitive graph pattern (104), pruning the pattern between the first and second temporal graph patterns to provide a discriminative temporal graph (106), and generating behavior queries based on the discriminative temporal graph (110).
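A rough sketch of the discriminative-pattern pruning idea, assuming patterns have already been counted in the target-behavior graph and the background-behavior graphs; the `min_ratio` threshold and the frequency-ratio test are hypothetical simplifications of the mining step:

```python
def discriminative_patterns(target_counts, background_counts, min_ratio=2.0):
    """Keep graph patterns that are frequent in the target behavior's
    temporal graph but rare in the background graphs; all others are
    pruned."""
    keep = {}
    for pattern, tc in target_counts.items():
        bc = background_counts.get(pattern, 0)
        if tc >= min_ratio * max(bc, 1):
            keep[pattern] = tc
    return keep
```

The surviving patterns would then seed the behavior queries.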
Abstract:
Methods and systems for intrusion attack recovery include monitoring two or more hosts in a network to generate audit logs of system events. One or more dependency graphs (DGraphs) is generated based on the audit logs. A relevancy score for each edge of the DGraphs is determined. Irrelevant events from the DGraphs are pruned to generate a condensed backtracking graph. An origin is located by backtracking from an attack detection point in the condensed backtracking graph.
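The backtracking over a pruned dependency graph can be sketched as follows, under the assumptions that the DGraph maps each event node to its causal predecessors and that per-edge relevancy scores are given; the `min_rel` pruning threshold and the origin criterion (no causal predecessors) are illustrative:

```python
def backtrack(dgraph, detection_point, relevancy, min_rel=0.5):
    """Walk causal edges backward from the attack detection point,
    following only edges whose relevancy score passes the pruning
    threshold, and return candidate origin nodes."""
    seen, stack, origins = set(), [detection_point], []
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        if not dgraph.get(node):   # no causal predecessors: origin
            origins.append(node)
            continue
        stack.extend(p for p in dgraph[node]
                     if relevancy.get((p, node), 0.0) >= min_rel)
    return origins
```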