Abstract:
Systems and methods for implementing heterogeneous feature integration for device behavior analysis (HFIDBA) are provided. The method includes representing (620) each of multiple devices as a sequence of vectors for communications and as a separate vector for a device profile. The method also includes extracting (630) static features, temporal features, and deep embedded features from the sequence of vectors to represent behavior of each device. The method further includes determining (650), by a processor device, a status of a device based on vector representations of each of the multiple devices.
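As a hedged illustration of the pipeline this abstract describes, the sketch below represents one device as a sequence of communication vectors plus a profile vector, derives static, temporal, and embedded features, and assigns a status. The function names, the SVD projection standing in for the deep embedding, and the norm-based scoring rule are illustrative assumptions, not the patented implementation.

```python
# Hypothetical sketch of the HFIDBA feature pipeline; all names, the
# embedding stand-in, and the scoring rule are illustrative assumptions.
import numpy as np

def static_features(comms: np.ndarray) -> np.ndarray:
    # Aggregate statistics over the communication sequence (T x D).
    return np.concatenate([comms.mean(axis=0), comms.std(axis=0)])

def temporal_features(comms: np.ndarray) -> np.ndarray:
    # First-order differences capture how behavior drifts between windows.
    deltas = np.diff(comms, axis=0)
    return np.concatenate([deltas.mean(axis=0), np.abs(deltas).max(axis=0)])

def embedded_features(comms: np.ndarray, k: int = 4) -> np.ndarray:
    # Stand-in for a learned deep embedding: project onto the top-k
    # principal directions of the communication sequence.
    centered = comms - comms.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return (centered @ vt[:k].T).mean(axis=0)

def device_status(comms: np.ndarray, profile: np.ndarray, threshold: float = 3.0) -> str:
    # Concatenate heterogeneous features with the profile vector and flag the
    # device when the combined feature norm exceeds an (arbitrary) threshold.
    features = np.concatenate([
        static_features(comms), temporal_features(comms),
        embedded_features(comms), profile,
    ])
    return "anomalous" if np.linalg.norm(features) > threshold else "normal"

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(device_status(rng.normal(size=(20, 6)), rng.normal(size=3)))
```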
Abstract:
A method and system are provided for improving threat detection in a computer system by performing an inter-application dependency analysis on events of the computer system. The method includes: receiving, by a processor operatively coupled to a memory, a Tracking Description Language (TDL) query including general constraints, a tracking declaration, and an output specification; parsing, by the processor, the TDL query using a language parser; executing, by the processor, a tracking analysis based on the parsed TDL query; generating, by the processor, a tracking graph by cleaning a result of the tracking analysis; and outputting, by the processor and via an interface, query results based on the tracking graph.
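The toy sketch below walks the same parse, track, clean, and output steps. The query grammar, the event schema, and the backward-tracking rule are assumptions made for illustration; the actual TDL syntax is not given in the abstract.

```python
# Minimal sketch of the query pipeline: parse a toy TDL-like query, run a
# backward tracking pass over dependency events, prune the result into a
# tracking graph, and emit it.  Grammar and event schema are assumptions.
from collections import defaultdict

def parse_tdl(query: str) -> dict:
    # Toy grammar: "track <entity> where <constraint> output <fields>"
    tokens = query.split()
    return {
        "entity": tokens[tokens.index("track") + 1],
        "constraint": tokens[tokens.index("where") + 1],
        "output": tokens[tokens.index("output") + 1:],
    }

def tracking_analysis(events, start):
    # Backward tracking: follow (dst <- src) dependency edges from the start entity.
    edges = defaultdict(set)
    for src, dst in events:
        edges[dst].add(src)
    graph, frontier = defaultdict(set), {start}
    while frontier:
        node = frontier.pop()
        for parent in edges[node]:
            if parent not in graph[node]:
                graph[node].add(parent)
                frontier.add(parent)
    return graph

def clean(graph):
    # Drop nodes with no recovered dependencies to form the tracking graph.
    return {n: parents for n, parents in graph.items() if parents}

if __name__ == "__main__":
    q = parse_tdl("track report.doc where host=web01 output path")
    events = [("firefox", "report.doc"), ("update.exe", "firefox")]
    print(clean(tracking_analysis(events, q["entity"])))
```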
Abstract:
A computer-implemented method for real-time detecting of abnormal network connections is presented. The computer-implemented method includes collecting network connection events from at least one agent connected to a network, recording, via a topology graph, normal states of network connections among hosts in the network, and recording, via a port graph, relationships established between host and destination ports of all network connections.
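A minimal sketch of the two-graph bookkeeping is shown below: a topology graph of host-to-host edges observed during normal operation and a port graph of host-to-destination-port pairs, with a connection flagged when it appears in neither. The class name, set-based graphs, and the "both unseen" rule are assumptions for illustration only.

```python
# Illustrative two-graph monitor; structures and the decision rule are assumptions.
class ConnectionMonitor:
    def __init__(self):
        self.topology_graph = set()   # {(src_host, dst_host)} seen in normal states
        self.port_graph = set()       # {(src_host, dst_port)} seen in normal states

    def learn(self, src_host, dst_host, dst_port):
        # Record the normal state observed from agent-collected connection events.
        self.topology_graph.add((src_host, dst_host))
        self.port_graph.add((src_host, dst_port))

    def is_abnormal(self, src_host, dst_host, dst_port) -> bool:
        # Flag a connection when both the host edge and the host-to-port
        # relationship have never been recorded before.
        new_edge = (src_host, dst_host) not in self.topology_graph
        new_port = (src_host, dst_port) not in self.port_graph
        return new_edge and new_port

if __name__ == "__main__":
    m = ConnectionMonitor()
    m.learn("web01", "db01", 5432)
    print(m.is_abnormal("web01", "db01", 5432))   # False: known connection
    print(m.is_abnormal("web01", "files", 445))   # True: unseen edge and port
```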
Abstract:
A method for ransomware detection and prevention includes receiving an event stream associated with one or more computer system events, generating user-added-value knowledge data for one or more digital assets by modeling digital asset interactions based on the event stream, including accumulating user-added values of each of the one or more digital assets, and detecting ransomware behavior based at least in part on the user-added-value knowledge data, including analyzing destruction of the user-added values for the one or more digital assets.
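The sketch below illustrates the accumulation-and-destruction idea under simple assumptions: writes attributed to a user add value to an asset, and an event that wipes out a large accumulated value raises an alert. The value model, event operations, and threshold are hypothetical.

```python
# Hedged sketch of user-added-value tracking; the value model and the
# destruction threshold are illustrative assumptions.
from collections import defaultdict

class ValueTracker:
    def __init__(self, destroy_threshold: float = 10.0):
        self.value = defaultdict(float)          # asset -> accumulated user-added value
        self.destroy_threshold = destroy_threshold

    def observe(self, event: dict) -> bool:
        asset, op = event["asset"], event["op"]
        if op == "user_write":
            # Interactive edits add value proportional to bytes written.
            self.value[asset] += event.get("bytes", 0) / 1024.0
            return False
        if op in ("overwrite", "delete", "encrypt"):
            # Destruction wipes out accumulated value; a large loss alerts.
            lost, self.value[asset] = self.value[asset], 0.0
            return lost >= self.destroy_threshold
        return False

if __name__ == "__main__":
    t = ValueTracker()
    t.observe({"asset": "thesis.docx", "op": "user_write", "bytes": 50_000})
    print(t.observe({"asset": "thesis.docx", "op": "overwrite"}))  # True: high value destroyed
```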
Abstract:
Systems and methods for determining a risk level of a host in a network include modeling (402) a target host's behavior based on historical events recorded at the target host. One or more original peer hosts having behavior similar to the target host's behavior are determined (404). An anomaly score for the target host is determined (406) based on how the target host's behavior changes relative to behavior of the one or more original peer hosts over time. A security management action is performed based on the anomaly score.
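A small sketch of the peer-divergence idea follows: hosts are summarized as event-count vectors per period, the peers are the hosts most similar to the target in a baseline period, and the anomaly score reflects how far the target later drifts from those original peers. The vectorization, cosine similarity, and score definition are assumptions for illustration.

```python
# Illustrative peer-divergence scoring; features and the score are assumptions.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def peer_hosts(baseline, target, k=2):
    # Pick the k hosts whose baseline behavior is most similar to the target's.
    sims = {h: cosine(baseline[target], v) for h, v in baseline.items() if h != target}
    return sorted(sims, key=sims.get, reverse=True)[:k]

def anomaly_score(baseline, current, target):
    peers = peer_hosts(baseline, target)
    before = np.mean([cosine(baseline[target], baseline[p]) for p in peers])
    after = np.mean([cosine(current[target], current[p]) for p in peers])
    # The score rises when the target stops behaving like its original peers.
    return float(max(0.0, before - after))

if __name__ == "__main__":
    baseline = {"h1": np.array([5.0, 1.0, 0.0]), "h2": np.array([4.0, 1.0, 0.0]),
                "h3": np.array([5.0, 2.0, 0.0]), "h4": np.array([0.0, 0.0, 9.0])}
    current = dict(baseline, h1=np.array([0.0, 0.0, 8.0]))
    print(round(anomaly_score(baseline, current, "h1"), 3))  # near 1: h1 diverged from peers
```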
Abstract:
Methods and systems for detecting anomalous events include detecting anomalous events (42, 43) in monitored system data. An event correlation graph is generated (302) based on the monitored system data that characterizes the tendency of processes to access system targets. Kill chains that connect malicious events over a span of time and characterize events in an attack path are generated (310) from the event correlation graph by sorting events according to a maliciousness value and determining at least one sub-graph within the event correlation graph with an above-threshold maliciousness rank. A security management action is performed (412) based on the kill chains.
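As a hedged illustration of the sub-graph step, the sketch below keeps connected components of an event correlation graph whose average maliciousness exceeds a threshold and orders each kept component's events in time to form a kill chain. The graph construction, the averaged rank, and the 0.5 threshold are simplifying assumptions.

```python
# Illustrative kill-chain extraction; the rank function and threshold are assumptions.
from collections import defaultdict

def connected_components(edges, nodes):
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, components = set(), []
    for n in nodes:
        if n in seen:
            continue
        stack, comp = [n], set()
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            stack.extend(adj[x] - comp)
        seen |= comp
        components.append(comp)
    return components

def kill_chains(events, edges, threshold=0.5):
    # events: id -> {"time": t, "maliciousness": m}
    chains = []
    for comp in connected_components(edges, events):
        rank = sum(events[e]["maliciousness"] for e in comp) / len(comp)
        if rank >= threshold:
            # Order the sub-graph's events over time to expose the attack path.
            chains.append(sorted(comp, key=lambda e: events[e]["time"]))
    return chains

if __name__ == "__main__":
    events = {
        "phish_open": {"time": 1, "maliciousness": 0.7},
        "dropper_write": {"time": 2, "maliciousness": 0.8},
        "cron_backup": {"time": 3, "maliciousness": 0.1},
    }
    edges = [("phish_open", "dropper_write")]
    print(kill_chains(events, edges))  # [['phish_open', 'dropper_write']]
```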
Abstract:
Methods and systems for detecting anomalous events include detecting anomalous events (42, 43) in monitored system data. An event correlation graph is generated (302) by determining a tendency for a first process to access a system target, including an innate tendency of the first process to access the system target, an influence of previous events from the first process, and an influence of processes other than the first process. Kill chains are generated (310) from the event correlation graph that characterize events in an attack path over time. A security management action is performed (412) based on the kill chains.
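A minimal sketch of the three-term tendency is given below, combining an innate access rate, the decayed influence of the process's own recent events, and the decayed influence of other processes touching the same target. The weights and exponential decay are illustrative assumptions.

```python
# Sketch of the tendency model; weights and decay rate are assumptions.
import math

def access_tendency(innate_rate, own_recent_times, other_recent_times, now,
                    w=(0.5, 0.3, 0.2), decay=0.1):
    # Exponentially decayed influence of the process's own previous accesses.
    own = sum(math.exp(-decay * (now - t)) for t in own_recent_times)
    # Decayed influence of accesses by other processes to the same target.
    others = sum(math.exp(-decay * (now - t)) for t in other_recent_times)
    return w[0] * innate_rate + w[1] * own + w[2] * others

if __name__ == "__main__":
    # A process with no history suddenly touching a sensitive target gets a
    # low tendency score, which marks the access as surprising.
    print(round(access_tendency(0.01, [], [40, 45], now=50), 3))
    print(round(access_tendency(0.90, [48, 49], [40, 45], now=50), 3))
```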
Abstract:
A computer-implemented method for implementing a knowledge transfer based model for accelerating invariant network learning is presented. The computer-implemented method includes: generating an invariant network from data streams, the invariant network representing an enterprise information network including a plurality of nodes representing entities; employing a multi-relational based entity estimation model for transferring the entities from a source domain graph to a target domain graph by filtering irrelevant entities from the source domain graph; employing a reference construction model for determining differences between the source and target domain graphs, and constructing unbiased dependencies between the entities to generate a target invariant network; and outputting the generated target invariant network on a user interface of a computing device.
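A hedged sketch of the transfer step is shown below: source entities are kept only if their relation types overlap with some target entity, and only dependencies between surviving entities are carried over and merged with dependencies already observed in the target domain. The relevance measure, the overlap threshold, and the entity names are assumptions made for illustration.

```python
# Illustrative knowledge-transfer filter; relevance measure and threshold are assumptions.
def relevant_entities(source_relations, target_relations, min_overlap=1):
    # source_relations / target_relations: entity -> set of relation types.
    kept = set()
    for entity, rels in source_relations.items():
        overlap = max((len(rels & t) for t in target_relations.values()), default=0)
        if overlap >= min_overlap:
            kept.add(entity)
    return kept

def transfer_invariants(source_edges, target_edges, kept):
    # Keep source dependencies whose endpoints survived filtering and merge
    # them with dependencies already learned in the target domain.
    transferred = {(a, b) for a, b in source_edges if a in kept and b in kept}
    return transferred | set(target_edges)

if __name__ == "__main__":
    src_rel = {"nginx": {"serves", "reads"}, "postgres": {"reads"}, "legacy_app": {"prints"}}
    tgt_rel = {"apache": {"serves", "reads"}, "mysql": {"reads"}}
    kept = relevant_entities(src_rel, tgt_rel)
    print(transfer_invariants([("nginx", "postgres"), ("nginx", "legacy_app")],
                              [("apache", "mysql")], kept))
```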
Abstract:
A method is provided that includes transforming training data into a neural network based learning model using a set of temporal graphs derived from the training data. The method includes performing model learning on the learning model by automatically adjusting learning model parameters based on the set of the temporal graphs to minimize differences between a predetermined ground-truth ranking list and a learning model output ranking list. The method includes transforming testing data into a neural network based inference model using another set of temporal graphs derived from the testing data. The method includes performing model inference by applying the inference and learning models to the testing data to extract context features for alerts in the testing data and to calculate a ranking list for the alerts based on the extracted context features. Top-ranked alerts are identified as critical alerts. Each alert represents an anomaly in the testing data.
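As a simplified stand-in for this ranking pipeline, the sketch below uses hand-crafted context features rather than a learned neural embedding, fits a pairwise (RankNet-style) ranking loss so the model's ordering matches a ground-truth ranking list, and ranks held-out alerts so the top of the list is treated as critical. The features, loss, and learning rate are assumptions for illustration.

```python
# Stand-in for the alert-ranking pipeline; features, loss, and hyperparameters are assumptions.
import numpy as np

def fit_ranker(features, ground_truth_order, lr=0.1, epochs=200):
    # Pairwise logistic loss: for every pair where alert i should outrank
    # alert j, push w @ x_i above w @ x_j.
    w = np.zeros(features.shape[1])
    pairs = [(ground_truth_order[i], ground_truth_order[j])
             for i in range(len(ground_truth_order))
             for j in range(i + 1, len(ground_truth_order))]
    for _ in range(epochs):
        for hi, lo in pairs:
            diff = features[hi] - features[lo]
            p = 1.0 / (1.0 + np.exp(-w @ diff))
            w += lr * (1.0 - p) * diff   # gradient ascent on the pairwise log-likelihood
    return w

def rank_alerts(w, features):
    scores = features @ w
    return np.argsort(-scores).tolist()  # top-ranked alerts are treated as critical

if __name__ == "__main__":
    # Rows: alerts; columns: toy context features derived from temporal graphs.
    train = np.array([[3.0, 1.0], [0.5, 0.2], [2.0, 0.9]])
    w = fit_ranker(train, ground_truth_order=[0, 2, 1])
    test = np.array([[0.4, 0.1], [2.5, 1.1]])
    print(rank_alerts(w, test))          # alert 1 ranks above alert 0
```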