-
Publication No.: US20250138819A1
Publication Date: 2025-05-01
Application No.: US18496722
Filing Date: 2023-10-27
Applicant: CrowdStrike, Inc.
Inventor: Damian Monea , Paul Sumedrea , Mihaela-Petruta Gaman , Alexandru Dinu
Abstract: An approach is provided that supplies a plurality of source code samples to an artificial intelligence model (AIM) trained to describe source code based on performing semantic analysis on the source code. The approach produces, using the AIM, a plurality of semantic descriptions that describe the plurality of source code samples. Then, the approach converts the plurality of semantic descriptions into a plurality of semantic embeddings. In turn, the approach creates a plurality of clusters from the plurality of semantic embeddings, wherein each one of the plurality of clusters corresponds to two or more of the plurality of source code samples.
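The describe→embed→cluster pipeline in the abstract can be sketched as follows. This is a minimal illustration only: the `describe` function is a toy stand-in for the trained AIM, and the hashed bag-of-words embedding and greedy similarity clustering are assumptions, since the abstract does not disclose the actual model, embedding method, or clustering algorithm.

```python
import math
import zlib

def describe(source: str) -> str:
    # Toy stand-in for the AIM that performs semantic analysis; it just
    # labels each sample by a crude structural feature.
    return "function that returns a value" if "return" in source else "script that prints output"

def embed(description: str, dim: int = 64) -> list[float]:
    # Hashed bag-of-words embedding (an assumption; the abstract does not
    # specify how descriptions become embeddings), L2-normalized.
    vec = [0.0] * dim
    for tok in description.split():
        vec[zlib.crc32(tok.encode()) % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

def cluster(samples: list[str], threshold: float = 0.9) -> list[list[str]]:
    # Greedy clustering over semantic embeddings: each sample joins the
    # first existing cluster whose seed embedding is similar enough,
    # otherwise it starts a new cluster.
    clusters: list[tuple[list[float], list[str]]] = []
    for s in samples:
        e = embed(describe(s))
        for seed, members in clusters:
            if cosine(e, seed) >= threshold:
                members.append(s)
                break
        else:
            clusters.append((e, [s]))
    return [members for _, members in clusters]

samples = ["def f(x): return x", "def g(y): return y * 2", "print('hello')"]
clusters = cluster(samples)
```

With these toy descriptions, the two `return`-style samples receive identical semantic descriptions and therefore land in the same cluster, mirroring the claim that each cluster groups semantically similar source code samples.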
-
Publication No.: US12204644B1
Publication Date: 2025-01-21
Application No.: US18622167
Filing Date: 2024-03-29
Applicant: CrowdStrike, Inc.
Inventor: Stefan-Bogdan Cocea , Damian Monea , Alexandru Dinu , Cristian Viorel Popa , Mihaela-Petruta Gaman
Abstract: The present disclosure provides an approach of providing, to an artificial intelligence (AI) model, a malicious script that includes a malicious behavior. The AI model is configured to modify software code of the malicious script to produce modified software code that obfuscates the malicious behavior. The approach produces, by a processing device using the AI model, an adversarial script that includes the modified software code that obfuscates the malicious behavior. In turn, the approach initiates a malware detector to test the adversarial script.
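The loop the abstract describes can be sketched minimally as follows. Both components here are assumptions for illustration: a naive substring-signature detector stands in for the malware detector, and a base64 wrapper stands in for the AI model's code rewrite, since the patent does not disclose the actual model or detector.

```python
import base64

SIGNATURES = ["rm -rf /", "evil_payload"]

def naive_detector(script: str) -> bool:
    # Toy signature-based malware detector (assumption): flags a script
    # if it contains any known malicious substring.
    return any(sig in script for sig in SIGNATURES)

def obfuscate(script: str) -> str:
    # Stand-in for the AI model's rewrite step: base64-encode the script
    # body and wrap it in a decoder stub. The literal signatures vanish
    # from the text while the behavior is preserved on execution.
    encoded = base64.b64encode(script.encode()).decode()
    return "import base64\n" f"exec(base64.b64decode('{encoded}').decode())\n"

malicious = "print('evil_payload')"
adversarial = obfuscate(malicious)
```

Testing the adversarial script against the detector shows the point of the feedback loop: `naive_detector(malicious)` is True, while `naive_detector(adversarial)` is False, so the adversarial sample would be fed back to harden the detector.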
-
Publication No.: US20240338445A1
Publication Date: 2024-10-10
Application No.: US18132340
Filing Date: 2023-04-07
Applicant: CrowdStrike, Inc.
Inventor: Cristian Viorel Popa , Stefan-Bogdan Cocea , Alexandru Dinu , Paul Sumedrea
IPC: G06F21/56
CPC classification number: G06F21/564 , G06F21/568
Abstract: Methods and systems for applying a diffusion model to adversarial purification and generating adversarial samples in malware detection are disclosed. According to an example, a malware file is inputted to a diffusion model to obtain an adversarial sample by altering content of the malware file. The adversarial sample is further tested by a malware detector. In some examples, the content of an input file may be encoded prior to being processed by the diffusion model. If the malware detector can identify the adversarial sample as a malware file, the diffusion model is updated to further alter the content until the adversarial sample successfully deceives the malware detector. According to another example, an executable file is purified using a diffusion model prior to being inputted to a malware detector. The diffusion model may remove potential malware content from the executable file, thus improving the performance of the malware detector.
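The generation-side feedback loop in the abstract can be sketched as follows. The random byte perturbation standing in for a reverse-diffusion step, the byte-signature detector, and the escalating-strength update rule are all illustrative assumptions; the patent does not disclose the actual diffusion model or detector.

```python
import random

SIGNATURE = b"\x90\x90MALWARE"

def detector(blob: bytes) -> bool:
    # Toy byte-signature malware detector (assumption).
    return SIGNATURE in blob

def perturb(blob: bytes, strength: int, rng: random.Random) -> bytes:
    # Stand-in for one denoising/alteration step of the diffusion model:
    # XOR `strength` randomly chosen bytes with nonzero values.
    data = bytearray(blob)
    for _ in range(strength):
        i = rng.randrange(len(data))
        data[i] ^= rng.randrange(1, 256)
    return bytes(data)

def generate_adversarial(malware: bytes, max_rounds: int = 100, seed: int = 0) -> bytes:
    # Feedback loop from the abstract: if the detector still flags the
    # sample, "update" the model (here, raise the perturbation strength)
    # and alter the content again until the detector is deceived.
    rng = random.Random(seed)
    sample, strength = malware, 1
    for _ in range(max_rounds):
        if not detector(sample):
            return sample
        sample = perturb(malware, strength, rng)
        strength += 1
    raise RuntimeError("failed to evade detector")

malware = b"header" + SIGNATURE + b"trailer"
adversarial = generate_adversarial(malware)
```

The purification direction described in the second example is the same machinery run in reverse intent: the diffusion model is applied to an incoming executable to strip adversarial or malicious content before the detector sees it.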