T-CELL RECEPTOR REPERTOIRE SELECTION PREDICTION WITH PHYSICAL MODEL AUGMENTED PSEUDO-LABELING

    Publication No.: WO2023069667A1

    Publication Date: 2023-04-27

    Application No.: PCT/US2022/047346

    Filing Date: 2022-10-21

    Abstract: Systems and methods for predicting T-cell receptor (TCR)-peptide interaction, including training a deep learning model for the prediction of TCR-peptide interaction by determining a multiple sequence alignment (MSA) for TCR-peptide pair sequences from a dataset of TCR-peptide pair sequences using a sequence analyzer, building TCR structures and peptide structures from the MSA and corresponding structures in the Protein Data Bank (PDB) using MODELLER, and generating an extended TCR-peptide training dataset based on docking energy scores determined by docking peptides to TCRs using physical modeling over the TCR structures and peptide structures built with MODELLER. TCR-peptide pairs are classified and labeled as positive or negative pairs using pseudo-labels based on the docking energy scores, and the deep learning model is iteratively retrained on the extended TCR-peptide training dataset and the pseudo-labels until convergence.
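
    The abstract above describes an iterative pseudo-labeling loop. Below is a minimal sketch of such a loop in Python, assuming precomputed docking energy scores, a generic binary classifier, and an illustrative energy cutoff; the threshold value, model choice, and helper names are assumptions, not details from the patent.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    ENERGY_CUTOFF = -20.0   # illustrative docking-energy threshold

    def pseudo_label(energies):
        """Pairs whose docking energy falls below the cutoff are treated as
        binding (positive, 1); the rest as non-binding (negative, 0)."""
        return (np.asarray(energies) < ENERGY_CUTOFF).astype(int)

    def retrain_until_convergence(x_labeled, y_labeled, x_unlabeled, energies,
                                  max_rounds=10):
        """Retrain on the labeled set extended with pseudo-labeled pairs,
        keeping only pairs where the current model agrees with the
        docking-based pseudo-label; stop once that selection is stable."""
        pseudo = pseudo_label(energies)
        model = LogisticRegression().fit(x_labeled, y_labeled)
        kept = None
        for _ in range(max_rounds):
            agree = model.predict(x_unlabeled) == pseudo
            if kept is not None and np.array_equal(agree, kept):
                break                    # converged: same pairs selected again
            kept = agree
            x_ext = np.vstack([x_labeled, x_unlabeled[kept]])
            y_ext = np.concatenate([y_labeled, pseudo[kept]])
            model = LogisticRegression().fit(x_ext, y_ext)
        return model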

    HIERARCHICAL WORD EMBEDDING SYSTEM

    Publication No.: WO2022216935A1

    Publication Date: 2022-10-13

    Application No.: PCT/US2022/023840

    Filing Date: 2022-04-07

    Abstract: Systems and methods for matching job descriptions with job applicants are provided. The method includes allocating each of one or more job applicants' curricula vitae (CVs) into sections 320; applying max-pooled word embedding 330 to each section of the job applicants' CVs; using concatenated max-pooling and average-pooling 340 to compose the section embeddings into an applicant's CV representation; allocating each of one or more job position descriptions into specified sections 220; applying max-pooled word embedding 230 to each section of the job position descriptions; using concatenated max-pooling and average-pooling 240 to compose the section embeddings into a job representation; calculating a cosine similarity 250, 350 between each of the job representations and each of the CV representations to perform job-to-applicant matching; and presenting an ordered list of the one or more job applicants 360 or an ordered list of the one or more job position descriptions 260 to a user.
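
    The pipeline above maps directly onto a few pooling and similarity operations. A compact sketch, assuming each section has already been tokenized into pretrained per-word vectors (all function names here are illustrative):

    import numpy as np

    def section_embedding(word_vectors):
        """Max-pool the word vectors of one CV or job-description section."""
        return np.max(np.stack(word_vectors), axis=0)

    def document_embedding(section_embeddings):
        """Concatenate max-pooling and average-pooling over the sections."""
        sections = np.stack(section_embeddings)
        return np.concatenate([sections.max(axis=0), sections.mean(axis=0)])

    def cosine_similarity(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def rank_applicants(job_sections, cv_sections_per_applicant):
        """Order applicants by cosine similarity of their CV representation
        to the job representation (job-to-applicant matching)."""
        job_vec = document_embedding(
            [section_embedding(s) for s in job_sections])
        scores = [cosine_similarity(job_vec, document_embedding(
                      [section_embedding(s) for s in cv]))
                  for cv in cv_sections_per_applicant]
        return sorted(range(len(scores)), key=lambda i: -scores[i])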

    KEYPOINT BASED ACTION LOCALIZATION

    Publication No.: WO2022165132A1

    Publication Date: 2022-08-04

    Application No.: PCT/US2022/014246

    Filing Date: 2022-01-28

    Abstract: A computer-implemented method is provided for action localization. The method includes converting (510) one or more video frames into person keypoints and object keypoints. The method further includes embedding (520) position, timestamp, instance, and type information with the person keypoints and object keypoints to obtain keypoint embeddings. The method also includes predicting (530), by a hierarchical transformer encoder using the keypoint embeddings, human actions and bounding box information of when and where the human actions occur in the one or more video frames.
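
    A sketch of the embedding step (520) in PyTorch: position, timestamp, instance, and type information are each embedded and summed into one token per keypoint before entering a transformer encoder. Dimensions and vocabulary sizes below are illustrative assumptions.

    import torch
    import torch.nn as nn

    class KeypointEmbedding(nn.Module):
        """Embed (x, y) position plus frame index, instance id, and keypoint
        type, and sum the pieces into a single token per keypoint."""
        def __init__(self, dim=128, max_frames=64, max_instances=16,
                     num_types=25):
            super().__init__()
            self.pos = nn.Linear(2, dim)                  # continuous (x, y)
            self.time = nn.Embedding(max_frames, dim)     # timestamp
            self.inst = nn.Embedding(max_instances, dim)  # person/object id
            self.kind = nn.Embedding(num_types, dim)      # e.g. elbow, knee

        def forward(self, xy, t, instance, ktype):
            return (self.pos(xy) + self.time(t)
                    + self.inst(instance) + self.kind(ktype))

    # The resulting tokens would then feed an encoder, e.g.:
    # encoder = nn.TransformerEncoder(
    #     nn.TransformerEncoderLayer(d_model=128, nhead=8), num_layers=2)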

    LEARNING ORTHOGONAL FACTORIZATION IN GAN LATENT SPACE

    Publication No.: WO2022169681A1

    Publication Date: 2022-08-11

    Application No.: PCT/US2022/014211

    Filing Date: 2022-01-28

    Abstract: A method for learning disentangled representations of videos is presented. The method includes feeding (1001) each frame of video data into an encoder to produce a sequence of visual features, passing (1003) the sequence of visual features through a deep convolutional network to obtain a posterior of a dynamic latent variable and a posterior of a static latent variable, sampling (1005) static and dynamic representations from the posterior of the static latent variable and the posterior of the dynamic latent variable, respectively, concatenating (1007) the static and dynamic representations to be fed into a decoder to generate reconstructed sequences, and applying (1009) three regularizers to the dynamic and static latent variables to trigger representation disentanglement. To facilitate the disentangled sequential representation learning, orthogonal factorization in generative adversarial network (GAN) latent space is leveraged to pre-train a generator as a decoder in the method.
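
    Steps (1005) and (1007) amount to sampling from two Gaussian posteriors and concatenating the results for the decoder. A sketch under that assumption, using the reparameterization trick, with shapes chosen for illustration:

    import torch

    def reparameterize(mu, logvar):
        """Sample z ~ N(mu, sigma^2) via the reparameterization trick."""
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def decode_sequence(static_mu, static_logvar, dyn_mu, dyn_logvar, decoder):
        """Sample one static latent per video and one dynamic latent per
        frame, broadcast the static code over time, and decode."""
        z_static = reparameterize(static_mu, static_logvar)    # (B, d_s)
        z_dynamic = reparameterize(dyn_mu, dyn_logvar)         # (B, T, d_d)
        T = z_dynamic.shape[1]
        z_static_seq = z_static.unsqueeze(1).expand(-1, T, -1)
        return decoder(torch.cat([z_static_seq, z_dynamic], dim=-1))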

    VIDEO CAPTURING DEVICE FOR PREDICTING SPECIAL DRIVING SITUATIONS
    Status: Pending (Published)

    Publication No.: WO2017177008A1

    Publication Date: 2017-10-12

    Application No.: PCT/US2017/026365

    Filing Date: 2017-04-06

    Abstract: A video device for predicting driving situations while a person drives a car is presented. The video device includes multi-modal sensors and knowledge data for extracting feature maps, a deep neural network trained with training data to recognize real-time traffic scenes (TSs) from a viewpoint of the car, and a user interface (UI) for displaying the real-time TSs. The real-time TSs are compared to predetermined TSs to predict the driving situations. The video device can be a video camera. The video camera can be mounted to a windshield of the car. Alternatively, the video camera can be incorporated into the dashboard or console area of the car. The video camera can calculate speed, velocity, type, and/or position information related to other cars within the real-time TS. The video camera can also include warning indicators, such as light emitting diodes (LEDs) that emit different colors for the different driving situations.

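    One way the comparison of real-time traffic scenes against predetermined scenes could look, assuming both are summarized as feature vectors: a nearest-neighbor match by cosine similarity with a confidence threshold. The matching rule and threshold are assumptions for illustration, not the patent's method.

    import numpy as np

    def predict_situation(scene_vec, known_scenes, labels, threshold=0.8):
        """Match a recognized traffic-scene feature vector against the
        predetermined scenes; return the best label, or None if no match
        clears the (illustrative) similarity threshold."""
        known = np.stack(known_scenes)
        sims = known @ scene_vec / (
            np.linalg.norm(known, axis=1) * np.linalg.norm(scene_vec))
        best = int(np.argmax(sims))
        return labels[best] if sims[best] >= threshold else None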

    MULTI-MODAL DRIVING DANGER PREDICTION SYSTEM FOR AUTOMOBILES
    Status: Pending (Published)

    Publication No.: WO2017177005A1

    Publication Date: 2017-10-12

    Application No.: PCT/US2017/026362

    Filing Date: 2017-04-06

    Abstract: A computer-implemented method for training a deep neural network to recognize traffic scenes (TSs) from multi-modal sensors and knowledge data is presented. The computer-implemented method includes receiving data from the multi-modal sensors and the knowledge data, and extracting feature maps from the multi-modal sensors and the knowledge data by using a traffic participant (TP) extractor to generate a first set of data, using a static objects extractor to generate a second set of data, and using an additional information extractor. The computer-implemented method further includes training the deep neural network, with training data, to recognize the TSs from a viewpoint of a vehicle.

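    A sketch of the three-extractor design in PyTorch; the extractor internals, feature sizes, and number of scene classes are illustrative stand-ins for the traffic participant, static objects, and additional information extractors named above.

    import torch
    import torch.nn as nn

    class TrafficSceneNet(nn.Module):
        """Fuse feature maps from three extractors and classify the scene."""
        def __init__(self, feat_dim=256, num_scenes=10):
            super().__init__()
            self.tp_extractor = nn.LazyLinear(feat_dim)      # traffic participants
            self.static_extractor = nn.LazyLinear(feat_dim)  # static objects
            self.info_extractor = nn.LazyLinear(feat_dim)    # additional info
            self.classifier = nn.Linear(3 * feat_dim, num_scenes)

        def forward(self, sensor_data, map_data, extra_info):
            fused = torch.cat([self.tp_extractor(sensor_data),
                               self.static_extractor(map_data),
                               self.info_extractor(extra_info)], dim=-1)
            return self.classifier(fused)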

    GENERATING MINORITY-CLASS EXAMPLES FOR TRAINING DATA

    Publication No.: WO2022216591A1

    Publication Date: 2022-10-13

    Application No.: PCT/US2022/023280

    Filing Date: 2022-04-04

    Abstract: Methods and systems for training a model include encoding (203) training peptide sequences using an encoder model. A new peptide sequence is generated (202) using a generator model. The encoder model, the generator model, and a discriminator model are trained (206) to cause the generator model to generate new peptides that the discriminator mistakes for the training peptide sequences, including learning projection vectors with respective cross-entropy losses for binding sequences and non-binding sequences.
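
    A sketch of one adversarial training step with a projection-style discriminator, where learned per-class embeddings stand in for the projection vectors for binding and non-binding sequences; the loss wiring, shapes, and the omission of the encoder's own optimizer are assumptions made for brevity.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ProjectionDiscriminator(nn.Module):
        """Discriminator with learned projection vectors for binding (1) and
        non-binding (0) sequences, in the usual projection-discriminator
        form: logit = head(h) + <proj(y), h>."""
        def __init__(self, feat_dim=128):
            super().__init__()
            self.body = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU())
            self.head = nn.Linear(feat_dim, 1)
            self.proj = nn.Embedding(2, feat_dim)  # per-class projections

        def forward(self, feats, labels):
            h = self.body(feats)
            return self.head(h) + (self.proj(labels) * h).sum(1, keepdim=True)

    def gan_step(encoder, generator, disc, real_seqs, labels, opt_g, opt_d,
                 noise_dim=64):
        """One step: cross-entropy losses push the discriminator to separate
        encoded real peptides from generated ones, and the generator to
        produce sequences the discriminator mistakes for real ones."""
        real = encoder(real_seqs)
        fake = generator(torch.randn(real.size(0), noise_dim))
        ones = torch.ones(real.size(0), 1)
        zeros = torch.zeros(real.size(0), 1)

        d_loss = (F.binary_cross_entropy_with_logits(disc(real, labels), ones)
                  + F.binary_cross_entropy_with_logits(
                        disc(fake.detach(), labels), zeros))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        g_loss = F.binary_cross_entropy_with_logits(disc(fake, labels), ones)
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()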
