Optimizing Inference Performance for Conformer

    Publication Number: US20230130634A1

    Publication Date: 2023-04-27

    Application Number: US17936547

    Application Date: 2022-09-29

    Applicant: Google LLC

    Abstract: A computer-implemented method includes receiving a sequence of acoustic frames as input to an automatic speech recognition (ASR) model. Here, the ASR model includes a causal encoder and a decoder. The method also includes generating, by the causal encoder, a first higher order feature representation for a corresponding acoustic frame in the sequence of acoustic frames. The method also includes generating, by the decoder, a first probability distribution over possible speech recognition hypotheses. Here, the causal encoder includes a stack of causal encoder layers each including a Recurrent Neural Network (RNN) Attention-Performer module that applies linear attention.
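
    A minimal sketch of the causal linear attention named in the abstract, written as an RNN-style recurrence in the Performer spirit: keys and values are accumulated into a running state, so each step attends in O(d) rather than over all past frames. The function name, the ReLU feature map, and the single-head layout are illustrative assumptions, not details taken from the patent.

        import torch

        def causal_linear_attention(q, k, v, eps=1e-6):
            # q, k, v: (time, dim) projections for one head (assumed layout).
            phi = lambda x: torch.relu(x) + eps   # positive feature map (assumption)
            q, k = phi(q), phi(k)
            state = torch.zeros(q.size(1), v.size(1))  # running sum of outer(k_t, v_t)
            norm = torch.zeros(q.size(1))              # running sum of k_t
            outputs = []
            for t in range(q.size(0)):                 # causal, RNN-like scan
                state = state + torch.outer(k[t], v[t])
                norm = norm + k[t]
                outputs.append((q[t] @ state) / (q[t] @ norm + eps))
            return torch.stack(outputs)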

FastEmit: Low-Latency Streaming ASR with Sequence-Level Emission Regularization

    Publication Number: US20220122586A1

    Publication Date: 2022-04-21

    Application Number: US17447285

    Application Date: 2021-09-09

    Applicant: Google LLC

    Abstract: A computer-implemented method of training a streaming speech recognition model that includes receiving, as input to the streaming speech recognition model, a sequence of acoustic frames. The streaming speech recognition model is configured to learn an alignment probability between the sequence of acoustic frames and an output sequence of vocabulary tokens. The vocabulary tokens include a plurality of label tokens and a blank token. At each output step, the method includes determining a first probability of emitting one of the label tokens and determining a second probability of emitting the blank token. The method also includes generating the alignment probability at a sequence level based on the first probability and the second probability. The method also includes applying a tuning parameter to the alignment probability at the sequence level to maximize the first probability of emitting one of the label tokens.
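
    A hedged sketch of the sequence-level regularization described above, reduced to a single alignment path for clarity: the tuning parameter up-weights the log-probability of emitting label tokens relative to the blank token, which pushes the model to emit earlier. The variable names and the single-path simplification are assumptions; the patented method operates over the full transducer alignment lattice.

        import torch

        def fastemit_style_loss(log_label, log_blank, lam=0.01):
            # log_label: log-probabilities of label-token emissions along a path
            # log_blank: log-probabilities of blank-token emissions along a path
            log_align = log_label.sum() + log_blank.sum()  # sequence-level alignment
            # The tuning parameter lam boosts the label-emission term, trading a
            # small amount of accuracy for lower emission latency.
            return -(log_align + lam * log_label.sum())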

Relative Margin for Contrastive Learning

    Publication Number: US12282857B1

    Publication Date: 2025-04-22

    Application Number: US18900506

    Application Date: 2024-09-27

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training neural networks through contrastive learning. In particular, the contrastive learning is modified to use a relative margin to adjust a training pair's contribution to optimization.
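
    A minimal sketch of a contrastive objective with a per-pair margin, assuming an InfoNCE-style loss over paired embeddings. The abstract does not define the relative margin precisely, so the diagonal-shift form below is an illustrative assumption.

        import torch
        import torch.nn.functional as F

        def margin_contrastive_loss(za, zb, margin=0.1, tau=0.07):
            # za, zb: (batch, dim) embeddings of paired views.
            za = F.normalize(za, dim=-1)
            zb = F.normalize(zb, dim=-1)
            logits = za @ zb.T / tau   # pairwise similarities
            # Subtract a margin from the positive (diagonal) logits, changing how
            # much each training pair contributes to optimization (assumed form).
            logits = logits - (margin / tau) * torch.eye(za.size(0))
            targets = torch.arange(za.size(0))
            return F.cross_entropy(logits, targets)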

    Systems and Methods for Pretraining Image Processing Models

    Publication Number: US20230281400A1

    Publication Date: 2023-09-07

    Application Number: US17685774

    Application Date: 2022-03-03

    Applicant: Google LLC

    CPC classification number: G06F40/58 G06F40/284 G06V10/766 G06V30/10

    Abstract: Example embodiments of the present disclosure relate to systems and methods for pretraining image-processing models on weakly-supervised image-text pairs. The pretraining can include receiving a training sequence for the machine-learned image-processing model. The training sequence can include text tokens and image tokens. A prefix sequence can contain the image tokens. A remainder sequence can include a remainder set of the text tokens. The pretraining can include determining, using the prefix sequence as an input to the machine-learned image-processing model, an objective based on recovery of the remainder sequence. The pretraining can include updating one or more learnable parameters of the machine-learned image-processing model based on the objective.
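
    A sketch of the pretraining objective as described: the prefix sequence holds the image tokens (optionally with a leading chunk of text tokens), and the model is trained to recover the remainder of the text. The `model` call is a hypothetical stand-in assumed to return one row of logits per remainder token; its API is an assumption for illustration.

        import torch
        import torch.nn.functional as F

        def prefix_pretrain_step(model, image_tokens, text_tokens, prefix_len):
            # Prefix sequence: image tokens plus a leading chunk of text tokens.
            prefix = torch.cat([image_tokens, text_tokens[:prefix_len]])
            remainder = text_tokens[prefix_len:]
            # Hypothetical model call: score the remainder given the prefix,
            # returning logits of shape (len(remainder), vocab_size).
            logits = model(prefix, remainder)
            objective = F.cross_entropy(logits, remainder)
            objective.backward()  # gradients used to update learnable parameters
            return objective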

Video-Text Modeling with Zero-Shot Transfer from Contrastive Captioners

    Publication Number: US20250124708A1

    Publication Date: 2025-04-17

    Application Number: US18694604

    Application Date: 2023-12-08

    Applicant: Google LLC

Abstract: Provided is an efficient approach to establishing a foundational video-text model for tasks including open-vocabulary video classification, text-to-video retrieval, video captioning, and video question-answering. Some example implementations include a model which can be referred to as VideoCoCa. Example implementations reuse a pretrained image-text contrastive captioner (CoCa) model and adapt it to video-text tasks with minimal extra training. While previous works adapt image-text models with various cross-frame fusion modules (for example, a cross-frame attention layer or a perceiver resampler) and finetune the modified architecture on video-text data, aspects of the present disclosure leverage the finding that the generative attentional pooling and contrastive attentional pooling layers in the image-text CoCa design are instantly adaptable to “flattened frame embeddings”, yielding a strong zero-shot transfer baseline for many video-text tasks.
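
    A sketch of the “flattened frame embeddings” idea: each frame is encoded by the pretrained image tower, the per-frame token embeddings are concatenated along the sequence axis, and the result is handed unchanged to the pretrained attentional poolers. Here `image_encoder` and `attentional_pool` are assumed stand-ins for the pretrained CoCa components, not actual APIs.

        import torch

        def flatten_frame_embeddings(image_encoder, frames):
            # frames: (num_frames, C, H, W); each encoding: (tokens_per_frame, dim)
            per_frame = [image_encoder(frame) for frame in frames]
            return torch.cat(per_frame, dim=0)  # (num_frames * tokens_per_frame, dim)

        # Usage (hypothetical components): the pooled output feeds the existing
        # contrastive and generative heads with no architecture changes.
        # video_embedding = attentional_pool(flatten_frame_embeddings(image_encoder, frames))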

Relative Margin for Contrastive Learning

    Publication Number: US20250111235A1

    Publication Date: 2025-04-03

    Application Number: US18900506

    Application Date: 2024-09-27

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training neural networks through contrastive learning. In particular, the contrastive learning is modified to use a relative margin to adjust a training pair's contribution to optimization.
