-
Publication No.: US20230367772A1
Publication Date: 2023-11-16
Application No.: US17741811
Filing Date: 2022-05-11
Applicant: Adobe Inc.
Inventor: Subrata Mitra , Yash Gadhia , Tong Yu , Shaddy Garg , Nikhil Sheoran , Arjun Kashettiwar , Anjali Yadav
IPC: G06F16/2455 , G06F16/2457 , G06F16/2458 , G06F16/2453 , G06K9/62
CPC classification number: G06F16/2455 , G06F16/2457 , G06F16/2474 , G06F16/24542 , G06K9/6262
Abstract: Some techniques described herein relate to utilizing a machine-learning (ML) model to select respective samples for queries of a query sequence. In one example, a method includes receiving a query in a query sequence, where the query is directed toward a dataset. Samples are available as down-sampled versions of the dataset. The method further includes applying an agent to select, for the query, a sample from among the samples of the dataset. The agent includes an ML model trained, such as via intent-based reinforcement learning, to select respective samples for queries. The query is then executed against the sample to output a response.
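The per-query sample-selection loop in this abstract can be illustrated with a small sketch. Below is a toy epsilon-greedy agent; the sample names, intents, and reward values are all invented for illustration and are not the patented intent-based reinforcement-learning model:

```python
import random

class SampleSelectionAgent:
    """Toy agent that picks a down-sampled version of a dataset for each
    query, keyed by a coarse query intent (hypothetical illustration)."""

    def __init__(self, sample_names, epsilon=0.2, lr=0.5):
        self.sample_names = sample_names  # e.g. 1%, 10%, and full dataset
        self.epsilon = epsilon            # exploration rate
        self.lr = lr                      # update step size
        self.q = {}                       # (intent, sample) -> estimated reward

    def select(self, intent):
        # Epsilon-greedy choice over the available samples for this intent.
        if random.random() < self.epsilon:
            return random.choice(self.sample_names)
        return max(self.sample_names, key=lambda s: self.q.get((intent, s), 0.0))

    def update(self, intent, sample, reward):
        # Move the stored estimate toward the observed reward.
        key = (intent, sample)
        old = self.q.get(key, 0.0)
        self.q[key] = old + self.lr * (reward - old)

random.seed(0)
agent = SampleSelectionAgent(["1pct", "10pct", "full"])
# Invented rewards: accuracy minus latency cost, best served by the 10% sample.
REWARD = {"1pct": 0.2, "10pct": 0.9, "full": 0.5}
for _ in range(200):
    s = agent.select("trend")
    agent.update("trend", s, REWARD[s])
best = max(agent.sample_names, key=lambda s: agent.q.get(("trend", s), 0.0))
```

After the loop, the agent's greedy choice for the `trend` intent settles on the sample whose accuracy/latency trade-off earned the highest reward, so later queries with that intent run against a cheap sample rather than the full dataset.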
-
Publication No.: US20250103813A1
Publication Date: 2025-03-27
Application No.: US18472746
Filing Date: 2023-09-22
Applicant: Adobe Inc.
Inventor: Ruiyi Zhang , Zhendong Chu , Vlad Morariu , Tong Yu , Rajiv Jain , Nedim Lipka , Jiuxiang Gu
IPC: G06F40/295
Abstract: This disclosure describes one or more implementations of systems, non-transitory computer-readable media, and methods that train a named entity recognition (NER) model on noisy training data through a self-cleaning discriminator model. For example, the disclosed systems utilize a self-cleaning guided denoising framework to improve NER learning on noisy training data via a guidance training set. In one or more implementations, the disclosed systems utilize, within the denoising framework, an auxiliary discriminator model to correct noise in the noisy training data while training an NER model on that data. For example, while training the NER model to predict labels from the noisy training data, the disclosed systems utilize the discriminator model to detect noisy NER labels and reweight those labels when they are provided for training the NER model.
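The reweighting idea can be shown in miniature: a discriminator assigns each training example a probability of carrying a clean label, and those probabilities scale each example's contribution to the NER loss. This is a schematic sketch with invented numbers, not the patented framework:

```python
def reweighted_loss(per_example_losses, clean_probs):
    """Weighted-average loss where the discriminator's clean-label
    probabilities down-weight likely-noisy examples."""
    assert len(per_example_losses) == len(clean_probs)
    total = sum(loss * w for loss, w in zip(per_example_losses, clean_probs))
    return total / max(sum(clean_probs), 1e-8)

# Example: the second label is flagged as certainly noisy (weight 0.0),
# so its large loss does not pollute the training signal.
loss = reweighted_loss([1.0, 10.0], [1.0, 0.0])
```

A real discriminator would produce the `clean_probs` from the example itself (and be trained against the guidance set); here they are supplied by hand to isolate the reweighting step.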
-
Publication No.: US20250077549A1
Publication Date: 2025-03-06
Application No.: US18459081
Filing Date: 2023-08-31
Applicant: Adobe Inc.
Inventor: William Brandon GEORGE , Wei Zhang , Tyler Rasmussen , Tung Mai , Tong Yu , Sungchul Kim , Shunan Guo , Samuel Nephi Grigg , Said Kobeissi , Ryan Rossi , Ritwik Sinha , Eunyee Koh , Prithvi Bhutani , Jordan Henson Walker , Abhisek Trivedi
IPC: G06F16/28 , G06F16/242 , G06F40/205 , G06F40/40
Abstract: Graphic visualizations, such as charts or graphs conveying data attribute values, can be generated based on natural language queries, i.e., natural language requests. To do so, a natural language request is parsed into n-grams, and from the n-grams, word embeddings are determined using a natural language model. Data attributes for the graphic visualization are discovered in the vector space from the word embeddings. The type of graphic visualization can be determined based on a request intent, which is determined using a trained intent classifier. The graphic visualization is generated to include the data attribute values of the discovered data attributes, and in accordance with the graphic visualization type.
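A schematic of that pipeline, with hand-made two-dimensional "embeddings" standing in for a real language model and a keyword stub standing in for the trained intent classifier (all names, vectors, and chart mappings are invented for illustration):

```python
from math import sqrt

# Toy word vectors standing in for a language-model embedding space.
EMB = {
    "revenue": (0.9, 0.1), "sales": (0.85, 0.2),
    "month":   (0.1, 0.9), "time":  (0.15, 0.85),
}
ATTRIBUTES = ["sales", "month"]  # dataset columns to match against
CHART_FOR_INTENT = {"trend": "line", "comparison": "bar"}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def match_attribute(token):
    # Nearest dataset attribute to the query token in embedding space.
    if token not in EMB:
        return None
    return max(ATTRIBUTES, key=lambda a: cosine(EMB[token], EMB[a]))

def classify_intent(query):
    # Stand-in for a trained intent classifier.
    return "trend" if any(w in query.split() for w in ("over", "trend")) else "comparison"

query = "show revenue over time"
attrs = [a for a in (match_attribute(t) for t in query.split()) if a]
chart = CHART_FOR_INTENT[classify_intent(query)]
```

"revenue" lands nearest the `sales` column and "time" nearest `month`, and the trend intent selects a line chart, mirroring the attribute-discovery and intent-classification steps in the abstract.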
-
Publication No.: US20250013866A1
Publication Date: 2025-01-09
Application No.: US18347877
Filing Date: 2023-07-06
Applicant: ADOBE INC.
Inventor: Handong Zhao , Yue Bai , Zhe Lin , Ajinkya Gorakhnath Kale , Jiuxiang Gu , Tong Yu , Sungchul Kim
Abstract: Systems and methods for reducing inference time of vision-language models, as well as for multimodal search, are described herein. Embodiments are configured to obtain an embedding neural network. The embedding neural network is pretrained to embed inputs from a plurality of modalities into a multimodal embedding space. Embodiments are further configured to perform a first progressive pruning stage, where the first progressive pruning stage includes a first pruning of the embedding neural network and a first fine-tuning of the embedding neural network. Embodiments then perform a second progressive pruning stage based on an output of the first progressive pruning stage, where the second progressive pruning stage includes a second pruning of the embedding neural network and a second fine-tuning of the embedding neural network.
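The two-stage procedure reads, in outline: prune, fine-tune, then prune and fine-tune again starting from the first stage's output. A toy sketch over a flat weight list, using magnitude pruning and a placeholder "fine-tune" (the fractions and weight values are invented; a real embedding network would be pruned structurally and retrained on data):

```python
def prune(weights, frac):
    """Zero out the smallest-magnitude `frac` of the remaining nonzero weights."""
    nonzero = [i for i, w in enumerate(weights) if w != 0.0]
    k = int(len(nonzero) * frac)
    order = sorted(nonzero, key=lambda i: abs(weights[i]))
    pruned = list(weights)
    for i in order[:k]:
        pruned[i] = 0.0
    return pruned

def fine_tune(weights):
    # Placeholder: a real stage would retrain the surviving weights;
    # here we just scale them to mimic post-pruning recovery.
    return [w * 1.05 if w != 0.0 else 0.0 for w in weights]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.2]
stage1 = fine_tune(prune(w, 0.25))       # first progressive pruning stage
stage2 = fine_tune(prune(stage1, 0.25))  # second stage on stage-1 output
```

Note that the second stage prunes a fraction of the weights that *survived* stage one, so sparsity accumulates progressively rather than being imposed all at once.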
-
Publication No.: US20240311221A1
Publication Date: 2024-09-19
Application No.: US18120773
Filing Date: 2023-03-13
Applicant: Adobe Inc.
Inventor: Jaeho Bang , Sungchul Kim , Ryan A. Rossi , Tong Yu , Handong Zhao
CPC classification number: G06F11/0769 , G06F11/0778 , G06N20/00
Abstract: In implementations of systems for detection and interpretation of log anomalies, a computing device implements an anomaly system to receive input data describing a two-dimensional representation of log templates and timestamps. The anomaly system processes the input data using a machine learning model trained on training data to detect anomalies in two-dimensional representations of log templates and timestamps. A log anomaly is detected in the two-dimensional representation using the machine learning model based on processing the input data. The anomaly system generates an indication of an interpretation of the log anomaly for display in a user interface based on a log template included in the two-dimensional representation.
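One simple reading of the two-dimensional representation is a matrix of occurrence counts, one row per log template and one column per timestamp bucket; a cell whose count deviates strongly from its template's typical level can be flagged, and the template text then serves as the interpretation. The z-score statistics below are a hand-rolled stand-in for the patented machine-learning model, with invented counts:

```python
def detect_anomalies(counts, threshold=2.0):
    """Flag (template, timestamp) cells whose count deviates strongly
    from that template's mean across timestamp buckets."""
    anomalies = []
    for t, row in enumerate(counts):
        mean = sum(row) / len(row)
        var = sum((c - mean) ** 2 for c in row) / len(row)
        std = var ** 0.5 or 1.0  # guard against constant rows
        for ts, c in enumerate(row):
            if abs(c - mean) / std > threshold:
                anomalies.append((t, ts))
    return anomalies

# Rows are log templates, columns are timestamp buckets (invented data).
grid = [
    [5, 6, 5, 5, 6, 5, 5, 50],  # template 0: spike in the last bucket
    [2, 2, 3, 2, 2, 2, 3, 2],   # template 1: steady
]
flagged = detect_anomalies(grid)
```

The flagged `(template, timestamp)` pair identifies both when the anomaly occurred and which template to surface in the user-interface interpretation.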
-
Publication No.: US20230230198A1
Publication Date: 2023-07-20
Application No.: US17576091
Filing Date: 2022-01-14
Applicant: Adobe Inc.
Inventor: Ruiyi Zhang , Yufan Zhou , Christopher Tensmeyer , Jiuxiang Gu , Tong Yu , Tong Sun
CPC classification number: G06T3/0056 , G06T11/00 , G10L15/22 , G10L15/26 , G06N3/04 , G10L2015/223
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that implement a neural network framework for interactive multi-round image generation from natural language inputs. Specifically, the disclosed systems provide an intelligent framework (i.e., a text-based interactive image generation model) that facilitates a multi-round image generation and editing workflow that comports with arbitrary input text and synchronous interaction. In particular embodiments, the disclosed systems utilize natural language feedback for conditioning a generative neural network that performs text-to-image generation and text-guided image modification. For example, the disclosed systems utilize a trained model to inject textual features from natural language feedback into a unified joint embedding space for generating text-informed style vectors. In turn, the disclosed systems can generate an image with semantically meaningful features that map to the natural language feedback. Moreover, the disclosed systems can persist these semantically meaningful features throughout a refinement process and across generated images.
-
Publication No.: US20250028751A1
Publication Date: 2025-01-23
Application No.: US18355901
Filing Date: 2023-07-20
Applicant: Adobe Inc.
Inventor: Tong Yu , Kaige Xie , Haoliang Wang , Junda Wu , Handong Zhao , Ruiyi Zhang , Kanak Vivek Mahadik , Ani Nenkova
Abstract: Dialogue skeleton assisted prompt transfer for dialogue summarization techniques are described that support training of a language model to perform dialogue summarization in a few-shot scenario. A processing device, for instance, receives a training dataset that includes training dialogues. The processing device then generates dialogue skeletons based on the training dialogues using one or more perturbation-based probes. The processing device trains a language model using prompt transfer between a source task, e.g., dialogue state tracking, and a target task, e.g., dialogue summarization, using the dialogue skeletons as supervision. The processing device then receives an input dialogue and uses the trained language model to generate a summary of the input dialogue.
-
Publication No.: US12079217B2
Publication Date: 2024-09-03
Application No.: US17741811
Filing Date: 2022-05-11
Applicant: Adobe Inc.
Inventor: Subrata Mitra , Yash Gadhia , Tong Yu , Shaddy Garg , Nikhil Sheoran , Arjun Kashettiwar , Anjali Yadav
IPC: G06F16/2455 , G06F16/2453 , G06F16/2457 , G06F16/2458 , G06F18/21
CPC classification number: G06F16/2455 , G06F16/24542 , G06F16/2457 , G06F16/2474 , G06F18/217
Abstract: Some techniques described herein relate to utilizing a machine-learning (ML) model to select respective samples for queries of a query sequence. In one example, a method includes receiving a query in a query sequence, where the query is directed toward a dataset. Samples are available as down-sampled versions of the dataset. The method further includes applying an agent to select, for the query, a sample from among the samples of the dataset. The agent includes an ML model trained, such as via intent-based reinforcement learning, to select respective samples for queries. The query is then executed against the sample to output a response.
-
Publication No.: US20240152771A1
Publication Date: 2024-05-09
Application No.: US17979843
Filing Date: 2022-11-03
Applicant: Adobe Inc.
Inventor: Can Qin , Sungchul Kim , Tong Yu , Ryan A. Rossi , Handong Zhao
IPC: G06N5/02
CPC classification number: G06N5/02
Abstract: Tabular data machine-learning model techniques and systems are described. In one example, common-sense knowledge is infused into training data through use of a knowledge graph to provide external knowledge to supplement a tabular data corpus. In another example, a dual-path architecture is employed to configure an adapter module. In an implementation, the adapter module is added as part of a pre-trained machine-learning model for general-purpose tabular models. Specifically, dual-path adapters are trained using the knowledge graphs and semantically augmented training data. A path-wise attention layer is applied to fuse a cross-modality representation of the two paths into a final result.
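The path-wise attention layer can be sketched as a softmax over two learned path scores that mixes the tabular-path and knowledge-path representations elementwise. This is a minimal illustration with invented vectors and scores, not the patented adapter architecture:

```python
from math import exp

def pathwise_attention(tabular_vec, knowledge_vec, path_scores):
    """Fuse the two path representations using softmax weights
    computed over the per-path scores."""
    e = [exp(s) for s in path_scores]
    w_tab, w_kg = e[0] / sum(e), e[1] / sum(e)
    return [w_tab * t + w_kg * k for t, k in zip(tabular_vec, knowledge_vec)]

# Equal scores weight both paths evenly; a dominant tabular score
# would make the fused vector track the tabular path instead.
fused = pathwise_attention([1.0, 1.0], [3.0, 3.0], [0.0, 0.0])
```

In a trained model the `path_scores` would themselves be produced by an attention sub-network from the two representations; fixing them by hand here isolates the fusion arithmetic.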
-
Publication No.: US20230143721A1
Publication Date: 2023-05-11
Application No.: US17524282
Filing Date: 2021-11-11
Applicant: ADOBE INC.
Inventor: Sungchul Kim , Subrata Mitra , Ruiyi Zhang , Rui Wang , Handong Zhao , Tong Yu
IPC: G06F40/295 , G06N20/00
CPC classification number: G06F40/295 , G06N20/00
Abstract: Embodiments of the technology described herein describe a machine classifier capable of continually learning new classes through a continual few-shot learning approach. A natural language processing (NLP) machine classifier may initially be trained to identify a plurality of other classes through a conventional training process. In order to learn a new class, natural-language training data for the new class is generated. The training data for the new class may be few-shot training data. The training also uses synthetic training data that represents each of the plurality of other classes. The synthetic training data may be generated through a model inversion of the original classifier. The synthetic training data and the natural-language training data are used to retrain the NLP classifier to identify text in the plurality of other classes and the new class.
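The retraining mix can be sketched as follows: a handful of real examples for the new class plus synthetic examples of every old class. Here `fake_inverter` is a hypothetical stand-in for real model inversion of the original classifier, and all class names and texts are invented:

```python
def build_retraining_set(few_shot_new, synthesize, old_classes, per_class=3):
    """Combine few-shot (text, label) examples of the new class with
    synthetic replay examples for each previously learned class."""
    data = list(few_shot_new)
    for cls in old_classes:
        data.extend((synthesize(cls), cls) for _ in range(per_class))
    return data

# Hypothetical stand-in for model inversion of the original classifier.
fake_inverter = lambda cls: f"synthetic text for {cls}"
train = build_retraining_set(
    [("book a flight", "travel")],  # few-shot data for the new class
    fake_inverter,
    ["billing", "support"],         # classes the model already knows
)
```

Because every old class is represented synthetically, retraining on `train` can teach the new class without access to the original corpus and without catastrophically forgetting the existing classes.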