-
Publication Number: US20210192972A1
Publication Date: 2021-06-24
Application Number: US17129541
Filing Date: 2020-12-21
Applicant: SRI International
Inventor: Girish Acharya , Louise Yarnall , Anirban Roy , Michael Wessel , Yi Yao , John J. Byrnes , Dayne Freitag , Zachary Weiler , Paul Kalmar
Abstract: This disclosure describes machine learning techniques for capturing human knowledge for performing a task. In one example, a video device obtains video data of a first user performing the task and one or more sensors generate sensor data during performance of the task. An audio device obtains audio data describing performance of the task. A computation engine applies a machine learning system to correlate the video data to the audio data and sensor data to identify portions of the video, sensor, and audio data that depict a same step of a plurality of steps for performing the task. The machine learning system further processes the correlated data to update a domain model defining performance of the task. A training unit applies the domain model to generate training information for performing the task. An output device outputs the training information for use in training a second user to perform the task.
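The core correlation idea in this abstract, grouping video, audio, and sensor segments that cover the same span of time into a single task step, can be illustrated with a minimal Python sketch. Every class, function name, and threshold below is a hypothetical illustration, not the patented method.

```python
# Hypothetical sketch: aligning video, audio, and sensor segments into task
# steps by temporal overlap. Names and data structures are illustrative only.
from dataclasses import dataclass

@dataclass
class Segment:
    source: str   # "video", "audio", or "sensor"
    start: float  # seconds from the start of the recording
    end: float
    label: str    # e.g. recognized action or transcribed narration

def overlap(a: Segment, b: Segment) -> float:
    """Length of the time interval shared by two segments, in seconds."""
    return max(0.0, min(a.end, b.end) - max(a.start, b.start))

def correlate_steps(video, audio, sensor, min_overlap=1.0):
    """Group segments from the three modalities that overlap in time,
    treating each group as evidence for the same step of the task."""
    steps = []
    for v in video:
        matched_audio = [a for a in audio if overlap(v, a) >= min_overlap]
        matched_sensor = [s for s in sensor if overlap(v, s) >= min_overlap]
        steps.append({"video": v, "audio": matched_audio, "sensor": matched_sensor})
    return steps

if __name__ == "__main__":
    video = [Segment("video", 0, 12, "tighten bolt"), Segment("video", 12, 30, "attach panel")]
    audio = [Segment("audio", 1, 11, "first, tighten the bolt"), Segment("audio", 13, 28, "now attach the panel")]
    sensor = [Segment("sensor", 0, 10, "torque spike"), Segment("sensor", 14, 29, "pressure applied")]
    for i, step in enumerate(correlate_steps(video, audio, sensor), 1):
        print(f"step {i}: {step['video'].label} / "
              f"{[a.label for a in step['audio']]} / {[s.label for s in step['sensor']]}")
```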
-
Publication Number: US20170160813A1
Publication Date: 2017-06-08
Application Number: US15332494
Filing Date: 2016-10-24
Applicant: SRI International
Inventor: Ajay Divakaran , Amir Tamrakar , Girish Acharya , William Mark , Greg Ho , Jihua Huang , David Salter , Edgar Kalns , Michael Wessel , Min Yin , James Carpenter , Brent Mombourquette , Kenneth Nitz , Elizabeth Shriberg , Eric Law , Michael Frandsen , Hyong-Gyun Kim , Cory Albright , Andreas Tsiartas
IPC: G06F3/01 , G06F3/00 , G06F3/16 , G06N99/00 , G10L25/63 , G10L15/22 , G10L15/06 , G10L15/02 , G06K9/00 , G10L15/18
CPC classification number: G06F3/017 , G06F3/0304 , G06F3/167 , G06K9/00221 , G06K9/00335 , G06N3/006 , G06N5/022 , G06N7/005 , G06N20/00 , G10L15/1815 , G10L15/1822 , G10L15/22 , G10L25/63 , G10L2015/228
Abstract: Methods, computing devices, and computer-program products are provided for implementing a virtual personal assistant. In various implementations, a virtual personal assistant can be configured to receive sensory input, including at least two different types of information. The virtual personal assistant can further be configured to determine semantic information from the sensory input, and to identify a context-specific framework. The virtual personal assistant can further be configured to determine a current intent. Determining the current intent can include using the semantic information and the context-specific framework. The virtual personal assistant can further be configured to determine a current input state. Determining the current input state can include using the semantic information and one or more behavioral models. The behavioral models can include one or more interpretations of previously-provided semantic information. The virtual personal assistant can further be configured to determine an action using the current intent and the current input state.
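A minimal Python sketch can make the described flow concrete: semantic information plus a context-specific framework yields a current intent, and the intent combined with a current input state selects an action. The keyword tables and rules below are illustrative assumptions, not the claimed implementation.

```python
# Hypothetical sketch of the described flow: semantic information plus a
# context-specific framework yields a current intent, which (together with an
# input state) selects an action. All names and rules here are illustrative.

CONTEXT_FRAMEWORKS = {
    "banking": {"check balance": "intent_balance", "transfer": "intent_transfer"},
    "travel":  {"book": "intent_book_flight", "cancel": "intent_cancel_trip"},
}

def determine_intent(semantic_info: dict, context: str) -> str:
    """Map extracted keywords onto an intent defined by the active framework."""
    framework = CONTEXT_FRAMEWORKS.get(context, {})
    for keyword, intent in framework.items():
        if keyword in semantic_info.get("text", "").lower():
            return intent
    return "intent_unknown"

def determine_action(intent: str, input_state: str) -> str:
    """Combine intent and inferred user state into a response action."""
    if input_state == "frustrated":
        return f"escalate_to_human({intent})"
    return f"execute({intent})"

if __name__ == "__main__":
    semantic_info = {"text": "Can you check balance on my savings account?"}
    intent = determine_intent(semantic_info, context="banking")
    print(determine_action(intent, input_state="neutral"))
```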
-
Publication Number: US12118773B2
Publication Date: 2024-10-15
Application Number: US17129541
Filing Date: 2020-12-21
Applicant: SRI International
Inventor: Girish Acharya , Louise Yarnall , Anirban Roy , Michael Wessel , Yi Yao , John J. Byrnes , Dayne Freitag , Zachary Weiler , Paul Kalmar
IPC: G06V10/82 , G06F18/22 , G06N20/00 , G06V20/20 , G06V20/40 , G06V30/19 , G06V30/262 , G06V40/10 , G06V40/20 , G09B5/06 , G09B19/00 , G10L15/18 , G10L25/57
CPC classification number: G06V10/82 , G06F18/22 , G06N20/00 , G06V20/20 , G06V20/41 , G06V30/19173 , G06V30/274 , G06V40/10 , G06V40/113 , G06V40/28 , G09B5/065 , G09B19/003 , G10L15/1815 , G10L25/57
Abstract: This disclosure describes machine learning techniques for capturing human knowledge for performing a task. In one example, a video device obtains video data of a first user performing the task and one or more sensors generate sensor data during performance of the task. An audio device obtains audio data describing performance of the task. A computation engine applies a machine learning system to correlate the video data to the audio data and sensor data to identify portions of the video, sensor, and audio data that depict a same step of a plurality of steps for performing the task. The machine learning system further processes the correlated data to update a domain model defining performance of the task. A training unit applies the domain model to generate training information for performing the task. An output device outputs the training information for use in training a second user to perform the task.
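This granted version of the same disclosure also describes a domain model that is updated from the correlated data and a training unit that renders it for a second user; the sketch below illustrates that second half under simple assumptions. The data layout and function names are hypothetical, not taken from the patent.

```python
# Hypothetical sketch of the "domain model" and "training unit" roles described
# above: ordered steps observed from an expert are folded into a model, which
# is then rendered as step-by-step training text for a trainee. Illustrative only.
from collections import OrderedDict

def update_domain_model(domain_model: "OrderedDict[str, dict]", observed_steps):
    """Merge newly observed (step name, narration) pairs into the model,
    keeping first-seen order and counting how often each step appears."""
    for name, narration in observed_steps:
        entry = domain_model.setdefault(name, {"narrations": [], "count": 0})
        entry["narrations"].append(narration)
        entry["count"] += 1
    return domain_model

def generate_training_info(domain_model):
    """Render the domain model as numbered instructions for a trainee."""
    lines = []
    for i, (name, entry) in enumerate(domain_model.items(), 1):
        lines.append(f"Step {i}: {name} -- {entry['narrations'][-1]}")
    return "\n".join(lines)

if __name__ == "__main__":
    model = OrderedDict()
    update_domain_model(model, [("tighten bolt", "use the torque wrench at 40 Nm"),
                                ("attach panel", "align the panel before pressing")])
    print(generate_training_info(model))
```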
-
Publication Number: US12282606B2
Publication Date: 2025-04-22
Application Number: US17107958
Filing Date: 2020-12-01
Applicant: SRI International
Inventor: Ajay Divakaran , Amir Tamrakar , Girish Acharya , William Mark , Greg Ho , Jihua Huang , David Salter , Edgar Kalns , Michael Wessel , Min Yin , James Carpenter , Brent Mombourquette , Kenneth Nitz , Elizabeth Shriberg , Eric Law , Michael Frandsen , Hyong-Gyun Kim , Cory Albright , Andreas Tsiartas
IPC: G06F3/03 , G06F3/01 , G06F3/16 , G06N3/006 , G06N5/022 , G06N20/00 , G06N20/10 , G06V40/16 , G06V40/20 , G10L15/18 , G10L15/22 , G10L25/63 , G06N7/01
Abstract: Methods, computing devices, and computer-program products are provided for implementing a virtual personal assistant. In various implementations, a virtual personal assistant can be configured to receive sensory input, including at least two different types of information. The virtual personal assistant can further be configured to determine semantic information from the sensory input, and to identify a context-specific framework. The virtual personal assistant can further be configured to determine a current intent. Determining the current intent can include using the semantic information and the context-specific framework. The virtual personal assistant can further be configured to determine a current input state. Determining the current input state can include using the semantic information and one or more behavioral models. The behavioral models can include one or more interpretations of previously-provided semantic information. The virtual personal assistant can further be configured to determine an action using the current intent and the current input state.
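The "behavioral models" step, inferring a current input state from the latest semantic information together with interpretations of earlier utterances, can be sketched as follows. The scoring rule, window size, and state labels are illustrative assumptions.

```python
# Hypothetical sketch of a "behavioral model" as described: the current input
# state is inferred from the latest semantic information together with
# interpretations of previously provided utterances. Thresholds and labels are
# illustrative, not from the patent.
from collections import deque

class BehavioralModel:
    def __init__(self, window: int = 5):
        # Keep interpretations of the last few utterances.
        self.history = deque(maxlen=window)

    def interpret(self, semantic_info: dict) -> float:
        """Score one utterance: negative words lower the score."""
        text = semantic_info.get("text", "").lower()
        negative = sum(word in text for word in ("wrong", "again", "not working", "no"))
        return -float(negative)

    def current_input_state(self, semantic_info: dict) -> str:
        self.history.append(self.interpret(semantic_info))
        return "frustrated" if sum(self.history) <= -2 else "neutral"

if __name__ == "__main__":
    model = BehavioralModel()
    for utterance in ("transfer money", "that is wrong", "wrong again"):
        print(utterance, "->", model.current_input_state({"text": utterance}))
```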
-
Publication Number: US20220310079A1
Publication Date: 2022-09-29
Application Number: US17613373
Filing Date: 2020-06-15
Applicant: SRI International
Inventor: Edgar T. Kalns , Dimitra Vergyi , Girish Acharya , Andreas Kathol , Leonor Almada , Hyong-Gyun Kim , Nikoletta Baslou , Michael Wessel , Aaron Spaulding , Roland Heusser , James F. Carpenter , Min Yin
Abstract: A conversational assistant for a conversational engagement platform can contain various modules, including a user-model augmentation module, a dialogue management module, and a user-state analysis input/output module. The dialogue management module receives metrics tied to a user from the other modules, including the current topic and the user's emotions regarding that topic from the user-state analysis input/output module, and then adapts its dialogue to the user based on dialogue rules that factor in these different metrics. The dialogue rules also factor in both i) the duration of a conversational engagement with the user and ii) an attempt to maintain a positive experience for the user during the conversational engagement. A flexible ontology relationship representation of the user is built that stores learned metrics about the user over time with each conversational engagement and, in combination with the dialogue rules, drives the conversations with the user.
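The interplay of dialogue rules, user-state metrics, and the ontology of learned metrics described here can be sketched in a few lines of Python. The rule thresholds and the ontology layout are assumptions for illustration, not the platform's actual representation.

```python
# Hypothetical sketch of dialogue rules that factor in the current topic, the
# user's emotion about that topic, and how long the engagement has run, plus a
# simple dictionary standing in for the ontology of learned metrics.

def choose_dialogue_move(topic: str, emotion: str, minutes_engaged: float,
                         user_ontology: dict) -> str:
    """Pick the next conversational move from simple, ordered rules."""
    liked_topics = user_ontology.get("likes", [])
    if minutes_engaged > 20:
        return "offer a graceful wrap-up to keep the experience positive"
    if emotion == "negative":
        # Steer toward something the ontology says the user enjoys.
        alternative = liked_topics[0] if liked_topics else "a lighter subject"
        return f"acknowledge feelings about {topic}, then pivot to {alternative}"
    return f"ask a follow-up question about {topic}"

def record_engagement(user_ontology: dict, topic: str, emotion: str) -> dict:
    """Accumulate learned metrics about the user across engagements."""
    user_ontology.setdefault("topic_history", []).append({"topic": topic, "emotion": emotion})
    if emotion == "positive" and topic not in user_ontology.setdefault("likes", []):
        user_ontology["likes"].append(topic)
    return user_ontology

if __name__ == "__main__":
    ontology = {}
    record_engagement(ontology, "gardening", "positive")
    record_engagement(ontology, "taxes", "negative")
    print(choose_dialogue_move("taxes", "negative", minutes_engaged=8.0, user_ontology=ontology))
```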
-
Publication Number: US20210081056A1
Publication Date: 2021-03-18
Application Number: US17107958
Filing Date: 2020-12-01
Applicant: SRI International
Inventor: Ajay Divakaran , Amir Tamrakar , Girish Acharya , William Mark , Greg Ho , Jihua Huang , David Salter , Edgar Kalns , Michael Wessel , Min Yin , James Carpenter , Brent Mombourquette , Kenneth Nitz , Elizabeth Shriberg , Eric Law , Michael Frandsen , Hyong-Gyun Kim , Cory Albright , Andreas Tsiartas
IPC: G06F3/01 , G06K9/00 , G06F3/16 , G10L15/18 , G10L25/63 , G10L15/22 , G06N20/00 , G06F3/03 , G06N5/02 , G06N3/00 , G06N20/10
Abstract: Methods, computing devices, and computer-program products are provided for implementing a virtual personal assistant. In various implementations, a virtual personal assistant can be configured to receive sensory input, including at least two different types of information. The virtual personal assistant can further be configured to determine semantic information from the sensory input, and to identify a context-specific framework. The virtual personal assistant can further be configured to determine a current intent. Determining the current intent can include using the semantic information and the context-specific framework. The virtual personal assistant can further be configured to determine a current input state. Determining the current input state can include using the semantic information and one or more behavioral models. The behavioral models can include one or more interpretations of previously-provided semantic information. The virtual personal assistant can further be configured to determine an action using the current intent and the current input state.
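The first step in this abstract, receiving sensory input with at least two different types of information and turning it into semantic information, is illustrated below by fusing recognized speech with a recognized gesture. The fusion rule is an assumption for illustration only.

```python
# Hypothetical sketch of combining two different types of sensory input (speech
# text and a recognized gesture label) into one semantic-information record.

def extract_semantic_info(speech_text: str, gesture: str) -> dict:
    """Fuse spoken words with a gesture into a single semantic record."""
    info = {"text": speech_text, "gesture": gesture, "referent": None}
    # A pointing gesture resolves deictic words like "that" or "this".
    if gesture == "pointing" and any(w in speech_text.lower() for w in ("that", "this")):
        info["referent"] = "object indicated by pointing gesture"
    return info

if __name__ == "__main__":
    print(extract_semantic_info("What is that?", "pointing"))
```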
-
Publication Number: US10884503B2
Publication Date: 2021-01-05
Application Number: US15332494
Filing Date: 2016-10-24
Applicant: SRI International
Inventor: Ajay Divakaran , Amir Tamrakar , Girish Acharya , William Mark , Greg Ho , Jihua Huang , David Salter , Edgar Kalns , Michael Wessel , Min Yin , James Carpenter , Brent Mombourquette , Kenneth Nitz , Elizabeth Shriberg , Eric Law , Michael Frandsen , Hyong-Gyun Kim , Cory Albright , Andreas Tsiartas
IPC: G06F3/03 , G06F3/01 , G06K9/00 , G06F3/16 , G10L15/18 , G10L25/63 , G10L15/22 , G06N20/00 , G06N5/02 , G06N3/00 , G06N7/00
Abstract: Methods, computing devices, and computer-program products are provided for implementing a virtual personal assistant. In various implementations, a virtual personal assistant can be configured to receive sensory input, including at least two different types of information. The virtual personal assistant can further be configured to determine semantic information from the sensory input, and to identify a context-specific framework. The virtual personal assistant can further be configured to determine a current intent. Determining the current intent can include using the semantic information and the context-specific framework. The virtual personal assistant can further be configured to determine a current input state. Determining the current input state can include using the semantic information and one or more behavioral models. The behavioral models can include one or more interpretations of previously-provided semantic information. The virtual personal assistant can further be configured to determine an action using the current intent and the current input state.
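The remaining step in this abstract not yet sketched, identifying a context-specific framework from the semantic information, can be illustrated as a simple keyword match. The framework names and keyword lists are hypothetical.

```python
# Hypothetical sketch of identifying a context-specific framework from the
# semantic information, the step the abstract lists before intent
# determination. Keyword lists here are illustrative assumptions.

FRAMEWORK_KEYWORDS = {
    "banking": {"account", "balance", "transfer", "deposit"},
    "travel":  {"flight", "hotel", "booking", "itinerary"},
}

def identify_framework(semantic_info: dict, default: str = "general") -> str:
    """Pick the framework whose keywords best match the utterance."""
    words = set(semantic_info.get("text", "").lower().split())
    scores = {name: len(words & keywords) for name, keywords in FRAMEWORK_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else default

if __name__ == "__main__":
    print(identify_framework({"text": "I need to check my account balance"}))
```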