-
Publication Number: US11989977B2
Publication Date: 2024-05-21
Application Number: US17363365
Application Date: 2021-06-30
Applicant: Purdue Research Foundation
Inventor: Karthik Ramani, Tianyi Wang, Xun Qian
CPC classification number: G06V40/28, G06F3/011, G06T7/215, G06T13/40, G06T19/006, G06V20/20, G06V40/103, G06V40/23
Abstract: A system and method for authoring and implementing context-aware applications (CAPs) are disclosed. The system and method enable users to record their daily activities and then build and deploy customized CAPs onto augmented reality platforms, in which automated actions are performed in response to user-defined human actions. The system and method utilize an integrated augmented reality platform composed of multiple camera systems, which allows for non-intrusive recording of end users' activities and context detection while authoring and implementing CAPs. The system and method provide an augmented reality authoring interface for browsing, selecting, and editing recorded activities, and for creating flexible CAPs through spatial interaction and visual programming.
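The trigger-action behavior described above can be pictured with a minimal sketch: user-defined human actions, as recognized by the platform's activity detection, are matched against authored rules, and each match fires an automated action. The `CAPRule`/`CAPEngine` names and the `sits_at_desk` example are assumptions for illustration, not the patented implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class CAPRule:
    """One context-aware rule: when the detected human action
    matches the trigger, run the automated action."""
    trigger_action: str                  # user-defined, e.g. "sits_at_desk"
    automated_action: Callable[[], None]

@dataclass
class CAPEngine:
    rules: List[CAPRule] = field(default_factory=list)

    def add_rule(self, trigger: str, action: Callable[[], None]) -> None:
        self.rules.append(CAPRule(trigger, action))

    def on_detected_action(self, detected: str) -> None:
        """Called by the activity-recognition pipeline for each
        recognized human action; fires every matching rule."""
        for rule in self.rules:
            if rule.trigger_action == detected:
                rule.automated_action()

# Example: turn on a (hypothetical) desk lamp when the user sits down.
engine = CAPEngine()
engine.add_rule("sits_at_desk", lambda: print("lamp: ON"))
engine.on_detected_action("sits_at_desk")   # -> lamp: ON
```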
-
Publication Number: US20230038709A1
Publication Date: 2023-02-09
Application Number: US17814965
Application Date: 2022-07-26
Applicant: Purdue Research Foundation
Inventor: Karthik Ramani, Tianyi Wang, Xun Qian, Fengming He
Abstract: An augmented reality (AR) application authoring system is disclosed. The system enables the real-time creation of interactive AR applications driven by freehand inputs, and supports intuitive authoring of customized freehand gesture inputs through embodied demonstration while using the surrounding environment as a contextual reference. A visual programming interface is provided with which users can define freehand interactions by matching freehand gestures with reactions of virtual AR assets. Thus, users can create personalized freehand interactions through simple trigger-action programming logic. Further, with the support of a real-time hand gesture detection algorithm, users can seamlessly test and iterate on the authored AR experience.
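As a rough illustration of the trigger-action matching the abstract describes, the sketch below compares an incoming hand pose against gesture templates recorded by demonstration and fires the paired AR-asset reaction on a match. The template representation, distance metric, threshold, and all names are assumptions, not the patent's detection algorithm.

```python
import math
from typing import Callable, List, Tuple

# Hypothetical gesture template: a flattened list of normalized hand
# joint coordinates recorded during the user's demonstration.
GestureTemplate = List[float]

def gesture_distance(a: GestureTemplate, b: GestureTemplate) -> float:
    """Euclidean distance between two hand poses (same joint order)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class FreehandAuthoringRuntime:
    """Maps demonstrated gestures to reactions of virtual AR assets
    via trigger-action pairs (a sketch, not the patented pipeline)."""

    def __init__(self, threshold: float = 0.1):
        self.threshold = threshold
        self.pairs: List[Tuple[GestureTemplate, Callable[[], None]]] = []

    def author(self, demo: GestureTemplate, reaction: Callable[[], None]) -> None:
        self.pairs.append((demo, reaction))

    def on_hand_pose(self, pose: GestureTemplate) -> None:
        for template, reaction in self.pairs:
            if gesture_distance(pose, template) < self.threshold:
                reaction()

# Example: a demonstrated "pinch" pose triggers scaling a virtual asset.
rt = FreehandAuthoringRuntime()
rt.author([0.0, 0.0, 0.0], lambda: print("asset: scale up"))
rt.on_hand_pose([0.02, 0.01, 0.0])  # close enough to the demo -> fires
```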
-
Publication Number: US20210406528A1
Publication Date: 2021-12-30
Application Number: US17363365
Application Date: 2021-06-30
Applicant: Purdue Research Foundation
Inventor: Karthik Ramani, Tianyi Wang, Xun Qian
Abstract: A system and method for authoring and implementing context-aware applications (CAPs) are disclosed. The system and method enable users to record their daily activities and then build and deploy customized CAPs onto augmented reality platforms, in which automated actions are performed in response to user-defined human actions. The system and method utilize an integrated augmented reality platform composed of multiple camera systems, which allows for non-intrusive recording of end users' activities and context detection while authoring and implementing CAPs. The system and method provide an augmented reality authoring interface for browsing, selecting, and editing recorded activities, and for creating flexible CAPs through spatial interaction and visual programming.
-
Publication Number: US12153737B2
Publication Date: 2024-11-26
Application Number: US17814965
Application Date: 2022-07-26
Applicant: Purdue Research Foundation
Inventor: Karthik Ramani, Tianyi Wang, Xun Qian, Fengming He
Abstract: An augmented reality (AR) application authoring system is disclosed. The system enables the real-time creation of interactive AR applications driven by freehand inputs, and supports intuitive authoring of customized freehand gesture inputs through embodied demonstration while using the surrounding environment as a contextual reference. A visual programming interface is provided with which users can define freehand interactions by matching freehand gestures with reactions of virtual AR assets. Thus, users can create personalized freehand interactions through simple trigger-action programming logic. Further, with the support of a real-time hand gesture detection algorithm, users can seamlessly test and iterate on the authored AR experience.
-
Publication Number: US20240312154A1
Publication Date: 2024-09-19
Application Number: US18584258
Application Date: 2024-02-22
Applicant: Purdue Research Foundation
Inventor: Karthik Ramani, Fengming He, Xun Qian, Jingyu Shi, Xiyun Hu
CPC classification number: G06T19/006, G06T13/20, G06T19/20, G06V40/28, G06T2219/2004, G06T2219/2012, G06T2219/2016, G06T2219/2021, G06V2201/07
Abstract: A Tangible User Interface (TUI) authoring system is disclosed that allows end users to customize edges on everyday objects as TUI inputs that control varied digital functions. The system incorporates an integrated AR device and a vision-based detection pipeline that can track 3D edges and detect touch interactions between fingers and edges. Leveraging the spatial awareness of AR, users can select an edge simply by sliding their fingers along it, and then make the edge interactive by connecting it to various digital functions. The system is demonstrated through four exemplary use cases: multi-function controllers, smart homes, games, and TUI-based tutorials.
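The finger-edge touch detection the abstract mentions can be approximated geometrically: treat a tracked edge as a 3D line segment and register a touch when a fingertip comes within a small distance of it. The sketch below is a plausible approximation under that assumption, not the patented pipeline; the 15 mm threshold and the coordinates are illustrative.

```python
import numpy as np

def point_to_segment_distance(p: np.ndarray, a: np.ndarray, b: np.ndarray) -> float:
    """Distance from point p to the 3D segment a-b (a tracked edge)."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def fingertip_touches_edge(fingertip: np.ndarray,
                           edge: tuple,
                           touch_mm: float = 15.0) -> bool:
    """True if the fingertip is within touch_mm of the edge
    (all coordinates in meters)."""
    a, b = edge
    return point_to_segment_distance(fingertip, a, b) * 1000.0 < touch_mm

# Example: a 10 cm edge on an everyday object (hypothetical coordinates).
edge = (np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.1, 0.0]))
fingertip = np.array([0.005, 0.05, 0.0])         # 5 mm from the edge
print(fingertip_touches_edge(fingertip, edge))   # True -> trigger function
```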
-
Publication Number: US20220139254A1
Publication Date: 2022-05-05
Application Number: US17517949
Application Date: 2021-11-03
Applicant: Purdue Research Foundation
Inventor: Karthik Ramani, Gaoping Huang, Alexander J. Quinn, Yuanzhi Cao, Tianyi Wang, Xun Qian
Abstract: A machine task tutorial system is disclosed that utilizes augmented reality (AR) to enable an expert user to record a tutorial for a machine task that can then be learned by different trainee users in an adaptive manner. The system advantageously utilizes an adaptation model that focuses on spatial and bodily visual presence for machine task tutoring, and enables adaptive tutoring in the recorded-tutorial environment based on machine state and user activity recognition. It further utilizes AR to provide tutorial recording, adaptive visualization, and state recognition. In this way, the machine task tutorial system supports more effective apprenticeship and training for machine tasks in workshops and factories.
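One way to picture the adaptive tutoring described above is a step selector keyed on the recognized machine state and user activity; a minimal sketch follows. The step schema, the state and activity names, and the skip logic are assumptions for illustration, not the system's actual adaptation model.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TutorialStep:
    """One recorded step of the expert's machine-task tutorial."""
    name: str
    required_machine_state: str   # e.g. "spindle_off"
    expected_user_activity: str   # e.g. "mount_workpiece"

def next_step(steps: List[TutorialStep],
              machine_state: str,
              user_activity: str) -> Optional[TutorialStep]:
    """Pick the first step whose machine-state precondition holds;
    if the trainee's activity already matches a step, advance past it."""
    for step in steps:
        if step.required_machine_state != machine_state:
            continue
        if step.expected_user_activity == user_activity:
            continue  # trainee is already performing this step correctly
        return step   # show this step's AR visualization
    return None

steps = [TutorialStep("mount workpiece", "spindle_off", "mount_workpiece"),
         TutorialStep("start spindle", "spindle_off", "press_start")]
print(next_step(steps, "spindle_off", "mount_workpiece").name)  # start spindle
```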
-
Publication Number: US20240273949A1
Publication Date: 2024-08-15
Application Number: US18635112
Application Date: 2024-04-15
Applicant: Purdue Research Foundation
Inventor: Karthik Ramani, Tianyi Wang, Xun Qian
CPC classification number: G06V40/28, G06F3/011, G06T7/215, G06T13/40, G06T19/006, G06V20/20, G06V40/103, G06V40/23
Abstract: A system and method for authoring and implementing context-aware applications (CAPs) are disclosed. The system and method enable users to record their daily activities and then build and deploy customized CAPs onto augmented reality platforms, in which automated actions are performed in response to user-defined human actions. The system and method utilize an integrated augmented reality platform composed of multiple camera systems, which allows for non-intrusive recording of end users' activities and context detection while authoring and implementing CAPs. The system and method provide an augmented reality authoring interface for browsing, selecting, and editing recorded activities, and for creating flexible CAPs through spatial interaction and visual programming.
-
Publication Number: US20240118786A1
Publication Date: 2024-04-11
Application Number: US18480134
Application Date: 2023-10-03
Applicant: Purdue Research Foundation
Inventor: Karthik Ramani, Xun Qian, Tianyi Wang, Fengming He
IPC: G06F3/04815, G06F3/01, G06F3/04842, G06F3/04845, G06T7/194, G06T7/73
CPC classification number: G06F3/04815, G06F3/011, G06F3/017, G06F3/04842, G06F3/04845, G06T7/194, G06T7/74, G06T2207/20081, G06T2207/30196
Abstract: A method and system for hand-object interaction dataset collection are described herein, configured to support user-specified collection of hand-object interaction datasets. Such datasets are useful, for example, for training 3D hand and object pose estimation models. The method and system adopt a sequential process of first recording hand and object pose labels by manipulating a virtual bounding box rather than a physical object. Because hand-object occlusions naturally do not occur during manipulation of the virtual bounding box, these labels are recorded with high accuracy. Subsequently, images of the hand-object interaction with the physical object are captured separately and paired with the previously recorded hand and object pose labels.
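The sequential labeling-then-capture protocol can be sketched as a simple frame-by-frame pairing of the pre-recorded pose labels with the separately captured images. The data layout (21 hand joints, a 6-DoF object pose) and all names below are assumptions for illustration, not the patent's data format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PoseLabel:
    """Hand and object pose recorded while manipulating the virtual
    bounding box (no physical object, hence no hand-object occlusion)."""
    frame: int
    hand_joints: list     # e.g. 21 x 3 joint positions (assumed layout)
    object_pose: list     # 6-DoF pose of the bounding box (assumed)

@dataclass
class Sample:
    image_path: str
    label: PoseLabel

def pair_labels_with_images(labels: List[PoseLabel],
                            image_paths: List[str]) -> List[Sample]:
    """Pair the separately captured images of the physical interaction
    with the previously recorded labels, frame by frame."""
    assert len(labels) == len(image_paths), "one image per labeled frame"
    return [Sample(path, lab) for path, lab in zip(image_paths, labels)]

# Example with two hypothetical frames.
labels = [PoseLabel(0, [[0, 0, 0]] * 21, [0, 0, 0, 0, 0, 0]),
          PoseLabel(1, [[0, 0, 1]] * 21, [0, 0, 0, 0, 0, 1])]
dataset = pair_labels_with_images(labels, ["frame0.png", "frame1.png"])
print(len(dataset))  # 2
```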