-
21.
Publication No.: US20160078303A1
Publication Date: 2016-03-17
Application No.: US14835080
Filing Date: 2015-08-25
Applicant: SRI International
Inventor: Supun Samarasekera , Rakesh Kumar , Taragay Oskiper , Zhiwei Zhu , Oleg Naroditsky , Harpreet Sawhney
CPC classification number: G06K9/00791 , B60R1/00 , G01C21/005 , G01C21/165 , G06K9/3216 , G06K9/4642 , G06K9/52 , G06K9/6215 , G06T7/74 , H04N13/239 , H04N2013/0081
Abstract: A system and method for efficiently locating, in 3D, an object of interest in a target scene using video information captured by a plurality of cameras. The system and method provide multi-camera visual odometry, wherein pose estimates for each camera are generated using all of the cameras in the multi-camera configuration. Furthermore, the system and method can locate and identify salient landmarks in the target scene using any of the cameras in the multi-camera configuration and compare each identified landmark against a database of previously identified landmarks. In addition, the system and method provide for the integration of video-based pose estimates with position measurement data captured by one or more secondary measurement sensors, such as Inertial Measurement Units (IMUs) and Global Positioning System (GPS) units.
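The IMU/GPS integration described in this abstract can be illustrated with a minimal inverse-variance fusion sketch. This is not the patent's actual algorithm; the function name, scalar per-source variances, and example values are all illustrative assumptions.

```python
# Hypothetical sketch of fusing a video-odometry position estimate with a
# GPS measurement, each weighted by the inverse of its variance.
# All names and values are illustrative, not taken from the patent.

def fuse_estimates(video_pos, video_var, gps_pos, gps_var):
    """Fuse two per-axis position estimates by inverse-variance weighting."""
    w_v = 1.0 / video_var
    w_g = 1.0 / gps_var
    return [(w_v * v + w_g * g) / (w_v + w_g)
            for v, g in zip(video_pos, gps_pos)]

# Example: video odometry is twice as certain as GPS, so the fused
# estimate lands closer to the video-based position.
pos = fuse_estimates([10.0, 20.0, 0.0], 1.0, [13.0, 23.0, 0.0], 2.0)
```

With the variances above, the fused position is [11.0, 21.0, 0.0], one third of the way from the video estimate toward the GPS fix.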
-
22.
Publication No.: US20150269438A1
Publication Date: 2015-09-24
Application No.: US14575495
Filing Date: 2014-12-18
Applicant: SRI International
Inventor: Supun Samarasekera , Raia Hadsell , Rakesh Kumar , Harpreet S. Sawhney , Bogdan C. Matei , Ryan Villamil
CPC classification number: G08G5/0069 , G01C11/02 , G01C21/32 , G01C21/3673 , G06K9/00637 , G06K9/6267 , G06K9/6293 , G06T17/05 , G06T2200/04 , G06T2207/10012 , G08G5/0004 , G08G5/003 , G08G5/0073
Abstract: A multi-sensor, multi-modal data collection, analysis, recognition, and visualization platform can be embodied in a navigation capable vehicle. The platform provides an automated tool that can integrate multi-modal sensor data including two-dimensional image data, three-dimensional image data, and motion, location, or orientation data, and create a visual representation of the integrated sensor data, in a live operational environment. An illustrative platform architecture incorporates modular domain-specific business analytics “plug ins” to provide real-time annotation of the visual representation with domain-specific markups.
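The "plug-in" analytics architecture described above can be sketched as a simple registry pattern: domain-specific modules register themselves and each annotates the integrated sensor frame in turn. Every name here is an illustrative assumption, not the patent's API.

```python
# Illustrative sketch of modular domain-specific analytics "plug-ins"
# that annotate an integrated sensor frame in real time.

ANALYTICS_PLUGINS = []

def register_plugin(fn):
    """Decorator adding a domain-specific analytics callable to the registry."""
    ANALYTICS_PLUGINS.append(fn)
    return fn

@register_plugin
def building_detector(frame):
    # Hypothetical rule: tall 3D structures get a "building" markup.
    if frame.get("height_m", 0) > 10:
        frame["annotations"].append("building")
    return frame

def annotate(frame):
    """Run every registered plug-in over one integrated sensor frame."""
    frame.setdefault("annotations", [])
    for plugin in ANALYTICS_PLUGINS:
        frame = plugin(frame)
    return frame

result = annotate({"height_m": 25})
```

New domains are supported by registering another plug-in; the annotation pipeline itself never changes.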
-
23.
Publication No.: US11423586B2
Publication Date: 2022-08-23
Application No.: US17157065
Filing Date: 2021-01-25
Applicant: SRI International
Inventor: Supun Samarasekera , Taragay Oskiper , Rakesh Kumar , Mikhail Sizintsev , Vlad Branzoi
Abstract: Methods and apparatuses for tracking objects comprise: one or more optical sensors for capturing one or more images of a scene, wherein the optical sensors capture a wide field of view and a corresponding narrow field of view for the images; a localization module, coupled to the optical sensors, for determining the location of the apparatus and the location of one or more objects in the images based on the location of the apparatus; and an augmented reality module, coupled to the localization module, for enhancing a view of the scene on a display based on the determined location of the one or more objects.
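The localization step described above, placing an object in the world from the apparatus location and checking whether it falls in the narrow field of view, can be sketched as follows. The 10-degree half-angle and all function names are assumptions for illustration only.

```python
import math

# Hedged sketch: given the apparatus location from a localization module and
# a bearing/range observation, place the object in world coordinates and
# test whether it lies inside the narrow field of view.

def locate_object(apparatus_xy, bearing_rad, range_m):
    """Object world position from apparatus position plus a polar offset."""
    x, y = apparatus_xy
    return (x + range_m * math.cos(bearing_rad),
            y + range_m * math.sin(bearing_rad))

def in_narrow_fov(bearing_rad, camera_heading_rad,
                  half_angle_rad=math.radians(10)):
    """True if the bearing is within the narrow-FOV cone about the heading."""
    diff = (bearing_rad - camera_heading_rad + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= half_angle_rad

obj = locate_object((0.0, 0.0), 0.0, 5.0)   # object 5 m along the x-axis
visible = in_narrow_fov(0.0, math.radians(5))
```

An object whose bearing is within the narrow-FOV cone would then be handed to the augmented reality module for display enhancement.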
-
24.
Publication No.: US11313684B2
Publication Date: 2022-04-26
Application No.: US16089322
Filing Date: 2017-03-28
Applicant: SRI International
Inventor: Han-Pang Chiu , Supun Samarasekera , Rakesh Kumar , Mikhail Sizintsev , Xun Zhou , Philip Miller , Glenn Murray
Abstract: During GPS-denied/restricted navigation, images proximate a platform device are captured using a camera, and corresponding motion measurements of the platform device are captured using an IMU device. Features of a current frame of the images captured are extracted. Extracted features are matched and feature information between consecutive frames is tracked. The extracted features are compared to previously stored, geo-referenced visual features from a plurality of platform devices. If one of the extracted features does not match a geo-referenced visual feature, a pose is determined for the platform device using IMU measurements propagated from a previous pose and relative motion information between consecutive frames, which is determined using the tracked feature information. If at least one of the extracted features matches a geo-referenced visual feature, a pose is determined for the platform device using location information associated with the matched, geo-referenced visual feature and relative motion information between consecutive frames.
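The branching logic in this abstract, anchor to a geo-referenced feature when one matches, otherwise dead-reckon from the previous pose, can be sketched as below. The database contents, 2D poses, and names are illustrative assumptions.

```python
# Minimal sketch of the pose-update decision described above: if an
# extracted feature matches a stored geo-referenced feature, anchor the
# pose to its location; otherwise propagate the previous pose with the
# relative motion between consecutive frames.

GEO_DB = {"tower_corner": (100.0, 200.0)}   # hypothetical geo-referenced features

def update_pose(prev_pose, relative_motion, extracted_features):
    for name in extracted_features:
        if name in GEO_DB:                  # absolute fix available
            gx, gy = GEO_DB[name]
            return (gx + relative_motion[0], gy + relative_motion[1])
    # No match: dead-reckon from the previous pose (IMU-propagated in the patent).
    return (prev_pose[0] + relative_motion[0], prev_pose[1] + relative_motion[1])

dead_reckoned = update_pose((0.0, 0.0), (1.0, 1.0), ["unknown_blob"])
anchored = update_pose((0.0, 0.0), (1.0, 1.0), ["tower_corner"])
```

The anchored branch snaps the pose to the stored landmark location, which is what bounds drift during GPS-denied operation.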
-
25.
Publication No.: US20200184718A1
Publication Date: 2020-06-11
Application No.: US16523313
Filing Date: 2019-07-26
Applicant: SRI International
Inventor: Han-Pang Chiu , Supun Samarasekera , Rakesh Kumar , Bogdan C. Matei , Bhaskar Ramamurthy
Abstract: A method for providing a real time, three-dimensional (3D) navigational map for platforms includes integrating at least two sources of multi-modal and multi-dimensional platform sensor information to produce a more accurate 3D navigational map. The method receives both a 3D point cloud from a first sensor on a platform with a first modality and a 2D image from a second sensor on the platform with a second modality different from the first modality, generates a semantic label and a semantic label uncertainty associated with a first space point in the 3D point cloud, generates a semantic label and a semantic label uncertainty associated with a second space point in the 2D image, and fuses the first space semantic label and the first space semantic uncertainty with the second space semantic label and the second space semantic label uncertainty to create fused 3D spatial information to enhance the 3D navigational map.
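The fusion step described above, combining a label and uncertainty from each modality, can be sketched as an uncertainty-weighted vote. The weighting scheme, labels, and values are illustrative assumptions, not the patent's method.

```python
# Hedged sketch: a 3D-point-cloud label and a 2D-image label, each with an
# uncertainty, are combined by weighting each label's vote with the inverse
# of its uncertainty. Agreement between sources shrinks the fused uncertainty.

def fuse_labels(label_a, unc_a, label_b, unc_b):
    votes = {}
    for label, unc in ((label_a, unc_a), (label_b, unc_b)):
        votes[label] = votes.get(label, 0.0) + 1.0 / unc
    fused_label = max(votes, key=votes.get)
    # Fused uncertainty: inverse of the summed confidence of agreeing sources.
    fused_unc = 1.0 / sum(1.0 / u
                          for l, u in ((label_a, unc_a), (label_b, unc_b))
                          if l == fused_label)
    return fused_label, fused_unc

label, unc = fuse_labels("road", 0.2, "road", 0.5)
```

When both modalities say "road", the fused uncertainty (1/7 here) drops below either input, which is what lets fusion enhance the 3D navigational map.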
-
26.
Publication No.: US09872968B2
Publication Date: 2018-01-23
Application No.: US14251024
Filing Date: 2014-04-11
Applicant: SRI International
Inventor: Massimiliano de Zambotti , Ian M. Colrain , Fiona C. Baker , Rakesh Kumar , Mikhail Sizintsev , Supun Samarasekera , Glenn A. Murray
IPC: A61M21/02 , A61B5/0205 , A61B5/0476 , A61B5/00 , A61B5/0482 , G06F3/01 , A61B5/0488 , A61M21/00 , A61B5/01 , A61B5/024 , A61B5/08 , A61B5/11
CPC classification number: A61M21/02 , A61B5/01 , A61B5/015 , A61B5/0205 , A61B5/02055 , A61B5/02438 , A61B5/0476 , A61B5/0482 , A61B5/0488 , A61B5/0816 , A61B5/11 , A61B5/486 , A61B5/6803 , A61B5/6804 , A61B5/7475 , A61M2021/0027 , A61M2021/005 , A61M2230/005 , A61M2230/06 , A61M2230/10 , A61M2230/42 , A61M2230/50 , A61M2230/60 , G06F3/011 , G06F19/00 , G06F2203/011 , G06F2203/013 , G06F2203/015
Abstract: Biofeedback virtual reality sleep assistant technologies monitor one or more physiological parameters while presenting an immersive environment. The presentation of the immersive environment changes over time in response to changes in the values of the physiological parameters. The changes in the presentation of the immersive environment are configured using biofeedback technology and are designed to promote sleep.
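The biofeedback loop described above can be sketched as a simple mapping from a monitored physiological parameter to presentation parameters. The heart-rate thresholds and parameter names are assumptions for illustration, not values from the patent.

```python
# Illustrative biofeedback mapping: as the monitored heart rate falls toward
# a resting target, the immersive environment is dimmed and slowed to
# reinforce the relaxation trend and promote sleep.

def adjust_environment(heart_rate_bpm, resting_bpm=60):
    """Map a physiological reading to presentation parameters in [0, 1]."""
    excess = max(0.0, heart_rate_bpm - resting_bpm)
    intensity = min(1.0, excess / 60.0)     # scale chosen for illustration
    return {"brightness": intensity, "scene_speed": intensity}

calm = adjust_environment(60)     # at the resting rate: fully dimmed, still
aroused = adjust_environment(90)  # elevated rate: brighter, faster scene
```

In a real system this mapping would run continuously, so the presentation changes over time with the physiological parameters, as the abstract states.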
-
27.
Publication No.: US20170193710A1
Publication Date: 2017-07-06
Application No.: US15465530
Filing Date: 2017-03-21
Applicant: SRI International
Inventor: Rakesh Kumar , Taragay Oskiper , Oleg Naroditsky , Supun Samarasekera , Zhiwei Zhu , Janet Kim
IPC: G06T19/00 , G06F3/0346 , G06F3/01
CPC classification number: G06T19/006 , G06F3/012 , G06F3/0346
Abstract: A system and method for generating a mixed-reality environment is provided. The system and method provide a user-worn sub-system communicatively connected to a synthetic object computer module. The user-worn sub-system may utilize a plurality of user-worn sensors to capture and process data regarding a user's pose and location. The synthetic object computer module may generate and provide to the user-worn sub-system synthetic objects based on information defining the user's real-world scene or environment and indicating the user's pose and location. The synthetic objects may then be rendered on a user-worn display, thereby inserting them into the user's field of view. Rendering the synthetic objects on the user-worn display creates the virtual effect for the user that the synthetic objects are present in the real world.
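Rendering a synthetic object so it appears fixed in the real world requires transforming its world position into the user's view frame from the tracked pose. A minimal 2D sketch, with all frames and names assumed for illustration:

```python
import math

# Hedged sketch: transform a synthetic object's world position into the
# user's view frame from the tracked pose (position + heading), so the
# renderer can draw it as if it existed in the real scene.

def world_to_view(obj_xy, user_xy, user_heading_rad):
    """Rotate the user-to-object offset into the user's view frame."""
    dx = obj_xy[0] - user_xy[0]
    dy = obj_xy[1] - user_xy[1]
    c, s = math.cos(-user_heading_rad), math.sin(-user_heading_rad)
    return (c * dx - s * dy, s * dx + c * dy)

# User at the origin facing +x: an object 3 m ahead stays straight ahead.
view = world_to_view((3.0, 0.0), (0.0, 0.0), 0.0)
```

As the tracked pose updates each frame, recomputing this transform keeps the synthetic object visually anchored to its real-world location.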
-
28.
Publication No.: US20140316192A1
Publication Date: 2014-10-23
Application No.: US14254348
Filing Date: 2014-04-16
Applicant: SRI International
Inventor: Massimiliano de Zambotti , Ian M. Colrain , Fiona C. Baker , Rakesh Kumar , Mikhail Sizintsev , Supun Samarasekera , Glenn A. Murray
IPC: A61M21/02
CPC classification number: A61M21/02 , A61B5/01 , A61B5/015 , A61B5/0205 , A61B5/02055 , A61B5/02438 , A61B5/0476 , A61B5/0482 , A61B5/0488 , A61B5/0816 , A61B5/11 , A61B5/486 , A61B5/6803 , A61B5/6804 , A61B5/7475 , A61M2021/0027 , A61M2021/005 , A61M2230/005 , A61M2230/06 , A61M2230/10 , A61M2230/42 , A61M2230/50 , A61M2230/60 , G06F3/011 , G06F2203/011 , G06F2203/013 , G06F2203/015
Abstract: Biofeedback virtual reality sleep assistant technologies monitor one or more physiological parameters while presenting an immersive environment. The presentation of the immersive environment changes over time in response to changes in the values of the physiological parameters. The changes in the presentation of the immersive environment are configured using biofeedback technology and are designed to promote sleep.
-
29.
Publication No.: US11960994B2
Publication Date: 2024-04-16
Application No.: US17151506
Filing Date: 2021-01-18
Applicant: SRI International
Inventor: Han-Pang Chiu , Jonathan D. Brookshire , Zachary Seymour , Niluthpol C. Mithun , Supun Samarasekera , Rakesh Kumar , Qiao Wang
Abstract: A method, apparatus and system for artificial intelligence-based HDRL planning and control for coordinating a team of platforms includes implementing a global planning layer for determining a collective goal and determining, by applying at least one machine learning process, at least one respective platform goal to be achieved by at least one platform; implementing a platform planning layer for determining, by applying at least one machine learning process, at least one respective action to be performed by the at least one platform to achieve the respective platform goal; and implementing a platform control layer for determining at least one respective function to be performed by the at least one platform. Although information is shared between at least two of the layers, the global planning layer, the platform planning layer, and the platform control layer are trained separately.
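The three-layer decomposition described above can be sketched schematically; the trivial rule-based planners below are stand-ins for the learned policies, and every goal, action, and function name is an illustrative assumption.

```python
# Schematic sketch of the three separately trained layers: global planning
# (collective goal -> per-platform goals), platform planning (goal ->
# actions), and platform control (action -> low-level functions).

def global_planner(collective_goal, platforms):
    """Global layer: split the collective goal into per-platform goals."""
    return {p: f"{collective_goal}:sector-{i}" for i, p in enumerate(platforms)}

def platform_planner(platform_goal):
    """Platform layer: choose actions that achieve the platform goal."""
    return ["navigate", "scan"] if "sector" in platform_goal else ["idle"]

def platform_controller(action):
    """Control layer: map an action to a low-level platform function."""
    return {"navigate": "set_velocity", "scan": "sweep_sensor"}.get(action, "hold")

goals = global_planner("search-area", ["uav0", "uav1"])
actions = platform_planner(goals["uav0"])
functions = [platform_controller(a) for a in actions]
```

Because each layer exposes only goals, actions, or functions to its neighbor, each can be trained separately while still sharing information at the interfaces, matching the abstract's claim.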
-
30.
Publication No.: US10810734B2
Publication Date: 2020-10-20
Application No.: US16457156
Filing Date: 2019-06-28
Applicant: SRI International , Obayashi Corporation
Inventor: Garbis Salgian , Bogdan C. Matei , Matthieu Henri Lecce , Abhinav Rajvanshi , Supun Samarasekera , Rakesh Kumar , Tamaki Horii , Yuichi Ikeda , Hidefumi Takenaka
Abstract: Embodiments of the present invention generally relate to computer aided rebar measurement and inspection systems. In some embodiments, the system may include a data acquisition system configured to obtain fine-level rebar measurements, images or videos of rebar structures, a 3D point cloud model generation system configured to generate a 3D point cloud model representation of the rebar structure from information acquired by the data acquisition system, a rebar detection system configured to detect rebar within the 3D point cloud model generated or the rebar images or videos of the rebar structures, a rebar measurement system to measure features of the rebar and rebar structures detected by the rebar detection system, and a discrepancy detection system configured to compare the measured features of the rebar structures detected by the rebar detection system with a 3D Building Information Model (BIM) of the rebar structures, and determine any discrepancies between them.
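The discrepancy-detection step, comparing measured rebar features against the 3D Building Information Model (BIM), can be sketched as a tolerance check on bar spacing. The tolerance value and all names are assumptions for illustration.

```python
# Hedged sketch of the discrepancy check: rebar spacings measured from the
# 3D point cloud are compared against the BIM-specified spacing, and any
# gap outside tolerance is reported for inspection.

def find_discrepancies(measured_spacings_mm, bim_spacing_mm, tol_mm=10.0):
    """Return indices of spacings deviating from the BIM spec by > tol_mm."""
    return [i for i, s in enumerate(measured_spacings_mm)
            if abs(s - bim_spacing_mm) > tol_mm]

# BIM calls for 200 mm spacing; the third measured gap is 35 mm off.
bad = find_discrepancies([198.0, 205.0, 235.0, 202.0], 200.0)
```

Each flagged index would be traced back to its point-cloud location so an inspector can review that section of the rebar structure.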
-