-
Publication No.: US20210041878A1
Publication Date: 2021-02-11
Application No.: US16661062
Application Date: 2019-10-23
Applicant: Boston Dynamics, Inc.
Inventor: Samuel Seifert , Marco da Silva , Alexander Rice , Leland Hepler , Mario Bollini , Christopher Bentzel
Abstract: A method for controlling a robot includes receiving image data from at least one image sensor. The image data corresponds to an environment about the robot. The method also includes executing a graphical user interface configured to display a scene of the environment based on the image data and receive an input indication indicating selection of a pixel location within the scene. The method also includes determining a pointing vector based on the selection of the pixel location. The pointing vector represents a direction of travel for navigating the robot in the environment. The method also includes transmitting a waypoint command to the robot. The waypoint command when received by the robot causes the robot to navigate to a target location. The target location is based on an intersection between the pointing vector and a terrain estimate of the robot.
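The geometry this abstract describes (back-projecting a selected pixel into a pointing vector, then intersecting that vector with a terrain estimate to get a target location) can be sketched as follows. This is an illustrative reading only, not the patented implementation: it assumes a pinhole camera model and models the terrain estimate as a single plane, and all function and parameter names are hypothetical.

```python
import numpy as np

def pixel_to_pointing_vector(u, v, fx, fy, cx, cy):
    """Back-project pixel (u, v) through a pinhole camera model
    (focal lengths fx, fy; principal point cx, cy) into a unit
    direction vector in the camera frame."""
    d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return d / np.linalg.norm(d)

def intersect_with_terrain(origin, direction, plane_point, plane_normal):
    """Intersect the pointing ray with a planar terrain estimate.
    Returns the target location, or None if the ray is parallel to
    the plane or the intersection lies behind the camera."""
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-9:
        return None
    t = np.dot(plane_normal, plane_point - origin) / denom
    if t < 0:
        return None
    return origin + t * direction
```

A click at the principal point yields the camera's optical axis as the pointing vector; a real system would also transform the ray from the camera frame into the robot's world frame before intersecting it with the terrain estimate.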
-
Publication No.: US20240192695A1
Publication Date: 2024-06-13
Application No.: US18531152
Application Date: 2023-12-06
Applicant: Boston Dynamics, Inc.
Inventor: Matthew Jacob Klingensmith , Dom Jonak , Leland Hepler , Christopher Basmajian , Brian Ringley
CPC classification number: G05D1/222 , B62D57/032 , G05D1/2297 , G05D1/2462 , G06T3/40 , G06T11/206 , G05D2109/12 , G06T2200/24 , G06T2210/56
Abstract: Systems and methods are described for the display of a transformed virtual representation of sensor data overlaid on a site model. A system can obtain a site model identifying a site. For example, the site model can include a map, a blueprint, or a graph. The system can obtain sensor data from a sensor of a robot. The sensor data can include route data identifying route waypoints and/or route edges associated with the robot. The system can receive input identifying an association between a virtual representation of the sensor data and the site model. Based on the association, the system can transform the virtual representation of the sensor data and instruct display of the transformed data overlaid on the site model.
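Transforming a virtual representation of route data so it overlays a site model, given user-identified associations between the two, amounts to fitting a 2-D similarity transform (scale, rotation, translation) from point correspondences. The least-squares sketch below uses a standard Kabsch/Umeyama-style SVD solution; this is a generic technique assumed for illustration, not the method claimed in the patent, and all names are hypothetical.

```python
import numpy as np

def fit_similarity_transform(src, dst):
    """Fit scale s, rotation R, and translation t such that
    s * R @ p + t maps 2-D points in src onto dst, in the
    least-squares sense (requires at least two distinct points)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d           # centered point sets
    H = A.T @ B                             # cross-covariance
    U, S, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    # Optimal scale given R: project rotated src onto dst.
    s = np.sum(B * (A @ R.T)) / np.sum(A * A)
    t = mu_d - s * (R @ mu_s)
    return s, R, t
```

In the workflow the abstract describes, `src` would be waypoint coordinates from the robot's route data and `dst` the matching locations the user clicks on the map or blueprint; the fitted transform is then applied to the whole virtual representation before it is drawn over the site model.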
-
Publication No.: US20210318687A1
Publication Date: 2021-10-14
Application No.: US16884954
Application Date: 2020-05-27
Applicant: Boston Dynamics, Inc.
Inventor: Samuel Seifert , Leland Hepler
Abstract: A method for online authoring of robot autonomy applications includes receiving sensor data of an environment about a robot while the robot traverses through the environment. The method also includes generating an environmental map representative of the environment about the robot based on the received sensor data. While generating the environmental map, the method includes localizing a current position of the robot within the environmental map and, at each corresponding target location of one or more target locations within the environment, recording a respective action for the robot to perform. The method also includes generating a behavior tree for navigating the robot to each corresponding target location and controlling the robot to perform the respective action at each corresponding target location within the environment during a future mission when the current position of the robot within the environmental map reaches the corresponding target location.
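A behavior tree that visits recorded target locations and performs the action recorded at each one can be sketched with a minimal Sequence node, as below. This is a generic behavior-tree skeleton for illustration only; the navigation step is a placeholder, and every class and field name is an assumption rather than anything specified in the patent.

```python
class Robot:
    """Toy stand-in for the robot's state during a mission."""
    def __init__(self):
        self.position = None
        self.log = []

class Sequence:
    """Ticks children in order; fails fast if any child fails."""
    def __init__(self, children):
        self.children = list(children)
    def tick(self):
        return all(child.tick() for child in self.children)

class NavigateTo:
    def __init__(self, robot, waypoint):
        self.robot, self.waypoint = robot, waypoint
    def tick(self):
        self.robot.position = self.waypoint   # placeholder for real navigation
        return True

class PerformAction:
    def __init__(self, robot, action):
        self.robot, self.action = robot, action
    def tick(self):
        self.robot.log.append((self.robot.position, self.action))
        return True

def build_mission_tree(robot, recorded):
    """Build a Sequence that, for each recorded (waypoint, action)
    pair, navigates to the waypoint and then performs the action."""
    steps = []
    for waypoint, action in recorded:
        steps.append(NavigateTo(robot, waypoint))
        steps.append(PerformAction(robot, action))
    return Sequence(steps)
```

Ticking the built tree on a future mission replays the authored route: each action fires only once its preceding navigation node has succeeded, mirroring the abstract's condition that the action runs when the robot's localized position reaches the corresponding target location.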
-
Publication No.: US20240361779A1
Publication Date: 2024-10-31
Application No.: US18640544
Application Date: 2024-04-19
Applicant: Boston Dynamics, Inc.
Inventor: Brian Todd Dellon , Federico Vicentini , John Frederick Needleman , Leland Hepler , Mario Bollini , David Yann Robert
IPC: G05D1/622 , G05D109/12 , G05D111/10
CPC classification number: G05D1/622 , G05D2109/12 , G05D2111/10
Abstract: Systems and methods are described for outputting light and/or audio using one or more light and/or audio sources of a robot. The light sources may be located on one or more legs of the robot, a bottom portion of the robot, and/or a top portion of the robot. The audio sources may include a speaker and/or an audio resonator. A system can obtain sensor data associated with an environment of the robot. Based on the sensor data, the system can identify an alert. For example, the system can identify an entity based on the sensor data and identify an alert for the entity. The system can instruct an output of light and/or audio indicative of the alert using the one or more light and/or audio sources. The system can adjust parameters of the output based on the sensor data.
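One simple form of "adjust parameters of the output based on the sensor data" is scaling alert loudness by how close a detected entity is. The policy below is purely hypothetical (the thresholds, range, and linear ramp are invented for illustration and are not taken from the patent):

```python
def scaled_alert_volume(distance_m, min_db=60.0, max_db=90.0, max_range_m=10.0):
    """Hypothetical policy: the alert gets louder as the detected
    entity gets closer, linearly ramping from min_db at max_range_m
    (or beyond) up to max_db at zero distance."""
    closeness = max(0.0, min(1.0, 1.0 - distance_m / max_range_m))
    return min_db + closeness * (max_db - min_db)
```

The same shape of mapping could drive light parameters instead, e.g. brightness or blink rate on the leg-mounted sources rising as the entity approaches.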
-
Publication No.: US20240351217A1
Publication Date: 2024-10-24
Application No.: US18761998
Application Date: 2024-07-02
Applicant: Boston Dynamics, Inc.
Inventor: Mario Bollini , Leland Hepler
IPC: B25J9/16
CPC classification number: B25J9/1697 , B25J9/1664
Abstract: A method includes receiving sensor data for an environment about the robot. The sensor data is captured by one or more sensors of the robot. The method includes detecting one or more objects in the environment using the received sensor data. For each detected object, the method includes authoring an interaction behavior indicating a behavior that the robot is capable of performing with respect to the corresponding detected object. The method also includes augmenting a localization map of the environment to reflect the respective interaction behavior of each detected object.
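Augmenting a localization map with an authored interaction behavior per detected object can be read as annotating map entries with entries from a behavior catalog. The sketch below is a minimal illustration under that reading; the catalog contents, class names, and dictionary layout are all assumptions, not details from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class DetectedObject:
    name: str
    position: tuple

@dataclass
class LocalizationMap:
    annotations: dict = field(default_factory=dict)

    def augment(self, obj, behavior):
        """Record the interaction behavior authored for this object."""
        self.annotations[obj.name] = {
            "position": obj.position,
            "behavior": behavior,
        }

# Hypothetical catalog: behaviors the robot is capable of performing
# with respect to each recognized object class.
BEHAVIOR_CATALOG = {"door": "open_door", "valve": "turn_valve"}

def author_behaviors(objects, site_map):
    """For each detected object with a known capability, author an
    interaction behavior and fold it into the localization map."""
    for obj in objects:
        behavior = BEHAVIOR_CATALOG.get(obj.name)
        if behavior is not None:
            site_map.augment(obj, behavior)
```

Objects without a catalog entry are simply left unannotated, so the augmented map only carries behaviors the robot can actually perform.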
-
Publication No.: US20230418302A1
Publication Date: 2023-12-28
Application No.: US18466535
Application Date: 2023-09-13
Applicant: Boston Dynamics, Inc.
Inventor: Samuel Seifert , Leland Hepler
IPC: G05D1/02
CPC classification number: G05D1/0221
Abstract: A method for online authoring of robot autonomy applications includes receiving sensor data of an environment about a robot while the robot traverses through the environment. The method also includes generating an environmental map representative of the environment about the robot based on the received sensor data. While generating the environmental map, the method includes localizing a current position of the robot within the environmental map and, at each corresponding target location of one or more target locations within the environment, recording a respective action for the robot to perform. The method also includes generating a behavior tree for navigating the robot to each corresponding target location and controlling the robot to perform the respective action at each corresponding target location within the environment during a future mission when the current position of the robot within the environmental map reaches the corresponding target location.
-
Publication No.: US12059814B2
Publication Date: 2024-08-13
Application No.: US17648869
Application Date: 2022-01-25
Applicant: Boston Dynamics, Inc.
Inventor: Mario Bollini , Leland Hepler
IPC: B25J9/16
CPC classification number: B25J9/1697 , B25J9/1664
Abstract: A method includes receiving sensor data for an environment about the robot. The sensor data is captured by one or more sensors of the robot. The method includes detecting one or more objects in the environment using the received sensor data. For each detected object, the method includes authoring an interaction behavior indicating a behavior that the robot is capable of performing with respect to the corresponding detected object. The method also includes augmenting a localization map of the environment to reflect the respective interaction behavior of each detected object.
-
Publication No.: US11797016B2
Publication Date: 2023-10-24
Application No.: US16884954
Application Date: 2020-05-27
Applicant: Boston Dynamics, Inc.
Inventor: Samuel Seifert , Leland Hepler
IPC: G05D1/02
CPC classification number: G05D1/0221
Abstract: A method for online authoring of robot autonomy applications includes receiving sensor data of an environment about a robot while the robot traverses through the environment. The method also includes generating an environmental map representative of the environment about the robot based on the received sensor data. While generating the environmental map, the method includes localizing a current position of the robot within the environmental map and, at each corresponding target location of one or more target locations within the environment, recording a respective action for the robot to perform. The method also includes generating a behavior tree for navigating the robot to each corresponding target location and controlling the robot to perform the respective action at each corresponding target location within the environment during a future mission when the current position of the robot within the environmental map reaches the corresponding target location.
-
Publication No.: US20220260998A1
Publication Date: 2022-08-18
Application No.: US17661685
Application Date: 2022-05-02
Applicant: Boston Dynamics, Inc.
Inventor: Samuel Seifert , Marco da Silva , Alexander Rice , Leland Hepler , Mario Bollini , Christopher Bentzel
Abstract: A method for controlling a robot includes receiving image data from at least one image sensor. The image data corresponds to an environment about the robot. The method also includes executing a graphical user interface configured to display a scene of the environment based on the image data and receive an input indication indicating selection of a pixel location within the scene. The method also includes determining a pointing vector based on the selection of the pixel location. The pointing vector represents a direction of travel for navigating the robot in the environment. The method also includes transmitting a waypoint command to the robot. The waypoint command when received by the robot causes the robot to navigate to a target location. The target location is based on an intersection between the pointing vector and a terrain estimate of the robot.
-
Publication No.: US20220241980A1
Publication Date: 2022-08-04
Application No.: US17648869
Application Date: 2022-01-25
Applicant: Boston Dynamics, Inc.
Inventor: Mario Bollini , Leland Hepler
IPC: B25J9/16
Abstract: A method includes receiving sensor data for an environment about the robot. The sensor data is captured by one or more sensors of the robot. The method includes detecting one or more objects in the environment using the received sensor data. For each detected object, the method includes authoring an interaction behavior indicating a behavior that the robot is capable of performing with respect to the corresponding detected object. The method also includes augmenting a localization map of the environment to reflect the respective interaction behavior of each detected object.