Abstract:
A method for automatic obstacle avoidance of a robot includes: obtaining distance values between the robot and an obstacle detected by sensors arranged on a left side, a middle part and a right side of the robot respectively; when a minimum distance value detected by the sensors on the middle part is less than a distance threshold value, if a minimum distance value detected by the sensors on either the left side or the right side exceeds an obstacle critical distance, turning the robot 90 degrees towards the side where the minimum distance value exceeds the obstacle critical distance; and when the minimum distance value detected by the sensors on the middle part exceeds the distance threshold value, if only the minimum distance value detected by the sensors on the left side exceeds the obstacle critical distance, turning the robot towards the left side by a first angle value.
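A minimal sketch of the turning logic described above, in Python. The function name, default angle value, and return convention are illustrative assumptions, not terms taken from the abstract.

```python
def decide_turn(left_min, middle_min, right_min,
                distance_threshold, critical_distance, first_angle=45.0):
    """Return a (direction, angle_in_degrees) decision from minimum sensor distances."""
    if middle_min < distance_threshold:
        # Obstacle close in front: turn 90 degrees towards a side that is clear.
        if left_min > critical_distance:
            return ("left", 90.0)
        if right_min > critical_distance:
            return ("right", 90.0)
        return ("none", 0.0)  # no clear side; some fallback behavior would apply here
    else:
        # Front is clear: only the left side exceeds the obstacle critical distance.
        if left_min > critical_distance and right_min <= critical_distance:
            return ("left", first_angle)
        return ("none", 0.0)

print(decide_turn(1.2, 0.3, 0.4, distance_threshold=0.5, critical_distance=0.8))
# ('left', 90.0)
```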
Abstract:
A humanoid robot with a body joined to an omnidirectional mobile ground base, equipped with: a body position sensor, a base position sensor and an angular velocity sensor to provide measures; actuators comprising at least three wheels located in the omnidirectional mobile base; extractors for converting the sensed measures into useful data; a supervisor to calculate position, velocity and acceleration commands from the useful data; and means for converting the commands into instructions for the actuators, wherein the supervisor comprises: a no-tilt state controller, a tilt state controller and a landing state controller, each controller comprising means for calculating position, velocity and acceleration commands based on a double point-mass robot model with tilt motion and on a linear model predictive control law, expressed as a quadratic optimization formulation with a weighted sum of objectives and a set of predefined linear constraints.
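A minimal sketch of one model predictive control step cast as a weighted quadratic cost under simple constraints, in the spirit of the supervisor described above. The single point-mass model, horizon length, weights, and box bounds on the commands are illustrative assumptions; the abstract's controllers use a double point-mass model with tilt motion.

```python
import numpy as np
from scipy.optimize import minimize

dt, N = 0.1, 10                      # sample time and prediction horizon
state = np.array([0.0, 0.0])         # point-mass state: [position, velocity]
ref = np.linspace(0.2, 1.0, N)       # position reference over the horizon

def rollout(accels):
    """Integrate the point mass over the horizon for a given acceleration sequence."""
    pos, vel, positions = state[0], state[1], []
    for a in accels:
        vel += a * dt
        pos += vel * dt
        positions.append(pos)
    return np.array(positions)

def cost(accels):
    # Weighted sum of objectives: reference tracking plus control effort.
    w_track, w_effort = 1.0, 0.05
    return w_track * np.sum((rollout(accels) - ref) ** 2) + w_effort * np.sum(accels ** 2)

# Box bounds on each acceleration command, a simple form of predefined linear constraints.
sol = minimize(cost, np.zeros(N), bounds=[(-2.0, 2.0)] * N, method="SLSQP")
print("first acceleration command:", sol.x[0])
```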
Abstract:
Example implementations may relate to methods and systems for disturbing or deceiving sensors of robotic devices. Accordingly, a computing system may detect that a robotic device has entered a particular physical region. Responsively, the computing system may then determine at least one type of sensor that is associated with the robotic device and is used to detect reflected illumination that is reflected from an object. Based on the determined at least one type of sensor, the computing system may then select (i) at least one particular type of disturbing illumination and (ii) a target location within the particular physical region. Upon the selection, the computing system may direct at least one light source to emit the selected at least one particular type of disturbing illumination towards the selected target location so as to disturb the reflected illumination detectable by the robotic device using the at least one type of sensor.
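A minimal sketch of the selection step described above: mapping a detected sensor type to a type of disturbing illumination and a target location within the region. The sensor names, illumination choices, and region model are illustrative assumptions, not taken from the abstract.

```python
# Hypothetical lookup from sensor type to an illumination type likely to disturb it.
DISTURBING_ILLUMINATION = {
    "lidar": "pulsed_infrared",
    "structured_light_depth": "patterned_infrared",
    "stereo_camera": "high_intensity_visible",
}

def plan_disturbance(sensor_types, region_center):
    """Pick an illumination type per detected sensor and aim it at the region center."""
    plans = []
    for sensor in sensor_types:
        illumination = DISTURBING_ILLUMINATION.get(sensor)
        if illumination is not None:
            plans.append({"illumination": illumination, "target": region_center})
    return plans

print(plan_disturbance(["lidar"], region_center=(3.0, 1.5)))
# [{'illumination': 'pulsed_infrared', 'target': (3.0, 1.5)}]
```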
Abstract:
A humanoid robot which can move on its lower limbs to execute a trajectory and is capable of detecting the intrusion of obstacles into a safety zone defined around its body as a function of its speed is provided. Preferably, when the robot executes a predefined trajectory, for instance a part of a choreography, the robot, which avoids collision with an obstacle, will rejoin its original trajectory after avoidance of the obstacle. The rejoining trajectory and speed of the robot are adapted so that it is resynchronized with the initial trajectory. Advantageously, the speed of the joints of the upper members of the robot is adapted in case the distance to an obstacle decreases below a preset minimum. Also, the joints are stopped in case a collision of the upper members with the obstacle is predicted.
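A minimal sketch of a speed-dependent safety zone and of the slow-down/stop behavior for the upper-member joints described above. The zone formula, time horizon, and distance values are illustrative assumptions.

```python
def safety_radius(speed, base_radius=0.3, time_horizon=1.5):
    """Safety zone radius (meters) that grows with the robot's speed."""
    return base_radius + speed * time_horizon

def joint_speed_scale(obstacle_distance, preset_minimum=0.5, collision_predicted=False):
    """Scale factor applied to upper-member joint speeds; 0.0 stops the joints."""
    if collision_predicted:
        return 0.0
    if obstacle_distance < preset_minimum:
        # Slow the joints proportionally as the obstacle gets closer.
        return max(obstacle_distance / preset_minimum, 0.0)
    return 1.0

print(safety_radius(0.4))        # 0.9
print(joint_speed_scale(0.25))   # 0.5
```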
Abstract:
An example implementation includes (i) receiving sensor data that indicates topographical features of an environment in which a robotic device is operating, (ii) processing the sensor data into a topographical map that includes a two-dimensional matrix of discrete cells, the discrete cells indicating sample heights of respective portions of the environment, (iii) determining, for a first foot of the robotic device, a first step path extending from a first lift-off location to a first touch-down location, (iv) identifying, within the topographical map, a first scan patch of cells that encompass the first step path, (v) determining a first high point among the first scan patch of cells, and (vi) during the first step along the first step path, directing the robotic device to lift the first foot to a first swing height that is higher than the determined first high point.
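A minimal sketch of steps (iv) to (vi) above: locate the high point within a scan patch of a two-dimensional height map and choose a swing height above it. The patch selection (a bounding box around the step path) and the clearance margin are illustrative assumptions.

```python
import numpy as np

def swing_height_for_step(height_map, lift_off, touch_down, clearance=0.05):
    """height_map: 2D array of sampled heights; lift_off/touch_down: (row, col) cells."""
    r0, r1 = sorted((lift_off[0], touch_down[0]))
    c0, c1 = sorted((lift_off[1], touch_down[1]))
    scan_patch = height_map[r0:r1 + 1, c0:c1 + 1]   # cells encompassing the step path
    high_point = scan_patch.max()                    # first high point among the patch
    return high_point + clearance                    # lift the foot above the high point

terrain = np.zeros((20, 20))
terrain[5, 7] = 0.12                                 # a small obstacle along the step path
print(swing_height_for_step(terrain, lift_off=(4, 6), touch_down=(8, 9)))
# swing height just above the 0.12 m obstacle
```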
Abstract:
Landform data showing the shape of a ground surface (4) is acquired. A designated position is specified on the ground surface. A virtual plane (51), on which a reference point (423) and at least one inspection point (433; 443) are set, is virtually arranged such that the reference point (423) coincides with the designated position and such that the virtual plane (51) is parallel to the ground surface at the designated position; a relative angle (432; 442) between the virtual plane (51) and the ground surface (4) at the at least one inspection point (433; 443) is then calculated based on the landform data. A landform determination value indicating the flatness of the landform is calculated based on the relative angle (432; 442).
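A minimal sketch of the flatness check described above: place a virtual plane parallel to the terrain at the designated position, then measure the angle between that plane and the terrain at surrounding inspection points. The gradient-based surface normals and the aggregation of the angles into a single determination value are illustrative assumptions.

```python
import numpy as np

def surface_normal(heights, row, col, cell_size=0.1):
    """Unit normal of the height field at a cell, from central differences."""
    dz_dx = (heights[row, col + 1] - heights[row, col - 1]) / (2 * cell_size)
    dz_dy = (heights[row + 1, col] - heights[row - 1, col]) / (2 * cell_size)
    n = np.array([-dz_dx, -dz_dy, 1.0])
    return n / np.linalg.norm(n)

def landform_determination_value(heights, designated, inspection_points):
    """Worst relative angle (radians) between the virtual plane and the terrain."""
    plane_normal = surface_normal(heights, *designated)   # plane parallel to terrain here
    angles = []
    for point in inspection_points:
        terrain_normal = surface_normal(heights, *point)
        angles.append(np.arccos(np.clip(plane_normal @ terrain_normal, -1.0, 1.0)))
    return max(angles)

terrain = np.zeros((10, 10))
terrain[6, 4] = 0.05   # a bump near one inspection point
print(landform_determination_value(terrain, (4, 4), [(3, 4), (5, 4), (4, 3), (4, 5)]))
```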
Abstract:
A robot and a control method thereof are provided. The method includes the following steps: receiving a manual control command from a remote control device, and accumulating a duration over which manual control commands are issued; estimating an estimated moving velocity corresponding to the manual control command; detecting a surrounding environment of the robot and generating an autonomous navigation command based on the surrounding environment; determining a first weighting value associated with the manual control command based on the duration, the estimated moving velocity and a distance to obstacles in the surrounding environment; determining a second weighting value associated with the autonomous navigation command based on the first weighting value; linearly combining the manual control command and the autonomous navigation command based on the first weighting value and the second weighting value to generate a moving control command; and moving based on the moving control command.
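A minimal sketch of the shared-control blending described above. The particular way the first weighting value depends on duration, estimated velocity, and obstacle distance is an illustrative assumption; the abstract only names those inputs, and it states that the second weight is derived from the first.

```python
import numpy as np

def first_weight(duration_s, estimated_speed, obstacle_distance,
                 max_duration=5.0, max_speed=1.0, safe_distance=1.0):
    """Sustained, fast manual commands near obstacles receive less authority."""
    trust = 1.0 - min(duration_s / max_duration, 1.0) * min(estimated_speed / max_speed, 1.0)
    proximity = min(obstacle_distance / safe_distance, 1.0)
    return float(np.clip(trust * proximity, 0.0, 1.0))

def blend(manual_cmd, autonomous_cmd, w1):
    w2 = 1.0 - w1   # second weighting value determined from the first
    return w1 * np.asarray(manual_cmd) + w2 * np.asarray(autonomous_cmd)

w1 = first_weight(duration_s=3.0, estimated_speed=0.8, obstacle_distance=0.4)
print(blend(manual_cmd=[0.5, 0.0], autonomous_cmd=[0.2, 0.1], w1=w1))
```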
Abstract:
An example method may include determining a requested yaw for a body of a biped robot, where the robot comprises a foot coupled to the body via a leg. The robot may then detect, via one or more sensors, a yaw rotation of the body with respect to a ground surface, where the foot is in contact with the ground surface. Based on the detected yaw rotation of the body, the robot may determine a measured yaw for the body. The robot may also determine a target yaw for the body, where the target yaw for the body is between the measured yaw for the body and the requested yaw for the body. The robot may then cause the foot to rotate the body to the target yaw for the body.
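A minimal sketch of choosing a target yaw between the measured and requested yaws, as described above. The interpolation fraction and the angle-wrapping helper are illustrative assumptions.

```python
import math

def wrap_angle(a):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

def target_yaw(measured_yaw, requested_yaw, fraction=0.5):
    """Pick a yaw part-way from the measured yaw toward the requested yaw."""
    error = wrap_angle(requested_yaw - measured_yaw)
    return wrap_angle(measured_yaw + fraction * error)

print(target_yaw(measured_yaw=0.1, requested_yaw=0.9))   # 0.5 (halfway between)
```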
Abstract:
A control system may receive a first plurality of measurements indicative of respective joint angles corresponding to a plurality of sensors connected to a robot. The robot may include a body and a plurality of jointed limbs connected to the body, each limb associated with respective properties. The control system may also receive a body orientation measurement indicative of an orientation of the body of the robot. The control system may further determine a relationship between the first plurality of measurements and the body orientation measurement based on the properties associated with the jointed limbs of the robot. Additionally, the control system may estimate an aggregate orientation of the robot based on the first plurality of measurements, the body orientation measurement, and the determined relationship. Further, the control system may provide instructions to control at least one jointed limb of the robot based on the estimated aggregate orientation of the robot.
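A minimal sketch of fusing per-limb measurements with a body orientation measurement into an aggregate estimate. The per-limb pitch contributions and the property-based weights (standing in for the "determined relationship") are illustrative assumptions; the abstract does not specify this particular fusion.

```python
import numpy as np

def aggregate_orientation(limb_pitch_estimates, limb_weights, body_pitch, body_weight=2.0):
    """Weighted fusion of limb-implied pitch estimates with the measured body pitch."""
    estimates = np.append(np.asarray(limb_pitch_estimates), body_pitch)
    weights = np.append(np.asarray(limb_weights), body_weight)
    return float(np.average(estimates, weights=weights))

# Four limbs, each implying a slightly different body pitch from its joint angles;
# limbs with firmer ground contact (higher weight) are trusted more.
pitch = aggregate_orientation(
    limb_pitch_estimates=[0.02, 0.03, 0.01, 0.05],
    limb_weights=[1.0, 1.0, 0.5, 0.2],
    body_pitch=0.025,
)
print(pitch)
```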
Abstract:
A control system for a bipedal humanoid robot utilizes certain fundamental characteristics of bipedal motion to provide a robust and relatively simple balancing and walking mechanism. The system primarily utilizes the concept of “capturability,” which is defined as the ability of the robot to come to a stop without falling by taking N or fewer steps. This ability is considered crucial to legged locomotion and is a useful, yet not overly restrictive, criterion for stability. In the preferred embodiment, the bipedal robot is maintained in a 1-step capturable state. This means that future step-locating and driving decisions are made so that the robot may always be brought to a balanced halt by taking one step. Other embodiments maintain the bipedal robot in an N-step capturable state, in which the robot may always be brought to a balanced halt by taking N or fewer steps.
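A minimal sketch of a 1-step capturability check based on the linear inverted pendulum's instantaneous capture point, one common way to formalize the "come to a stop in one step" idea described above. The pendulum height, foot reach, and one-dimensional setup are illustrative assumptions, not the abstract's exact formulation.

```python
import math

def capture_point(com_pos, com_vel, com_height, g=9.81):
    """Instantaneous capture point of a 1D linear inverted pendulum."""
    return com_pos + com_vel * math.sqrt(com_height / g)

def is_one_step_capturable(com_pos, com_vel, com_height, stance_foot, max_step=0.6):
    """1-step capturable if the next footstep can reach the capture point."""
    return abs(capture_point(com_pos, com_vel, com_height) - stance_foot) <= max_step

print(is_one_step_capturable(com_pos=0.0, com_vel=0.5, com_height=0.9, stance_foot=0.0))
# True: the capture point lies within one step of the stance foot
```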