Method for controlling autonomous agents using self-reinforcement
Abstract:
The behavior of automated agents, such as autonomous vehicles, drones, and the like, can be improved by control systems and methods that combine neighbor-following behavior, or neighbor-averaged information transfer, with delayed self-reinforcement, utilizing time-delayed movement data to modify the course corrections of each automated agent. Disclosed herein are systems and methods by which a follower agent, or multiple follower agents in formation with a plurality of automated agents, can be controlled by generating course-correction data for each follower agent based on the movement of neighboring agents in the formation, and augmenting that course-correction data with time-delayed movement data of the follower agent itself. The delayed self-reinforcement behavior can (i) increase the information-transfer rate between autonomous agents without requiring an increased individual update rate; and (ii) cause superfluid-like information transfer between the autonomous agents, resulting in improved formation-keeping performance.
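The abstract's control scheme can be illustrated with a minimal sketch: each follower steers toward the average position of its neighbors (neighbor-averaged information transfer) and adds a fraction of its own delayed correction (delayed self-reinforcement). The chain topology, gain values, and discrete-time update rule below are illustrative assumptions for a one-dimensional formation, not the patented parameters.

```python
def simulate(steps=200, n=5, gain=0.5, beta=0.4, delay=1):
    """Simulate a 1-D chain of agents: agent 0 is the leader holding
    position 1.0; followers combine neighbor averaging with delayed
    self-reinforcement (DSR). All parameters are illustrative."""
    pos = [0.0] * n
    # Per-agent buffer of past corrections, for the time-delayed DSR term.
    prev_corr = [[0.0] * delay for _ in range(n)]
    for _ in range(steps):
        pos[0] = 1.0  # leader holds the target position
        new_pos = pos[:]
        for i in range(1, n):
            # Neighbor average over the chain topology.
            neighbors = [pos[i - 1]] + ([pos[i + 1]] if i + 1 < n else [])
            avg = sum(neighbors) / len(neighbors)
            corr = gain * (avg - pos[i])       # neighbor-following term
            corr += beta * prev_corr[i][0]     # delayed self-reinforcement
            prev_corr[i] = prev_corr[i][1:] + [corr]
            new_pos[i] = pos[i] + corr
        pos = new_pos
    return pos
```

With `beta = 0`, the update reduces to plain neighbor averaging; with `0 < beta < 1`, the delayed term amplifies the low-frequency response (roughly by `1/(1 - beta)` for a slowly varying error), so corrections propagate down the chain faster at the same per-agent update rate, which is the qualitative benefit the abstract attributes to delayed self-reinforcement.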