Learning and Control in Autonomous Systems
An autonomous system’s ability to learn and execute decisions in an uncertain environment hinges on a careful treatment of its dynamic model. Gathering relevant information, processing it, and updating the model in a dynamically changing environment may cause the agent to deviate from its nominal behavior. Our research investigates fundamental aspects of learning and control for autonomous systems, with an emphasis on learning in complex, large-scale interconnected systems under safety and resource constraints.
Multimodal Intelligent Autonomy: Leveraging Domain Expertise in Data-driven Motion Planning and Control
Model-based strategies for autonomous vehicles are typically designed under the assumption that the model is accurately known. While this is seldom true in practice, one can benefit when onboard measurements, real-time/experimental data, preferences, task specifications, or demonstrations are available. Rather than training networks end to end to map sensor-level inputs to the vehicle's low-level control without any domain knowledge, our research focuses on complementing control-theoretic algorithms for aerospace systems with active learning-based strategies that can improve performance in the presence of uncertainties and imperfections while ensuring robust stability and provable guarantees.
Selected Publications:
- Abhinav Sinha, Devin White, and Yongcan Cao, "Deep Reinforcement Learning-based Optimal Time-constrained Intercept Guidance", AIAA SciTech 2024 Forum, pp. 1-16, 2024.
- Umer Siddique, Abhinav Sinha, and Yongcan Cao, "On Deep Reinforcement Learning for Target Capture Autonomous Guidance", AIAA SciTech 2024 Forum, pp. 1-16, 2024.
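The idea of complementing a model-based controller with a learned component, rather than replacing it, can be illustrated with a toy example. The sketch below is hypothetical and not taken from the publications above: a PD law with gravity compensation is derived from a nominal mass estimate, the true plant is heavier, and a simple online correction term (an adaptive, integral-like update on the tracking error) is learned on top of the nominal controller. All constants and gains here are illustrative assumptions.

```python
# Hypothetical 1-D vertical double integrator: the true mass differs from
# the nominal model, so the purely model-based controller carries a
# steady-state tracking error that the learned correction removes.
TRUE_MASS, NOMINAL_MASS = 1.5, 1.0
GRAVITY = 9.81
DT = 0.01

def nominal_control(pos, vel, target, kp=4.0, kd=3.0):
    """PD law with gravity compensation, derived from the nominal model."""
    return NOMINAL_MASS * (kp * (target - pos) - kd * vel + GRAVITY)

def simulate(steps=3000, target=1.0, adapt_rate=0.0):
    pos, vel, correction = 0.0, 0.0, 0.0
    for _ in range(steps):
        u = nominal_control(pos, vel, target) + correction
        acc = u / TRUE_MASS - GRAVITY            # true plant dynamics
        vel += acc * DT
        pos += vel * DT
        # Learned additive correction: small gradient-style step on the
        # tracking error, accumulated online on top of the nominal law.
        correction += adapt_rate * (target - pos)
    return pos

model_only = simulate(adapt_rate=0.0)   # settles with a mass-mismatch offset
augmented = simulate(adapt_rate=0.05)   # learned term absorbs the mismatch
print(model_only, augmented)
```

The point of the sketch is structural: the learned term only corrects the residual left by the nominal controller, so the model-based design remains the backbone and the learning problem stays low-dimensional.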
Robust Learning and Control under Resource and Safety Constraints
Our research focuses on developing advanced computational methodologies and algorithms to enable intelligent systems to effectively learn, optimize, and operate within limited resources while ensuring safety and robust performance. In particular, our research addresses modeling and characterizing resource constraints, developing adaptive learning algorithms that balance exploration and exploitation, integrating safety considerations into control frameworks to enforce safety boundaries, and designing robust control strategies to handle uncertainties and disturbances.
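The exploration–exploitation balance under a resource constraint can be sketched with a toy bandit problem. The example below is an illustrative assumption, not a method from our publications: an epsilon-greedy learner may only select arms whose estimated per-pull resource cost stays under a cap, so a high-reward but resource-hungry option is pruned once its cost is estimated. All rewards, costs, and the cap are made-up numbers.

```python
import random

# Toy constrained bandit: explore/exploit only among arms whose estimated
# resource cost satisfies the cap (hypothetical numbers throughout).
random.seed(0)
TRUE_REWARD = [0.3, 0.8, 0.6]   # unknown to the learner
TRUE_COST   = [0.2, 0.9, 0.4]   # resource use per pull, also unknown
COST_CAP    = 0.5               # resource/safety constraint

reward_est = [0.0] * 3
cost_est   = [0.0] * 3
pulls      = [0] * 3

def allowed():
    # An untried arm is allowed once so its cost can be estimated at all.
    return [a for a in range(3) if pulls[a] == 0 or cost_est[a] <= COST_CAP]

for t in range(2000):
    arms = allowed()
    if random.random() < 0.1:                       # explore
        a = random.choice(arms)
    else:                                           # exploit
        a = max(arms, key=lambda i: reward_est[i])
    r = TRUE_REWARD[a] + random.gauss(0, 0.1)       # noisy reward sample
    c = TRUE_COST[a]
    pulls[a] += 1
    # Running-average updates of the reward and cost estimates.
    reward_est[a] += (r - reward_est[a]) / pulls[a]
    cost_est[a]   += (c - cost_est[a]) / pulls[a]

print(pulls)  # arm 1 is pruned after one pull; arm 2 dominates thereafter
```

Arm 1 has the highest reward but violates the cap, so the learner settles on arm 2, the best feasible option; this is the basic tension between performance and resource feasibility that the research above addresses at scale.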
Selected Publications:
Distributed Reinforcement Learning for Networked Multiagent Systems
In a swarm of autonomous vehicles, multiple simultaneous decision-makers affect the swarm's overall shared performance during a mission. The goal of this line of research is to leverage data-driven methods to develop robust distributed learning and control strategies that enable the vehicles to collectively maximize shared objectives and achieve the desired collective performance.
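A basic primitive behind distributed learning over a network is consensus: each agent holds a local estimate of a shared quantity and repeatedly averages with its neighbors, so the team agrees on a network-wide value without any central coordinator. The sketch below is a minimal, hypothetical illustration (a four-agent ring with an assumed step size), not an algorithm from our publications.

```python
import numpy as np

# Minimal consensus sketch on a hypothetical 4-agent ring network: each
# agent starts with a different local estimate of a shared team objective
# and repeatedly averages with its two neighbors.
np.random.seed(1)
N = 4
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}  # ring graph
x = np.random.rand(N) * 10          # initial local estimates
team_average = x.mean()             # the value consensus should reach

for _ in range(100):
    x_new = x.copy()
    for i in range(N):
        # Standard consensus step: move toward the neighbors' values
        # (a discrete step along the graph Laplacian).
        x_new[i] = x[i] + 0.3 * sum(x[j] - x[i] for j in neighbors[i])
    x = x_new

print(x, team_average)  # all entries converge to the team average
```

Because each update only uses neighbor-to-neighbor communication, the scheme respects the network topology; distributed reinforcement learning builds on primitives like this to share value or policy information across the swarm.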
Selected Publications: