Addressing Assurance Challenges in Space Autonomy

IEEE Space Mission Challenges for Information Technology - IEEE Space Computing Conference
Caltech, Pasadena, CA, USA - 18-21 July 2023


This workshop brings together leading specialists in autonomous systems for space missions, as well as from related application domains such as automotive, aviation, and defense. Together, they will examine the need for autonomy in current and future space missions and compare the approaches different industries use to develop, verify, and deploy capabilities that enable autonomous operations.

The primary objective of this workshop is to identify techniques that have proven successful in other application domains and can be adapted to the space science context. We also aim to pinpoint the research gaps that must be closed to engineer highly reliable autonomous systems for space applications. By sharing experiences and insights, we seek to bridge the knowledge gap and foster collaboration toward this common goal.


Tuesday, July 18, 2023 (Baxter Classroom)

10:15 AM Autonomy for Space Robots: Past, Present, and Future Issa Nesnas and Michel Ingham
10:45 AM TBA Danette Allen
11:15 AM Increasingly Autonomous Perception and Decision Systems for Advanced Air Mobility Ella Atkins
1:30 PM TBA Trung Pham
2:00 PM So you want to put a neural network in an airplane... Are you crazy? Darren Cofer
2:30 PM Assurance of Learning-enabled Autonomous Systems Sandeep Neema
3:00 PM Operational Test and Evaluation for Safety-Critical Autonomous Systems: Progress, Challenges, and Opportunities Richard Murray
3:45 PM Towards reliable learning-enabled autonomy throughout its operational lifecycle Rohan Sinha
4:15 PM Closing Remarks All

Thursday, July 20, 2023 (Baxter Classroom)

10:15 AM Summary of discussion points Workshop Organizers
10:45 AM Panel discussion: Solved, within reach, and long term challenges All presenters
1:00 PM Definition of a roadmap to address trust challenges in space autonomy All
2:30 PM Closing Remarks and action items All

Talks and Speakers

Autonomy for Space Robots: Past, Present, and Future

Issa Nesnas, Robotic Technologist, NASA Jet Propulsion Laboratory
Michel Ingham, Chief Technologist, Systems Engineering Division at NASA Jet Propulsion Laboratory

Abstract. Over the past two decades, several autonomous functions and system-level capabilities have been demonstrated and used in deep-space operations. Despite these advances, spacecraft today remain largely reliant on having the ground in the loop to assess situations and plan next actions using pre-scripted command sequences. Advances have been made across mission phases including spacecraft navigation; proximity operations; entry, descent, and landing; surface mobility and manipulation; and data handling. But past successful practices may not be sustainable for future exploration. The ability of ground operators to predict the outcome of their plans diminishes sharply when platforms physically interact with planetary bodies, as two decades of Mars surface operations have shown. This results from uncertainties that arise due to limited knowledge, complex physical interaction with the environment, and limitations of the associated models. In this talk, we will reflect on past, current, and future drivers for autonomy, highlighting the recent advances and impact of some autonomous capabilities in planetary missions. We will also share a summary of the projected autonomy needs in the recommended mission concepts of the Planetary Science and Astrobiology Decadal Survey and highlight some of the anticipated challenges.
Bio. Dr. Issa Nesnas is a principal technologist in the Autonomous Systems Division at the Jet Propulsion Laboratory, where he has worked for over 25 years after several years in the robotics industry. At JPL, he led the Robotics Mobility and the Robotics Software Systems Groups across a span of thirteen years. His research included architectures for autonomous systems, perception-based navigation and manipulation, and extreme-terrain and microgravity mobility. He has served in multiple roles on three JPL rover missions. He is currently the associate director of Caltech’s CAST (Center for Autonomous Systems and Technologies) and JPL’s lead on NASA’s Capability Leadership Team for Autonomous Systems. He is the recipient of the Magellan Award, JPL’s highest award for an individual scientific or technical accomplishment.


Danette Allen

Increasingly Autonomous Perception and Decision Systems for Advanced Air Mobility

Ella Atkins, Fred D. Durham Chair in Engineering and Department Head, Department of Aerospace and Ocean Engineering, Virginia Tech

Abstract. Advanced Air Mobility (AAM) including passenger transport and Uncrewed Aircraft Systems (UAS) requires autonomy capable of safely managing contingency responses as well as routine flight. This talk will describe pathways from aviation today to a fully autonomous AAM of the future. Research toward comprehensive low-altitude flight environment mapping will be summarized. Assured Contingency Landing Management (ACLM) requires a pipeline in which hazards/failures that risk loss of vehicle controllability or loss of landing site reachability trigger contingency response. Pre-flight preparation of contingency landing plans to prepared landing sites is supplemented by online planning when necessary. Dynamic airspace geofencing in support of UAS Traffic Management (UTM) will be defined and compared with traditional fixed airspace corridor solutions. The talk will conclude with a high-level mapping of presented aviation solutions to space applications.
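As a concrete illustration of the geofencing idea (a sketch for this program, not code from the talk), a dynamic airspace geofence ultimately reduces to a containment test: is the vehicle's current position inside the polygon that is active right now? A minimal ray-casting check in Python, using a hypothetical square fence footprint:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside the polygon (list of (x, y) vertices)?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a horizontal ray from (x, y) cross the edge (x1,y1)-(x2,y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical square geofence footprint and two query positions
fence = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]
print(point_in_polygon(2.0, 2.0, fence))  # True: conforming, inside the fence
print(point_in_polygon(5.0, 2.0, fence))  # False: outside, would trigger a violation response
```

A dynamic geofence simply swaps in a new vertex list (and altitude limits) as the airspace volume is renegotiated, which is what distinguishes it from a fixed corridor.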
Bio. Dr. Ella Atkins is Fred D. Durham Professor and Head of the Kevin T. Crofton Aerospace and Ocean Engineering Department at Virginia Tech. She was previously a Professor in the University of Michigan’s Aerospace Engineering Department and Robotics Institute. Dr. Atkins holds B.S. and M.S. degrees in Aeronautics and Astronautics from MIT and M.S. and Ph.D. degrees in Computer Science and Engineering from the University of Michigan. She is an AIAA Fellow and private pilot. She served on the National Academy’s Aeronautics and Space Engineering Board and has authored over 230 refereed journal and conference papers. Dr. Atkins has pursued research in AI-enabled autonomy and control to support resilience and contingency management in manned and unmanned aerospace applications. She is Editor-in-Chief of the AIAA Journal of Aerospace Information Systems (JAIS) and a member of the Flight Safety Foundation's Autonomous and Remotely Piloted Aviation Systems Advisory Committee (ARPAC).


Trung Pham, FAA’s Chief Scientific and Technical Advisor (CSTA) for Artificial Intelligence (AI) and Machine Learning

So you want to put a neural network in an airplane... Are you crazy?

Darren Cofer, Senior Fellow, Collins Aerospace
Abstract. Machine learning (ML) technologies are being investigated for use in the embedded software for manned and unmanned aircraft. ML will be needed to implement advanced functionality for increasingly autonomous aircraft and can also be used to reduce computational resources (memory, CPU cycles) in embedded systems. However, ML implementations such as neural networks are not amenable to verification and certification using current tools and processes. This talk will discuss current efforts to address the gaps and barriers to certification of ML for use onboard aircraft. We will discuss new verification and assurance technologies being developed for neural networks. This includes formal methods analysis tools, new testing methods and coverage metrics, and architectural mitigation strategies, with the goal of enabling autonomous systems containing neural networks to be safely deployed in critical environments. We will also discuss the new certification guidance that is under development to address the gaps in current processes. The overall strategy is to start with approvals of low-complexity and low-criticality applications, and gradually expand to include more complex and critical applications that involve perception.
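To make the verification challenge concrete (an illustrative sketch, not Collins Aerospace tooling), one of the simplest formal analyses for neural networks is interval bound propagation: pushing an input interval through each layer to obtain sound, if sometimes loose, bounds on the output. A toy example for a hand-picked 2-2-1 ReLU network:

```python
def linear_interval(lo, hi, W, b):
    """Propagate an elementwise input interval [lo, hi] through y = W x + b."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        # Lower bound takes lo[j] where the weight is positive, hi[j] otherwise
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        h = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def relu_interval(lo, hi):
    # ReLU is monotone, so it maps interval endpoints to interval endpoints
    return [max(0.0, v) for v in lo], [max(0.0, v) for v in hi]

# Hypothetical 2-2-1 network; bound the output for all inputs in [-0.1, 0.1]^2
W1, b1 = [[1.0, -2.0], [0.5, 1.0]], [0.0, 0.1]
W2, b2 = [[1.0, 1.0]], [0.0]
lo, hi = linear_interval([-0.1, -0.1], [0.1, 0.1], W1, b1)
lo, hi = relu_interval(lo, hi)
lo, hi = linear_interval(lo, hi, W2, b2)
print(lo, hi)  # sound (possibly loose) bounds on the network output
```

If the certified output range stays within a safety envelope, the property holds for every input in the set, not just the finitely many cases a test campaign can sample; production tools refine these bounds with tighter relaxations and SMT-style reasoning.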

Assurance of Learning-enabled Autonomous Systems

Sandeep Neema, Professor of Computer Science and of Electrical and Computer Engineering; Chair, Executive Council, Institute for Software Integrated Systems, Vanderbilt University

Abstract. Significant advances have been made in the last decade in constructing autonomous systems, as evidenced by the proliferation of a variety of unmanned vehicles. These advances have been driven by innovations in several areas, including sensing and actuation, computing, modeling and simulation, but most importantly deep machine learning, which is increasingly being adopted for real-world autonomy. In spite of these advances, deployment and broader adoption of learning techniques in safety-critical applications remain challenging. This talk will present some of the challenges these techniques pose for assurance of system behavior, and summarize advances made in DARPA’s Assured Autonomy program towards establishing trustworthiness at the design stage and providing resilience to the unforeseeable yet inevitable variations encountered during the operation stage. The talk will also discuss related work in creating frameworks for assurance-driven software development.
Bio. Dr. Sandeep Neema has been a Professor of Computer Science at Vanderbilt University since August 2020. He also holds a courtesy appointment as Professor of Electrical and Computer Engineering at Vanderbilt University. He was a Program Manager at DARPA’s Information Innovation Office (I2O) from July 2016 until September 2022. During his tenure at DARPA he conceived, developed, and managed influential programs at the intersection of Artificial Intelligence and Cyber Physical Systems, including Assured Autonomy, Symbiotic Design of Cyber Physical Systems, and Assured Neurosymbolic Learning and Reasoning. His research interests include Cyber Physical Systems, Model-based Design Methodologies, Artificial Intelligence and Machine Learning, and Distributed Real-time Systems. Dr. Neema has authored or co-authored more than 100 peer-reviewed conference and journal publications and book chapters. He holds a Doctorate in Electrical Engineering and Computer Science from Vanderbilt University and a Master’s in Electrical Engineering from Utah State University. He earned a Bachelor of Technology degree in Electrical Engineering from the Indian Institute of Technology, New Delhi, India.

Operational Test and Evaluation for Safety-Critical Autonomous Systems: Progress, Challenges, and Opportunities

Richard Murray, Thomas E. and Doris Everhart Professor of Control and Dynamical Systems and Bioengineering, California Institute of Technology
Abstract. Safety certification of autonomous vehicles is a major challenge due to the complexity of the environments in which they are intended to operate. In this talk I will discuss recent work in establishing the mathematical and algorithmic foundations of test and evaluation by combining advances in formal methods for specification and verification of reactive, distributed systems with algorithmic design of multi-agent test scenarios and algorithmic evaluation of test results. Building on previous results in synthesis of formal contracts for performance of agents and subsystems, we are creating a mathematical framework for specifying the desired characteristics of multi-agent systems involving cooperative, adversarial, and adaptive interactions, developing algorithms for verification and validation (V&V) as well as test and evaluation (T&E) of the specifications, and performing proof-of-concept implementations that demonstrate the use of formal methods for V&V and T&E of autonomous systems. These results provide more systematic methods for describing the desired properties of autonomous systems in complex environments and new algorithms for verification of system-level designs against those properties, synthesis of test plans, and analysis of test results.
Bio. Richard M. Murray received the B.S. degree in Electrical Engineering from California Institute of Technology in 1985 and the M.S. and Ph.D. degrees in Electrical Engineering and Computer Sciences from the University of California, Berkeley, in 1988 and 1991, respectively. He is currently the Thomas E. and Doris Everhart Professor of Control & Dynamical Systems and Bioengineering at Caltech. Murray's research is in the application of feedback and control to networked systems, with applications in biology and autonomy. Current projects include analysis and design of biomolecular feedback circuits, synthesis of discrete decision-making protocols for reactive systems, and design of highly resilient architectures for autonomous systems.

Towards reliable learning-enabled autonomy throughout its operational lifecycle

Rohan Sinha, Graduate Student in the Department of Aeronautics and Astronautics, Stanford University

Abstract. Machine learning (ML) systems for perception, prediction, and decision-making have enabled tremendous advances in autonomy. However, ML models can behave unreliably on so-called out-of-distribution (OOD) inputs – data that is dissimilar from the training data. We must acknowledge these inevitable shortcomings to build reliable autonomous robots for the real world, where distributions subtly shift from training data, and where we will always discover corner cases and failure scenarios not represented at design time. We will present our work towards addressing the negative consequences of OOD data in robotics at three timescales crucial to deploying reliable open-world autonomy: (1) Real-time decision-making, where we aim to develop runtime monitors that detect when a learning-enabled system is unreliable. (2) Episodic interactions, where our goal is to avoid system failures when ML components degrade during a deployment. (3) The data lifecycle as learning-enabled robots are deployed, evaluated, and retrained, where the core challenge is how to improve performance over repeated deployment most efficiently. Taking a system-level view to address these challenges, we will first present the use of large language models to detect semantic failures that defy component-level blame assignments. Second, we will cover contingency planning strategies to enact safety-preserving interventions, thereby closing the loop on the runtime monitor. Finally, we will show the utility of the runtime monitor to inform model retraining to address observed failure modes over time.
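To illustrate the runtime-monitor idea at timescale (1) (a minimal sketch under simplifying assumptions, not the speaker's method), a common baseline scores each input by its nearest-neighbor distance to the training data in some feature space and flags inputs whose score exceeds a threshold calibrated on held-out data:

```python
import math

def knn_ood_monitor(train_feats, calib_feats, quantile=0.95):
    """Build a monitor that flags inputs whose nearest-neighbor distance to the
    training features exceeds a threshold calibrated on held-out data."""
    def nn_dist(x, feats):
        return min(math.dist(x, f) for f in feats)
    calib_scores = sorted(nn_dist(x, train_feats) for x in calib_feats)
    threshold = calib_scores[int(quantile * (len(calib_scores) - 1))]
    return lambda x: nn_dist(x, train_feats) > threshold  # True => likely OOD

# Toy 2-D feature vectors; real systems would use learned embeddings
train = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1)]
calib = [(0.05, 0.05), (0.0, 0.05), (0.1, 0.05)]
is_ood = knn_ood_monitor(train, calib)
print(is_ood((0.05, 0.0)))   # False: close to the training data
print(is_ood((5.0, 5.0)))    # True: far from anything seen in training
```

The system-level point of the talk goes beyond such component-level scores: a verdict like this must be wired into contingency planning (timescale 2) and into the retraining loop (timescale 3) to actually improve reliability over repeated deployments.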
Bio. Rohan is a PhD candidate in the Department of Aeronautics and Astronautics at Stanford University, where he is a member of the Autonomous Systems Lab advised by Prof. Marco Pavone. His research focuses on developing methodologies that improve the reliability of ML-enabled robotic systems, particularly when these systems encounter out-of-distribution conditions with respect to their training data. Broadly, his research interests lie at the intersection of control theory, machine learning, and applied robotics. Previously, he received bachelor’s degrees in Mechanical Engineering and Computer Science from the University of California, Berkeley with honors and a distinction in general scholarship. As an undergraduate, Rohan worked on data-driven predictive control under Prof. Francesco Borrelli in the Model Predictive Control Lab and on learning control algorithms that rely on vision systems under Prof. Benjamin Recht in the Berkeley Artificial Intelligence Lab. He has also interned as an autonomous driving engineer at Delphi (now Motional) and as a software engineer at Amazon.

  • Alpha Data

  • AMD

  • Aril Inc

  • Avalanche Technology

  • LeWiz Communications

  • Mirabilis Design

  • SiFive

  • STAR-Dundee

  • Synopsys

  • Teledyne Technologies



  • Frontgrade

  • KBR Inc


If you have any questions, feel free to contact us at:

Copyright © IEEE SMC-IT/SCC 2022-2023