Preventing and adapting to autonomy failures

OVERVIEW: 

How do we plan for failure? Are there new ways that we can design, implement, test or analyze highly autonomous systems that will help us detect potential failure mechanisms and correct them prior to operations?

Space is hard, and the tragic failures of automation and autonomy are monuments to painful lessons learned by the space community. Assuring autonomy is a hard problem in every domain, but it is exacerbated in space because the platform cannot be easily accessed. Contrast this with a car: indicator lights warn you of engine trouble or an overdue oil change, and you can physically get at the guts of the system to repair it. In space, when something breaks, repairing or working around it is a non-trivial effort.

In the case of the December 2019 Boeing Starliner flight test, two automation errors prevented the spacecraft from docking with the International Space Station (ISS) and would have prevented the safe jettison of the service module had ground operators not intervened.

We are looking for technologies that address some or all of the following questions:

  • Can we prevent such failures of autonomy in the future?
  • What if sensors or data streams present unexpected/incorrect information?
  • How might heterogeneous redundant sensors aid in this?
  • How do the physical aspects of cyber-physical systems like inertia aid in detection and recovery approaches?
  • Will an autonomous agent detect and react to such anomalies in a predictable manner?
  • Can we trust autonomous processes to operate through failures, or to safe systems when failures occur?
  • Could future space missions attempt to recover from a failure or potential failure autonomously? If so, how can we assure such autonomous recovery features will work as intended/expected?
  • Is it even possible to trust autonomy to recover from a failure?
  • Could an autonomous supervisory agent outperform a dedicated human operator?
  • Safe Shutdowns & Handovers (If the AI stops/is told to stop, can we safely transition the task to a human?)  
  • Interpretability/auditability/traceability (If the autonomous system does something that looks wrong in retrospect, can we figure out what went wrong?) 
  • Can autonomy gracefully degrade its operations when an unrecoverable failure occurs?
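To make the heterogeneous-redundancy and physics-based detection questions above concrete, here is a minimal sketch of one possible approach: fuse rate estimates from dissimilar sensors by median voting, and flag any sensor whose reading disagrees with a simple inertia-based prediction. All function names, sensor names, and thresholds are illustrative assumptions, not a reference implementation.

```python
# Sketch: heterogeneous redundant sensors + a physics model for fault detection.
# All names and numbers below are hypothetical, chosen only for illustration.

from statistics import median

def predict_rate(prev_rate, torque, inertia, dt):
    """Predict body rate from rigid-body dynamics: w_next = w + (T / I) * dt.

    This is the 'physical aspects like inertia' idea: the spacecraft's dynamics
    constrain how fast a true rate can change, giving an independent check.
    """
    return prev_rate + (torque / inertia) * dt

def vote_and_check(readings, predicted, tolerance):
    """Fuse heterogeneous sensor readings and flag suspects.

    readings  : dict of sensor name -> measured body rate (rad/s)
    predicted : physics-model prediction of the rate (rad/s)
    tolerance : max allowed deviation from the prediction before flagging
    Returns (fused_rate, list_of_suspect_sensors).
    """
    fused = median(readings.values())  # robust to a single faulty sensor
    suspects = [name for name, r in readings.items()
                if abs(r - predicted) > tolerance]
    return fused, suspects

# Example: the sun-sensor-derived rate has drifted far from the dynamics model.
predicted = predict_rate(prev_rate=0.010, torque=0.0, inertia=50.0, dt=1.0)
readings = {"gyro": 0.011, "star_tracker": 0.009, "sun_sensor": 0.150}
fused, suspects = vote_and_check(readings, predicted, tolerance=0.05)
# fused == 0.011 (the median), suspects == ["sun_sensor"]
```

Median voting tolerates one outlier among three dissimilar sensors without needing to know which one failed, while the inertia-based residual check identifies the likely culprit; a real system would add covariance-weighted fusion, persistence counters before declaring a fault, and a safing response.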

  

  1.     S. Croomes et al., “NESC Review of Demonstration of Autonomous Rendezvous Technology (DART) Mission Mishap Investigation Board Review (MIB) Final Report,” NASA Engineering and Safety Center, Tech. Rep. RP-06-119, Dec. 2006. [Online]. Available: http://www.spaceref.com/news/viewsr.html?pid=20605
  2.     M. Sheetz, “Boeing Starliner fails key NASA mission as autonomous flight system malfunctions,” CNBC, 20 Dec. 2019. [Online]. Available: https://www.cnbc.com/2019/12/20/boeings-starliner-flies-into-wrong-orbit-jeopardizing-trip-to-the-international-space-station.html. Accessed 2 Aug. 2020.
  3.     N. V. Patel, “Are we making spacecraft too autonomous?,” MIT Technology Review, 3 Jul. 2020. [Online]. Available: https://www.technologyreview.com/2020/07/03/1004788/spacecraft-spacefight-autonomous-software-ai/. Accessed 2 Aug. 2020.

Ready to learn more?

Sign up now to be invited to the webinar series where we’ll discuss the context behind each problem statement and answer questions from startup and university teams.